6026
Countable set
Mathematical set that can be enumerated In mathematics, a set is countable if either it is finite or it can be made in one to one correspondence with the set of natural numbers. Equivalently, a set is "countable" if there exists an injective function from it into the natural numbers; this means that each element in the set may be associated to a unique natural number, or that the elements of the set can be counted one at a time, although the counting may never finish due to an infinite number of elements. In more technical terms, assuming the axiom of countable choice, a set is "countable" if its cardinality (the number of elements of the set) is not greater than that of the natural numbers. A countable set that is not finite is said to be countably infinite. The concept is attributed to Georg Cantor, who proved the existence of uncountable sets, that is, sets that are not countable; for example the set of the real numbers. A note on terminology. Although the terms "countable" and "countably infinite" as defined here are quite common, the terminology is not universal. An alternative style uses "countable" to mean what is here called countably infinite, and "at most countable" to mean what is here called countable. The terms "enumerable" and denumerable may also be used, e.g. referring to countable and countably infinite respectively, definitions vary and care is needed respecting the difference with recursively enumerable. Definition. A set formula_0 is "countable" if: All of these definitions are equivalent. A set formula_0 is "countably infinite" if: A set is "uncountable" if it is not countable, i.e. its cardinality is greater than formula_2. History. In 1874, in his first set theory article, Cantor proved that the set of real numbers is uncountable, thus showing that not all infinite sets are countable. In 1878, he used one-to-one correspondences to define and compare cardinalities. In 1883, he extended the natural numbers with his infinite ordinals, and used sets of ordinals to produce an infinity of sets having different infinite cardinalities. Introduction. A "set" is a collection of "elements", and may be described in many ways. One way is simply to list all of its elements; for example, the set consisting of the integers 3, 4, and 5 may be denoted formula_9, called roster form. This is only effective for small sets, however; for larger sets, this would be time-consuming and error-prone. Instead of listing every single element, sometimes an ellipsis ("...") is used to represent many elements between the starting element and the end element in a set, if the writer believes that the reader can easily guess what ... represents; for example, formula_10 presumably denotes the set of integers from 1 to 100. Even in this case, however, it is still "possible" to list all the elements, because the number of elements in the set is finite. If we number the elements of the set 1, 2, and so on, up to formula_11, this gives us the usual definition of "sets of size formula_11". Some sets are "infinite"; these sets have more than formula_11 elements where formula_11 is any integer that can be specified. (No matter how large the specified integer formula_11 is, such as formula_12, infinite sets have more than formula_11 elements.) For example, the set of natural numbers, denotable by formula_13, has infinitely many elements, and we cannot use any natural number to give its size. 
It might seem natural to divide the sets into different classes: put all the sets containing one element together; all the sets containing two elements together; ...; finally, put together all infinite sets and consider them as having the same size. This view works well for countably infinite sets and was the prevailing assumption before Georg Cantor's work. For example, there are infinitely many odd integers, infinitely many even integers, and also infinitely many integers overall. We can consider all these sets to have the same "size" because we can arrange things such that, for every integer, there is a distinct even integer: formula_14 or, more generally, formula_15 (see picture). What we have done here is arrange the integers and the even integers into a "one-to-one correspondence" (or "bijection"), which is a function that maps between two sets such that each element of each set corresponds to a single element in the other set. This mathematical notion of "size", cardinality, is that two sets are of the same size if and only if there is a bijection between them. We call all sets that are in one-to-one correspondence with the integers "countably infinite" and say they have cardinality formula_2. Georg Cantor showed that not all infinite sets are countably infinite. For example, the real numbers cannot be put into one-to-one correspondence with the natural numbers (non-negative integers). The set of real numbers has a greater cardinality than the set of natural numbers and is said to be uncountable. Formal overview. By definition, a set formula_0 is "countable" if there exists a bijection between formula_0 and a subset of the natural numbers formula_16. For example, define the correspondence formula_17 Since every element of formula_18 is paired with "precisely one" element of formula_19, "and" vice versa, this defines a bijection, and shows that formula_0 is countable. Similarly we can show all finite sets are countable. As for the case of infinite sets, a set formula_0 is countably infinite if there is a bijection between formula_0 and all of formula_3. As examples, consider the sets formula_20, the set of positive integers, and formula_21, the set of even integers. We can show these sets are countably infinite by exhibiting a bijection to the natural numbers. This can be achieved using the assignments formula_22 and formula_23, so that formula_24 Every countably infinite set is countable, and every infinite countable set is countably infinite. Furthermore, any subset of the natural numbers is countable, and more generally: <templatestyles src="Math_theorem/styles.css" /> Theorem — A subset of a countable set is countable. The set of all ordered pairs of natural numbers (the Cartesian product of two sets of natural numbers, formula_25 is countably infinite, as can be seen by following a path like the one in the picture: The resulting mapping proceeds as follows: formula_26 This mapping covers all such ordered pairs. This form of triangular mapping recursively generalizes to formula_11-tuples of natural numbers, i.e., formula_27 where formula_6 and formula_11 are natural numbers, by repeatedly mapping the first two elements of an formula_11-tuple to a natural number. For example, formula_28 can be written as formula_29. Then formula_30 maps to 5 so formula_29 maps to formula_31, then formula_31 maps to 39. 
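As a concrete illustration of the triangular mapping just described, the following short sketch (Python is used here only for illustration and is not part of the article; the function names pair_index and tuple_index are invented for the example) computes the position of a pair in the diagonal enumeration and reproduces the values quoted in the text: (0, 2) maps to 5 and (5, 3) maps to 39.

```python
def pair_index(x: int, y: int) -> int:
    """Index of (x, y) in the diagonal (triangular) enumeration of N x N.

    Pairs are listed diagonal by diagonal, each diagonal starting at (s, 0)
    and ending at (0, s), matching 0 <-> (0,0), 1 <-> (1,0), 2 <-> (0,1), ...
    """
    s = x + y                       # which diagonal the pair lies on
    return s * (s + 1) // 2 + y     # pairs on earlier diagonals, plus offset within this one


def tuple_index(t: tuple) -> int:
    """Recursively map an n-tuple to a natural number by repeatedly
    pairing the first two coordinates, as described in the text."""
    if len(t) == 1:
        return t[0]
    return tuple_index((pair_index(t[0], t[1]),) + t[2:])


assert pair_index(0, 2) == 5          # (0, 2) -> 5, as in the article
assert pair_index(5, 3) == 39         # (5, 3) -> 39
assert tuple_index((0, 2, 3)) == 39   # (0, 2, 3) -> ((0, 2), 3) -> (5, 3) -> 39
```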
Since a different 2-tuple, that is a pair such as formula_32, maps to a different natural number, a difference between two n-tuples by a single element is enough to ensure the n-tuples being mapped to different natural numbers. So, an injection from the set of formula_11-tuples to the set of natural numbers formula_3 is proved. For the set of formula_11-tuples made by the Cartesian product of finitely many different sets, each element in each tuple has the correspondence to a natural number, so every tuple can be written in natural numbers then the same logic is applied to prove the theorem. <templatestyles src="Math_theorem/styles.css" /> Theorem —  The Cartesian product of finitely many countable sets is countable. The set of all integers formula_35 and the set of all rational numbers formula_36 may intuitively seem much bigger than formula_3. But looks can be deceiving. If a pair is treated as the numerator and denominator of a vulgar fraction (a fraction in the form of formula_37 where formula_38 and formula_39 are integers), then for every positive fraction, we can come up with a distinct natural number corresponding to it. This representation also includes the natural numbers, since every natural number formula_11 is also a fraction formula_40. So we can conclude that there are exactly as many positive rational numbers as there are positive integers. This is also true for all rational numbers, as can be seen below. <templatestyles src="Math_theorem/styles.css" /> Theorem — formula_35 (the set of all integers) and formula_36 (the set of all rational numbers) are countable. In a similar manner, the set of algebraic numbers is countable. Sometimes more than one mapping is useful: a set formula_33 to be shown as countable is one-to-one mapped (injection) to another set formula_34, then formula_33 is proved as countable if formula_34 is one-to-one mapped to the set of natural numbers. For example, the set of positive rational numbers can easily be one-to-one mapped to the set of natural number pairs (2-tuples) because formula_41 maps to formula_42. Since the set of natural number pairs is one-to-one mapped (actually one-to-one correspondence or bijection) to the set of natural numbers as shown above, the positive rational number set is proved as countable. <templatestyles src="Math_theorem/styles.css" /> Theorem —  Any finite union of countable sets is countable. With the foresight of knowing that there are uncountable sets, we can wonder whether or not this last result can be pushed any further. The answer is "yes" and "no", we can extend it, but we need to assume a new axiom to do so. <templatestyles src="Math_theorem/styles.css" /> Theorem —  (Assuming the axiom of countable choice) The union of countably many countable sets is countable. For example, given countable sets formula_43, we first assign each element of each set a tuple, then we assign each tuple an index using a variant of the triangular enumeration we saw above: formula_44 We need the axiom of countable choice to index "all" the sets formula_43 simultaneously. <templatestyles src="Math_theorem/styles.css" /> Theorem —  The set of all finite-length sequences of natural numbers is countable. This set is the union of the length-1 sequences, the length-2 sequences, the length-3 sequences, each of which is a countable set (finite Cartesian product). So we are talking about a countable union of countable sets, which is countable by the previous theorem. 
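The interleaving used in the countable-union theorem can also be sketched in code. The snippet below (an illustrative sketch, not part of the article) presupposes that each set already comes with an enumeration, which is exactly what the axiom of countable choice is needed for; it then lists elements in the same diagonal order as the index/tuple/element table above.

```python
from itertools import count, islice

def union_enumeration(element):
    """Enumerate the union of countably many enumerated sets in the diagonal
    order of the table above; element(i, j) is the j-th element of the i-th set."""
    for s in count():                  # s = (set index) + (position within the set)
        for i in range(s + 1):
            yield (i, s - i), element(i, s - i)

labels = "abcdefghij"                  # stand-ins for the sets a, b, c, ... of the example
element = lambda i, j: f"{labels[i]}_{j}"

for (i, j), x in islice(union_enumeration(element), 11):
    print((i, j), x)   # (0, 0) a_0, (0, 1) a_1, (1, 0) b_0, (0, 2) a_2, (1, 1) b_1, (2, 0) c_0, ...
```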
<templatestyles src="Math_theorem/styles.css" /> Theorem —  The set of all finite subsets of the natural numbers is countable. The elements of any finite subset can be ordered into a finite sequence. There are only countably many finite sequences, so also there are only countably many finite subsets. <templatestyles src="Math_theorem/styles.css" /> Theorem — Let formula_0 and formula_45 be sets. These follow from the definitions of countable set as injective / surjective functions. Cantor's theorem asserts that if formula_33 is a set and formula_48 is its power set, i.e. the set of all subsets of formula_33, then there is no surjective function from formula_33 to formula_48. A proof is given in the article Cantor's theorem. As an immediate consequence of this and the Basic Theorem above we have: <templatestyles src="Math_theorem/styles.css" /> Proposition —  The set formula_49 is not countable; i.e. it is uncountable. For an elaboration of this result see Cantor's diagonal argument. The set of real numbers is uncountable, and so is the set of all infinite sequences of natural numbers. Minimal model of set theory is countable. If there is a set that is a standard model (see inner model) of ZFC set theory, then there is a minimal standard model (see Constructible universe). The Löwenheim–Skolem theorem can be used to show that this minimal model is countable. The fact that the notion of "uncountability" makes sense even in this model, and in particular that this model "M" contains elements that are: was seen as paradoxical in the early days of set theory; see Skolem's paradox for more. The minimal standard model includes all the algebraic numbers and all effectively computable transcendental numbers, as well as many other kinds of numbers. Total orders. Countable sets can be totally ordered in various ways, for example: In both examples of well orders here, any subset has a "least element"; and in both examples of non-well orders, "some" subsets do not have a "least element". This is the key definition that determines whether a total order is also a well order. Notes. <templatestyles src="Reflist/styles.css" /> Citations. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "|S|" }, { "math_id": 2, "text": "\\aleph_0" }, { "math_id": 3, "text": "\\N" }, { "math_id": 4, "text": "|S|<\\aleph_0" }, { "math_id": 5, "text": "a_0, a_1, a_2, \\ldots" }, { "math_id": 6, "text": "a_i" }, { "math_id": 7, "text": "a_j" }, { "math_id": 8, "text": "i\\neq j" }, { "math_id": 9, "text": "\\{3, 4, 5\\}" }, { "math_id": 10, "text": "\\{1, 2, 3, \\dots, 100\\}" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "n=10^{1000}" }, { "math_id": 13, "text": "\\{0, 1, 2, 3, 4, 5,\\dots\\}" }, { "math_id": 14, "text": "\\ldots \\, -\\! 2\\! \\rightarrow \\! - \\! 4, \\, -\\! 1\\! \\rightarrow \\! - \\! 2, \\, 0\\! \\rightarrow \\! 0, \\, 1\\! \\rightarrow \\! 2, \\, 2\\! \\rightarrow \\! 4 \\, \\cdots" }, { "math_id": 15, "text": "n \\rightarrow 2n" }, { "math_id": 16, "text": "\\N=\\{0,1,2,\\dots\\}" }, { "math_id": 17, "text": "\na \\leftrightarrow 1,\\ b \\leftrightarrow 2,\\ c \\leftrightarrow 3\n" }, { "math_id": 18, "text": "S=\\{a,b,c\\}" }, { "math_id": 19, "text": "\\{1,2,3\\}" }, { "math_id": 20, "text": "A=\\{1,2,3,\\dots\\}" }, { "math_id": 21, "text": "B=\\{0,2,4,6,\\dots\\}" }, { "math_id": 22, "text": "n \\leftrightarrow n+1" }, { "math_id": 23, "text": "n \\leftrightarrow 2n" }, { "math_id": 24, "text": "\\begin{matrix}\n0 \\leftrightarrow 1, & 1 \\leftrightarrow 2, & 2 \\leftrightarrow 3, & 3 \\leftrightarrow 4, & 4 \\leftrightarrow 5, & \\ldots \\\\[6pt]\n0 \\leftrightarrow 0, & 1 \\leftrightarrow 2, & 2 \\leftrightarrow 4, & 3 \\leftrightarrow 6, & 4 \\leftrightarrow 8, & \\ldots\n\\end{matrix}" }, { "math_id": 25, "text": "\\N\\times\\N" }, { "math_id": 26, "text": "\n0 \\leftrightarrow (0, 0), 1 \\leftrightarrow (1, 0), 2 \\leftrightarrow (0, 1), 3 \\leftrightarrow (2, 0), 4 \\leftrightarrow (1, 1), 5 \\leftrightarrow (0, 2), 6 \\leftrightarrow (3, 0), \\ldots\n" }, { "math_id": 27, "text": "(a_1,a_2,a_3,\\dots,a_n)" }, { "math_id": 28, "text": "(0, 2, 3)" }, { "math_id": 29, "text": "((0, 2), 3)" }, { "math_id": 30, "text": "(0, 2)" }, { "math_id": 31, "text": "(5, 3)" }, { "math_id": 32, "text": "(a,b)" }, { "math_id": 33, "text": "A" }, { "math_id": 34, "text": "B" }, { "math_id": 35, "text": "\\Z" }, { "math_id": 36, "text": "\\Q" }, { "math_id": 37, "text": "a/b" }, { "math_id": 38, "text": "a" }, { "math_id": 39, "text": "b\\neq 0" }, { "math_id": 40, "text": "n/1" }, { "math_id": 41, "text": "p/q" }, { "math_id": 42, "text": "(p,q)" }, { "math_id": 43, "text": "\\textbf{a},\\textbf{b},\\textbf{c},\\dots" }, { "math_id": 44, "text": "\n\\begin{array}{ c|c|c } \n\\text{Index} & \\text{Tuple} & \\text {Element} \\\\ \\hline\n0 & (0,0) & \\textbf{a}_0 \\\\\n1 & (0,1) & \\textbf{a}_1 \\\\\n2 & (1,0) & \\textbf{b}_0 \\\\\n3 & (0,2) & \\textbf{a}_2 \\\\\n4 & (1,1) & \\textbf{b}_1 \\\\\n5 & (2,0) & \\textbf{c}_0 \\\\\n6 & (0,3) & \\textbf{a}_3 \\\\\n7 & (1,2) & \\textbf{b}_2 \\\\\n8 & (2,1) & \\textbf{c}_1 \\\\\n9 & (3,0) & \\textbf{d}_0 \\\\\n10 & (0,4) & \\textbf{a}_4 \\\\\n\\vdots & &\n\\end{array}\n" }, { "math_id": 45, "text": "T" }, { "math_id": 46, "text": "f:S\\to T" }, { "math_id": 47, "text": "g:S\\to T" }, { "math_id": 48, "text": "\\mathcal{P}(A)" }, { "math_id": 49, "text": "\\mathcal{P}(\\N)" } ]
https://en.wikipedia.org/wiki?curid=6026
6026106
Homogeneous (large cardinal property)
In set theory and in the context of a large cardinal property, a subset "S" of "D" is homogeneous for a function formula_0 if "f" is constant on the size-formula_1 subsets of "S" (p. 72). More precisely, given a set "D", let formula_2 be the set of all size-formula_1 subsets of formula_3 and let formula_4 be a function defined on this set. Then formula_5 is homogeneous for formula_4 if formula_6 (p. 72, p. 1). Ramsey's theorem can be stated as: for all functions formula_7, there is an infinite set formula_8 which is homogeneous for formula_9 (p. 1). Partitions of finite subsets. Given a set "D", let formula_10 be the set of all finite subsets of formula_3 and let formula_11 be a function defined on this set. Under these conditions, "S" is homogeneous for "f" if, for every natural number "n", "f" is constant on the set formula_12. That is, "f" is constant on the unordered "n"-tuples of elements of "S". 
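As a toy illustration (not part of the article, and finite rather than large-cardinal in scale), the following sketch checks homogeneity in the simplest setting: whether a function f defined on the size-2 subsets of D is constant on the size-2 subsets of a candidate set S. The colouring by parity of the pair sum and the helper name is_homogeneous are illustrative assumptions.

```python
from itertools import combinations

def is_homogeneous(S, f, n=2):
    """Return True if f is constant on the size-n subsets of S."""
    values = {f(frozenset(c)) for c in combinations(sorted(S), n)}
    return len(values) <= 1

# Toy colouring of pairs from D = {0, ..., 9}: colour a pair by the parity of its sum.
D = range(10)
f = lambda pair: sum(pair) % 2

evens = {0, 2, 4, 6, 8}
mixed = {0, 1, 2}
assert is_homogeneous(evens, f)       # every pair of evens has an even sum: constant colour
assert not is_homogeneous(mixed, f)   # {0, 1} has an odd sum but {0, 2} an even one
```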
[ { "math_id": 0, "text": "f:[D]^n\\to\\lambda" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\mathcal{P}_n(D)" }, { "math_id": 3, "text": "D" }, { "math_id": 4, "text": "f: \\mathcal{P}_n(D) \\to B" }, { "math_id": 5, "text": "S" }, { "math_id": 6, "text": "\\vert f''([S]^n)\\vert=1" }, { "math_id": 7, "text": "f:\\mathbb N^m\\to n" }, { "math_id": 8, "text": "H\\subseteq\\mathbb N" }, { "math_id": 9, "text": "f" }, { "math_id": 10, "text": "\\mathcal{P}_{<\\omega}(D)" }, { "math_id": 11, "text": "f: \\mathcal{P}_{<\\omega}(D) \\to B" }, { "math_id": 12, "text": "\\mathcal{P}_{n}(S)" } ]
https://en.wikipedia.org/wiki?curid=6026106
6026198
Monty Hall problem
Probability puzzle The Monty Hall problem is a brain teaser, in the form of a probability puzzle, based nominally on the American television game show "Let's Make a Deal" and named after its original host, Monty Hall. The problem was originally posed (and solved) in a letter by Steve Selvin to the "American Statistician" in 1975. It became famous as a question from reader Craig F. Whitaker's letter quoted in Marilyn vos Savant's "Ask Marilyn" column in "Parade" magazine in 1990: Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice? Savant's response was that the contestant should switch to the other door. Under the standard assumptions, the switching strategy has a 2/3 probability of winning the car, while the strategy of keeping the initial choice has only a 1/3 probability. When the player first makes their choice, there is a 2/3 chance that the car is behind one of the doors not chosen. This probability does not change after the host reveals a goat behind one of the unchosen doors. When the host provides information about the two unchosen doors (revealing that one of them does not have the car behind it), the 2/3 chance of the car being behind one of the unchosen doors rests entirely on the unchosen and unrevealed door, as opposed to the 1/3 chance of the car being behind the door the contestant chose initially. The given probabilities depend on specific assumptions about how the host and contestant choose their doors. An important insight is that, under these standard conditions, there is more information about doors 2 and 3 than was available at the beginning of the game when door 1 was chosen by the player: the host's action adds value to the door not eliminated, but not to the one chosen by the contestant originally. Another insight is that switching doors is a different action from choosing between the two remaining doors at random, as the former action uses the previous information and the latter does not. Other possible behaviors of the host than the one described can reveal different additional information, or none at all, and yield different probabilities. Many readers of Savant's column refused to believe switching is beneficial and rejected her explanation. After the problem appeared in "Parade", approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine, most of them calling Savant wrong. Even when given explanations, simulations, and formal mathematical proofs, many people still did not accept that switching is the best strategy. Paul Erdős, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating Savant's predicted result. The problem is a paradox of the "veridical" type, because the solution is so counterintuitive it can seem absurd but is nevertheless demonstrably true. The Monty Hall problem is closely related mathematically to the earlier three prisoners problem and to the much older Bertrand's box paradox. Paradox. Steve Selvin wrote a letter to the "American Statistician" in 1975, describing a problem based on the game show "Let's Make a Deal", dubbing it the "Monty Hall problem" in a subsequent letter. 
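The 2/3-versus-1/3 figures stated above are easy to check numerically. The following short simulation (an illustrative sketch, not part of the article; the door numbering 0–2 and the function name play are invented for the example) plays the game under the standard assumptions: the car is placed uniformly at random, and the host always opens a different door hiding a goat, choosing at random when two such doors are available.

```python
import random

def play(switch: bool, doors: int = 3) -> bool:
    car = random.randrange(doors)
    pick = random.randrange(doors)
    # Host opens a door that is neither the player's pick nor the car.
    host = random.choice([d for d in range(doors) if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in range(doors) if d != pick and d != host)
    return pick == car

trials = 100_000
wins_switch = sum(play(True) for _ in range(trials)) / trials
wins_stay = sum(play(False) for _ in range(trials)) / trials
print(f"switch: {wins_switch:.3f}  stay: {wins_stay:.3f}")  # roughly 0.667 vs 0.333
```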
The problem is equivalent mathematically to the Three Prisoners problem described in Martin Gardner's "Mathematical Games" column in "Scientific American" in 1959 and the Three Shells Problem described in Gardner's book "Aha Gotcha". Standard assumptions. Under the standard assumptions, the probability of winning the car after switching is 2/3. This solution is due to the behavior of the host. The "Parade" version is ambiguous in that it does not explicitly define the protocol of the host. However, Marilyn vos Savant's solution printed alongside Whitaker's question implies, and both Selvin and Savant explicitly define, the role of the host as follows: When any of these assumptions is varied, it can change the probability of winning by switching doors, as detailed in the section below. It is also typically presumed that the car is initially hidden randomly behind the doors and that, if the player initially chooses the car, then the host's choice of which goat-hiding door to open is random. Some authors, independently or inclusively, assume that the player's initial choice is random as well. Simple solutions. The solution presented by Savant in "Parade" shows the three possible arrangements of one car and two goats behind three doors and the result of staying or switching after initially picking door 1 in each case: A player who stays with the initial choice wins in only one out of three of these equally likely possibilities, while a player who switches wins in two out of three. An intuitive explanation is that, if the contestant initially picks a goat (2 of 3 doors), the contestant "will" win the car by switching because the other goat can no longer be picked – the host had to reveal its location – whereas if the contestant initially picks the car (1 of 3 doors), the contestant "will not" win the car by switching. Using the switching strategy, winning or losing thus depends only on whether the contestant has initially chosen a goat (2/3 probability) or the car (1/3 probability). The fact that the host subsequently reveals a goat behind one of the unchosen doors changes nothing about the initial probability. Most people conclude that switching does not matter, because there would be a 50% chance of finding the car behind either of the two unopened doors. This would be true if the host selected a door to open at random, but this is not the case. The door the host opens depends on the player's initial choice, so the assumption of independence does not hold. Before the host opens a door, there is a 1/3 probability that the car is behind each door. If the car is behind door 1, the host can open either door 2 or door 3, so the probability that the car is behind door 1 "and" the host opens door 3 is 1/3 × 1/2 = 1/6. If the car is behind door 2 – with the player having picked door 1 – the host "must" open door 3, so the probability that the car is behind door 2 "and" the host opens door 3 is 1/3 × 1 = 1/3. These are the only cases where the host opens door 3, so if the player has picked door 1 and the host opens door 3, the car is twice as likely to be behind door 2 as door 1. The key is that if the car is behind door 2 the host "must" open door 3, but if the car is behind door 1 the host can open either door. Another way to understand the solution is to consider together the two doors initially unchosen by the player. As Cecil Adams puts it, "Monty is saying in effect: you can keep your one door or you can have the other two doors". 
The chance of finding the car has not been changed by the opening of one of these doors because Monty, knowing the location of the car, is certain to reveal a goat. The player's choice after the host opens a door is no different than if the host offered the player the option to switch from the original chosen door to the set of "both" remaining doors. The switch in this case clearly gives the player a probability of choosing the car. As Keith Devlin says, "By opening his door, Monty is saying to the contestant 'There are two doors you did not choose, and the probability that the prize is behind one of them is . I'll help you by using my knowledge of where the prize is to open one of those two doors to show you that it does not hide the prize. You can now take advantage of this additional information. Your choice of door A has a chance of 1 in 3 of being the winner. I have not changed that. But by eliminating door C, I have shown you that the probability that door B hides the prize is 2 in 3.'" Savant suggests that the solution will be more intuitive with 1,000,000 doors rather than 3. In this case, there are 999,999 doors with goats behind them and one door with a prize. After the player picks a door, the host opens 999,998 of the remaining doors. On average, in 999,999 times out of 1,000,000, the remaining door will contain the prize. Intuitively, the player should ask how likely it is that, given a million doors, they managed to pick the right one initially. Stibel et al. proposed that working memory demand is taxed during the Monty Hall problem and that this forces people to "collapse" their choices into two equally probable options. They report that when the number of options is increased to more than 7 people tend to switch more often; however, most contestants still incorrectly judge the probability of success to be 50%. Savant and the media furor. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; You blew it, and you blew it big! Since you seem to have difficulty grasping the basic principle at work here, I'll explain. After the host reveals a goat, you now have a one-in-two chance of being correct. Whether you change your selection or not, the odds are the same. There is enough mathematical illiteracy in this country, and we don't need the world's highest IQ propagating more. Shame! Scott Smith, University of Florida Savant wrote in her first column on the Monty Hall problem that the player should switch. She received thousands of letters from her readers – the vast majority of which, including many from readers with PhDs, disagreed with her answer. During 1990–1991, three more of her columns in "Parade" were devoted to the paradox. Numerous examples of letters from readers of Savant's columns are presented and discussed in "The Monty Hall Dilemma: A Cognitive Illusion Par Excellence". The discussion was replayed in other venues (e.g., in Cecil Adams' "The Straight Dope" newspaper column) and reported in major newspapers such as "The New York Times". In an attempt to clarify her answer, she proposed a shell game to illustrate: "You look away, and I put a pea under one of three shells. Then I ask you to put your finger on a shell. The odds that your choice contains a pea are , agreed? Then I simply lift up an empty shell from the remaining other two. As I can (and will) do this regardless of what you've chosen, we've learned nothing to allow us to revise the odds on the shell under your finger." She also proposed a similar simulation with three playing cards. 
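Returning to the many-door intuition described above, the variant in which the host opens every losing door but one can also be simulated. The sketch below (not part of the article; with 1,000,000 doors the loop is slow, so a smaller number of doors makes the same point) shows that switching wins about (N−1)/N of the time.

```python
import random

def play_n_doors(n: int, switch: bool) -> bool:
    car = random.randrange(n)
    pick = random.randrange(n)
    # The host opens every door except the player's pick and one other door,
    # keeping the car closed whenever the player has not picked it.
    kept = car if car != pick else random.choice([d for d in range(n) if d != pick])
    return (kept if switch else pick) == car

n, trials = 1000, 20_000
rate = sum(play_n_doors(n, switch=True) for _ in range(trials)) / trials
print(f"switching with {n} doors wins about {rate:.3f}")  # close to (n - 1) / n
```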
Savant commented that, though some confusion was caused by "some" readers' not realizing they were supposed to assume that the host must always reveal a goat, almost all her numerous correspondents had correctly understood the problem assumptions, and were still initially convinced that Savant's answer ("switch") was wrong. Confusion and criticism. Sources of confusion. When first presented with the Monty Hall problem, an overwhelming majority of people assume that each door has an equal probability and conclude that switching does not matter. Out of 228 subjects in one study, only 13% chose to switch. In his book "The Power of Logical Thinking", cognitive psychologist Massimo Piattelli Palmarini writes: "No other statistical puzzle comes so close to fooling all the people all the time [and] even Nobel physicists systematically give the wrong answer, and that they "insist" on it, and they are ready to berate in print those who propose the right answer". Pigeons repeatedly exposed to the problem show that they rapidly learn to always switch, unlike humans. Most statements of the problem, notably the one in "Parade", do not match the rules of the actual game show and do not fully specify the host's behavior or that the car's location is randomly selected. However, Krauss and Wang argue that people make the standard assumptions even if they are not explicitly stated. Although these issues are mathematically significant, even when controlling for these factors, nearly all people still think each of the two unopened doors has an equal probability and conclude that switching does not matter. This "equal probability" assumption is a deeply rooted intuition. People strongly tend to think probability is evenly distributed across as many unknowns as are present, whether it is or not. The problem continues to attract the attention of cognitive psychologists. The typical behavior of the majority, i.e., not switching, may be explained by phenomena known in the psychological literature as: Experimental evidence confirms that these are plausible explanations that do not depend on probability intuition. Another possibility is that people's intuition simply does not deal with the textbook version of the problem, but with a real game show setting. There, the possibility exists that the show master plays deceitfully by opening other doors only if a door with the car was initially chosen. A show master playing deceitfully half of the times modifies the winning chances in case one is offered to switch to "equal probability". Criticism of the simple solutions. As already remarked, most sources in the topic of probability, including many introductory probability textbooks, solve the problem by showing the conditional probabilities that the car is behind door 1 and door 2 are and (not and ) given that the contestant initially picks door 1 and the host opens door 3; various ways to derive and understand this result were given in the previous subsections. Among these sources are several that explicitly criticize the popularly presented "simple" solutions, saying these solutions are "correct but ... shaky", or do not "address the problem posed", or are "incomplete", or are "unconvincing and misleading", or are (most bluntly) "false". Sasha Volokh (2015) wrote that "any explanation that says something like 'the probability of door 1 was , and nothing can change that...' is automatically fishy: probabilities are expressions of our ignorance about the world, and new information can change the extent of our ignorance." 
Some say that these solutions answer a slightly different question – one phrasing is "you have to announce "before a door has been opened" whether you plan to switch". The simple solutions show in various ways that a contestant who is determined to switch will win the car with probability , and hence that switching is the winning strategy, if the player has to choose in advance between "always switching", and "always staying". However, the probability of winning by "always" switching is a logically distinct concept from the probability of winning by switching "given that the player has picked door 1 and the host has opened door 3". As one source says, "the distinction between [these questions] seems to confound many". The fact that these are different can be shown by varying the problem so that these two probabilities have different numeric values. For example, assume the contestant knows that Monty does not open the second door randomly among all legal alternatives but instead, when given an opportunity to choose between two losing doors, Monty will open the one on the right. In this situation, the following two questions have different answers: The answer to the first question is , as is shown correctly by the "simple" solutions. But the answer to the second question is now different: the conditional probability the car is behind door 1 or door 2 given the host has opened door 3 (the door on the right) is . This is because Monty's preference for rightmost doors means that he opens door 3 if the car is behind door 1 (which it is originally with probability ) or if the car is behind door 2 (also originally with probability ). For this variation, the two questions yield different answers. This is partially because the assumed condition of the second question (that the host opens door 3) would only occur in this variant with probability . However, as long as the initial probability the car is behind each door is , it is never to the contestant's disadvantage to switch, as the conditional probability of winning by switching is always at least . In Morgan "et al.", four university professors published an article in "The American Statistician" claiming that Savant gave the correct advice but the wrong argument. They believed the question asked for the chance of the car behind door 2 "given" the player's initial choice of door 1 and the game host opening door 3, and they showed this chance was anything between and 1 depending on the host's decision process given the choice. Only when the decision is completely randomized is the chance . In an invited comment and in subsequent letters to the editor, Morgan "et al" were supported by some writers, criticized by others; in each case a response by Morgan "et al" is published alongside the letter or comment in "The American Statistician". In particular, Savant defended herself vigorously. Morgan "et al" complained in their response to Savant that Savant still had not actually responded to their own main point. Later in their response to Hogbin and Nijdam, they did agree that it was natural to suppose that the host chooses a door to open completely at random when he does have a choice, and hence that the conditional probability of winning by switching (i.e., conditional given the situation the player is in when he has to make his choice) has the same value, , as the unconditional probability of winning by switching (i.e., averaged over all possible situations). 
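To make the distinction concrete, here is a small exact calculation (an illustrative sketch, not part of the article; the policy names "random" and "rightmost" and the helper functions are invented for the example) using Python's fractions module. Conditioning on "the player picked door 1 and the host opened door 3", a host who chooses at random between two goat doors gives a conditional probability of winning by switching of 2/3, whereas a host who always opens the rightmost available goat door gives 1/2; the unconditional probability of winning by always switching is 2/3 in both cases.

```python
from fractions import Fraction

def host_open_probs(car, policy):
    """P(host opens door d | car position), given that the player has picked door 1."""
    choices = [d for d in (2, 3) if d != car]          # goat doors the host may open
    if policy == "random" or len(choices) == 1:
        return {d: Fraction(1, len(choices)) for d in choices}
    if policy == "rightmost":                          # always prefer door 3 over door 2
        return {max(choices): Fraction(1)}

def conditional_switch_win(policy, opened=3):
    joint = {car: Fraction(1, 3) * host_open_probs(car, policy).get(opened, Fraction(0))
             for car in (1, 2, 3)}
    total = sum(joint.values())
    # Switching from door 1 wins when the car is behind the other unopened door.
    other = 5 - opened                                 # door 2 if the host opened door 3, and vice versa
    return joint[other] / total

print(conditional_switch_win("random"))     # 2/3
print(conditional_switch_win("rightmost"))  # 1/2
```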
This equality was already emphasized by Bell (1992), who suggested that Morgan "et al"'s mathematically-involved solution would appeal only to statisticians, whereas the equivalence of the conditional and unconditional solutions in the case of symmetry was intuitively obvious. There is disagreement in the literature regarding whether Savant's formulation of the problem, as presented in "Parade", is asking the first or second question, and whether this difference is significant. Behrends concludes that "One must consider the matter with care to see that both analyses are correct", which is not to say that they are the same. Several critics of the paper by Morgan "et al.", whose contributions were published along with the original paper, criticized the authors for altering Savant's wording and misinterpreting her intention. One discussant (William Bell) considered it a matter of taste whether one explicitly mentions that (by the standard conditions) "which" door is opened by the host is independent of whether one should want to switch. Among the simple solutions, the "combined doors solution" comes closest to a conditional solution, as we saw in the discussion of methods using the concept of odds and Bayes' theorem. It is based on the deeply rooted intuition that "revealing information that is already known does not affect probabilities". But, knowing that the host can open one of the two unchosen doors to show a goat does not mean that opening a specific door would not affect the probability that the car is behind the door chosen initially. The point is, though we know in advance that the host will open a door and reveal a goat, we do not know "which" door he will open. If the host chooses uniformly at random between doors hiding a goat (as is the case in the standard interpretation), this probability indeed remains unchanged, but if the host can choose non-randomly between such doors, then the specific door that the host opens reveals additional information. The host can always open a door revealing a goat "and" (in the standard interpretation of the problem) the probability that the car is behind the initially chosen door does not change, but it is "not because" of the former that the latter is true. Solutions based on the assertion that the host's actions cannot affect the probability that the car is behind the initially chosen appear persuasive, but the assertion is simply untrue unless both of the host's two choices are equally likely, if he has a choice. The assertion therefore needs to be justified; without justification being given, the solution is at best incomplete. It can be the case that the answer is correct but the reasoning used to justify it is defective. Solutions using conditional probability and other solutions. The simple solutions above show that a player with a strategy of switching wins the car with overall probability , i.e., without taking account of which door was opened by the host. In accordance with this, most sources for the topic of probability calculate the conditional probabilities that the car is behind door 1 and door 2 to be and respectively given the contestant initially picks door 1 and the host opens door 3. The solutions in this section consider just those cases in which the player picked door 1 and the host opened door 3. Refining the simple solution. If we assume that the host opens a door at random, when given a choice, then which door the host opens gives us no information at all as to whether or not the car is behind door 1. 
In the simple solutions, we have already observed that the probability that the car is behind door 1, the door initially chosen by the player, is initially . Moreover, the host is certainly going to open "a" (different) door, so opening "a" door ("which" door is unspecified) does not change this. must be the average of: the probability that the car is behind door 1, given that the host picked door 2, and the probability of car behind door 1, given the host picked door 3: this is because these are the only two possibilities. But, these two probabilities are the same. Therefore, they are both equal to . This shows that the chance that the car is behind door 1, given that the player initially chose this door and given that the host opened door 3, is , and it follows that the chance that the car is behind door 2, given that the player initially chose door 1 and the host opened door 3, is . The analysis also shows that the overall success rate of , achieved by "always switching", cannot be improved, and underlines what already may well have been intuitively obvious: the choice facing the player is that between the door initially chosen, and the other door left closed by the host, the specific numbers on these doors are irrelevant. Conditional probability by direct calculation. By definition, the conditional probability of winning by switching given the contestant initially picks door 1 and the host opens door 3 is the probability for the event "car is behind door 2 and host opens door 3" divided by the probability for "host opens door 3". These probabilities can be determined referring to the conditional probability table below, or to an equivalent decision tree. The conditional probability of winning by switching is , which is . The conditional probability table below shows how 300 cases, in all of which the player initially chooses door 1, would be split up, on average, according to the location of the car and the choice of door to open by the host. Bayes' theorem. Many probability text books and articles in the field of probability theory derive the conditional probability solution through a formal application of Bayes' theorem; among them books by Gill and Henze. Use of the odds form of Bayes' theorem, often called Bayes' rule, makes such a derivation more transparent. Initially, the car is equally likely to be behind any of the three doors: the odds on door 1, door 2, and door 3 are 1 : 1 : 1. This remains the case after the player has chosen door 1, by independence. According to Bayes' rule, the posterior odds on the location of the car, given that the host opens door 3, are equal to the prior odds multiplied by the Bayes factor or likelihood, which is, by definition, the probability of the new piece of information (host opens door 3) under each of the hypotheses considered (location of the car). Now, since the player initially chose door 1, the chance that the host opens door 3 is 50% if the car is behind door 1, 100% if the car is behind door 2, 0% if the car is behind door 3. Thus the Bayes factor consists of the ratios : 1 : 0 or equivalently 1 : 2 : 0, while the prior odds were 1 : 1 : 1. Thus, the posterior odds become equal to the Bayes factor 1 : 2 : 0. Given that the host opened door 3, the probability that the car is behind door 3 is zero, and it is twice as likely to be behind door 2 than door 1. Richard Gill analyzes the likelihood for the host to open door 3 as follows. Given that the car is "not" behind door 1, it is equally likely that it is behind door 2 or 3. 
Therefore, the chance that the host opens door 3 is 50%. Given that the car "is" behind door 1, the chance that the host opens door 3 is also 50%, because, when the host has a choice, either choice is equally likely. Therefore, whether or not the car is behind door 1, the chance that the host opens door 3 is 50%. The information "host opens door 3" contributes a Bayes factor or likelihood ratio of 1 : 1, on whether or not the car is behind door 1. Initially, the odds against door 1 hiding the car were 2 : 1. Therefore, the posterior odds against door 1 hiding the car remain the same as the prior odds, 2 : 1. In words, the information "which" door is opened by the host (door 2 or door 3?) reveals no information at all about whether or not the car is behind door 1, and this is precisely what is alleged to be intuitively obvious by supporters of simple solutions, or using the idioms of mathematical proofs, "obviously true, by symmetry". Strategic dominance solution. Going back to Nalebuff, the Monty Hall problem is also much studied in the literature on game theory and decision theory, and also some popular solutions correspond to this point of view. Savant asks for a decision, not a chance. And the chance aspects of how the car is hidden and how an unchosen door is opened are unknown. From this point of view, one has to remember that the player has two opportunities to make choices: first of all, which door to choose initially; and secondly, whether or not to switch. Since he does not know how the car is hidden nor how the host makes choices, he may be able to make use of his first choice opportunity, as it were to neutralize the actions of the team running the quiz show, including the host. Following Gill, a "strategy" of contestant involves two actions: the initial choice of a door and the decision to switch (or to stick) which may depend on both the door initially chosen and the door to which the host offers switching. For instance, one contestant's strategy is "choose door 1, then switch to door 2 when offered, and do not switch to door 3 when offered". Twelve such deterministic strategies of the contestant exist. Elementary comparison of contestant's strategies shows that, for every strategy A, there is another strategy B "pick a door then switch no matter what happens" that dominates it. No matter how the car is hidden and no matter which rule the host uses when he has a choice between two goats, if A wins the car then B also does. For example, strategy A "pick door 1 then always stick with it" is dominated by the strategy B "pick door 2 then always switch after the host reveals a door": A wins when door 1 conceals the car, while B wins when either of the doors 1 or 3 conceals the car. Similarly, strategy A "pick door 1 then switch to door 2 (if offered), but do not switch to door 3 (if offered)" is dominated by strategy B "pick door 2 then always switch". A wins when door 1 conceals the car and Monty chooses to open door 2 or if door 3 conceals the car. Strategy B wins when either door 1 or door 3 conceals the car, that is, whenever A wins plus the case where door 1 conceals the car and Monty chooses to open door 3. Dominance is a strong reason to seek for a solution among always-switching strategies, under fairly general assumptions on the environment in which the contestant is making decisions. 
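The dominance claim in the example above can be verified mechanically. The sketch below (not part of the article; door labels, strategy descriptions, and helper names are for illustration only) checks that strategy B, "pick door 2, then always switch", wins in every deterministic scenario in which strategy A, "pick door 1, then always stick", wins, for every car position and every host tie-breaking rule.

```python
def host_door(pick, car, tie_break):
    """Door the host opens: not the pick, not the car; tie_break ('low' or 'high')
    decides the choice when both goat doors are available."""
    goats = [d for d in (1, 2, 3) if d != pick and d != car]
    return min(goats) if tie_break == "low" else max(goats)

def outcome(pick, switch, car, tie_break):
    opened = host_door(pick, car, tie_break)
    final = pick if not switch else next(d for d in (1, 2, 3) if d not in (pick, opened))
    return final == car

# Strategy A: "pick door 1, then always stick"; strategy B: "pick door 2, then always switch".
for car in (1, 2, 3):
    for tie_break in ("low", "high"):
        a_wins = outcome(pick=1, switch=False, car=car, tie_break=tie_break)
        b_wins = outcome(pick=2, switch=True, car=car, tie_break=tie_break)
        assert not a_wins or b_wins   # whenever A wins, B wins too
```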
In particular, if the car is hidden by means of some randomization device – like tossing symmetric or asymmetric three-sided die – the dominance implies that a strategy maximizing the probability of winning the car will be among three always-switching strategies, namely it will be the strategy that initially picks the least likely door then switches no matter which door to switch is offered by the host. Strategic dominance links the Monty Hall problem to game theory. In the zero-sum game setting of Gill, discarding the non-switching strategies reduces the game to the following simple variant: the host (or the TV-team) decides on the door to hide the car, and the contestant chooses two doors (i.e., the two doors remaining after the player's first, nominal, choice). The contestant wins (and her opponent loses) if the car is behind one of the two doors she chose. Solutions by simulation. A simple way to demonstrate that a switching strategy really does win two out of three times with the standard assumptions is to simulate the game with playing cards. Three cards from an ordinary deck are used to represent the three doors; one 'special' card represents the door with the car and two other cards represent the goat doors. The simulation can be repeated several times to simulate multiple rounds of the game. The player picks one of the three cards, then, looking at the remaining two cards the 'host' discards a goat card. If the card remaining in the host's hand is the car card, this is recorded as a switching win; if the host is holding a goat card, the round is recorded as a staying win. As this experiment is repeated over several rounds, the observed win rate for each strategy is likely to approximate its theoretical win probability, in line with the law of large numbers. Repeated plays also make it clearer why switching is the better strategy. After the player picks his card, it is "already determined" whether switching will win the round for the player. If this is not convincing, the simulation can be done with the entire deck. In this variant, the car card goes to the host 51 times out of 52, and stays with the host no matter how many "non"-car cards are discarded. Variants. A common variant of the problem, assumed by several academic authors as the canonical problem, does not make the simplifying assumption that the host must uniformly choose the door to open, but instead that he uses some other strategy. The confusion as to which formalization is authoritative has led to considerable acrimony, particularly because this variant makes proofs more involved without altering the optimality of the always-switch strategy for the player. In this variant, the player can have different probabilities of winning depending on the observed choice of the host, but in any case the probability of winning by switching is at least (and can be as high as 1), while the overall probability of winning by switching is still exactly . The variants are sometimes presented in succession in textbooks and articles intended to teach the basics of probability theory and game theory. A considerable number of other generalizations have also been studied. Other host behaviors. The version of the Monty Hall problem published in "Parade" in 1990 did not specifically state that the host would always open another door, or always offer a choice to switch, or even never open the door revealing the car. 
However, Savant made it clear in her second follow-up column that the intended host's behavior could only be what led to the probability she gave as her original answer. "Anything else is a different question." "Virtually all of my critics understood the intended scenario. I personally read nearly three thousand letters (out of the many additional thousands that arrived) and found nearly every one insisting simply that because two options remained (or an equivalent error), the chances were even. Very few raised questions about ambiguity, and the letters actually published in the column were not among those few." The answer follows if the car is placed randomly behind any door, the host must open a door revealing a goat regardless of the player's initial choice and, if two doors are available, chooses which one to open randomly. The table below shows a variety of "other" possible host behaviors and the impact on the success of switching. Determining the player's best strategy within a given set of other rules the host must follow is the type of problem studied in game theory. For example, if the host is not required to make the offer to switch the player may suspect the host is malicious and makes the offers more often if the player has initially selected the car. In general, the answer to this sort of question depends on the specific assumptions made about the host's behavior, and might range from "ignore the host completely" to "toss a coin and switch if it comes up heads"; see the last row of the table below. Morgan "et al" and Gillman both show a more general solution where the car is (uniformly) randomly placed but the host is not constrained to pick uniformly randomly if the player has initially selected the car, which is how they both interpret the statement of the problem in "Parade" despite the author's disclaimers. Both changed the wording of the "Parade" version to emphasize that point when they restated the problem. They consider a scenario where the host chooses between revealing two goats with a preference expressed as a probability "q", having a value between 0 and 1. If the host picks randomly "q" would be and switching wins with probability regardless of which door the host opens. If the player picks door 1 and the host's preference for door 3 is "q", then the probability the host opens door 3 and the car is behind door 2 is , while the probability the host opens door 3 and the car is behind door 1 is . These are the only cases where the host opens door 3, so the conditional probability of winning by switching "given the host opens door 3" is which simplifies to . Since "q" can vary between 0 and 1 this conditional probability can vary between and 1. This means even without constraining the host to pick randomly if the player initially selects the car, the player is never worse off switching. However neither source suggests the player knows what the value of "q" is so the player cannot attribute a probability other than the that Savant assumed was implicit. "N" doors. D. L. Ferguson (1975 in a letter to Selvin) suggests an "N"-door generalization of the original problem in which the host opens "p" losing doors and then offers the player the opportunity to switch; in this variant switching wins with probability formula_0. This probability is always greater than formula_1, therefore switching always brings an advantage. Even if the host opens only a single door (formula_2), the player is better off switching in every case. 
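Ferguson's expression formula_0, equivalently (N−1)/(N(N−p−1)), can be checked by simulation. The sketch below (not part of the article; the variable names and parameter values are illustrative) plays the "N"-door variant in which the host opens "p" losing doors and the player then switches to one of the remaining N − p − 1 doors at random.

```python
import random

def switch_win(n: int, p: int) -> bool:
    car = random.randrange(n)
    pick = random.randrange(n)
    # The host opens p doors hiding neither the car nor the player's pick.
    losers = [d for d in range(n) if d != car and d != pick]
    opened = set(random.sample(losers, p))
    # The player switches to a random door other than the pick and the opened doors.
    options = [d for d in range(n) if d != pick and d not in opened]
    return random.choice(options) == car

n, p, trials = 6, 2, 200_000
rate = sum(switch_win(n, p) for _ in range(trials)) / trials
exact = (n - 1) / (n * (n - p - 1))
print(f"simulated {rate:.4f}  vs  formula {exact:.4f}")
```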
As "N" grows larger, the advantage decreases and approaches zero. At the other extreme, if the host opens all losing doors but one ("p" = "N" − 2) the advantage increases as "N" grows large (the probability of winning by switching is , which approaches 1 as "N" grows very large). Quantum version. A quantum version of the paradox illustrates some points about the relation between classical or non-quantum information and quantum information, as encoded in the states of quantum mechanical systems. The formulation is loosely based on quantum game theory. The three doors are replaced by a quantum system allowing three alternatives; opening a door and looking behind it is translated as making a particular measurement. The rules can be stated in this language, and once again the choice for the player is to stick with the initial choice, or change to another "orthogonal" option. The latter strategy turns out to double the chances, just as in the classical case. However, if the show host has not randomized the position of the prize in a fully quantum mechanical way, the player can do even better, and can sometimes even win the prize with certainty. History. The earliest of several probability puzzles related to the Monty Hall problem is Bertrand's box paradox, posed by Joseph Bertrand in 1889 in his "Calcul des probabilités". In this puzzle, there are three boxes: a box containing two gold coins, a box with two silver coins, and a box with one of each. After choosing a box at random and withdrawing one coin at random that happens to be a gold coin, the question is what is the probability that the other coin is gold. As in the Monty Hall problem, the intuitive answer is , but the probability is actually . The Three Prisoners problem, published in Martin Gardner's "Mathematical Games" column in "Scientific American" in 1959 is equivalent to the Monty Hall problem. This problem involves three condemned prisoners, a random one of whom has been secretly chosen to be pardoned. One of the prisoners begs the warden to tell him the name of one of the others to be executed, arguing that this reveals no information about his own fate but increases his chances of being pardoned from to . The warden obliges, (secretly) flipping a coin to decide which name to provide if the prisoner who is asking is the one being pardoned. The question is whether knowing the warden's answer changes the prisoner's chances of being pardoned. This problem is equivalent to the Monty Hall problem; the prisoner asking the question still has a chance of being pardoned but his unnamed colleague has a chance. Steve Selvin posed the Monty Hall problem in a pair of letters to "The American Statistician" in 1975. The first letter presented the problem in a version close to its presentation in "Parade" 15 years later. The second appears to be the first use of the term "Monty Hall problem". The problem is actually an extrapolation from the game show. Monty Hall "did" open a wrong door to build excitement, but offered a known lesser prize – such as $100 cash – rather than a choice to switch doors. As Monty Hall wrote to Selvin: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;And if you ever get on my show, the rules hold fast for you – no trading boxes after the selection. A version of the problem very similar to the one that appeared three years later in "Parade" was published in 1987 in the Puzzles section of "The Journal of Economic Perspectives". 
Nalebuff, as later writers in mathematical economics, sees the problem as a simple and amusing exercise in game theory. "The Monty Hall Trap", Phillip Martin's 1989 article in "Bridge Today", presented Selvin's problem as an example of what Martin calls the probability trap of treating non-random information as if it were random, and relates this to concepts in the game of bridge. A restated version of Selvin's problem appeared in Marilyn vos Savant's "Ask Marilyn" question-and-answer column of "Parade" in September 1990. Though Savant gave the correct answer that switching would win two-thirds of the time, she estimates the magazine received 10,000 letters including close to 1,000 signed by PhDs, many on letterheads of mathematics and science departments, declaring that her solution was wrong. Due to the overwhelming response, "Parade" published an unprecedented four columns on the problem. As a result of the publicity the problem earned the alternative name "Marilyn and the Goats". In November 1990, an equally contentious discussion of Savant's article took place in Cecil Adams's column "The Straight Dope". Adams initially answered, incorrectly, that the chances for the two remaining doors must each be one in two. After a reader wrote in to correct the mathematics of Adams's analysis, Adams agreed that mathematically he had been wrong. "You pick door #1. Now you're offered this choice: open door #1, or open door #2 and door #3. In the latter case you keep the prize if it's behind either door. You'd rather have a two-in-three shot at the prize than one-in-three, wouldn't you? If you think about it, the original problem offers you basically the same choice. Monty is saying in effect: you can keep your one door or you can have the other two doors, one of which (a non-prize door) I'll open for you." Adams did say the "Parade" version left critical constraints unstated, and without those constraints, the chances of winning by switching were not necessarily two out of three (e.g., it was not reasonable to assume the host always opens a door). Numerous readers, however, wrote in to claim that Adams had been "right the first time" and that the correct chances were one in two. The "Parade" column and its response received considerable attention in the press, including a front-page story in "The New York Times" in which Monty Hall himself was interviewed. Hall understood the problem, giving the reporter a demonstration with car keys and explaining how actual game play on "Let's Make a Deal" differed from the rules of the puzzle. In the article, Hall pointed out that because he had control over the way the game progressed, playing on the psychology of the contestant, the theoretical solution did not apply to the show's actual gameplay. He said he was not surprised at the experts' insistence that the probability was 1 out of 2. "That's the same assumption contestants would make on the show after I showed them there was nothing behind one door," he said. "They'd think the odds on their door had now gone up to 1 in 2, so they hated to give up the door no matter how much money I offered. By opening that door we were applying pressure. We called it the Henry James treatment. It was 'The Turn of the Screw'." 
Hall clarified that as a game show host he did not have to follow the rules of the puzzle in the Savant column and did not always have to allow a person the opportunity to switch (e.g., he might open their door immediately if it was a losing door, might offer them money to not switch from a losing door to a winning door, or might allow them the opportunity to switch only if they had a winning door). "If the host is required to open a door all the time and offer you a switch, then you should take the switch," he said. "But if he has the choice whether to allow a switch or not, beware. Caveat emptor. It all depends on his mood." In literature. Andrew Crumey's novel "Mr Mee" (2000) contains a version of the Monty Hall problem set in the 18th century. In Chapter 1 it is presented as a shell game that a prisoner must win in order to save his life. In Chapter 8 the philosopher Rosier, his student Tissot and Tissot's wife test the probabilities by simulation and verify the counter-intuitive result. They then perform an experiment involving black and white beads resembling the boy or girl paradox, and in a humorous allusion to the Einstein-Podolsky-Rosen paradox, Rosier erroneously concludes that "as soon as I saw my own bead, a wave of pure probability flew, instantaneously, from one end of the room to the other. This accounted for the sudden change from two thirds to a half, as a finite quantum of probability (of weight one sixth) passed miraculously between the beads, launched by my own act of observation." Rosier explains his theory to Tissot, but "His poor grasp of my theories emerged some days later when, his sister being about to give birth, Tissot paid for a baby girl to be sent into the room, believing it would make the new child twice as likely to be born male. My pupil however gained a niece; and I found no difficulty in explaining the fallacy of his reasoning. Tissot had merely misunderstood my remarkable Paradox of the Twins, which states that if a boy tells you he has a sibling, then the probability of it being a sister is not a half, but two thirds." This is followed by a version of the unexpected hanging paradox. In chapter 101 of Mark Haddon's "The Curious Incident of the Dog in the Night-Time" (2003) the narrator Christopher discusses the Monty Hall Problem, describing its history and solution. He concludes, "And this shows that intuition can sometimes get things wrong. And intuition is what people use in life to make decisions. But logic can help you work out the right answer." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{1}{N} \\cdot \\frac{N-1}{N-p-1}" }, { "math_id": 1, "text": "\\frac{1}{N}" }, { "math_id": 2, "text": "p=1" } ]
https://en.wikipedia.org/wiki?curid=6026198
60262610
Partially linear model
Type of statistical model A partially linear model is a form of semiparametric model, since it contains both parametric and nonparametric elements. The least squares estimators can be applied to the partially linear model if the hypothesis that the nonparametric element is known is valid. Partially linear equations were first used in the analysis of the relationship between temperature and electricity usage by Engle, Granger, Rice and Weiss (1986). A typical application of the partially linear model in the field of microeconomics is presented by Tripathi, in a 1997 study of the profitability of firms' production. The partially linear model has also been applied successfully in other academic fields. In 1994, Zeger and Diggle introduced the partially linear model into biometrics. In environmental science, Prada-Sanchez et al. used the partially linear model to analyse collected data in 2000. Since then, the partially linear model has been refined with many other statistical methods. In 1988, Robinson applied the Nadaraya-Watson kernel estimator to the nonparametric element in order to build a least-squares estimator. After that, in 1997, the local linear method was introduced by Truong. Partially linear model. Synopsis. Algebra equation. The algebraic expression of the partially linear model is written as: formula_0 Equation components outline. formula_1 and formula_2: Vectors of explanatory variables; independently random or fixed distributed variables. formula_3: Parameter to be estimated. formula_4: The random error, with mean 0. formula_5: The nonparametric part of the partially linear model, to be estimated. Assumption. Wolfgang Härdle, Hua Liang and Jiti Gao consider the assumptions and remarks of the partially linear model under fixed and random design conditions. In the randomly distributed case, introduce formula_6 and formula_7. (1) formula_8 is smaller than positive infinity when t is valued between 0 and 1, and the sum of the covariance of formula_9 is positive. The random errors μ are independent of formula_10. When formula_11 and Ti are fixed distributed, formula_12 is valued between 0 and 1, and formula_11 satisfies formula_13, where factor i takes values between 1 and n and factor j takes values between 1 and p. The error factor formula_14 satisfies formula_15. The least square (LS) estimators. The precondition for applying the least squares estimators is the existence of the nonparametric component, and they apply in both the randomly distributed and the fixed distributed cases. The smoothing model of Engle, Granger, Rice and Weiss (1986) should first be introduced before applying the least squares estimators. The algebraic form of their model is expressed as formula_16 (2). Härdle, Liang and Gao (1988) make the assumption that the pair (ß,g) satisfies formula_17 (3). This means that for all formula_18, formula_19. So, formula_20 and formula_21. In the randomly distributed case, Härdle, Liang and Gao assume that for all 1 ≤ i ≤ n, formula_22 (4), so formula_23 and hence formula_24, due to the fact that formula_25 is a positive number, as guaranteed by assumption (1). So, formula_26 is established for all 1 ≤ i ≤ n and for j equal to 1 and 2, whence formula_27. In the fixed distributed case, parameterize the factor from smoothing model (2) as formula_28, where formula_29. By making the same assumption as (4), which follows from assumption (1), formula_30 and formula_27 hold, given formula_31. Assuming the factors formula_32 (i here are positive integers) satisfy formula_33, establish positive weight functions formula_34. Then, as an estimator of formula_35, for every formula_36, we have formula_37. 
By applying the LS criterion, the LS estimator of β is formula_38. The nonparametric estimator of formula_39 is expressed as formula_40. So, when the random errors are identically distributed, the estimator of the variance formula_41 is expressed as formula_42. History and applications of partially linear model. The real-world application of the partially linear model was first considered for analyzing data by Engle, Granger, Rice and Weiss in 1986. In their view, the relationship between temperature and the consumption of electricity cannot be expressed in a linear model, because there are a large number of confounding factors, such as average income, goods prices, consumer purchasing ability and other economic activities. Some of the factors are correlated with each other and might influence the observed result. Therefore, they introduced the partially linear model, which contains both parametric and nonparametric factors. The partially linear model enables and simplifies the linear transformation of the data (Engle, Granger, Rice and Weiss, 1986). They also applied the smoothing spline technique in their research. A case of application of the partially linear model in biometrics is due to Zeger and Diggle in 1994. The research objective of their paper is the evolution over time of CD4 cell counts in HIV (human immunodeficiency virus) seroconverters (Zeger and Diggle, 1994). CD4 cells play a significant role in immune function in the human body. Zeger and Diggle aimed to assess the progression of the disease by measuring the changing number of CD4 cells. The number of CD4 cells is associated with body age, smoking behavior and so on. To handle the grouped observational data in their experiment, Zeger and Diggle applied the partially linear model in their work. The partially linear model primarily contributes to the estimation of the average loss time of CD4 cells and adjusts for the time dependence of some other covariables in order to simplify the process of data comparison; the partially linear model also characterizes the deviation from the typical curve of the observed group in order to estimate the progression curve of the changing number of CD4 cells. The deviation, granted by the partially linear model, potentially helps to recognize the observed subjects who had a slow progression in the change of CD4 cell counts. In 1999, Schmalensee and Stoker (1999) used the partially linear model in the field of economics. The variable of interest in their research is the demand for gasoline in the United States. The primary research target in their paper is the relationship between gasoline consumption and long-run income elasticity in the U.S. Similarly, there are many confounding variables, which might affect one another. Hence, Schmalensee and Stoker chose to deal with the issue of the linear transformation of data between parametric and nonparametric components by applying the partially linear model. In the field of environmental science, Prada-Sanchez used the partially linear model to predict sulfur dioxide pollution in 2000 (Prada-Sanchez, 2000), and in the next year, Lin and Carroll applied the partially linear model to clustered data (Lin and Carroll, 2001). Development of partially linear model. According to Liang's paper in 2010 (Liang, 2010), the smoothing spline technique was introduced into the partially linear model by Engle, Heckman and Rice in 1986. After that, Robinson found an available LS estimator for the nonparametric factors in the partially linear model in 1988. In the same year, the profile LS method was recommended by Speckman. 
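The estimator described above can be illustrated with a short numerical sketch. The following Python code is not taken from any of the works cited in this article; it simply implements the generic recipe with Nadaraya-Watson weights on simulated data (the variable names, the bandwidth, and the data-generating model are assumptions made only for the illustration): smooth Y and the columns of X against T, form the partial residuals, compute the LS estimate of β from the residuals, and then recover the nonparametric part.

import numpy as np

rng = np.random.default_rng(0)
n = 500
t = rng.uniform(0.0, 1.0, size=n)
x = np.column_stack([t + rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.5, -2.0])

def f_true(s):
    # unknown smooth component f(t) used to generate the data
    return np.sin(2.0 * np.pi * s)

y = x @ beta_true + f_true(t) + 0.3 * rng.normal(size=n)

def nw_weights(t_eval, t_obs, h):
    # Nadaraya-Watson weights psi_ni(t) built from a Gaussian kernel
    k = np.exp(-0.5 * ((t_eval[:, None] - t_obs[None, :]) / h) ** 2)
    return k / k.sum(axis=1, keepdims=True)

h = 0.05                       # bandwidth, chosen only for illustration
w = nw_weights(t, t, h)        # n x n smoother matrix

# "Partial out" the nonparametric component: tilde(X) = X - WX, tilde(Y) = Y - WY
x_tilde = x - w @ x
y_tilde = y - w @ y

# beta_LS = (tilde(X)' tilde(X))^{-1} tilde(X)' tilde(Y)
beta_hat = np.linalg.solve(x_tilde.T @ x_tilde, x_tilde.T @ y_tilde)

# f_hat(t) = sum_i psi_ni(t) (Y_i - X_i' beta_hat)
f_hat = w @ (y - x @ beta_hat)

# variance estimator for identically distributed errors
sigma2_hat = np.mean((y_tilde - x_tilde @ beta_hat) ** 2)

print("beta_hat:", beta_hat)
print("sigma2_hat:", sigma2_hat)

The choice of the bandwidth h governs the bias-variance trade-off of the smoother; bandwidth selection is discussed in the following section.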
Other econometrics tools in partially linear model. Kernel regression has also been introduced into the partially linear model. The local constant method, which was developed by Speckman, and local linear techniques, which were introduced by Hamilton and Truong in 1997 and revised by Opsomer and Ruppert in 1997, are all included in kernel regression. Green et al., Opsomer and Ruppert found that one of the significant characteristics of kernel-based methods is that under-smoothing has to be employed in order to obtain a root-n estimator of beta. However, Speckman's research in 1988 and Severini's and Staniswalis's research in 1994 showed that this restriction can be removed. Bandwidth selection in partially linear model. Bandwidth selection in the partially linear model is a challenging issue. Liang addressed a possible solution for this bandwidth selection in his work by applying profile-kernel based and backfitting methods. The necessity of undersmoothing for the backfitting method, and the reason why the profile-kernel based method can attain the optimal bandwidth selection, were also justified by Liang. A general computation strategy is applied in Liang's work for estimating the nonparametric function. Moreover, the penalized spline method for partially linear models was introduced, together with intensive simulation experiments, to explore the numerical features of the penalized spline method and the profile and backfitting methods. Kernel-based profile and backfitting method. Introducing formula_43 and, following from it, formula_44, the intuitive estimator of ß can be defined as the LS estimator after appropriately estimating formula_45 and formula_46. Then, for any random vector variable formula_47, assume formula_48 is a kernel regression estimator of formula_49. Let formula_50. For example, formula_51. Denote formula_52; X, g and T are defined similarly. Let formula_53. So formula_54. The profile-kernel based estimator formula_55 solves formula_56 where formula_57 are kernel estimators of mx and my. The penalized spline method. The penalized spline method was developed by Eilers and Marx in 1996. Ruppert and Carroll in 2000 and Brumback, Ruppert and Wand in 1999 employed this method in the LME framework. Assume the function formula_58 can be estimated by formula_59 where formula_60 is an integer, formula_61 are fixed knots, and formula_62 Denote formula_63 Consider formula_64. The penalized spline estimator formula_65 is defined as the minimizer of formula_66 where formula_67 is a smoothing parameter. As Brumback et al. mentioned in 1999, the estimator formula_68 is the same as the estimator of formula_3 based on the LME model formula_69, where formula_70, formula_71 where formula_72, and formula_73. These matrices define the penalized spline smoother for the framework above.
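A minimal numerical sketch of the penalized spline estimator described above can be written in Python on simulated data. The design follows the truncated power basis and the ridge-type penalty on the spline coefficients given in the formulas, but the knot placement, degree, penalty value and data-generating model are assumptions made only for this illustration, not choices taken from the cited papers.

import numpy as np

rng = np.random.default_rng(1)
n, num_knots, degree = 400, 10, 1
t = rng.uniform(0.0, 1.0, size=n)
x = rng.normal(size=(n, 2))
beta_true = np.array([1.0, -0.5])
y = x @ beta_true + np.cos(2.0 * np.pi * t) + 0.2 * rng.normal(size=n)

knots = np.quantile(t, np.linspace(0.05, 0.95, num_knots))

# Polynomial part 1, t, ..., t^p and truncated powers (t - xi_k)_+^p
poly = np.column_stack([t ** j for j in range(degree + 1)])
trunc = np.clip(t[:, None] - knots[None, :], 0.0, None) ** degree

lam = np.hstack([x, poly])      # Lambda: parametric columns plus polynomial terms
z = trunc                       # Z: truncated power (spline) terms
alpha = 1.0                     # smoothing parameter, chosen only for illustration

# Minimise sum_i (y_i - Lambda_i c - Z_i b)^2 + alpha * sum_k b_k^2
full = np.hstack([lam, z])
penalty = np.diag(np.concatenate([np.zeros(lam.shape[1]),
                                  alpha * np.ones(num_knots)]))
coef = np.linalg.solve(full.T @ full + penalty, full.T @ y)

beta_hat = coef[:x.shape[1]]
print("penalized-spline beta_hat:", beta_hat)

The penalty acts only on the spline coefficients, which mirrors the LME representation above in which those coefficients are treated as random effects.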
[ { "math_id": 0, "text": "y_i=\\delta_T^i\\beta+f(T_i)+\\mu_i" }, { "math_id": 1, "text": "\\delta^i_T" }, { "math_id": 2, "text": "T_i" }, { "math_id": 3, "text": "\\beta" }, { "math_id": 4, "text": "\\mu_i" }, { "math_id": 5, "text": "f(T_i)" }, { "math_id": 6, "text": "L_j(T_i)=E(\\delta_{i,j}|T_i)" }, { "math_id": 7, "text": "\\mu_{i,j}=\\delta_{i,j}-E(\\delta_{i,j}|T_i)" }, { "math_id": 8, "text": "E(||\\delta_1 | |^3 )|T=t) " }, { "math_id": 9, "text": "\\delta_1 - E(\\delta_1|T_1)" }, { "math_id": 10, "text": "(\\delta_i,T_i) " }, { "math_id": 11, "text": "\\delta_i " }, { "math_id": 12, "text": "L_j " }, { "math_id": 13, "text": "\\delta_{ij}=L_j(T_i)+\\mu_{ij} " }, { "math_id": 14, "text": "\\mu_{ij} " }, { "math_id": 15, "text": "\\lim_{n \\to \\infty}1/n\\sum_{i=1}^n \\mu_i\\mu_i^T=\\sum " }, { "math_id": 16, "text": "Y=\\delta^T\\beta+f(t) " }, { "math_id": 17, "text": "1/n \\textstyle \\sum_{i=1}^n \\displaystyle E\\{Y_i-\\delta_i^T\\beta-f(T_i)\\}^2=\\text{min}1/n\\textstyle \\sum_{i=1}^n \\displaystyle E\\{Y_i-\\delta_i^T-f(T_i)\\}^2 " }, { "math_id": 18, "text": "1 \\leq i \\leq n " }, { "math_id": 19, "text": "\\delta_i^T\\beta_1+f_1(T_i)=\\delta_i^T\\beta_2+f_2(T_i) " }, { "math_id": 20, "text": "f_1 = f_2" }, { "math_id": 21, "text": "\\beta_1 = \\beta_2 " }, { "math_id": 22, "text": "E[Y_i|(\\delta_i,T_i)]=\\delta_i^T\\beta1+f_1(T_i)=\\delta_i^T\\beta_2+f_2(T_i) " }, { "math_id": 23, "text": "E\\{Y_i-\\delta_i^T\\beta_1-f_1(T_i)\\}^2=E\\{Y_i-\\delta_i^T\\beta_2-f_2(T_i)\\}^2+(\\beta_1-\\beta_2)^TE\\{(\\delta_i-E[\\delta_i|T_i])(\\delta_i-E[\\delta_i|T_i]^T)\\}(\\beta_1-\\beta_2) " }, { "math_id": 24, "text": "\\beta_1=\\beta_2 " }, { "math_id": 25, "text": "E\\{(\\delta_i-E[\\delta_i|T_i])(\\delta_i-E[\\delta_i|T_i]^T)\\} " }, { "math_id": 26, "text": "f_j(T_i)=E[Y_i|T_i]-E[\\delta_i^T\\beta_j|T_i] " }, { "math_id": 27, "text": "f_1=f_2 " }, { "math_id": 28, "text": "\\{f(T_1),.........,f(T_n)\\}^T=\\omega_r and \\omega_r = Q(Y-x\\beta) " }, { "math_id": 29, "text": "Q=\\omega(\\omega^T\\omega)^{-1}\\omega^T " }, { "math_id": 30, "text": "\\beta_1=\\beta_2 " }, { "math_id": 31, "text": "1/nE\\{(Y-X\\beta_1-\\omega_{r1})^T(Y-X\\beta_1-\\omega_{r1})\\}=1/nE{(Y-X\\beta_2-\\omega_{r2})^T(Y-X\\beta_2-\\omega_{r2})}+1/n(\\beta_1-\\beta_2)^TX^T(1-Q)X(\\beta_1-\\beta_2) " }, { "math_id": 32, "text": "\\delta_i,T_i,Y_i " }, { "math_id": 33, "text": "y_i=\\delta_i^T\\beta+f(T_i)+\\mu_i " }, { "math_id": 34, "text": "\\psi_{ni}(t) " }, { "math_id": 35, "text": "f(t) " }, { "math_id": 36, "text": "\\beta " }, { "math_id": 37, "text": "f_n(t;\\beta)=\\sum_{i=1}^n\\psi_{ni}(t)(Y_i-\\delta_i^T\\beta) " }, { "math_id": 38, "text": "\\beta_{LS}=\\{(\\tilde{\\delta}^T\\tilde{\\delta})\\}^{-1}\\tilde{\\delta}^T\\tilde{Y} " }, { "math_id": 39, "text": "f(n) " }, { "math_id": 40, "text": "\\hat{f_n}(t)=\\sum_{i=1}^n\\psi_{ni}(t)(Y_i-\\delta_i^T\\beta_{LS})" }, { "math_id": 41, "text": "\\sigma^2" }, { "math_id": 42, "text": "\\hat{\\sigma}_n^2=1/n\\sum_{i=1}^n(\\tilde{Y_i}-\\tilde{\\delta_i^T}\\beta_{LS})" }, { "math_id": 43, "text": "E(Y|T)={E(X|T)}^T\\beta+g(T)" }, { "math_id": 44, "text": "Y-E(Y|T)=({X-E(X|T)})^T\\beta+\\epsilon" }, { "math_id": 45, "text": "E(Y|T)" }, { "math_id": 46, "text": "E(X|T)" }, { "math_id": 47, "text": "\\xi" }, { "math_id": 48, "text": "\\hat{E}(\\xi|T)" }, { "math_id": 49, "text": "E(\\xi|T)" }, { "math_id": 50, "text": "\\tilde{\\xi}=\\xi-E(\\xi|T),\\textstyle \\sum_{X|T} \\displaystyle=cov{X-E(X|T)}" }, { "math_id": 51, "text": 
"\\tilde{X}_i=X_i-E(X_i|T_i)" }, { "math_id": 52, "text": "Y=(Y_1,...,Y_n)^T" }, { "math_id": 53, "text": "m_x(t)=E(X|T=t),m_y(t)=E(Y|T=t)" }, { "math_id": 54, "text": "\\psi(m_x,m_y,\\beta,Y,X,T)={X-m_x(T)}[Y-m_y(T)-{X-m_x(T)^T\\beta}]" }, { "math_id": 55, "text": "\\hat{\\beta_p}" }, { "math_id": 56, "text": "0=\\sum_{i=1}^n\\psi(\\hat{m_x},\\hat{m_y},\\beta,Y_i,X_i,T_i)" }, { "math_id": 57, "text": "\\hat{m_x},\\hat{m_y}" }, { "math_id": 58, "text": "g(t)" }, { "math_id": 59, "text": "g(t,\\tau)=\\tau_0+\\tau_1t+...+\\tau_pt^p+\\textstyle \\sum_{k=1}^K \\displaystyle b_k(t-\\xi_k)^p" }, { "math_id": 60, "text": "p\\geqslant1" }, { "math_id": 61, "text": "\\xi_1<...<\\xi_k" }, { "math_id": 62, "text": "a_+=max(a,0)." }, { "math_id": 63, "text": "\\tau=(tau_0,...,\\tau_p)^T" }, { "math_id": 64, "text": "Y=X^T\\beta+g(T,\\tau)+\\epsilon" }, { "math_id": 65, "text": "(\\hat{\\beta_{ps}^T},\\hat{\\tau_{ps}^T})^T of (\\beta^T,\\tau^T)^T" }, { "math_id": 66, "text": "\\sum_{i=1}^n [Y_i-X_i^T\\beta_i-g(T_i,\\tau)]^2+\\alpha\\sum_{k=1}^Kb_k^2" }, { "math_id": 67, "text": "\\alpha" }, { "math_id": 68, "text": "(\\hat{\\beta_{ps}^T},\\hat{\\tau_{ps}^T})^T " }, { "math_id": 69, "text": "y=\\Lambda(\\beta^T,\\tau^T)^T+Zb+\\epsilon" }, { "math_id": 70, "text": "\\Lambda=\\begin{pmatrix} x_{11} & ... & x_{1d} & 1 & T_1 & ... & T_1^p\\\\ x_{21} & ... & x_{2d} & 1 & T_2 & ... & T_2^p\\\\ . & ... & . & . & . & ... & .\\\\ . & ... & . & . & . & ... & .\\\\. & ... & . & . & . & ... & .\\\\x_{n1} & ... & x_{nd} & 1 & T_n & ... & T_n^p \\end{pmatrix}" }, { "math_id": 71, "text": "Z=\\begin{pmatrix} (T_1-\\xi_1)^p & ... & (T_1-\\xi_K)_+^p \\\\ (T_2-\\xi_1)^p & ... & (T_2-\\xi_K)_+^p\\\\ . & ... & . \\\\ . & ... & . \\\\. & ... & . \\\\(T_n-\\xi_1)^p & ... & (T_n-\\xi_K)_+^p \\end{pmatrix}" }, { "math_id": 72, "text": "b=(b_1,...,b_k)^T \\backsim (0,\\sigma_b^2),\\epsilon=(\\epsilon_1,...,\\epsilon_n)^T \\sim (0,\\sigma_\\epsilon^2)" }, { "math_id": 73, "text": "\\alpha=\\sigma_\\epsilon^2/\\sigma_b^2" } ]
https://en.wikipedia.org/wiki?curid=60262610
602650
Type safety
Extent to which a programming language discourages type errors In computer science, type safety and type soundness are the extent to which a programming language discourages or prevents type errors. Type safety is sometimes alternatively considered to be a property of facilities of a computer language; that is, some facilities are type-safe and their usage will not result in type errors, while other facilities in the same language may be type-unsafe and a program using them may encounter type errors. The behaviors classified as type errors by a given programming language are usually those that result from attempts to perform operations on values that are not of the appropriate data type, e.g., adding a string to an integer when there's no definition on how to handle this case. This classification is partly based on opinion. Type enforcement can be static, catching potential errors at compile time, or dynamic, associating type information with values at run-time and consulting them as needed to detect imminent errors, or a combination of both. Dynamic type enforcement often allows programs to run that would be invalid under static enforcement. In the context of static (compile-time) type systems, type safety usually involves (among other things) a guarantee that the eventual value of any expression will be a legitimate member of that expression's static type. The precise requirement is more subtle than this — see, for example, subtyping and polymorphism for complications. Definitions. Intuitively, type soundness is captured by Robin Milner's pithy statement that Well-typed programs cannot "go wrong". In other words, if a type system is "sound", then expressions accepted by that type system must evaluate to a value of the appropriate type (rather than produce a value of some other, unrelated type or crash with a type error). Vijay Saraswat provides the following, related definition: A language is type-safe if the only operations that can be performed on data in the language are those sanctioned by the type of the data. However, what precisely it means for a program to be "well typed" or to "go wrong" are properties of its static and dynamic semantics, which are specific to each programming language. Consequently, a precise, formal definition of type soundness depends upon the style of formal semantics used to specify a language. In 1994, Andrew Wright and Matthias Felleisen formulated what has become the standard definition and proof technique for type safety in languages defined by operational semantics, which is closest to the notion of type safety as understood by most programmers. Under this approach, the semantics of a language must have the following two properties to be considered type-sound: A number of other formal treatments of type soundness have also been published in terms of denotational semantics and structural operational semantics. Relation to other forms of safety. In isolation, type soundness is a relatively weak property, as it essentially just states that the rules of a type system are internally consistent and cannot be subverted. However, in practice, programming languages are designed so that well-typedness also entails other, stronger properties, some of which include: Type-safe and type-unsafe languages. Type safety is usually a requirement for any toy language (i.e. esoteric language) proposed in academic programming language research. 
Many languages, on the other hand, are too big for human-generated type safety proofs, as they often require checking thousands of cases. Nevertheless, some languages such as Standard ML, which has rigorously defined semantics, have been proved to meet one definition of type safety. Some other languages such as Haskell are "believed"[""] to meet some definition of type safety, provided certain "escape" features are not used (for example Haskell's unsafePerformIO, used to "escape" from the usual restricted environment in which I/O is possible, circumvents the type system and so can be used to break type safety.) Type punning is another example of such an "escape" feature. Regardless of the properties of the language definition, certain errors may occur at run-time due to bugs in the implementation, or in linked libraries written in other languages; such errors could render a given implementation type unsafe in certain circumstances. An early version of Sun's Java virtual machine was vulnerable to this sort of problem. Strong and weak typing. Programming languages are often colloquially classified as strongly typed or weakly typed (also loosely typed) to refer to certain aspects of type safety. In 1974, Liskov and Zilles defined a strongly-typed language as one in which "whenever an object is passed from a calling function to a called function, its type must be compatible with the type declared in the called function." In 1977, Jackson wrote, "In a strongly typed language each data area will have a distinct type and each process will state its communication requirements in terms of these types." In contrast, a weakly typed language may produce unpredictable results or may perform implicit type conversion. Memory management and type safety. Type safety is closely linked to memory safety. For instance, in an implementation of a language that has some type formula_0 which allows some bit patterns but not others, a dangling pointer memory error allows writing a bit pattern that does not represent a legitimate member of formula_0 into a dead variable of type formula_0, causing a type error when the variable is read. Conversely, if the language is memory-safe, it cannot allow an arbitrary integer to be used as a pointer, hence there must be a separate pointer or reference type. As a minimal condition, a type-safe language must not allow dangling pointers across allocations of different types. But most languages enforce the proper use of abstract data types defined by programmers even when this is not strictly necessary for memory safety or for the prevention of any kind of catastrophic failure. Allocations are given a type describing its contents, and this type is fixed for the duration of the allocation. This allows type-based alias analysis to infer that allocations of different types are distinct. Most type-safe languages use garbage collection. Pierce says, "it is extremely difficult to achieve type safety in the presence of an explicit deallocation operation", due to the dangling pointer problem. However Rust is generally considered type-safe and uses a borrow checker to achieve memory safety, instead of garbage collection. Type safety in object oriented languages. In object oriented languages type safety is usually intrinsic in the fact that a type system is in place. This is expressed in terms of class definitions. A class essentially defines the structure of the objects derived from it and an API as a "contract" for handling these objects. 
Each time a new object is created it will "comply" with that contract. Each function that exchanges objects derived from a specific class, or implementing a specific interface, will adhere to that contract: hence in that function the operations permitted on that object will be only those defined by the methods of the class the object implements. This will guarantee that the object integrity will be preserved. Exceptions to this are object oriented languages that allow dynamic modification of the object structure, or the use of reflection to modify the content of an object to overcome the constraints imposed by the class methods definitions. Type safety issues in specific languages. Ada. Ada was designed to be suitable for embedded systems, device drivers and other forms of system programming, but also to encourage type-safe programming. To resolve these conflicting goals, Ada confines type-unsafety to a certain set of special constructs whose names usually begin with the string Unchecked_. Unchecked_Deallocation can be effectively banned from a unit of Ada text by applying pragma Pure to this unit. It is expected that programmers will use Unchecked_ constructs very carefully and only when necessary; programs that do not use them are type-safe. The SPARK programming language is a subset of Ada eliminating all its potential ambiguities and insecurities while at the same time adding statically checked contracts to the language features available. SPARK avoids the issues with dangling pointers by disallowing allocation at run time entirely. Ada2012 adds statically checked contracts to the language itself (in form of pre-, and post-conditions, as well as type invariants). C. The C programming language is type-safe in limited contexts; for example, a compile-time error is generated when an attempt is made to convert a pointer to one type of structure to a pointer to another type of structure, unless an explicit cast is used. However, a number of very common operations are non-type-safe; for example, the usual way to print an integer is something like codice_1, where the codice_2 tells codice_3 at run-time to expect an integer argument. (Something like codice_4, which tells the function to expect a pointer to a character-string and yet supplies an integer argument, may be accepted by compilers, but will produce undefined results.) This is partially mitigated by some compilers (such as gcc) checking type correspondences between printf arguments and format strings. In addition, C, like Ada, provides unspecified or undefined explicit conversions; and unlike in Ada, idioms that use these conversions are very common, and have helped to give C a type-unsafe reputation. For example, the standard way to allocate memory on the heap is to invoke a memory allocation function, such as codice_5, with an argument indicating how many bytes are required. The function returns an untyped pointer (type codice_6), which the calling code must explicitly or implicitly cast to the appropriate pointer type. Pre-standardized implementations of C required an explicit cast to do so, therefore the code codice_7 became the accepted practice. C++. Some features of C++ that promote more type-safe code: C#. C# is type-safe. It has support for untyped pointers, but this must be accessed using the "unsafe" keyword which can be prohibited at the compiler level. It has inherent support for run-time cast validation. 
Casts can be validated by using the "as" keyword that will return a null reference if the cast is invalid, or by using a C-style cast that will throw an exception if the cast is invalid. See C Sharp conversion operators. Undue reliance on the object type (from which all other types are derived) runs the risk of defeating the purpose of the C# type system. It is usually better practice to abandon object references in favour of generics, similar to templates in C++ and generics in Java. Java. The Java language is designed to enforce type safety. Anything in Java "happens" inside an object and each object is an instance of a class. To implement the "type safety" enforcement, each object, before usage, needs to be allocated. Java allows usage of primitive types but only inside properly allocated objects. Sometimes a part of the type safety is implemented indirectly: e.g. the class BigDecimal represents a floating point number of arbitrary precision, but handles only numbers that can be expressed with a finite representation. The operation BigDecimal.divide() calculates a new object as the division of two numbers expressed as BigDecimal. In this case if the division has no finite representation, as when one computes e.g. 1/3=0.33333..., the divide() method can raise an exception if no rounding mode is defined for the operation. Hence the library, rather than the language, guarantees that the object respects the contract implicit in the class definition. Standard ML. Standard ML has rigorously defined semantics and is known to be type-safe. However, some implementations, including Standard ML of New Jersey (SML/NJ), its syntactic variant Mythryl and MLton, provide libraries that offer unsafe operations. These facilities are often used in conjunction with those implementations' foreign function interfaces to interact with non-ML code (such as C libraries) that may require data laid out in specific ways. Another example is the SML/NJ interactive toplevel itself, which must use unsafe operations to execute ML code entered by the user. Modula-2. Modula-2 is a strongly-typed language with a design philosophy to require any unsafe facilities to be explicitly marked as unsafe. This is achieved by "moving" such facilities into a built-in pseudo-library called SYSTEM from where they must be imported before they can be used. The import thus makes it visible when such facilities are used. Unfortunately, this was not consequently implemented in the original language report and its implementation. There still remained unsafe facilities such as the type cast syntax and variant records (inherited from Pascal) that could be used without prior import. The difficulty in moving these facilities into the SYSTEM pseudo-module was the lack of any identifier for the facility that could then be imported since only identifiers can be imported, but not syntax. IMPORT SYSTEM; (* allows the use of certain unsafe facilities: *) VAR word : SYSTEM.WORD; addr : SYSTEM.ADDRESS; addr := SYSTEM.ADR(word); VAR i : INTEGER; n : CARDINAL; n := CARDINAL(i); (* or *) i := INTEGER(n); The ISO Modula-2 standard corrected this for the type cast facility by changing the type cast syntax into a function called CAST which has to be imported from pseudo-module SYSTEM. However, other unsafe facilities such as variant records remained available without any import from pseudo-module SYSTEM. 
IMPORT SYSTEM; VAR i : INTEGER; n : CARDINAL; i := SYSTEM.CAST(INTEGER, n); (* Type cast in ISO Modula-2 *) A recent revision of the language applied the original design philosophy rigorously. First, pseudo-module SYSTEM was renamed to UNSAFE to make the unsafe nature of facilities imported from there more explicit. Then all remaining unsafe facilities where either removed altogether (for example variant records) or moved to pseudo-module UNSAFE. For facilities where there is no identifier that could be imported, enabling identifiers were introduced. In order to enable such a facility, its corresponding enabling identifier must be imported from pseudo-module UNSAFE. No unsafe facilities remain in the language that do not require import from UNSAFE. IMPORT UNSAFE; VAR i : INTEGER; n : CARDINAL; i := UNSAFE.CAST(INTEGER, n); (* Type cast in Modula-2 Revision 2010 *) FROM UNSAFE IMPORT FFI; (* enabling identifier for foreign function interface facility *) &lt;*FFI="C"*&gt; (* pragma for foreign function interface to C *) Pascal. Pascal has had a number of type safety requirements, some of which are kept in some compilers. Where a Pascal compiler dictates "strict typing", two variables cannot be assigned to each other unless they are either compatible (such as conversion of integer to real) or assigned to the identical subtype. For example, if you have the following code fragment: type TwoTypes = record I: Integer; Q: Real; end; DualTypes = record I: Integer; Q: Real; end; var T1, T2: TwoTypes; D1, D2: DualTypes; Under strict typing, a variable defined as TwoTypes is "not compatible" with DualTypes (because they are not identical, even though the components of that user defined type are identical) and an assignment of codice_8 is illegal. An assignment of codice_9 would be legal because the subtypes they are defined to "are" identical. However, an assignment such as codice_10 would be legal. Common Lisp. In general, Common Lisp is a type-safe language. A Common Lisp compiler is responsible for inserting dynamic checks for operations whose type safety cannot be proven statically. However, a programmer may indicate that a program should be compiled with a lower level of dynamic type-checking. A program compiled in such a mode cannot be considered type-safe. C++ examples. The following examples illustrates how C++ cast operators can break type safety when used incorrectly. The first example shows how basic data types can be incorrectly cast: using namespace std; int main () { int ival = 5; // integer value float fval = reinterpret_cast&lt;float&amp;&gt;(ival); // reinterpret bit pattern cout « fval « endl; // output integer as float return 0; In this example, codice_11 explicitly prevents the compiler from performing a safe conversion from integer to floating-point value. When the program runs it will output a garbage floating-point value. The problem could have been avoided by instead writing codice_12 The next example shows how object references can be incorrectly downcast: using namespace std; class Parent { public: virtual ~Parent() {} // virtual destructor for RTTI class Child1 : public Parent { public: int a; class Child2 : public Parent { public: float b; int main () { Child1 c1; c1.a = 5; Parent &amp; p = c1; // upcast always safe Child2 &amp; c2 = static_cast&lt;Child2&amp;&gt;(p); // invalid downcast cout « c2.b « endl; // will output garbage data return 0; The two child classes have members of different types. 
When a parent class pointer is downcast to a child class pointer, the resulting pointer may not point to a valid object of the correct type. In the example, this leads to a garbage value being printed. The problem could have been avoided by replacing codice_13 with codice_14, which throws an exception on invalid casts.
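The same pitfall can be sketched outside C++. The short Python fragment below (Python is not otherwise discussed in this article) is only a rough analogue of the first C++ example: instead of a cast it reinterprets the 4-byte representation of an integer as a float using the standard struct module, which likewise yields a meaningless value, while the ordinary value-converting conversion behaves as expected.

import struct

ival = 5
# Reinterpret the integer's 32-bit pattern as an IEEE-754 float, roughly
# mirroring reinterpret_cast; the result is a tiny garbage value.
bits_as_float = struct.unpack("<f", struct.pack("<i", ival))[0]
print(bits_as_float)   # about 7e-45, not 5.0

# The value-preserving conversion, analogous to static_cast, converts the value itself.
print(float(ival))     # 5.0

As in the C++ case, the problem is the reinterpretation of a bit pattern, not the conversion of a value.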
[ { "math_id": 0, "text": "t" } ]
https://en.wikipedia.org/wiki?curid=602650
602678
Extraterrestrial sky
Extraterrestrial view of outer space In astronomy, an extraterrestrial sky is a view of outer space from the surface of an astronomical body other than Earth. The only extraterrestrial sky that has been directly observed and photographed by astronauts is that of the Moon. The skies of Venus, Mars and Titan have been observed by space probes designed to land on the surface and transmit images back to Earth. Characteristics of extraterrestrial sky appear to vary substantially due to a number of factors. An extraterrestrial atmosphere, if present, has a large bearing on visible characteristics. The atmosphere's density and chemical composition can contribute to differences in colour, opacity (including haze) and the presence of clouds. Astronomical objects may also be visible and can include natural satellites, rings, star systems and nebulas and other planetary system bodies. Luminosity and angular diameter of the Sun. The Sun's apparent magnitude changes according to the inverse square law, therefore, the difference in magnitude as a result of greater or lesser distances from different celestial bodies can be predicted by the following formula: formula_0 Where "distance" can be in km, AU, or any other appropriate unit. To illustrate, since Pluto is 40 AU away from the Sun on average, it follows that the parent star would appear to be formula_1 times as bright as it is on Earth. Though a terrestrial observer would find a dramatic decrease in available sunlight in these environments, the Sun would still be bright enough to cast shadows even as far as the hypothetical Planet Nine, possibly located 1,200 AU away, and by analogy would still outshine the full Moon as seen from Earth. The change in angular diameter of the Sun with distance is illustrated in the diagram below: The angular diameter of a circle whose plane is perpendicular to the displacement vector between the point of view and the centre of said circle can be calculated using the formula formula_2 in which formula_3 is the angular diameter, and formula_4 and formula_5 are the actual diameter of and the distance to the object. When formula_6, we have formula_7, and the result obtained is in radians. For a spherical object whose "actual" diameter equals formula_8 and where formula_5 is the distance to the "centre" of the sphere, the angular diameter can be found by the formula formula_9 The difference is due to the fact that the apparent edges of a sphere are its tangent points, which are closer to the observer than the centre of the sphere. For practical use, the distinction is significant only for spherical objects that are relatively close, since the small-angle approximation holds for formula_10: formula_11 . Horizon. On terrestrial planets and other solid celestial bodies with negligible atmospheric effects, the distance to the horizon for a "standard observer" varies as the square root of the planet's radius. Thus, the horizon on Mercury is 62% as far away from the observer as it is on Earth, on Mars the figure is 73%, on the Moon the figure is 52%, on Mimas the figure is 18%, and so on. The observer's height must be taken into account when calculating the distance to the horizon. Mercury. Because Mercury has little atmosphere, a view of the planet's skies would be no different from viewing space from orbit. Mercury has a southern pole star, α Pictoris, a magnitude 3.2 star. It is fainter than Earth's Polaris (α Ursae Minoris). Omicron Draconis is its north star. Other planets seen from Mercury. 
After the Sun, the second-brightest object in the Mercurian sky is Venus, which is much brighter there than for terrestrial observers. The reason for this is that when Venus is closest to Earth, it is between the Earth and the Sun, so we see only its night side. Indeed, even when Venus is brightest in the Earth's sky, we are actually seeing only a narrow crescent. For a Mercurian observer, on the other hand, Venus is closest when it is in opposition to the Sun and is showing its full disk. The apparent magnitude of Venus is as bright as −7.7. The Earth and the Moon are also very prominent, their apparent magnitudes being about −5 and −1.2, respectively. The maximum apparent distance between the Earth and the Moon is about 15′. All other planets are visible just as they are on Earth, but somewhat less bright at opposition with the difference being most considerable for Mars. The zodiacal light is probably more prominent than it is from Earth. Venus. The atmosphere of Venus is so thick that the Sun is not distinguishable in the daytime sky, and the stars are not visible at night. Being closer to the Sun, Venus receives about 1.9 times more sunlight than Earth, but due to the thick atmosphere, only about 20% of the light reaches the surface. Color images taken by the Soviet Venera probes suggest that the sky on Venus is orange. If the Sun could be seen from Venus's surface, the time from one sunrise to the next (a solar day) would be 116.75 Earth days. Because of Venus's retrograde rotation, the Sun would appear to rise in the west and set in the east. An observer aloft in Venus's cloud tops, on the other hand, would circumnavigate the planet in about four Earth days and see a sky in which Earth and the Moon shine brightly (about magnitudes −6.6 and −2.7, respectively) at opposition. The maximum angular separation between the Moon and Earth from the perspective of Venus is 0.612°, or approximately the same separation of one centimetre of separation at a distance of one metre and coincidentally, about the apparent size of the Moon as seen from Earth. Mercury would also be easy to spot, because it is closer and brighter, at up to magnitude −2.7, and because its maximum elongation from the Sun is considerably larger (40.5°) than when observed from Earth (28.3°). 42 Draconis is the closest star to the north pole of Venus. Eta¹ Doradus is the closest to its south pole. (Note: The IAU uses the right-hand rule to define a "positive pole" for the purpose of determining orientation. Using this convention, Venus is tilted 177° ("upside down"), and the positive pole is instead the south pole.) The Moon. The Moon's atmosphere is negligibly thin, essentially vacuum, so its sky is black, as in the case of Mercury. At lunar twilight astronauts have though observed some crepuscular rays and lunar horizon glow of the illuminated atmosphere, beside interplanetary light phenomenons like zodiacal light. Furthermore, the Sun is so bright that it is still impossible to see stars during the lunar daytime, unless the observer is well shielded from sunlight (direct or reflected from the ground). The Moon has a southern polar star, δ Doradus, a magnitude 4.34 star. It is better aligned than Earth's Polaris (α Ursae Minoris), but much fainter. Its north pole star is Omicron Draconis. Sun and Earth in the lunar sky. 
While the Sun moves across the Moon's sky within fourteen days, the daytime of a lunar day or the lunar month, Earth is only visible on the Moon's near side and moves around a central point in the near side's sky. This is due to the Moon always facing the Earth with the same side, a result of the Moon's rotation being tidally locked to Earth. That said, the Earth does move around slightly around a central point in the Moon's sky, because of monthly libration. Therefore rising or setting of Earth at the horizon on the Moon occurs only at few lunar locations and only to a small degree, at the border of the near side of the Moon to the far side, and takes much longer than a sunrise or sunset on Earth due to the Moon's slow monthly rotation. The famous Earthrise image by Apollo 8 though is an instance where the astronauts moved around the Moon, making the Earth to rise above the Moon because of that motion. Eclipses from the Moon. When sometimes the Moon, Earth and the Sun align exactly in a straight line (a syzygy), the Moon or Earth move through the other's shadow, producing an eclipse for an observer on the surface in the shadow. When the Moon moves into Earth's shadow a Solar eclipse occurs on the near side of the Moon (which is observable as a Lunar eclipse facing the Moon). Since the apparent diameter of the Earth is four times larger than that of the Sun, the Sun would be hidden behind the Earth for hours. Earth's atmosphere would be visible as a reddish ring. During the Apollo 15 mission, an attempt was made to use the Lunar Roving Vehicle's TV camera to view such an eclipse, but the camera or its power source failed after the astronauts left for Earth. When Earth moves into the Moon's shadow a Solar eclipse occurs on Earth where the Moon's shadow passes, and is visible facing Earth as a tapered out lunar shadow on Earth's surface traveling across the full Earth's disk. The effect would be comparable to the shadow of a golf ball cast by sunlight on an object away. Lunar observers with telescopes might be able to discern the umbral shadow as a black spot at the center of a less dark region (penumbra). It would look essentially the same as it does to the Deep Space Climate Observatory, which orbits Earth at the L1 Lagrangian point in the Sun-Earth system, from Earth. Mars. Mars has only a thin atmosphere; however, it is extremely dusty and there is much light that is scattered about. The sky is thus rather bright during the daytime and stars are not visible. The Martian northern pole star is Deneb, although the actual pole is somewhat offset in the direction of Alpha Cephei; it is more accurate to state that the top two stars of the Northern Cross, Sadr and Deneb, point to the north Celestial pole of Mars. Kappa Velorum is only a couple of degrees from the south Celestial pole of Mars. The moons of Mars. Phobos appears in the sky of Mars with an angular size of 4.1′, making its shape recognizable, appearing larger than Venus in Earth's sky, while the Moon appears in Earth's sky as large as 31′ on average. The color of the Martian sky. Generating accurate true-color images from Mars' surface is surprisingly complicated. To give but one aspect to consider, there is the Purkinje effect: the human eye's response to color depends on the level of ambient light; red objects appear to darken faster than blue objects as the level of illumination goes down. 
There is much variation in the color of the sky as reproduced in published images, since many of those images have used filters to maximize their scientific value and are not trying to show true color. For many years, the sky on Mars was thought to be more pinkish than it is now believed to be. It is now known that during the Martian day, the sky is a butterscotch color. Around sunset and sunrise, the sky is rose in color, but in the vicinity of the setting Sun it is blue. This is the opposite of the situation on Earth. Twilight lasts a long time after the Sun has set and before it rises because of the dust high in Mars's atmosphere. On Mars, Rayleigh scattering is usually a very weak effect; the red color of the sky is caused by the presence of iron(III) oxide in the airborne dust particles. These particles are larger in size than gas molecules, so most of the light is scattered by Mie scattering. Dust absorbs blue light and scatters longer wavelengths (red, orange, yellow). The Sun from Mars. The Sun as seen from Mars appears to be &lt;templatestyles src="Fraction/styles.css" /&gt;5⁄8 the angular diameter as seen from Earth (0.35°), and sends 40% of the light, approximately the brightness of a slightly cloudy afternoon on Earth. On June 3, 2014, the "Curiosity" rover on Mars observed the planet Mercury transiting the Sun, marking the first time a planetary transit has been observed from a celestial body besides Earth. Earth and Moon from Mars. The Earth is visible from Mars as a double star; the Moon would be visible alongside it as a fainter companion. The difference in brightness between the two would be greatest around inferior conjunction. At that time, both bodies would present their dark sides to Mars, but Earth's atmosphere would largely offset this by refracting sunlight much like the atmosphere of Venus does. On the other hand, the airless Moon would behave like the similarly airless Mercury, going completely dark when within a few degrees of the Sun. Also at inferior conjunction (for the terrestrial observer, this is the opposition of Mars and the Sun), the maximum visible distance between the Earth and the Moon would be about 25′, which is close to the apparent size of the Moon in Earth's sky. The angular size of Earth is between 48.1″ and 6.6″ and of the Moon between 13.3″ and 1.7″, comparable to that of Venus and Mercury from Earth. Near maximum elongation (47.4°), the Earth and Moon would shine at apparent magnitudes −2.5 and +0.9, respectively. Venus from Mars. Venus as seen from Mars (when near the maximum elongation from the Sun of 31.7°) would have an apparent magnitude of about −3.2. Jupiter. Although no images from within Jupiter's atmosphere have ever been taken, artistic representations typically assume that the planet's sky is blue, though dimmer than Earth's, because the sunlight there is on average 27 times fainter, at least in the upper reaches of the atmosphere. The planet's narrow rings might be faintly visible from latitudes above the equator. Further down into the atmosphere, the Sun would be obscured by clouds and haze of various colors, most commonly blue, brown, and red. Although theories abound on the cause of the colors, there is currently no unambiguous answer. From Jupiter, the Sun appears to cover only 5 arcminutes, less than a quarter of its size as seen from Earth. The north pole of Jupiter is a little over two degrees away from Zeta Draconis, while its south pole is about two degrees north of Delta Doradus. Jupiter's moons as seen from Jupiter. 
Aside from the Sun, the most prominent objects in Jupiter's sky are the four Galilean moons. Io, the nearest to the planet, would be slightly larger than the full moon in Earth's sky, though less bright, and would be the largest moon in the Solar System as seen from its parent planet. The higher albedo of Europa would not overcome its greater distance from Jupiter, so it would not outshine Io. In fact, the low solar constant at Jupiter's distance (3.7% Earth's) ensures that none of the Galilean satellites would be as bright as the full moon is on Earth, and neither would any other moon in the Solar System. All four Galilean moons stand out because of the swiftness of their motion, compared to the Moon. They are all also large enough to fully eclipse the Sun. Because Jupiter's axial tilt is minimal, and the Galilean moons all orbit in the plane of Jupiter's equator, solar eclipses are quite common. The skies of Jupiter's moons. None of Jupiter's moons have more than traces of atmosphere, so their skies are very nearly black. For an observer on one of the moons, the most prominent feature of the sky by far would be Jupiter. For an observer on Io, the closest large moon to the planet, Jupiter's apparent diameter would be about 20° (38 times the visible diameter of the Moon, covering 5% of Io's sky). An observer on Metis, the innermost moon, would see Jupiter's apparent diameter increased to 68° (130 times the visible diameter of the Moon, covering 18% of Metis's sky). A "full Jupiter" over Metis shines with about 4% of the Sun's brightness (light on Earth from a full moon is 400,000 times dimmer than sunlight). Because the inner moons of Jupiter are in synchronous rotation around Jupiter, the planet always appears in nearly the same spot in their skies (Jupiter would wiggle a bit because of the non-zero eccentricities). Observers on the sides of the Galilean satellites facing away from the planet would never see Jupiter, for instance. From the moons of Jupiter, solar eclipses caused by the Galilean satellites would be spectacular, because an observer would see the circular shadow of the eclipsing moon travel across Jupiter's face. Saturn. The sky in the upper reaches of Saturn's atmosphere is blue (from imagery of the Cassini mission at the time of its September 2017 demise), but the predominant color of its cloud decks suggests that it may be yellowish further down. Observations from spacecraft show that seasonal smog develops in Saturn's southern hemisphere at its perihelion due to its axial tilt. This could cause the sky to become yellowish at times. As the northern hemisphere is pointed towards the Sun only at aphelion, the sky there would likely remain blue. The rings of Saturn are almost certainly visible from the upper reaches of its atmosphere. The rings are so thin that from a position on Saturn's equator, they would be almost invisible. However, from anywhere else on the planet, they could be seen as a spectacular arc stretching across half the celestial hemisphere. Delta Octantis is the south pole star of Saturn. Its north pole is in the far northern region of Cepheus, about six degrees from Polaris. The sky of Titan. Titan is the only moon in the Solar System to have a thick atmosphere. Images from the "Huygens" probe show that the Titanean sky is a light tangerine color. However, an astronaut standing on the surface of Titan would see a hazy brownish/dark orange color. 
As a consequence of its greater distance from the Sun and the opacity of its atmosphere, the surface of Titan receives only about &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄3000 of the sunlight Earth does – daytime on Titan is thus only as bright as twilight on the Earth. It seems likely that Saturn is permanently invisible behind orange smog, and even the Sun would be only a lighter patch in the haze, barely illuminating the surface of ice and methane lakes. However, in the upper atmosphere, the sky would have a blue color and Saturn would be visible. With its thick atmosphere and methane rain, Titan is the only celestial body other than Earth upon which rainbows on the surface could form. However, given the extreme opacity of the atmosphere in visible light, the vast majority would be in the infrared. Uranus. From a vantage above the clouds on Uranus, the sky would probably appear dark blue. It is unlikely that the planet's rings can be seen from the upper atmosphere, as they are very thin and dark. Uranus has a northern polar star, Sabik (η Ophiuchi), a magnitude 2.4 star. Uranus also has a southern polar star, 15 Orionis, a magnitude 4.8 star. Both are fainter than Earth's Polaris (α Ursae Minoris), although Sabik only slightly. Neptune. The north pole of Neptune points to a spot midway between Gamma and Delta Cygni. Its south pole star is Gamma Velorum. Judging by the color of its atmosphere, the sky of Neptune is probably an azure or sky blue, similar to Uranus's. As in the case of Uranus, it is unlikely that the planet's rings can be seen from the upper atmosphere, as they are very thin and dark. Aside from the Sun, the most notable object in Neptune's sky is its large moon Triton, which would appear slightly smaller than a full Moon on Earth. It moves more swiftly than the Moon, because of its shorter period (5.8 days) compounded by its retrograde orbit. The smaller moon Proteus would show a disk about half the size of the full Moon. Surprisingly, Neptune's small inner moons all cover, at some point in their orbits, more than 10′ in Neptune's sky. At some points, Despina's angular diameter rivals that of Ariel from Uranus and Ganymede from Jupiter. Here are the angular diameters for Neptune's moons (for comparison, Earth's moon measures on average 31′ for terrestrial observers): Naiad, 7–13′; Thalassa, 8–14′; Despina, 14–22′; Galatea, 13–18′; Larissa, 10–14′; Proteus, 12–16′; Triton, 26–28′. An alignment of the inner moons would likely produce a spectacular sight. Neptune's largest outer satellite, Nereid, is not large enough to appear as a disk from Neptune, and is not noticeable in the sky, as its brightness at full phase varies from magnitude 2.2–6.4, depending on which point in its eccentric orbit it happens to be. The other irregular outer moons would not be visible to the naked eye, although a dedicated telescopic observer could potentially spot some at full phase. As with Uranus, the low light levels cause the major moons to appear very dim. The brightness of Triton at full phase is only −7.11, despite the fact that Triton is more than four times as intrinsically bright as Earth's moon and orbits much closer to Neptune. The sky of Triton. Triton, Neptune's largest moon, has a hazy atmosphere composed primarily of nitrogen. Because Triton orbits with synchronous rotation, Neptune always appears in the same position in its sky. 
Triton's rotation axis is inclined 130° to Neptune's orbital plane and thus points within 40° of the Sun twice per Neptunian year, much like Uranus's. As Neptune orbits the Sun, Triton's polar regions take turns facing the Sun for 82 years at a stretch, resulting in radical seasonal changes as one pole, then the other, moves into the sunlight. Neptune itself would span 8 degrees in Triton's sky, though with a maximum brightness roughly comparable to that of the full moon on Earth it would appear only about &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄256 as bright as the full moon, per unit area. Due to its eccentric orbit, Nereid would vary considerably in brightness, from fifth to first magnitude; its disk would be far too small to see with the naked eye. Proteus would also be difficult to resolve at just 5–6 arcminutes across, but it would never be fainter than first magnitude, and at its closest would rival Canopus. Trans-Neptunian objects. A trans-Neptunian object is any minor planet in the Solar System that orbits the Sun at a greater average distance (semi-major axis) than Neptune, 30 astronomical units (AU). Pluto and Charon. Pluto, accompanied by its largest moon Charon, orbits the Sun at a distance usually outside the orbit of Neptune except for a twenty-year period in each orbit. From Pluto, the Sun is point-like to human eyes, but still very bright, giving roughly 150 to 450 times the light of the full Moon from Earth (the variability being due to the fact that Pluto's orbit is highly elliptical, stretching from just 4.4 billion km to over 7.3 billion km from the Sun). Nonetheless, human observers would notice a large decrease in available light: the solar illuminance at Pluto's average distance is about 85 lx, which is equivalent to the lighting of an office building's hallway or restroom. Pluto's atmosphere consists of a thin envelope of nitrogen, methane, and carbon monoxide gases, all of which are derived from the ices of these substances on its surface. When Pluto is close to the Sun, the temperature of Pluto's solid surface increases, causing these ices to sublimate into gases. This atmosphere also produces a noticeable blue haze that is visible at sunset and possibly other times of the Plutonian day. Pluto and Charon are tidally locked to each other. This means that Charon always presents the same face to Pluto, and Pluto also always presents the same face to Charon. Observers on the far side of Charon from Pluto would never see the dwarf planet; observers on the far side of Pluto from Charon would never see the moon. Every 124 years, for several years it is mutual-eclipse season, during which Pluto and Charon each alternately eclipse the Sun for the other at intervals of 3.2 days. Charon, as seen from Pluto's surface at the sub-Charon point, has an angular diameter of about 3.8°, nearly eight times the Moon's angular diameter as seen from Earth and about 56 times the area. It would be a very large object in the night sky, shining about 8% as bright as the Moon (it would appear darker than the Moon because its lesser illumination comes from a larger disc). Charon's illuminance would be about 14 mlx (for comparison, a moonless clear night sky is 2 mlx while a full Moon is between 300 and 50 mlx). Extrasolar planets. For observers on extrasolar planets, the constellations would differ depending on the distances involved. The view of outer space of exoplanets can be extrapolated from open source software such as Celestia or Stellarium. 
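One simple form of such an extrapolation can be sketched directly. The short Python fragment below is only an illustration and is unrelated to the packages just mentioned; the star coordinates and distances are rounded values assumed for the example. It places the Sun at the point of the sky opposite the observer's star and estimates its brightness with the distance modulus m = M + 5 log10(d / 10 pc), anticipating the specific cases discussed in the following paragraphs.

import math

SUN_ABS_MAG = 4.83    # absolute visual magnitude of the Sun

def sun_seen_from(star_ra_hours, star_dec_deg, distance_pc):
    # The Sun appears at the antipodal point of the observer's star:
    # right ascension shifted by 12 hours, declination mirrored.
    sun_ra = (star_ra_hours + 12.0) % 24.0
    sun_dec = -star_dec_deg
    # Distance modulus gives the Sun's apparent magnitude at that distance.
    sun_mag = SUN_ABS_MAG + 5.0 * math.log10(distance_pc / 10.0)
    return sun_ra, sun_dec, sun_mag

# Illustrative, rounded values for a few nearby systems
for name, ra_h, dec_d, d_pc in [("Alpha Centauri", 14.66, -60.8, 1.34),
                                ("Barnard's Star", 17.96, 4.7, 1.83),
                                ("Sirius", 6.75, -16.7, 2.64)]:
    ra, dec, mag = sun_seen_from(ra_h, dec_d, d_pc)
    print(f"From {name}: Sun at RA {ra:.1f} h, dec {dec:+.1f} deg, m = {mag:+.2f}")

Its output for Alpha Centauri, for instance, places the Sun at about RA 2.7 h, dec +61 deg with apparent magnitude near +0.5, broadly consistent with the description below of the Sun appearing to extend the line of Cassiopeia.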
Due to parallax, distant stars change their position less than nearby ones. For alien observers, the Sun would be visible to the naked human eye only at distances below 20 – 27 parsec (60–90 ly). If the Sun were to be observed from another star, it would always appear on the opposite coordinates in the sky. Thus, an observer located near a star with RA at 4 hr and declination −10 would see the Sun located at RA: 16 hr, dec: +10. A consequence of observing the universe from other stars is that stars that may appear bright in our own sky may appear dimmer in other skies and vice versa. In May 2017, glints of light from Earth, seen as twinkling by DSCOVR, a satellite stationed roughly a million miles from Earth at the Earth-Sun L1 Lagrange point, were found to be reflected light from ice crystals in the atmosphere. The technology used to determine this may be useful in studying the atmospheres of distant worlds, including those of exoplanets. The position of stars in extrasolar skies differs the least to the positions in Earth's sky at the closest stars to Earth, with nearby stars shifting position the most. The Sun would appear as a bright star only at the closest stars. At the Alpha Centauri star system the Sun would appear as a bright star continuing the wavy line of Cassiopeia eastward, while Sirius would shift to a position just next to Betelgeuse and its own Proxima Centauri red dwarf would still appear as a dim star contrary to its main A and B stars. At Barnard's star the Sun would appear between the not much shifted Sirius and Belt of Orion compared to in the sky of Earth. Conversely the Sun would appear from Sirius and also Procyon around Altair. Planets of the TRAPPIST-1 system orbit extremely close together, enough so that each planet of the system would provide a detailed view of the other six. Planets of the TRAPPIST-1 system would appear in the sky with angular diameters comparable to the moon as viewed from Earth. Under clear viewing conditions, details such as phases and surface features would be easily visible to the naked eye. From the Large Magellanic Cloud. From a viewpoint in the LMC, the Milky Way's total apparent magnitude would be −2.0—over 14 times brighter than the LMC appears to us on Earth—and it would span about 36° across the sky, the width of over 70 full moons. Furthermore, because of the LMC's high galactic latitude, an observer there would get an oblique view of the entire galaxy, free from the interference of interstellar dust that makes studying in the Milky Way's plane difficult from Earth. The Small Magellanic Cloud would be about magnitude 0.6, substantially brighter than the LMC appears to us. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\text{intensity} \\ \\propto \\ \\frac{1}{\\text{distance}^2} \\, " }, { "math_id": 1, "text": "\\frac{1}{1600}" }, { "math_id": 2, "text": "\\delta = 2\\arctan \\left(\\frac{d}{2D}\\right)," }, { "math_id": 3, "text": "\\delta" }, { "math_id": 4, "text": "d" }, { "math_id": 5, "text": "D" }, { "math_id": 6, "text": "D \\gg d" }, { "math_id": 7, "text": "\\delta \\approx d / D" }, { "math_id": 8, "text": "d_\\mathrm{act}," }, { "math_id": 9, "text": "\\delta = 2\\arcsin \\left(\\frac{d_\\mathrm{act}}{2D}\\right)" }, { "math_id": 10, "text": " x \\ll 1" }, { "math_id": 11, "text": "\\arcsin x \\approx \\arctan x \\approx x" } ]
https://en.wikipedia.org/wiki?curid=602678
60274693
Difference bound matrix
In model checking, a field of computer science, a difference bound matrix (DBM) is a data structure used to represent some convex polytopes called zones. This structure can be used to efficiently implement some geometrical operations over zones, such as testing emptiness, inclusion and equality, and computing the intersection and the sum of two zones. It is, for example, used in the Uppaal model checker, where it is also distributed as an independent library. More precisely, there is a notion of canonical DBM: there is a one-to-one relation between canonical DBMs and zones, and from each DBM an equivalent canonical DBM can be efficiently computed. Thus, equality of zones can be tested by checking the canonical DBMs for equality. Zone. A difference bound matrix represents a particular kind of convex polytope, called a zone, which is now defined. Formally, a zone is defined by constraints of the form formula_0, formula_1, formula_2 and formula_3, with formula_4 and formula_5 variables and formula_6 a constant. Zones were originally called regions, but nowadays that name usually denotes a special kind of zone: intuitively, a region is a minimal non-empty zone, in which the constants used in the constraints are bounded. Given formula_7 variables, there are exactly formula_8 different non-redundant constraints possible: formula_7 constraints which use a single variable and an upper bound, formula_7 constraints which use a single variable and a lower bound, and, for each of the formula_9 ordered pairs of variables formula_10, an upper bound on formula_11. However, an arbitrary convex polytope in formula_12 may require an arbitrarily great number of constraints. Even when formula_13, there can be an arbitrarily great number of non-redundant constraints formula_14, for formula_15 some constants. This is the reason why DBMs cannot be extended from zones to arbitrary convex polytopes. Example. As stated in the introduction, we consider a zone defined by a set of constraints of the form formula_16, formula_17, formula_2 and formula_3, with formula_4 and formula_5 variables and formula_6 a constant. Some of those constraints may be contradictory or redundant; we now give such examples, and we also give examples showing how to generate new constraints from existing ones. For each pair of clocks formula_4 and formula_5, the DBM has a constraint of the form formula_21, where formula_22 is either &lt; or ≤. If no such constraint can be found, the constraint formula_23 can be added to the zone definition without loss of generality. But in some cases a more precise constraint can be found; such an example is now given. Actually, the first two cases above are particular cases of the third one. Indeed, formula_18 and formula_25 can be rewritten as formula_38 and formula_39 respectively, and thus the constraint formula_26 added in the first example is similar to the constraint added in the third example. Definition. We now fix a monoid formula_40 which is a subset of the real line. This monoid is traditionally the set of integers, rationals or reals, or their subsets of non-negative numbers. Constraints. In order to define the data structure difference bound matrix, it is first required to give a data structure encoding atomic constraints. Furthermore, we introduce an algebra for atomic constraints. This algebra is similar to the tropical semiring, with two modifications: Definition of constraints. 
The set of satisfiable constraints is defined as the set of pairs of the form formula_44 and formula_46, with formula_45, together with the constraint formula_49. The set of constraints contains all satisfiable constraints and also the following unsatisfiable constraint: formula_50. The subset formula_51 cannot be defined using this kind of constraints. More generally, some convex polytopes cannot be defined when the ordered monoid does not have the least-upper-bound property, even if each of the constraints in its definition uses at most two variables. Operation on constraints. In order to generate a single constraint from a pair of constraints applied to the same variable (or pair of variables), we formalize the notion of intersection of constraints and of order over constraints. Similarly, in order to define new constraints from existing constraints, a notion of sum of constraints must also be defined. Order on constraints. We now define an order relation over constraints. This order symbolizes the inclusion relation. First, the set formula_52 is considered as an ordered set, with &lt; being inferior to ≤. Intuitively, this order is chosen because the set defined by formula_53 is strictly included in the set defined by formula_0. We then state that the constraint formula_54 is smaller than formula_55 if either formula_56 or (formula_57 and formula_58 is less than formula_59). That is, the order on constraints is the lexicographical order applied from right to left. Note that this order is a total order. If formula_48 has the least-upper-bound property (or greatest-lower-bound property) then the set of constraints also has it. Intersection of constraints. The intersection of two constraints, denoted by formula_60, is then simply defined as the minimum of those two constraints. If formula_48 has the greatest-lower-bound property then the intersection of an infinite number of constraints is also defined. Sum of constraints. Given two variables formula_4 and formula_5 to which the constraints formula_54 and formula_55 respectively apply, we now explain how to generate the constraint satisfied by formula_61. This constraint is called the sum of the two above-mentioned constraints; it is denoted by formula_62 and is defined as formula_63. Constraints as an algebra. The set of constraints, equipped with intersection and sum, satisfies several algebraic properties: both operations are associative and commutative, the sum distributes over the intersection (that is, formula_64 equals formula_65), formula_49 is the identity of the intersection and formula_66 is the identity of the sum. Over non-satisfiable constraints both operations have the same zero, which is formula_50. Thus, the set of constraints does not even form a semiring, because the identity of the intersection is distinct from the zero of the sum. DBMs. Given a set of formula_7 variables, formula_67, a DBM is a matrix whose columns and rows are indexed by formula_68 and whose entries are constraints. Intuitively, for a column formula_69 and a row formula_70, the value formula_47 at position formula_71 represents formula_72. Thus, the zone defined by a matrix formula_73, denoted by formula_74, is formula_75. Note that formula_72 is equivalent to formula_76, thus the entry formula_77 is still essentially an upper bound. Note however that, since we consider a monoid formula_48, for some values of formula_69 and formula_70 the real formula_78 does not actually belong to the monoid. Before introducing the definition of a canonical DBM, we need to define and discuss an order relation on those matrices. Order on those matrices. A matrix formula_79 is considered to be smaller than a matrix formula_80 if each of its entries is smaller than the corresponding entry of formula_80. Note that this order is not total. 
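To make the two operations concrete, here is a minimal Python sketch of the constraint algebra just described, encoding a constraint as a pair of a strictness flag and a bound; the representation (Boolean flags, floating-point infinities) is an illustrative choice, not part of the definition above.

```python
LT, LE = True, False  # strictness flags: (strict, bound) encodes "< bound" or "<= bound"

def less_or_equal(c1, c2):
    """Order on constraints: smaller means more restrictive."""
    (s1, m1), (s2, m2) = c1, c2
    if m1 != m2:
        return m1 < m2
    return s1 or not s2  # for equal bounds, "<" is below "<="

def intersection(c1, c2):
    """Intersection of two constraints: the smaller (more restrictive) one."""
    return c1 if less_or_equal(c1, c2) else c2

def add(c1, c2):
    """Sum of constraints: bounds add, and the result is strict
    as soon as one of the arguments is strict."""
    (s1, m1), (s2, m2) = c1, c2
    return (s1 or s2, m1 + m2)

# Example from the article: from x1 <= 3 (x1 - 0 <= 3) and 0 <= x2 + 3 (0 - x2 <= 3)
# the sum gives x1 - x2 <= 6.
print(add((LE, 3), (LE, 3)))           # (False, 6), i.e. "<= 6"
print(intersection((LT, 6), (LE, 6)))  # (True, 6),  i.e. the strict constraint wins
```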
Given two DBMs formula_79 and formula_80, if formula_79 is smaller than or equal to formula_80, then formula_81. The greatest-lower-bound of two matrices formula_79 and formula_80, denoted by formula_82, has as its formula_83 entry the value formula_84. Note that, since formula_85 is the «sum» operation of the semiring of constraints, the operation formula_85 is the «sum» of two DBMs when the set of DBMs is considered as a module. Similarly to the case of constraints, considered in the section "Operation on constraints" above, the greatest-lower-bound of an infinite number of matrices is correctly defined as soon as formula_48 satisfies the greatest-lower-bound property. The intersection of matrices, and hence of zones, is thus defined. The union operation is not defined, and indeed a union of zones is not a zone in general. For an arbitrary set formula_86 of matrices which all define the same zone formula_87, formula_88 also defines formula_87. It thus follows that, as long as formula_48 has the greatest-lower-bound property, each zone which is defined by at least one matrix has a unique minimal matrix defining it. This matrix is called the canonical DBM of formula_87. First definition of canonical DBM. We restate the definition of a canonical difference bound matrix: it is a DBM such that no smaller matrix defines the same set. It is explained below how to check whether a DBM is canonical and, if it is not, how to compute from an arbitrary DBM a canonical DBM representing the same set. But first, we give some examples. Examples of matrices. We first consider the case where there is a single clock formula_4. The real line. We first give the canonical DBM for formula_89. We then introduce another DBM which encodes the set formula_89. This allows us to identify constraints which must be satisfied by any canonical DBM. The canonical DBM of the set of reals is formula_90. It represents the constraints formula_91, formula_92, formula_93 and formula_94. All of those constraints are satisfied independently of the value assigned to formula_4. In the remainder of the discussion, we will not explicitly describe constraints due to entries of the form formula_49, since those constraints are systematically satisfied. The DBM formula_95 also encodes the set of reals. It contains the constraints formula_96 and formula_97, which are satisfied independently of the value of formula_4. This shows that in a canonical DBM formula_79, a diagonal entry is never greater than formula_66, because the matrix obtained from formula_79 by replacing such a diagonal entry by formula_66 defines the same set and is smaller than formula_79. The empty set. We now consider several matrices which all encode the empty set. We first give the canonical DBM for the empty set, and then explain why each of these DBMs encodes the empty set. This allows us to identify further constraints which must be satisfied by any canonical DBM. The canonical DBM of the empty set, over one variable, is formula_98. Indeed, it represents the set satisfying the constraints formula_99, formula_100, formula_101 and formula_102. Those constraints are unsatisfiable. The DBM formula_103 also encodes the empty set. Indeed, it contains the constraint formula_101, which is unsatisfiable. More generally, this shows that in a canonical DBM no entry can be formula_50 unless all entries are formula_50. The DBM formula_104 also encodes the empty set. Indeed, it contains the constraint formula_105, which is unsatisfiable. More generally, this shows that in a canonical DBM a diagonal entry cannot be smaller than formula_66 unless it is formula_50. 
The DBM formula_106 also encodes the empty set. Indeed, it contains the constraints formula_107 and formula_108, which are contradictory. More generally, this shows that, for each formula_109, if formula_110, then formula_111 and formula_112 must both be equal to ≤. The DBM formula_113 also encodes the empty set. Indeed, it contains the constraints formula_114 and formula_115, which are contradictory. More generally, this shows that, for each formula_109, formula_116 unless formula_117 is formula_118. Strict constraints. The examples given in this section are similar to the examples given in the Example section above; this time, they are given as DBMs. The DBM formula_119 represents the set satisfying the constraints formula_29 and formula_30. As mentioned in the Example section, those two constraints together imply that formula_31. It means that the DBM formula_120 encodes the same zone; actually, it is the canonical DBM of this zone. This shows that in any canonical DBM formula_121, for each formula_122, the constraint formula_123 is smaller than or equal to the constraint formula_124. As explained in the Example section, the constant 0 can be considered as any other variable, which leads to the more general rule: in any canonical DBM formula_121, for each formula_125, the constraint formula_126 is smaller than or equal to the constraint formula_127. Three definitions of canonical DBM. As explained above, a canonical DBM is a DBM whose rows and columns are indexed by formula_128 and whose entries are constraints. Furthermore, it satisfies one of several equivalent properties; in particular, each entry formula_83 equals the weight of a shortest path from formula_131 to formula_132 in the weighted graph formula_129 over the vertex set formula_130, in which the edge from formula_131 to formula_132 is labelled by the entry at position formula_83. This last characterization can be directly used to compute the canonical DBM associated to a DBM: it suffices to apply the Floyd–Warshall algorithm to the graph formula_129 and to associate to each entry formula_83 the weight of the shortest path from formula_131 to formula_132 in the graph. If this algorithm detects a cycle of negative length, this means that the constraints are not satisfiable, and thus that the zone is empty. Operations on zones. As stated in the introduction, the main interest of DBMs is that they allow operations on zones, such as the intersection, inclusion testing and emptiness testing discussed above, to be implemented easily and efficiently. We now describe operations which were not considered above. The first operations described below have a clear geometrical meaning; the last ones correspond to operations which are more natural in terms of clock valuations. Sum of zones. The Minkowski sum of two zones, defined by two DBMs formula_79 and formula_80, is defined by the DBM formula_135 whose formula_83 entry is formula_136. Note that, since formula_137 is the «product» operation of the semiring of constraints, the operation formula_137 over DBMs is not actually an operation of the module of DBMs. In particular, it follows that, in order to translate a zone formula_87 by a direction formula_138, it suffices to add the DBM of formula_139 to the DBM of formula_87. Projection of a component to a fixed value. Let formula_140 be a constant. Given a vector formula_141 and an index formula_142, the projection of the formula_143-th component of formula_144 to formula_145 is the vector formula_146. In the language of clocks, for formula_147, this corresponds to resetting the formula_143-th clock. Projecting the formula_143-th component of a zone formula_87 to formula_145 consists simply in taking the set of vectors of formula_87 with their formula_143-th component set to formula_145. 
This is implemented on DBMs by setting the entries at positions formula_148 to formula_149 and the entries at positions formula_150 to formula_151. Future and past of a zone. Let us call the future the zone formula_152 and the past the zone formula_153. Given a point formula_154, the future of formula_144 is defined as formula_155, and the past of formula_144 is defined as formula_156. The names future and past come from the notion of clock. If a set of formula_7 clocks is assigned the values formula_4, formula_5, etc., then the set of assignments they will take in their future is the future of formula_144. Given a zone formula_87, the future of formula_87 is the union of the futures of the points of the zone. The definition of the past of a zone is similar. The future of a zone can thus be defined as formula_157, and hence can easily be implemented as a sum of DBMs. However, there is an even simpler algorithm on DBMs: it suffices to change every entry formula_158 to formula_49. Similarly, the past of a zone can be computed by setting every entry formula_159 to formula_49.
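The following Python sketch illustrates how a DBM can be manipulated along the lines described above: a DBM over the variables 0, x1, …, xn is stored as an (n+1)×(n+1) matrix of constraints, the canonical form is computed with the Floyd–Warshall algorithm, emptiness is detected via a negative diagonal, and the future operation relaxes the entries of the form formula_158. The concrete encoding of constraints and of infinity is an illustrative assumption.

```python
INF = float("inf")
LT, LE = True, False          # strictness flags: (strict, bound)
NO_BOUND = (LT, INF)          # the constraint "< infinity"
ZERO = (LE, 0)                # the constraint "<= 0"

def add(c1, c2):
    (s1, m1), (s2, m2) = c1, c2
    return (s1 or s2, m1 + m2)

def smaller(c1, c2):
    (s1, m1), (s2, m2) = c1, c2
    return m1 < m2 or (m1 == m2 and s1 and not s2)

def canonicalize(dbm):
    """Floyd-Warshall closure: entry (a, b) becomes the tightest derivable
    constraint on a - b.  Returns None if the zone is empty (a diagonal
    entry becomes strictly smaller than "<= 0")."""
    n = len(dbm)
    d = [row[:] for row in dbm]
    for k in range(n):
        for a in range(n):
            for b in range(n):
                via_k = add(d[a][k], d[k][b])
                if smaller(via_k, d[a][b]):
                    d[a][b] = via_k
    for a in range(n):
        if smaller(d[a][a], ZERO):
            return None
    return d

def future(dbm):
    """Let time elapse: remove the upper bounds on the clocks,
    i.e. the entries constraining x_i - 0."""
    d = [row[:] for row in dbm]
    for i in range(1, len(d)):
        d[i][0] = NO_BOUND
    return d

# Zone over one clock x1 given by 2 <= x1 <= 5 (index 0 is the "zero" variable;
# entry (a, b) constrains a - b).
dbm = [[ZERO, (LE, -2)],
       [(LE, 5), ZERO]]
print(canonicalize(dbm) is not None)   # True: the zone is non-empty
print(future(dbm)[1][0])               # (True, inf): x1 is no longer bounded above
```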
[ { "math_id": 0, "text": "x\\le c" }, { "math_id": 1, "text": "x\\ge c" }, { "math_id": 2, "text": "x_1\\le x_2+c" }, { "math_id": 3, "text": "x_1\\ge x_2+c" }, { "math_id": 4, "text": "x_1" }, { "math_id": 5, "text": "x_2" }, { "math_id": 6, "text": "c" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "n(n+1)" }, { "math_id": 9, "text": "n^2" }, { "math_id": 10, "text": "(x,y)" }, { "math_id": 11, "text": "x-y" }, { "math_id": 12, "text": "\\mathbb R^n" }, { "math_id": 13, "text": "n=2" }, { "math_id": 14, "text": "(x<iy+c_i)_{i\\in\\mathbb N}" }, { "math_id": 15, "text": "c_i" }, { "math_id": 16, "text": "x_1\\le c" }, { "math_id": 17, "text": "x_1\\ge c" }, { "math_id": 18, "text": "x_1\\le3" }, { "math_id": 19, "text": "x_1\\ge4" }, { "math_id": 20, "text": "x_1\\le 4" }, { "math_id": 21, "text": "x_1\\prec x_2+c" }, { "math_id": 22, "text": "\\prec" }, { "math_id": 23, "text": "x_1\\prec x_2+\\infty" }, { "math_id": 24, "text": "x_1\\le 3" }, { "math_id": 25, "text": "x_2\\ge-3" }, { "math_id": 26, "text": "x_1\\le x_2+6" }, { "math_id": 27, "text": "x_1\\le x_2+5" }, { "math_id": 28, "text": "x_1<x_2+6" }, { "math_id": 29, "text": "x_2\\le 3" }, { "math_id": 30, "text": "x_1\\le x_2+3" }, { "math_id": 31, "text": "x_1\\le 6" }, { "math_id": 32, "text": "x_1\\le 5" }, { "math_id": 33, "text": "x_1<6" }, { "math_id": 34, "text": "x_2\\le x_3+3" }, { "math_id": 35, "text": "x_1\\le x_3+6" }, { "math_id": 36, "text": "x_1\\le x_3+5" }, { "math_id": 37, "text": "x_1< x_3+6" }, { "math_id": 38, "text": "x_1\\le 0+3" }, { "math_id": 39, "text": "0\\le x_2+3" }, { "math_id": 40, "text": "(M,0,+)" }, { "math_id": 41, "text": "\\mathbb{R}" }, { "math_id": 42, "text": "< m" }, { "math_id": 43, "text": "\\le m" }, { "math_id": 44, "text": "(\\le,m)" }, { "math_id": 45, "text": "m\\in M" }, { "math_id": 46, "text": "(<,m)" }, { "math_id": 47, "text": "m" }, { "math_id": 48, "text": "M" }, { "math_id": 49, "text": "(<,\\infty)" }, { "math_id": 50, "text": "(<,-\\infty)" }, { "math_id": 51, "text": "\\{q\\in\\mathbb Q\\mid q\\le\\sqrt2\\}" }, { "math_id": 52, "text": "\\{<,\\le\\}" }, { "math_id": 53, "text": "x< c" }, { "math_id": 54, "text": "(\\prec_1,m_1)" }, { "math_id": 55, "text": "(\\prec_2,m_2)" }, { "math_id": 56, "text": "m_1< m_2" }, { "math_id": 57, "text": "m_1=m_2" }, { "math_id": 58, "text": "\\prec_1" }, { "math_id": 59, "text": "\\prec_2" }, { "math_id": 60, "text": "(\\prec_1,m_1)\\sqcap(\\prec_2,m_2)" }, { "math_id": 61, "text": "x_1+x_2" }, { "math_id": 62, "text": "(\\prec_1,m_1)+(\\prec_2,m_2)" }, { "math_id": 63, "text": "(\\min(\\prec_1,\\prec_2),m_1+m_2)" }, { "math_id": 64, "text": "((\\prec_1,m_1)\\sqcap(\\prec_2,m_2))+(\\prec_3,m_3)" }, { "math_id": 65, "text": "((\\prec_1,m_1)+(\\prec_3,m_3))\\sqcap((\\prec_2,m_2)+(\\prec_3,m_3))" }, { "math_id": 66, "text": "(\\le,0)" }, { "math_id": 67, "text": "x_1,\\dots,x_n" }, { "math_id": 68, "text": "0,x_1,\\dots,x_n" }, { "math_id": 69, "text": "C" }, { "math_id": 70, "text": "R" }, { "math_id": 71, "text": "(C,R)" }, { "math_id": 72, "text": "C\\prec R+m" }, { "math_id": 73, "text": "D((\\prec_{C,R},m_{C,R}))_{C,R\\in\\{0,x_1,\\dots,x_n}\\}" }, { "math_id": 74, "text": "\\mathcal R(D)" }, { "math_id": 75, "text": "\\{(x_1,\\dots,x_n)\\in\\mathbb R\\mid \\bigwedge_{C,R\\in\\{0,x_1,\\dots,x_n\\}}C\\prec_{C,R}R+m_{C,R}\\}" }, { "math_id": 76, "text": "C-R\\prec m" }, { "math_id": 77, "text": "(\\prec,m)" }, { "math_id": 78, "text": "C-R" }, { "math_id": 79, "text": "D" }, { "math_id": 80, "text": "D'" }, { "math_id": 
81, "text": "\\mathcal R(D)\\subseteq\\mathcal R(D')" }, { "math_id": 82, "text": "D\\sqcap D'" }, { "math_id": 83, "text": "(a,b)" }, { "math_id": 84, "text": "(\\prec_{a,b},m_{a,b})\\sqcap(\\prec'_{a,b},m'_{a,b})" }, { "math_id": 85, "text": "\\sqcap" }, { "math_id": 86, "text": "\\mathcal D" }, { "math_id": 87, "text": "Z" }, { "math_id": 88, "text": "\\sqcap_{D\\in\\mathcal D}D" }, { "math_id": 89, "text": "\\mathbb R" }, { "math_id": 90, "text": "\\left(\\begin{array}{ll}(\\le,0)&(<,\\infty)\\\\(<,\\infty)&(\\le,0)\\end{array}\\right)" }, { "math_id": 91, "text": "0\\le 0+0" }, { "math_id": 92, "text": "x_1\\le 0+\\infty" }, { "math_id": 93, "text": "0\\le x_1+\\infty" }, { "math_id": 94, "text": "0\\le0+0" }, { "math_id": 95, "text": "\\left(\\begin{array}{ll}(<,\\infty)&(<,\\infty)\\\\(<,\\infty)&(<,\\infty)\\end{array}\\right)" }, { "math_id": 96, "text": "0<0+\\infty" }, { "math_id": 97, "text": "x_1< x_1+\\infty" }, { "math_id": 98, "text": "\\left(\\begin{array}{ll}(<,-\\infty)&(<,-\\infty)\\\\(<,-\\infty)&(<,-\\infty)\\end{array}\\right)" }, { "math_id": 99, "text": "0<0-\\infty" }, { "math_id": 100, "text": "0< x_1-\\infty" }, { "math_id": 101, "text": "x_1< 0-\\infty" }, { "math_id": 102, "text": "x_1< x_1-\\infty" }, { "math_id": 103, "text": "\\left(\\begin{array}{ll}(<,\\infty)&(<,-\\infty)\\\\(<,\\infty)&(<,\\infty)\\end{array}\\right)" }, { "math_id": 104, "text": "\\left(\\begin{array}{ll}(<,\\infty)&(<,\\infty)\\\\(<,\\infty)&(<,-1)\\end{array}\\right)" }, { "math_id": 105, "text": "x_1< x_1-1 " }, { "math_id": 106, "text": "\\left(\\begin{array}{ll}(<,\\infty)&(<,1)\\\\(\\le,-1)&(<,\\infty)\\end{array}\\right)" }, { "math_id": 107, "text": "0< x_1+1" }, { "math_id": 108, "text": "x_1\\le -1" }, { "math_id": 109, "text": "C,R\\in\\{0,x_1,\\dots,x_n\\}" }, { "math_id": 110, "text": "m_{C,R}=-m_{R,C}" }, { "math_id": 111, "text": "\\prec_{R,C}" }, { "math_id": 112, "text": "\\prec_{C,R}" }, { "math_id": 113, "text": "\\left(\\begin{array}{ll}(<,\\infty)&(\\le,1)\\\\(\\le,-2)&(<,\\infty)\\end{array}\\right)" }, { "math_id": 114, "text": "0\\le x_1+1" }, { "math_id": 115, "text": "x_1\\le -2" }, { "math_id": 116, "text": "-m_{C,R}\\le m_{R,C}" }, { "math_id": 117, "text": "m_{C,R}" }, { "math_id": 118, "text": "-\\infty" }, { "math_id": 119, "text": "\\left(\\begin{array}{lll}(\\le,0)&(<,\\infty)&(<,\\infty)\\\\(<,\\infty)&(\\le,0)&(\\le,3)\\\\(\\le,3)&(<,\\infty)&(\\le,0)\\end{array}\\right)" }, { "math_id": 120, "text": "\\left(\\begin{array}{lll}(\\le,0)&(\\le,6)&(<,\\infty)\\\\(<,\\infty)&(\\le,0)&(\\le,3)\\\\(\\le,3)&(<,\\infty)&(\\le,0)\\end{array}\\right)" }, { "math_id": 121, "text": "((\\prec_{C,R},m_{C,R}))_{c,r\\in\\{0,x_1,\\dots,x_n\\}}" }, { "math_id": 122, "text": "1\\le i,j\\le n" }, { "math_id": 123, "text": "(\\prec_{x_i,0},m_{x_i,0})" }, { "math_id": 124, "text": "(\\prec_{x_j,0},m_{x_j,0})+(\\prec_{x_i,x_j},m_{x_i,x_j})" }, { "math_id": 125, "text": "a,b,c\\in\\{0,x_1,\\dots,x_n\\}" }, { "math_id": 126, "text": "(\\prec_{a,b},m_{a,b})" }, { "math_id": 127, "text": "(\\prec_{c,b},m_{c,b})+(\\prec_{a,c},m_{a,c})" }, { "math_id": 128, "text": "(0,x_1,\\dots,x_n)" }, { "math_id": 129, "text": "G" }, { "math_id": 130, "text": "\\{0,x_1,\\dots,x_n\\}" }, { "math_id": 131, "text": "a" }, { "math_id": 132, "text": "b" }, { "math_id": 133, "text": "Z_1" }, { "math_id": 134, "text": "Z_2" }, { "math_id": 135, "text": "D+D'" }, { "math_id": 136, "text": "D_{a,b}+D'_{a,b}" }, { "math_id": 137, "text": "+" }, { "math_id": 138, "text": "v" }, { "math_id": 139, "text": 
"\\{v\\}" }, { "math_id": 140, "text": "d\\in M" }, { "math_id": 141, "text": "\\vec x=(x_1,\\dots,x_n)" }, { "math_id": 142, "text": "1\\le i\\le n" }, { "math_id": 143, "text": "i" }, { "math_id": 144, "text": "\\vec x" }, { "math_id": 145, "text": "d" }, { "math_id": 146, "text": "(x_1,\\dots,x_{i-1},d,x_{i+1},\\dots,x_n)" }, { "math_id": 147, "text": "d=0" }, { "math_id": 148, "text": "(x_{i},a)" }, { "math_id": 149, "text": "(\\le,d)+D(0,a)" }, { "math_id": 150, "text": "(a,x_{i})" }, { "math_id": 151, "text": "(\\le,-d)+D(0,a)" }, { "math_id": 152, "text": "F=\\{(t,\\dots,t)\\mid t\\in\\mathbb R_{\\ge0}\\}" }, { "math_id": 153, "text": "P=\\{(-t,\\dots,-t)\\mid t\\in\\mathbb R_{\\ge0}\\}" }, { "math_id": 154, "text": "\\vec x=(x_1,\\dots,x_n)\\in\\mathbb R^n" }, { "math_id": 155, "text": "\\{(x_1+t,\\dots,x_n+t)\\mid t\\in\\mathbb R_{\\ge0}\\}=F+\\{\\vec x\\}" }, { "math_id": 156, "text": "P+\\{\\vec x\\}" }, { "math_id": 157, "text": "F+Z" }, { "math_id": 158, "text": "(x_i,0)" }, { "math_id": 159, "text": "(0,x_i)" } ]
https://en.wikipedia.org/wiki?curid=60274693
6027945
Sexual dimorphism measures
Although the subject of sexual dimorphism is not in itself controversial, the measures by which it is assessed differ widely. Most of the measures are used on the assumption that a random variable is considered so that probability distributions should be taken into account. In this review, a series of sexual dimorphism measures are discussed concerning both their definition and the probability law on which they are based. Most of them are sample functions, or statistics, which account for only partial characteristics, for example the mean or expected value, of the distribution involved. Further, the most widely used measure fails to incorporate an inferential support. Introduction. It is widely known that sexual dimorphism is an important component of the morphological variation in biological populations (see, e.g., Klein and Cruz-Uribe, 1984; Oxnard, 1987; Kelley, 1993). In higher Primates, sexual dimorphism is also related to some aspects of the social organization and behavior (Alexander "et al.", 1979; Clutton-Brock, 1985). Thus, it has been observed that the most dimorphic species tend to polygyny and a social organization based on male dominance, whereas in the less dimorphic species, monogamy and family groups are more common. Fleagle "et al." (1980) and Kay (1982), on the other hand, have suggested that the behavior of extinct species can be inferred on the basis of sexual dimorphism and, e.g. Plavcan and van Schaick (1992) think that sex differences in size among primate species reflect processes of an ecological and social nature. Some references on sexual dimorphism regarding human populations can be seen in Lovejoy (1981), Borgognini Tarli and Repetto (1986) and Kappelman (1996). These biological facts do not appear to be controversial. However, they are based on a series of different sexual dimorphism measures, or indices. Sexual dimorphism, in most works, is measured on the assumption that a random variable is being taken into account. This means that there is a law which accounts for the behavior of the whole set of values that compose the domain of the random variable, a law which is called distribution function. Because both studies of sexual dimorphism aim at establishing differences, in some random variable, between sexes and the behavior of the random variable is accounted for by its distribution function, it follows that a sexual dimorphism study should be equivalent to a study whose main purpose is to determine to what extent the two distribution functions - one per sex - overlap (see shaded area in Fig. 1, where two normal distributions are represented). Measures based on sample means. In Borgognini Tarli and Repetto (1986) an account of indices based on sample means can be seen. Perhaps, the most widely used is the quotient, formula_0 where formula_1 is the sample mean of one sex (e.g., male) and formula_2 the corresponding mean of the other. Nonetheless, for instance, formula_3 formula_4 formula_5 have also been proposed. Going over the works where these indices are used, the reader misses any reference to their parametric counterpart (see reference above). In other words, if we suppose that the quotient of two sample means is considered, no work can be found where, in order to make inferences, the way in which the quotient is used as a point estimate of formula_6 is discussed. 
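As a simple numerical illustration, the sample-mean-based indices above can be computed directly; the Python sketch below uses made-up sample means and shows the ratio, its logarithm and the percentage difference side by side.

```python
import math

# Hypothetical sample means for a linear measurement (e.g. in millimetres).
mean_m = 11.2   # males
mean_f = 9.6    # females

ratio = mean_m / mean_f                        # mean ratio
log_ratio = math.log(ratio)                    # log of the ratio
pct_diff = 100 * (mean_m - mean_f) / mean_f    # percentage difference

print(f"ratio = {ratio:.3f}, log-ratio = {log_ratio:.3f}, % difference = {pct_diff:.1f}")
# ratio = 1.167, log-ratio = 0.154, % difference = 16.7
```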
By assuming that differences between populations are the objective to analyze, when quotients of sample means are used it is important to point out that the only feature of these populations that seems to be interesting is the mean parameter. However, a population has also variance, as well as a shape which is defined by its distribution function (notice that, in general, this function depends on parameters such as means or variances). Measures based on something more than sample means. Marini "et al." (1999) have illustrated that it is a good idea to consider something other than sample means when sexual dimorphism is analyzed. Possibly, the main reason is that the intrasexual variability influences both the manifestation of dimorphism and its interpretation. Normal populations. Sample functions. It is likely that, within this type of indices, the one used the most is the well-known statistic with Student's "t" distribution see, for instance, Green, 1989. Marini "et al." (1999) have observed that variability among females seems to be lower than among males, so that it appears advisable to use the form of the Student's "t" statistic with degrees of freedom given by the Welch-Satterthwaite approximation, formula_7 formula_8 where formula_9 are sample variances and sample sizes, respectively. It is important to point out the following: However, in sexual dimorphism analyses, it does not appear reasonably (see Ipiña and Durand, 2000) to assume that two independent random samples have been selected. Rather on the contrary, when we sample we select some random observations - making up one sample - that sometimes correspond to one sex and sometimes to the other. Taking parameters into account. Chakraborty and Majumder (1982) have proposed an index of sexual dimorphism that is the overlapping area - to be precise, its complement - of two normal density functions (see Fig. 1). Therefore, it is a function of four parameters formula_11 (expected values and variances, respectively), and takes the shape of the two normals into account. Inman and Bradley (1989) have discussed this overlapping area as a measure to assess the distance between two normal densities. Regarding inferences, Chakraborty and Majumder proposed a sample function constructed by considering the Laplace-DeMoivre's theorem (an application to binomial laws of the central limit theorem). According to these authors, the variance of such a statistic is, formula_12 where formula_13 is the statistic, and formula_14 (male, female) stand for the estimate of the probability of observing the measurement of an individual of the formula_15 sex in some interval of the real line, and the sample size of the "i" sex, respectively. Notice that this implies that two independent random variables with binomial distributions have to be regarded. One of such variables is "number of individuals of the f sex in a sample of size formula_16 composed of individuals of the f sex", which seems nonsensical. Mixture models. Authors such as Josephson "et al." (1996) believe that the two sexes to be analyzed form a single population with a probabilistic behavior denominated a mixture of two normal populations. 
Thus, if formula_17 is a random variable which is normally distributed among the females of a population and likewise this variable is normally distributed among the males of the population, then, formula_18 is the density of the mixture with two normal components, where formula_19 are the normal densities and the mixing proportions of both sexes, respectively. See an example in Fig. 2 where the thicker curve represents the mixture whereas the thinner curves are the formula_20 functions. It is from a population modelled like this that a random sample with individuals of both sexes can be selected. Note that on this sample tests which are based on the normal assumption cannot be applied since, in a mixture of two normal components, formula_20 is not a normal density. Josephson "et al." limited themselves to considering two normal mixtures with the same component variances and mixing proportions. As a consequence, their proposal to measure sexual dimorphism is the difference between the mean parameters of the two normals involved. In estimating these central parameters, the procedure used by Josephson "et al." is the one of Pearson's moments. Nowadays, the EM expectation maximization algorithm (see McLachlan and Basford, 1988) and the MCMC Markov chain Monte Carlo Bayesian procedure (see Gilks "et al.", 1996) are the two competitors for estimating mixture parameters. Possibly the main difference between considering two independent normal populations and a mixture model of two normal components is in the mixing proportions, which is the same as saying that in the two independent normal population model the interaction between sexes is ignored. This, in turn implies that probabilistic properties change (see Ipiña and Durand, 2000). The MI measure. Ipiña and Durand (2000, 2004) have proposed a measure of sexual dimorphism called formula_21. This proposal computes the overlapping area between the formula_22 and formula_23 functions, which represent the contribution of each sex to the two normal components mixture (see shaded area in Fig. 2). Thus, formula_21 can be written, formula_24 formula_25 being the real line. The smaller the overlapping area the greater the gap between the two functions formula_22 and formula_23, in which case the sexual dimorphism is greater. Obviously, this index is a function of the five parameters that characterize a mixture of two normal components formula_26. Its range is in the interval formula_27, and the interested reader can see, in the work of the authors who proposed the index, the way in which an interval estimate is constructed. Measures based on non-parametric methods. Marini "et al." (1999) have suggested the Kolmogorov-Smirnov distance as a measure of sexual dimorphism. The authors use the following form of the statistic, formula_28 with formula_29 being sample cumulative distributions corresponding to two independent random samples. Such a distance has the advantage of being applicable whatever the form of the random variable distributions concerned, yet they should be continuous. The use of this distance assumes that two populations are involved. Further, the Kolmogorov-Smirnov distance is a sample function whose aim is to test that the two samples under analysis have been selected from a single distribution. If one accepts the null hypothesis, then there is not sexual dimorphism; otherwise, there is. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
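To illustrate how the MI measure can be evaluated in practice, the following Python sketch approximates the overlapping area between the two weighted normal components by numerical integration on a grid. The mixture parameters are made-up example values, and grid-based midpoint integration is only one possible way to approximate the integral.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mi_overlap(mu1, sigma1, pi1, mu2, sigma2, steps=20000):
    """Approximate MI = integral over the real line of min(pi1*f1, (1-pi1)*f2)."""
    pi2 = 1.0 - pi1
    lo = min(mu1 - 8 * sigma1, mu2 - 8 * sigma2)
    hi = max(mu1 + 8 * sigma1, mu2 + 8 * sigma2)
    dx = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        x = lo + (k + 0.5) * dx
        total += min(pi1 * normal_pdf(x, mu1, sigma1),
                     pi2 * normal_pdf(x, mu2, sigma2)) * dx
    return total

# Hypothetical mixture: one sex N(160, 6^2) with weight 0.5, the other N(172, 7^2).
print(round(mi_overlap(160.0, 6.0, 0.5, 172.0, 7.0), 4))
# about 0.177: the smaller the overlap, the more marked the sexual dimorphism
```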
[ { "math_id": 0, "text": "\\frac {\\bar{X}_m}{\\bar{X}_f} ," }, { "math_id": 1, "text": "\\bar{X}_m" }, { "math_id": 2, "text": "\\bar{X}_f" }, { "math_id": 3, "text": "\\operatorname{log}\\frac {\\bar{X}_m}{\\bar{X}_f} ," }, { "math_id": 4, "text": "100\\frac {\\bar{X}_m - \\bar{X}_f}{\\bar{X}_f} ," }, { "math_id": 5, "text": "100\\frac {\\bar{X}_m - \\bar{X}_f}{\\bar{X}_f + \\bar{X}_f} ," }, { "math_id": 6, "text": "\\frac {\\mu_m}{\\mu_f} ," }, { "math_id": 7, "text": "T = \\frac {\\bar{X}_1 - \\bar{X}_2 - (\\mu_1 - \\mu_2)} {\\sqrt{\\frac{S^2_1}{n_1} + \\frac{S^2_2}{n_2}}} : t_\\nu ," }, { "math_id": 8, "text": "\\nu = \\frac {(\\frac{S^2_1}{n_1} + \\frac{S^2_2}{n_2})^2} {\\frac{S^2_1}{n_1(n_1 - 1)} + \\frac{S^2_2}{n_2(n_2 - 1)}} ," }, { "math_id": 9, "text": "S^2_i, n_i, i=1,2" }, { "math_id": 10, "text": "\\mu_0 = \\mu_1 - \\mu_2." }, { "math_id": 11, "text": "\\mu_i,\\sigma^2_i, i=1,2" }, { "math_id": 12, "text": "\\operatorname{var}(\\widehat{D}) = \\frac {\\widehat{p}_m(1 - \\widehat{p}_m)}{n_m} + \\frac {\\widehat{p}_f(1 - \\widehat{p}_f)}{n_f} ," }, { "math_id": 13, "text": "\\widehat{D}" }, { "math_id": 14, "text": "\\widehat{p}_i, n_i, i=m,f" }, { "math_id": 15, "text": "i" }, { "math_id": 16, "text": "n_f" }, { "math_id": 17, "text": "X" }, { "math_id": 18, "text": "f(x) = \\sum_{i=1}^n \\pi_if_i(x), -\\infty < x < \\infty ," }, { "math_id": 19, "text": "f_i, \\pi_i, i=1,2" }, { "math_id": 20, "text": "\\pi_if_i" }, { "math_id": 21, "text": "MI" }, { "math_id": 22, "text": "\\pi_1f_1" }, { "math_id": 23, "text": "\\pi_2f_2" }, { "math_id": 24, "text": "MI = \\int_R \\operatorname{min}[\\pi_1f_1(x), (1 - \\pi_1)f_2(x)]\\,dx ," }, { "math_id": 25, "text": "R" }, { "math_id": 26, "text": "(\\mu_i, \\sigma^2_i, \\pi_1, i=1,2)" }, { "math_id": 27, "text": "(0, 0.5]" }, { "math_id": 28, "text": "\\operatorname{max}_x|F_1(x) - F_2(x)| ," }, { "math_id": 29, "text": "F_i, i=1,2" } ]
https://en.wikipedia.org/wiki?curid=6027945
60280819
Clock (model checking)
In model checking, a subfield of computer science, a clock is a mathematical object used to model time. More precisely, a clock measures how much time has passed since a particular event occurred; in this sense, a clock is an abstraction of a stopwatch. In a model of some particular program, the value of a clock may either be the time since the program was started, or the time since a particular event occurred in the program. Such clocks are used in the definition of timed automata, signal automata, timed propositional temporal logic and clock temporal logic. They are also used in programs such as UPPAAL which implement timed automata. Generally, the model of a system uses many clocks; multiple clocks are required in order to track a bounded number of events. All of those clocks are synchronized: the difference in value between two fixed clocks is constant until one of them is restarted. In the language of electronics, this means that the clocks' jitter is null. Example. Let us assume that we want to model an elevator in a building with ten floors. Our model may have formula_0 clocks formula_1, such that the value of the clock formula_2 is the time for which someone has been waiting for the elevator at floor formula_3. This clock is started when someone calls the elevator on floor formula_3 (provided the elevator has not already been called on this floor since the last time it visited that floor). This clock can be turned off when the elevator arrives at floor formula_3. In this example, we actually need ten distinct clocks because we need to track ten independent events. Another clock formula_4 may be used to check how much time the elevator has spent at a particular floor. A model of this elevator can then use those clocks to assert whether the elevator's program satisfies properties such as "assuming the elevator is not kept on a floor for more than fifteen seconds, then no one has to wait for the elevator for more than three minutes". In order to check whether this statement holds, it suffices to check that, in every run of the model in which the clock formula_4 is always smaller than fifteen seconds, each clock formula_2 is turned off before it reaches three minutes. Definition. Formally, a set formula_5 of clocks is simply a finite set. Each element of a set of clocks is called a clock. Intuitively, a clock is similar to a variable in first-order logic: it is an element which may be used in a logical formula and which may take a number of different values. Clock valuations. A clock valuation or clock interpretation formula_6 over formula_7 is usually defined as a function from formula_5 to the set of non-negative reals. Equivalently, a valuation can be considered as a point in formula_8. The initial assignment formula_9 is the constant function sending each clock to 0. Intuitively, it represents the initial time of the program, where all clocks are started simultaneously. Given a clock assignment formula_6 and a real formula_10, formula_11 denotes the clock assignment sending each clock formula_12 to formula_13. Intuitively, it represents the valuation formula_6 after formula_14 time units have passed. Given a subset formula_15 of clocks, formula_16 denotes the assignment similar to formula_6 in which the clocks of formula_17 are reset. Formally, formula_16 sends each clock formula_18 to 0 and each clock formula_19 to formula_20. Inactive clocks. The program UPPAAL introduces the notion of inactive clocks. 
A clock is inactive at some time if there is no possible future in which the clock's value is checked without the clock being reset first. In our example above, the clock formula_2 is considered to be inactive when the elevator arrives at floor formula_3, and it remains inactive until someone calls the elevator at floor formula_3. When allowing for inactive clocks, a valuation may map a clock formula_21 to some special value formula_22 to indicate that it is inactive. If formula_23 then formula_24 also equals formula_22. Clock constraint. An atomic clock constraint is simply a term of the form formula_25, where formula_21 is a clock, formula_26 is a comparison operator, such as &lt;, ≤, =, ≥, or &gt;, and formula_27 is an integral constant. In our previous example, we may use the atomic clock constraint formula_28 to state that the person at floor formula_3 has waited for no more than three minutes, and formula_29 to state that the elevator has stayed at some floor for more than fifteen seconds. A valuation formula_6 satisfies an atomic clock constraint formula_25 if and only if formula_30. A clock constraint is either a finite conjunction of atomic clock constraints or the constant "true" (which can be considered as the empty conjunction). A valuation formula_6 satisfies a clock constraint formula_31 if it satisfies each atomic clock constraint formula_32. Diagonal constraint. Depending on the context, an atomic clock constraint may also be of the form formula_33. Such a constraint is called a diagonal constraint, because formula_34 defines a diagonal line in formula_35. Allowing diagonal constraints may decrease the size of a formula or of an automaton used to describe a system. However, the complexity of the algorithms may increase when diagonal constraints are allowed. In most systems using clocks, allowing diagonal constraints does not increase the expressivity of the logic. We now explain how to encode such constraints using Boolean variables and non-diagonal constraints. A diagonal constraint formula_33 may be simulated using non-diagonal constraints as follows. When formula_36 is reset, check whether formula_37 holds or not. Record this information in a Boolean variable formula_38 and replace formula_33 by this variable. When formula_39 is reset, set formula_38 to true if formula_26 is &lt; or ≤, or if formula_26 is = and formula_40. The way to encode a Boolean variable depends on the system which uses the clock. For example, UPPAAL supports Boolean variables directly. Timed automata and signal automata can encode a Boolean value in their locations. In clock temporal logic over timed words, the Boolean variable may be encoded using a new clock formula_41, whose value is 0 if and only if formula_38 is false; that is, formula_41 is reset as long as formula_38 is supposed to be false. In timed propositional temporal logic, the formula formula_42, which restarts formula_39 and then evaluates formula_43, can be replaced by the formula formula_44, where formula_45 and formula_46 are copies of the formula formula_43 in which formula_33 is replaced by the constants true and false respectively. Sets defined by clock constraints. A clock constraint defines a set of valuations. Two kinds of such sets are considered in the literature. A zone is a non-empty set of valuations satisfying a clock constraint. Zones and clock constraints are implemented using difference bound matrices. A model formula_47 uses a finite number of constants in its clock constraints; let formula_48 be the greatest constant used. 
A region is a non-empty zone in which no constant greater than formula_48 is used and which, furthermore, is minimal for inclusion.
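The basic operations on clock valuations described above (delay and reset) and the satisfaction of atomic clock constraints are straightforward to implement. The following Python sketch is one possible encoding, using a dictionary from clock names to values and None as the special inactive value; the clock names follow the elevator example.

```python
import operator

OPS = {"<": operator.lt, "<=": operator.le, "==": operator.eq,
       ">=": operator.ge, ">": operator.gt}

def initial(clocks):
    """The initial valuation: every clock has value 0."""
    return {x: 0.0 for x in clocks}

def delay(v, t):
    """The valuation v + t: every active clock advances by t,
    inactive clocks (None) stay inactive."""
    return {x: (val + t if val is not None else None) for x, val in v.items()}

def reset(v, r):
    """The valuation v[r -> 0]: clocks in r are reset to 0."""
    return {x: (0.0 if x in r else val) for x, val in v.items()}

def satisfies(v, constraint):
    """Check a conjunction of atomic constraints (clock, operator, constant)."""
    return all(OPS[op](v[x], c) for (x, op, c) in constraint)

# Elevator example: c3 tracks the wait at floor 3, s the time spent at a floor.
v = initial(["c3", "s"])
v = delay(v, 100.0)            # 100 seconds pass
v = reset(v, {"s"})            # the elevator reaches a new floor: s restarts
v = delay(v, 12.0)
print(satisfies(v, [("c3", "<=", 180), ("s", "<=", 15)]))   # True
```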
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "c_{0},\\dots,c_{9}" }, { "math_id": 2, "text": "c_i" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "s" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "\\nu" }, { "math_id": 7, "text": "X=\\{x_1,\\dots,x_n\\}" }, { "math_id": 8, "text": "\\mathbb R_{\\ge0}^n" }, { "math_id": 9, "text": "\\nu_0" }, { "math_id": 10, "text": "t\\ge0" }, { "math_id": 11, "text": "\\nu+t" }, { "math_id": 12, "text": "x\\in C" }, { "math_id": 13, "text": "\\nu(x)+t" }, { "math_id": 14, "text": "t" }, { "math_id": 15, "text": "r\\subseteq C" }, { "math_id": 16, "text": "\\nu[r\\rightarrow 0]" }, { "math_id": 17, "text": "r" }, { "math_id": 18, "text": "x\\in r" }, { "math_id": 19, "text": "x\\not\\in r" }, { "math_id": 20, "text": "\\nu(x)" }, { "math_id": 21, "text": "x" }, { "math_id": 22, "text": "\\bot" }, { "math_id": 23, "text": "\\nu(x)=\\bot" }, { "math_id": 24, "text": "(\\nu+t)(x)" }, { "math_id": 25, "text": "x\\sim c" }, { "math_id": 26, "text": "\\sim" }, { "math_id": 27, "text": "c\\in\\mathbb N" }, { "math_id": 28, "text": "c_i\\le180" }, { "math_id": 29, "text": "s>15" }, { "math_id": 30, "text": "\\nu(x)\\sim c" }, { "math_id": 31, "text": "\\bigwedge_{i=1}^nx_i\\sim_ic_i" }, { "math_id": 32, "text": "x_i\\sim_ic_i" }, { "math_id": 33, "text": "x_i\\sim x_j+c" }, { "math_id": 34, "text": "x_1=x_2+c" }, { "math_id": 35, "text": "\\mathbb R_{\\ge0}^2" }, { "math_id": 36, "text": "x_j" }, { "math_id": 37, "text": "x_i\\sim c" }, { "math_id": 38, "text": "b_{i,j,c}" }, { "math_id": 39, "text": "x_i" }, { "math_id": 40, "text": "c=0" }, { "math_id": 41, "text": "x_{i,j,c}" }, { "math_id": 42, "text": "x_i.\\phi" }, { "math_id": 43, "text": "\\phi" }, { "math_id": 44, "text": "x_i.((x_i\\sim x_j+c\\implies\\phi_\\top)\\land(\\neg x_i\\sim x_j+c\\implies\\phi\\bot))" }, { "math_id": 45, "text": "\\phi_\\top" }, { "math_id": 46, "text": "\\phi_\\bot" }, { "math_id": 47, "text": "M" }, { "math_id": 48, "text": "K" } ]
https://en.wikipedia.org/wiki?curid=60280819
6028339
Kendall's W
Rank correlation statistic used for inter-rater agreement Kendall's "W" (also known as Kendall's coefficient of concordance) is a non-parametric statistic for rank correlation. It is a normalization of the statistic of the Friedman test, and can be used for assessing agreement among raters and in particular inter-rater reliability. Kendall's "W" ranges from 0 (no agreement) to 1 (complete agreement). Suppose, for instance, that a number of people have been asked to rank a list of political concerns, from the most important to the least important. Kendall's "W" can be calculated from these data. If the test statistic "W" is 1, then all the survey respondents have been unanimous, and each respondent has assigned the same order to the list of concerns. If "W" is 0, then there is no overall trend of agreement among the respondents, and their responses may be regarded as essentially random. Intermediate values of "W" indicate a greater or lesser degree of unanimity among the various responses. While tests using the standard Pearson correlation coefficient assume normally distributed values and compare two sequences of outcomes simultaneously, Kendall's "W" makes no assumptions regarding the nature of the probability distribution and can handle any number of distinct outcomes. Steps of Kendall's W. Suppose that object "i" is given the rank "ri,j" by judge number "j", where there are in total "n" objects and "m" judges. Then the total rank given to object "i" is formula_0 and the mean value of these total ranks is formula_1 The sum of squared deviations, "S", is defined as formula_2 and then Kendall's "W" is defined as formula_3 If the test statistic "W" is 1, then all the judges or survey respondents have been unanimous, and each judge or respondent has assigned the same order to the list of objects or concerns. If "W" is 0, then there is no overall trend of agreement among the respondents, and their responses may be regarded as essentially random. Intermediate values of "W" indicate a greater or lesser degree of unanimity among the various judges or respondents. Kendall and Gibbons (1990) also show "W" is linearly related to the mean value of the Spearman's rank correlation coefficients between all formula_4 possible pairs of rankings between judges formula_5 Incomplete Blocks. When the judges evaluate only some subset of the "n" objects, and when the correspondent block design is a (n, m, r, p, λ)-design (note the different notation). In other words, when Then Kendall's "W" is defined as formula_8 If formula_9 and formula_10 so that each judge ranks all "n" objects, the formula above is equivalent to the original one. Correction for Ties. When tied values occur, they are each given the average of the ranks that would have been given had no ties occurred. For example, the data set {80,76,34,80,73,80} has values of 80 tied for 4th, 5th, and 6th place; since the mean of {4,5,6} = 5, ranks would be assigned to the raw data values as follows: {5,3,1,5,2,5}. The effect of ties is to reduce the value of "W"; however, this effect is small unless there are a large number of ties. To correct for ties, assign ranks to tied values as above and compute the correction factors formula_11 where "ti" is the number of tied ranks in the "i"th group of tied ranks, (where a group is a set of values having constant (tied) rank,) and "gj" is the number of groups of ties in the set of ranks (ranging from 1 to "n") for judge "j". Thus, "Tj" is the correction factor required for the set of ranks for judge "j", i.e. 
the "j"th set of ranks. Note that if there are no tied ranks for judge "j", "Tj" equals 0. With the correction for ties, the formula for "W" becomes formula_12 where "Ri" is the sum of the ranks for object "i", and formula_13 is the sum of the values of "Tj" over all "m" sets of ranks. Steps of Weighted Kendall's W. In some cases, the importance of the raters (experts) might not be the same as each other. In this case, the Weighted Kendall's W should be used. Suppose that object formula_14 is given the rank formula_15 by judge number formula_16, where there are in total formula_17 objects and formula_18 judges. Also, the weight of judge formula_16 is shown by formula_19 (in real-world situation, the importance of each rater can be different). Indeed, the weight of judges is formula_20. Then, the total rank given to object formula_14 is formula_21 and the mean value of these total ranks is, formula_22 The sum of squared deviations, formula_23, is defined as, formula_24 and then Weighted Kendall's "W" is defined as, formula_25 The above formula is suitable when we do not have any tie rank. Correction for Ties. In case of tie rank, we need to consider it in the above formula. To correct for ties, we should compute the correction factors, formula_26 where formula_27 represents the number of tie ranks in judge formula_16 for object formula_14. formula_28 shows the total number of ties in judge formula_16. With the correction for ties, the formula for Weighted Kendall's "W" becomes, formula_29 If the weights of the raters are equal (the distribution of the weights is uniform), the value of Weighted Kendall's "W and Kendall's "W are equal. Significance Tests. In the case of complete ranks, a commonly used significance test for "W" against a null hypothesis of no agreement (i.e. random rankings) is given by Kendall and Gibbons (1990) formula_30 Where the test statistic takes a chi-squared distribution with formula_31 degrees of freedom. In the case of incomplete rankings (see above), this becomes formula_32 Where again, there are formula_31 degrees of freedom. Legendre compared via simulation the power of the chi-square and permutation testing approaches to determining significance for Kendall's "W". Results indicated the chi-square method was overly conservative compared to a permutation test when formula_33. Marozzi extended this by also considering the "F" test, as proposed in the original publication introducing the "W" statistic by Kendall &amp; Babington Smith (1939): formula_34 Where the test statistic follows an F distribution with formula_35 and formula_36 degrees of freedom. Marozzi found the "F" test performs approximately as well as the permutation test method, and may be preferred to when formula_18 is small, as it is computationally simpler. Software. Kendall's W and Weighted Kendall's W are implemented in MATLAB, SPSS, R, and other statistical software packages. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
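The computation described above is short enough to write out directly. The following Python sketch computes the tie-corrected W from a table of raw scores (one row per judge), together with the chi-squared statistic m(n−1)W used in the significance test; the ranking helper and the example scores are illustrative choices.

```python
def rank_with_ties(scores):
    """Average ranks (1 = lowest score), ties receiving the mean of their ranks."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kendalls_w(score_rows):
    """score_rows: one list of raw scores per judge, all over the same n objects."""
    m, n = len(score_rows), len(score_rows[0])
    rank_rows = [rank_with_ties(row) for row in score_rows]
    R = [sum(row[i] for row in rank_rows) for i in range(n)]   # total rank per object
    # Tie correction T_j = sum of (t^3 - t) over the tie groups of judge j.
    T = []
    for row in rank_rows:
        counts = {}
        for r in row:
            counts[r] = counts.get(r, 0) + 1
        T.append(sum(t ** 3 - t for t in counts.values() if t > 1))
    w = (12 * sum(r * r for r in R) - 3 * m ** 2 * n * (n + 1) ** 2) / \
        (m ** 2 * n * (n ** 2 - 1) - m * sum(T))
    chi2 = m * (n - 1) * w              # compare to chi-squared with n-1 d.o.f.
    return w, chi2

# Three judges scoring four objects (higher score = higher rank).
scores = [[1, 2, 3, 4],
          [1, 3, 2, 4],
          [2, 1, 3, 4]]
w, chi2 = kendalls_w(scores)
print(round(w, 3), round(chi2, 2))      # 0.778 7.0
```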
[ { "math_id": 0, "text": "R_i=\\sum_{j=1}^m r_{i,j} ," }, { "math_id": 1, "text": "\\bar R= \\frac{1}{n} \\sum_{i=1}^n R_i." }, { "math_id": 2, "text": "S=\\sum_{i=1}^n (R_i- \\bar R)^2 ," }, { "math_id": 3, "text": "W=\\frac{12 S}{m^2(n^3-n)}." }, { "math_id": 4, "text": "m \\choose{2} " }, { "math_id": 5, "text": "\\bar{r}_s = \\frac{mW-1}{m-1} " }, { "math_id": 6, "text": "p < n" }, { "math_id": 7, "text": "\\lambda \\ge 1" }, { "math_id": 8, "text": "W=\\frac{12 \\sum_{i=1}^n (R_i^2) - 3r^2n\\left(p+1\\right)^2}{\\lambda^2n(n^2-1)}." }, { "math_id": 9, "text": "p = n" }, { "math_id": 10, "text": "\\lambda = r = m" }, { "math_id": 11, "text": "T_j=\\sum_{i=1}^{g_j} (t_i^3-t_i)," }, { "math_id": 12, "text": "W=\\frac{12\\sum_{i=1}^n (R_i^2)-3m^2n(n+1)^2}{m^2n(n^2-1)-m\\sum_{j=1}^m (T_j)}," }, { "math_id": 13, "text": "\\sum_{j=1}^m (T_j)" }, { "math_id": 14, "text": "i" }, { "math_id": 15, "text": "r_{ij}" }, { "math_id": 16, "text": "j" }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "m" }, { "math_id": 19, "text": "\\vartheta_{j}" }, { "math_id": 20, "text": "\\vartheta_{j} (j=1,2,...,m)" }, { "math_id": 21, "text": "R_i=\\sum_{j=1}^m \\vartheta_{j} r_{ij}" }, { "math_id": 22, "text": "\\bar R= \\frac{1}{n} \\sum_{i=1}^n R_i" }, { "math_id": 23, "text": "S" }, { "math_id": 24, "text": "S=\\sum_{i=1}^n (R_i- \\bar R)^2 " }, { "math_id": 25, "text": "W_{w}=\\frac{12 S}{(n^3-n)}" }, { "math_id": 26, "text": "T_j=\\sum_{i=1}^{n} (t_{ij}^3-t_{ij}) \\;\\;\\;\\;\\;\\;\\; \\forall j" }, { "math_id": 27, "text": "t_{ij}" }, { "math_id": 28, "text": "T_j" }, { "math_id": 29, "text": "W_{w}=\\frac{12 S}{(n^3-n)-\\sum_{j=1}^m \\vartheta_{j} T_j}" }, { "math_id": 30, "text": "\\chi^2 =m(n-1)W " }, { "math_id": 31, "text": "df = n-1 " }, { "math_id": 32, "text": "\\chi^2 =\\frac{\\lambda(n^2-1)}{k+1}W " }, { "math_id": 33, "text": "m<20" }, { "math_id": 34, "text": "F=\\frac{W(m-1)}{1-W}" }, { "math_id": 35, "text": "v_1=n-1-(2/m)" }, { "math_id": 36, "text": "v_2=(m-1)v_1" } ]
https://en.wikipedia.org/wiki?curid=6028339
6029210
Phased array ultrasonics
Testing method Phased array ultrasonics (PA) is an advanced method of ultrasonic testing that has applications in medical imaging and industrial nondestructive testing. Common applications are to noninvasively examine the heart or to find flaws in manufactured materials such as welds. Single-element (non-phased array) probes, known technically as "monolithic" probes, emit a beam in a fixed direction. To test or interrogate a large volume of material, a conventional probe must be physically scanned (moved or turned) to sweep the beam through the area of interest. In contrast, the beam from a phased array probe can be focused and swept electronically without moving the probe. The beam is controllable because a phased array probe is made up of multiple small elements, each of which can be pulsed individually at a computer-calculated timing. The term "phased" refers to the timing, and the term "array" refers to the multiple elements. Phased array ultrasonic testing is based on principles of wave physics, which also have applications in fields such as optics and electromagnetic antennae. Principle of operation. The PA probe consists of many small ultrasonic transducers, each of which can be pulsed independently. By varying the timing, for instance by making the pulse from each transducer progressively delayed going up the line, a pattern of constructive interference is set up that results in radiating a quasi-plane ultrasonic beam at a set angle depending on the progressive time delay. In other words, by changing the progressive time delay the beam can be steered electronically. It can be swept like a search-light through the tissue or object being examined, and the data from multiple beams are put together to make a visual image showing a slice through the object. Use in industry. Phased array is widely used for nondestructive testing (NDT) in several industrial sectors, such as construction, pipelines, and power generation. This method is an advanced NDT method that is used to detect discontinuities i.e. cracks or flaws and thereby determine component quality. Due to the possibility to control parameters such as beam angle and focal distance, this method is very efficient regarding the defect detection and speed of testing. Apart from detecting flaws in components, phased array can also be used for wall thickness measurements in conjunction with corrosion testing. Phased array can be used for the following industrial purposes: Standards. European Committee for Standardization (CEN) References. &lt;templatestyles src="Reflist/styles.css" /&gt;
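As a rough numerical illustration of the steering principle, the sketch below computes the progressive firing delays for a linear array from the standard geometric relation (each element fires n·p·sin(θ)/c later than the first, for element pitch p and sound velocity c), together with two standard approximate beam relations: focal spot size ≈ Fλ/A and near-field length ≈ A²/(4λ), where F is the focal distance, A the active aperture and λ the wavelength. The element count, pitch and material velocity are made-up example values.

```python
import math

def steering_delays(num_elements, pitch_m, angle_deg, velocity_m_s):
    """Progressive firing delays (in seconds) steering the beam to the given angle."""
    dt = pitch_m * math.sin(math.radians(angle_deg)) / velocity_m_s
    return [n * dt for n in range(num_elements)]

def focal_spot_size(focal_dist_m, wavelength_m, aperture_m):
    return focal_dist_m * wavelength_m / aperture_m        # ~ F * lambda / A

def near_field_length(aperture_m, wavelength_m):
    return aperture_m ** 2 / (4 * wavelength_m)            # ~ A^2 / (4 * lambda)

# Hypothetical 16-element probe, 0.6 mm pitch, 5 MHz longitudinal waves in steel.
c = 5920.0                                # m/s
wavelength = c / 5e6                      # ~ 1.18 mm
aperture = 16 * 0.6e-3                    # ~ 9.6 mm active aperture
delays = steering_delays(16, 0.6e-3, 30.0, c)
print(f"delay step: {delays[1] * 1e9:.1f} ns")                                  # ~ 50.7 ns
print(f"near field: {near_field_length(aperture, wavelength) * 1e3:.1f} mm")    # ~ 19.5 mm
print(f"spot at 15 mm focus: {focal_spot_size(15e-3, wavelength, aperture) * 1e3:.2f} mm")
```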
[ { "math_id": 0, "text": "\\text{Focal spot size} = F\\lambda/A\n\n" }, { "math_id": 1, "text": "\\text{Near Field} = A^2/4\\lambda\n\n" } ]
https://en.wikipedia.org/wiki?curid=6029210
60293153
Timed propositional temporal logic
In model checking, a field of computer science, timed propositional temporal logic (TPTL) is an extension of propositional linear temporal logic (LTL) in which variables are introduced to measure times between two events. For example, while LTL allows one to state that each event "p" is eventually followed by an event "q", TPTL furthermore allows one to give a time limit for "q" to occur. Syntax. The future fragment of TPTL is defined similarly to linear temporal logic, in which, furthermore, clock variables can be introduced and compared to constants. Formally, given a set formula_0 of clocks, TPTL is built up from: Furthermore, for formula_8 an interval, formula_9 is considered as an abbreviation for formula_10; and similarly for every other kind of interval. The logic TPTL+Past is built from the future fragment of TPTL and also contains Note that the next operator N is not considered to be a part of TPTL syntax. It will instead be defined from other operators. A closed formula is a formula over an empty set of clocks. Models. Let formula_11, which intuitively represents a set of times. Let formula_12 be a function that associates to each moment formula_13 a set of propositions from "AP". A model of a TPTL formula is such a function formula_14. Usually, formula_14 is either a timed word or a signal. In those cases, formula_15 is either a discrete subset or an interval containing 0. Semantics. Let formula_15 and formula_14 be as above. Let formula_0 be a set of clocks. Let formula_16 (a "clock valuation" over formula_0). We are now going to explain what it means for a TPTL formula formula_6 to hold at time formula_17 for a valuation formula_18. This is denoted by formula_19. Let formula_6 and formula_20 be two formulas over the set of clocks formula_0, formula_21 a formula over the set of clocks formula_22, formula_2, formula_23, formula_3 a number and formula_4 a comparison operator such as &lt;, ≤, =, ≥ or &gt;: We first consider formulas whose main operator also belongs to LTL: Metric temporal logic. Metric temporal logic is another extension of LTL that allows measurement of time. Instead of adding variables, it adds an infinity of operators formula_42 and formula_43 for formula_44 an interval of non-negative numbers. The semantics of the formula formula_45 at some time formula_17 is essentially the same as the semantics of the formula formula_46, with the constraint that the time formula_47 at which formula_20 must hold occurs in the interval formula_48. TPTL is at least as expressive as MTL. Indeed, the MTL formula formula_49 is equivalent to the TPTL formula formula_50 where formula_51 is a new variable. It follows that any other operator introduced in the page MTL, such as formula_52 and formula_53, can also be defined as TPTL formulas. TPTL is strictly more expressive than MTL, both over timed words and over signals. Over timed words, no MTL formula is equivalent to formula_54. Over signals, there is no MTL formula equivalent to formula_55, which states that the last atomic proposition before time point 1 is an formula_56. Comparison with LTL. A standard (untimed) infinite word formula_57 is a function from formula_58 to formula_59. We can consider such a word using the set of times formula_60, and the function formula_61. In this case, for formula_6 an arbitrary LTL formula, formula_62 if and only if formula_63, where formula_6 is considered as a TPTL formula with non-strict operators, and formula_18 is the only function defined on the empty set. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
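As a worked illustration of the bounded-response property mentioned in the introduction, the following pair of formulas contrasts the untimed LTL requirement with a TPTL version using the freeze-quantifier style of this article; the 5-time-unit deadline is an illustrative assumption, not a value from the article.

```latex
% LTL: every p is eventually followed by some q (no deadline can be expressed).
\Box\,(p \implies \Diamond q)
% TPTL sketch: freeze clock x when p holds; q must then occur while x \le 5.
\Box\,(p \implies x.\Diamond(q \land x \le 5))
```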
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "x\\sim c" }, { "math_id": 2, "text": "x\\in X" }, { "math_id": 3, "text": "c" }, { "math_id": 4, "text": "\\sim" }, { "math_id": 5, "text": "x.\\phi" }, { "math_id": 6, "text": "\\phi" }, { "math_id": 7, "text": "X\\cup\\{x\\}" }, { "math_id": 8, "text": "I=(a,b)" }, { "math_id": 9, "text": "x\\in I" }, { "math_id": 10, "text": "x>a\\land x<b" }, { "math_id": 11, "text": "T\\subseteq\\mathbb R_+" }, { "math_id": 12, "text": "\\gamma: T\\to \\mathcal P(AP)" }, { "math_id": 13, "text": "t\\in T" }, { "math_id": 14, "text": "\\gamma" }, { "math_id": 15, "text": "T" }, { "math_id": 16, "text": "\\nu:X\\to\\mathbb R_{\\ge0}" }, { "math_id": 17, "text": "t" }, { "math_id": 18, "text": "\\nu" }, { "math_id": 19, "text": "\\gamma,t,\\nu\\models\\phi" }, { "math_id": 20, "text": "\\psi" }, { "math_id": 21, "text": "\\xi" }, { "math_id": 22, "text": "X\\cup\\{y\\}" }, { "math_id": 23, "text": "l\\in\\mathtt{AP}" }, { "math_id": 24, "text": "\\gamma,t,\\nu\\models l" }, { "math_id": 25, "text": "l\\in\\gamma(t)" }, { "math_id": 26, "text": "\\gamma,t,\\nu\\models\\neg\\phi" }, { "math_id": 27, "text": "\\gamma,t,\\nu\\not\\models\\phi" }, { "math_id": 28, "text": "\\gamma,t,\\nu\\models\\phi\\lor\\psi" }, { "math_id": 29, "text": "\\gamma,t,\\nu\\models\\psi" }, { "math_id": 30, "text": "\\gamma,t,\\nu\\models\\phi\\mathbin\\mathcal U\\psi" }, { "math_id": 31, "text": "t<t''" }, { "math_id": 32, "text": "\\gamma,t'',\\nu\\models\\psi" }, { "math_id": 33, "text": "t< t'< t''" }, { "math_id": 34, "text": "\\gamma,t',\\nu\\models\\phi" }, { "math_id": 35, "text": "\\gamma,t,\\nu\\models\\phi\\mathbin\\mathcal S\\psi" }, { "math_id": 36, "text": "t''< t" }, { "math_id": 37, "text": "t''<t'<t" }, { "math_id": 38, "text": "\\gamma,t,\\nu\\models x\\sim c" }, { "math_id": 39, "text": "t-\\nu(y)\\sim c" }, { "math_id": 40, "text": "\\gamma,t,\\nu\\models y.\\xi" }, { "math_id": 41, "text": "\\gamma,t,\\nu[y\\to t]\\models\\phi" }, { "math_id": 42, "text": "\\mathcal U_I" }, { "math_id": 43, "text": "\\mathcal S_I" }, { "math_id": 44, "text": "I" }, { "math_id": 45, "text": "\\phi \\mathbin\\mathcal {U_I}\\psi" }, { "math_id": 46, "text": "\\phi \\mathbin\\mathcal U\\psi" }, { "math_id": 47, "text": "t''" }, { "math_id": 48, "text": "t+I" }, { "math_id": 49, "text": "\\phi\\mathbin\\mathcal {U_I}\\psi" }, { "math_id": 50, "text": "x.\\phi\\mathcal(x\\in I\\land\\psi)" }, { "math_id": 51, "text": "x" }, { "math_id": 52, "text": "\\Box" }, { "math_id": 53, "text": "\\Diamond" }, { "math_id": 54, "text": "\\Box(a\\implies x.\\Diamond(b\\land\\Diamond(c\\land x\\le 5)))" }, { "math_id": 55, "text": "x.\\Diamond(a\\land x\\le 1\\land\\Box(x\\le 1\\implies\\neg b))" }, { "math_id": 56, "text": "a" }, { "math_id": 57, "text": "w=a_0,a_1,\\dots," }, { "math_id": 58, "text": "\\mathbb N" }, { "math_id": 59, "text": "A" }, { "math_id": 60, "text": "T=\\mathbb N" }, { "math_id": 61, "text": "\\gamma(i)=a_i" }, { "math_id": 62, "text": "w,i\\models\\phi" }, { "math_id": 63, "text": "\\gamma,i,\\nu\\models\\phi" } ]
https://en.wikipedia.org/wiki?curid=60293153
6029446
Solid immersion lens
A solid immersion lens (SIL) has higher magnification and higher numerical aperture than common lenses by filling the object space with a high-refractive-index solid material. SIL was originally developed for enhancing the spatial resolution of optical microscopy. There are two types of SIL: Applications of SIL. Solid immersion lens microscopy. All optical microscopes are diffraction-limited because of the wave nature of light. Current research focuses on techniques to go beyond this limit known as the Rayleigh criterion. The use of SIL can achieve spatial resolution better than the diffraction limit in air, for both far-field imaging and near-field imaging. Optical data storage. Because SIL provides high spatial resolution, the spot size of laser beam through the SIL can be smaller than diffraction limit in air, and the density of the associated optical data storage can be increased. Photolithography. Similar to immersion lithography, the use of SIL can increase spatial resolution of projected photolithographic patterns, creating smaller components on wafers. Emission Microscopy Offers advantages for semiconductor wafer emission microscopy which detects faint emissions of light (Photons) from electron-hole recombination under the influence of electrical stimulation.
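The formulas associated with this article (a refractive index n, a thickness of (1 + 1/n)·r, and a factor n²) appear to correspond to the two common SIL geometries. As a hedged numerical sketch, assuming the standard results that a hemispherical SIL improves lateral resolution by roughly a factor of n and a superhemispherical (Weierstrass) SIL by roughly n², the snippet below computes these gains; the silicon refractive index is an illustrative value.

```python
def sil_resolution_gain(n: float) -> dict:
    """Approximate lateral-resolution improvement over imaging in air.

    Assumed standard results (not stated explicitly in the article text):
      - hemispherical SIL: gain of about n
      - superhemispherical (Weierstrass) SIL, thickness (1 + 1/n) * r: gain of about n**2
    """
    return {"hemispherical": n, "superhemispherical (Weierstrass)": n ** 2}

# Illustrative value: silicon in the near infrared (n ~ 3.5),
# as used for backside emission microscopy of wafers.
print(sil_resolution_gain(3.5))  # {'hemispherical': 3.5, 'superhemispherical (Weierstrass)': 12.25}
```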
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "(1+1/n)r" }, { "math_id": 2, "text": "n^2" } ]
https://en.wikipedia.org/wiki?curid=6029446
60295239
Signal automaton
Field of computer science In automata theory, a field of computer science, a signal automaton is a finite automaton extended with a finite set of real-valued clocks. During a run of a signal automaton, all clock values increase at the same speed. Along the transitions of the automaton, clock values can be compared to integers. These comparisons form guards that may enable or disable transitions and by doing so constrain the possible behaviors of the automaton. Further, clocks can be reset. Example. Before formally defining what a signal automaton is, an example will be given. Consider the language formula_0 of signals, over a binary alphabet formula_1, which contains signals formula_2 such that: This language can be accepted by the automaton pictured nearby. As for finite automata, incoming arrows represent initial locations and double circles represent accepting locations. However, contrary to finite automata, letters occur in locations and not on transitions. This is because letters are emitted continuously while transitions are taken discretely. The symbol formula_5 represents a clock. This clock measures the time since the last time formula_3 was emitted. Thus formula_6 ensures that formula_3 is emitted discretely, and formula_7 ensures that no more than a unit of time can pass without formula_3 occurring. Formal definition. Signal automaton. Formally, a signal automaton is a tuple formula_8 that consists of the following components: An edge formula_19 from formula_20 is a transition from location formula_21 to location formula_22 which resets the clocks in formula_23. Extended state. A pair with a location formula_21 and a clock valuation formula_24 is called either an "extended state" or a "state". Note that the word state is thus ambiguous, since, depending on the author, it may mean either a pair or an element of formula_11. For the sake of clarity, this article will use the term location for elements of formula_11 and the term extended state for pairs. Here lies one of the biggest differences between signal automata and finite automata. In a finite automaton, at some point of the execution, the state is entirely described by the number of letters read and by a finite number of possible values, which are actually called "states". That means that, given a state and a suffix of the word to read, the remainder of the run is totally determined. Thus, the word "finite" in the name "finite automata". However, as explained in the section "Run" below, clocks are also used to determine which transitions can be taken. Thus, in order to know the state of the automaton, one must know both the current location and the clock valuation. Run. As for finite automata, a run is essentially a sequence of locations, such that there exists a transition between consecutive locations. However, two differences must be emphasized. The letter is not determined by the transition but by the locations; this is due to the fact that the letters are emitted continuously while transitions are taken discretely. Some time elapses while in a location; the clock constraints labelling a location or its successor may constrain the time spent in a single location. A "run" is a sequence of the form formula_25 satisfying some constraints. Before stating those constraints, some notations are introduced. The sequences are discrete but represent continuous events. A continuous version of the sequences formula_26, formula_27, formula_28 is now introduced. 
Let formula_29 be integral and formula_30, then The constraints satisfied by a run are, for each integral formula_29 and real formula_38: The signal defined by this run is the function formula_43 defined above. It is said that the run defined above is a run for the signal formula_43. The notion of accepting run is defined as in finite automata for finite words and as in Büchi automata for infinite words. That is, if formula_44 is finite of length formula_45, then the run is accepting if formula_46. If the word is infinite, then the run is accepting if and only if there exist infinitely many positions formula_47 such that formula_48. Accepted signals and language. A signal formula_2 is said to be accepted by a signal automaton formula_10 if there exists a run of formula_10 on formula_2 accepting it. The set of signals accepted by formula_10 is called the language accepted by formula_10 and is denoted by formula_49. Deterministic signal automaton. As in the case of finite and Büchi automata, a signal automaton may be deterministic or non-deterministic. Intuitively, being deterministic has the same meaning in each of those cases. It means that the set of start locations is a singleton, and that, given an extended state formula_50 and a letter formula_51, there is only one possible extended state which can be reached from formula_50 by reading formula_51. More precisely, either it is possible to stay in the location longer, or there is at most one possible successor location. Formally, this can be defined as follows: Simplified signal automata. Depending on the authors, the exact definition of signal automata may be slightly different. Two such definitions are now given. Half-open intervals. In order to simplify the definition of a run, some authors require that each interval of a run be right-closed and left-open. This restricts automata to accepting only signals whose underlying partition satisfies the same property. However, it ensures that at each time formula_38, formula_62 for formula_63 representing any of the functions formula_43, formula_64 or formula_22 introduced above. Bipartite signal automaton. A bipartite signal automaton is a signal automaton in which the run alternates between open intervals and singular intervals (i.e. intervals which are singletons). It ensures that the graph underlying the automaton is a bipartite graph, and thus that the set of locations can be partitioned into formula_65, the set of open locations and the set of singular locations. Since the first interval contains 0, the first location cannot be an open location; it follows that formula_66. In order to ensure that each singular location is indeed singular, for each location formula_21, there must be a clock formula_67 which is reset when entering formula_21 and such that the clock constraint of formula_21 contains formula_6. Any signal automaton can be transformed into an equivalent bipartite signal automaton. It suffices to replace each location formula_21 by a pair of locations formula_68 and introduce a new clock formula_5, such that for each formula_21, formula_69. Nearby is pictured a bipartite automaton equivalent to the signal automaton from the example section. Rectangular states represent singular locations. Synchronization of automata. The notion of product of finite automata extends to signal automata. However, such a product is called a synchronization of automata to emphasize the fact that time should pass similarly in both automata considered. 
The main difference between synchronization and product is that, when two finite automata read the same word, they take transitions simultaneously. This is not the case anymore for signal automata, since they can take transitions at arbitrary times. Thus, the transition relation of a signal automaton may allow a transition to be taken in one or both automata. Let formula_70 and formula_71 be two signal automata; their synchronization is the signal automaton formula_72, where formula_20 contains the following transitions: Difference with timed automata. Timed automata are another extension of finite automata, which adds a notion of time to words. We now state some of the main differences between timed automata and signal automata. In timed automata, letters are emitted on the transitions and not in the locations. As explained above when comparing signal automata to finite automata, letters are emitted on transitions when the words are emitted discretely, as for words and timed words, while they are emitted in locations when letters are emitted continuously, as for signals. In timed automata, guards are only checked on transitions. This simplifies the definition of a deterministic automaton, since it means that the constraint must be satisfied before the clocks are restarted.
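To make the example language above concrete, here is a minimal Python sketch that checks the two conditions directly rather than simulating the automaton: the letter A may only occur at isolated instants, and strictly less than one time unit may pass without an A. The representation of a signal by its A-instants, the handling of the initial and final A-free segments, and the sample signals are illustrative assumptions.

```python
def in_example_language(a_times, duration):
    """Rough check of the example language over the alphabet {A, B}.

    a_times: sorted, isolated instants at which the signal reads A
             (the signal reads B everywhere else)
    duration: total length of the (finite) signal

    The guard "1 > x" of the automaton is mimicked by requiring every
    A-free gap, including the initial and final ones, to be shorter
    than one time unit (this boundary handling is an assumption).
    """
    last_a = 0.0                      # the clock x conceptually starts at 0
    for t in a_times:
        if t - last_a >= 1.0:         # a gap of one unit or more without A
            return False
        last_a = t                    # clock reset x := 0 when A is emitted
    return duration - last_a < 1.0    # no trailing gap of one unit or more

print(in_example_language([0.3, 0.9, 1.5], 2.0))  # True
print(in_example_language([0.3, 1.5], 2.0))       # False: 1.2 units pass without A
```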
[ { "math_id": 0, "text": "\\mathcal L" }, { "math_id": 1, "text": "\\{A,B\\}" }, { "math_id": 2, "text": "\\gamma" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "\\{t\\mid \\gamma(t)=A\\}" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "x=0" }, { "math_id": 7, "text": "1>x" }, { "math_id": 8, "text": "\\mathcal A=\\langle \\Sigma,L,L_0,C,F,\\alpha,\\beta,E\\rangle" }, { "math_id": 9, "text": "\\Sigma" }, { "math_id": 10, "text": "\\mathcal A" }, { "math_id": 11, "text": "L" }, { "math_id": 12, "text": "C" }, { "math_id": 13, "text": "L_0\\subseteq L" }, { "math_id": 14, "text": "F\\subseteq L" }, { "math_id": 15, "text": "\\alpha:L\\to\\Sigma" }, { "math_id": 16, "text": "\\beta:L\\to\\mathcal B(C)" }, { "math_id": 17, "text": "E\\subseteq L\\times\\mathcal P(C)\\times L" }, { "math_id": 18, "text": "\\mathcal P(C)" }, { "math_id": 19, "text": "(\\ell,r,\\ell')" }, { "math_id": 20, "text": "E" }, { "math_id": 21, "text": "\\ell" }, { "math_id": 22, "text": "\\ell'" }, { "math_id": 23, "text": "r" }, { "math_id": 24, "text": "\\nu" }, { "math_id": 25, "text": "\\xrightarrow[\\nu_0]{C}(\\ell_0,I_0)\\xrightarrow[\\nu_1]{r_1}(\\ell_1,I_1)\\dots" }, { "math_id": 26, "text": "(\\sigma_i)" }, { "math_id": 27, "text": "(\\nu_i)" }, { "math_id": 28, "text": "(\\ell_i)" }, { "math_id": 29, "text": "i\\ge0" }, { "math_id": 30, "text": "t\\in I_i" }, { "math_id": 31, "text": "\\sigma'_t" }, { "math_id": 32, "text": "\\sigma_i" }, { "math_id": 33, "text": "\\nu'_t" }, { "math_id": 34, "text": "\\nu_i+t-\\lceil I_i\\rceil" }, { "math_id": 35, "text": "\\lceil I_i\\rceil" }, { "math_id": 36, "text": "I_i" }, { "math_id": 37, "text": "\\ell'_t=\\ell_i" }, { "math_id": 38, "text": "t\\ge0" }, { "math_id": 39, "text": "\\ell_0\\in L_0" }, { "math_id": 40, "text": "(\\ell_i,r_i,\\ell_{i+1})\\in E" }, { "math_id": 41, "text": "\\nu_{i+1}=(\\nu_i+\\mid I_i\\mid)[r_i\\rightarrow 0]" }, { "math_id": 42, "text": "\\nu'_t\\models\\beta(\\ell'_t)" }, { "math_id": 43, "text": "\\sigma'" }, { "math_id": 44, "text": "w" }, { "math_id": 45, "text": "n" }, { "math_id": 46, "text": "\\ell_{n}\\in F" }, { "math_id": 47, "text": "i" }, { "math_id": 48, "text": "\\ell_i\\in F" }, { "math_id": 49, "text": "\\mathcal{S(A)}" }, { "math_id": 50, "text": "s" }, { "math_id": 51, "text": "a" }, { "math_id": 52, "text": "L_0" }, { "math_id": 53, "text": "\\ell\\in L" }, { "math_id": 54, "text": "(\\ell,r,\\ell')\\in E" }, { "math_id": 55, "text": "\\beta(\\ell)" }, { "math_id": 56, "text": "\\beta(\\ell')" }, { "math_id": 57, "text": "(\\ell,r',\\ell')\\in E" }, { "math_id": 58, "text": "(\\ell,r'',\\ell'')\\in E" }, { "math_id": 59, "text": "r'" }, { "math_id": 60, "text": "\\beta(\\ell'')" }, { "math_id": 61, "text": "r''" }, { "math_id": 62, "text": "\\lim_{t\\leftarrow x^+}f(x)=f(t)" }, { "math_id": 63, "text": "f" }, { "math_id": 64, "text": "\\nu'" }, { "math_id": 65, "text": "\\{L^o,L^s\\}" }, { "math_id": 66, "text": "L_0\\subseteq L^s" }, { "math_id": 67, "text": "x_\\ell" }, { "math_id": 68, "text": "(\\ell^o,\\ell^s)" }, { "math_id": 69, "text": "x_{\\ell^s}=x" }, { "math_id": 70, "text": "\\mathcal A^1=\\langle \\Sigma,L^1,L^1_0,C^1,F^1,\\alpha^1,\\beta^1,E^1\\rangle" }, { "math_id": 71, "text": "\\mathcal A^2=\\langle \\Sigma,L^2,L^2_0,C^2,F^2,\\alpha^2,\\beta^2,E^2\\rangle" }, { "math_id": 72, "text": "\\mathcal A 1\\otimes\\mathcal A^2=\\langle\\Sigma,\\{(\\ell^1,\\ell^2)\\in L^1\\otimes L^2\\mid \\alpha^1(\\ell^1)=\\alpha^2(\\ell^2)\\},L^1_0\\otimes L^2_0,C^1\\cup C^2,F^1\\otimes 
F^2,(\\ell^1,\\ell^2)\\mapsto\\alpha^1(\\ell^1),(\\ell^1,\\ell^2)\\mapsto\\beta^1(\\ell^1)\\land\\beta^2(\\ell^2),E\\rangle" }, { "math_id": 73, "text": "((\\ell^1,\\ell^2),r^1,(\\ell^{\\prime1},\\ell^2)" }, { "math_id": 74, "text": "(\\ell^1,r,\\ell^{\\prime1})\\in E^1" }, { "math_id": 75, "text": "E^2" }, { "math_id": 76, "text": "((\\ell^1,\\ell^2),r^1\\cup r^2,(\\ell^{\\prime1},\\ell^{\\prime2})" }, { "math_id": 77, "text": "(\\ell^2,r,\\ell^{\\prime2})\\in E^2" } ]
https://en.wikipedia.org/wiki?curid=60295239
60299094
Jeremiah 41
Book of Jeremiah, chapter 41 Jeremiah 41 is the forty-first chapter of the Book of Jeremiah in the Hebrew Bible or the Old Testament of the Christian Bible. This book contains prophecies attributed to the prophet Jeremiah, and is one of the Books of the Prophets. This chapter is part of a narrative section consisting of chapters 37 to 44. Chapter 41 recounts the murder of Gedaliah, the Babylonian governor of occupied Judah, and the chaotic situation which followed this event. Jeremiah himself is not mentioned in this chapter. Text. The original text was written in Hebrew. This chapter is divided into 18 verses. Verse numbering. The order of chapters and verses of the Book of Jeremiah in the English Bibles, Masoretic Text (Hebrew), and Vulgate (Latin), in some places differs from that in the Septuagint (LXX, the Greek Bible used in the Eastern Orthodox Church and others) according to Rahlfs or Brenton. The following table is taken with minor adjustments from "Brenton's Septuagint", page 971. The order of Computer Assisted Tools for Septuagint/Scriptural Study (CATSS) based on "Alfred Rahlfs' Septuaginta" (1935) differs in some details from Joseph Ziegler's critical edition (1957) in "Göttingen LXX". "Swete's Introduction" mostly agrees with Rahlfs edition (=CATSS). Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), the Petersburg Codex of the Prophets (916), Aleppo Codex (10th century), Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint (with a different chapter and verse numbering), made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century). Parashot. The "parashah" sections listed here are based on the Aleppo Codex. Jeremiah 41 is a part of the "Sixteenth prophecy (Jeremiah 40-45)" in the section of "Prophecies interwoven with narratives about the prophet's life (Jeremiah 26-45)". {P}: open "parashah"; {S}: closed "parashah". {P} 41:1-10 {S} 41:11-15 {S} 41:16-18 {P} "Now it came to pass in the seventh month that Ishmael the son of Nethaniah, the son of Elishama, of the royal family and of the officers of the king, came with ten men to Gedaliah the son of Ahikam, at Mizpah. And there they ate bread together in Mizpah." The assassination of Gedaliah (41:1–10). Verse 1. Ishmael and his men murdered Gedaliah and others in Mizpah during a mealtime when the covenant community is celebrated and the people were less guarded. "Then arose Ishmael the son of Nethaniah and the ten men that were with him, and smote Gedaliah the son of Ahikam the son of Shaphan with the sword and slew him whom the king of Babylon had made governor over the land." "Then Johanan the son of Kareah, and all the captains of the forces that were with him, took from Mizpah all the rest of the people whom he had recovered from Ishmael the son of Nethaniah after he had murdered Gedaliah the son of Ahikam—the mighty men of war and the women and the children and the eunuchs, whom he had brought back from Gibeon." Johanan rescues the captives (41:11–18). Verse 16. Johanan led a group to defeat Ishmael at Gibeon, southwest of Mizpah (), freeing their captives after Ishmael and 8 others escaped to Ammon (). See also. 
&lt;templatestyles src="Col-begin/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=60299094
60300752
Jeremiah 42
Book of Jeremiah, chapter 42 Jeremiah 42 is the forty-second chapter of the Book of Jeremiah in the Hebrew Bible or the Old Testament of the Christian Bible. This book contains prophecies attributed to the prophet Jeremiah, and is one of the Books of the Prophets. This chapter is part of a narrative section consisting of chapters 37 to 44. Chapters 42-44 describe the emigration to Egypt involving the remnant who remained in Judah after much of the population was exiled to Babylon. In this chapter, the leaders of the community ask Jeremiah to seek divine guidance as to whether they should go to Egypt or remain in Judah, but they are found to be hypocrites in asking for advice which they intended to ignore. Text. The original text was written in Hebrew. This chapter is divided into 22 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), the Petersburg Codex of the Prophets (916), Aleppo Codex (10th century), Codex Leningradensis (1008). Some fragments containing parts of this chapter were found among the Dead Sea Scrolls, i.e., 2QJer (2Q13; 1st century CE), with extant verses 7‑11, 14. There is also a translation into Koine Greek known as the Septuagint (with a different chapter and verse numbering), made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century). Verse numbering. The order of chapters and verses of the Book of Jeremiah in the English Bibles, Masoretic Text (Hebrew), and Vulgate (Latin), in some places differs from that in the Septuagint (LXX, the Greek Bible used in the Eastern Orthodox Church and others) according to Rahlfs or Brenton. The following table is taken with minor adjustments from "Brenton's Septuagint", page 971. The order of Computer Assisted Tools for Septuagint/Scriptural Study (CATSS) based on "Alfred Rahlfs' Septuaginta" (1935) differs in some details from Joseph Ziegler's critical edition (1957) in "Göttingen LXX". "Swete's Introduction" mostly agrees with Rahlfs' edition (=CATSS). Parashot. The "parashah" sections listed here are based on the Aleppo Codex. Jeremiah 42 is a part of the "Sixteenth prophecy (Jeremiah 40-45)" in the section of "Prophecies interwoven with narratives about the prophet's life (Jeremiah 26-45)". {P}: open "parashah"; {S}: closed "parashah". {P} 42:1-6 {P} 42:7-22 {S} Verses 1-6. The survivors of Ishmael's rebellion came to Jeremiah, who might be among the captives freed by Johanan and his forces (Jeremiah 41:16), requesting him to intercede and ask God's will on their behalf, as they were uncertain what to do. "that the Lord your God may show us the way in which we should walk and the thing we should do." Verse 3. The people sought Jeremiah for advice with regard to their plan to escape to Egypt. Verse 10b. New King James Version: "... For I relent concerning the disaster that I have brought upon you." Alternative interpretations include: Biblical commentator A. W. Streane describes the King James Version's wording as "an anthropomorphic figure", as if God's intention was to change conduct towards the people of Judah, "which with men is commonly caused by change of purpose". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=60300752
60300773
Jeremiah 43
Book of Jeremiah, chapter 43 Jeremiah 43 is the forty-third chapter of the Book of Jeremiah in the Hebrew Bible or the Old Testament of the Christian Bible. This book contains prophecies attributed to the prophet Jeremiah, and is one of the Books of the Prophets. This chapter is part of a narrative section consisting of chapters 37 to 44. Chapters 42-44 describe the emigration to Egypt involving the remnant who remained in Judah after much of the population was exiled to Babylon. In this chapter, Jeremiah performs in Egypt one of the sign-acts distinctive of his prophetic style. Text. The original text was written in Hebrew. This chapter is divided into 13 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), the Petersburg Codex of the Prophets (916), Aleppo Codex (10th century), Codex Leningradensis (1008). Some fragments containing parts of this chapter were found among the Dead Sea Scrolls, i.e., 4QJerd (4Q72a; mid 2nd century BCE) with extant verses 2‑10, and 2QJer (2Q13; 1st century CE), with extant verses 8‑11. There is also a translation into Koine Greek known as the Septuagint (with a different chapter numbering), made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century). Parashot. The "parashah" sections listed here are based on the Aleppo Codex. Jeremiah 43 is a part of the "Sixteenth prophecy (Jeremiah 40-45)" in the section of "Prophecies interwoven with narratives about the prophet's life (Jeremiah 26-45)". {P}: open "parashah"; {S}: closed "parashah". {S} 43:1 {S} 43:2-7 {S} 43:8-13 {P} Verse numbering. The order of chapters and verses of the Book of Jeremiah in the English Bibles, Masoretic Text (Hebrew), and Vulgate (Latin), in some places differs from that in the Septuagint (LXX, the Greek Bible used in the Eastern Orthodox Church and others) according to Rahlfs or Brenton. The following table is taken with minor adjustments from "Brenton's Septuagint", page 971. The order of Computer Assisted Tools for Septuagint/Scriptural Study (CATSS) based on "Alfred Rahlfs' Septuaginta" (1935) differs in some details from Joseph Ziegler's critical edition (1957) in "Göttingen LXX". "Swete's Introduction" mostly agrees with Rahlfs' edition (=CATSS). "5But Johanan the son of Kareah and all the captains of the forces took all the remnant of Judah who had returned to dwell in the land of Judah, from all nations where they had been driven— 6men, women, children, the king’s daughters, and every person whom Nebuzaradan the captain of the guard had left with Gedaliah the son of Ahikam, the son of Shaphan, and Jeremiah the prophet and Baruch the son of Neriah." "And they came into the land of Egypt, for they did not obey the voice of the Lord. And they arrived at Tahpanhes." " Then the word of the Lord came to Jeremiah in Tahpanhes." Verse 8. Jeremiah was in Egypt "not out of choice, but by constraint". "He shall break also the images of Bethshemesh, that is in the land of Egypt; and the houses of the gods of the Egyptians shall he burn with fire." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=60300773
603026
Parameterized complexity
Branch of computational complexity theory In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to "multiple" parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. This appears to have been first demonstrated in . The first systematic work on parameterized complexity was done by . Under the assumption that P ≠ NP, there exist many natural problems that require super-polynomial running time when complexity is measured in terms of the input size only but that are computable in a time that is polynomial in the input size and exponential or worse in a parameter k. Hence, if k is fixed at a small value and the growth of the function over k is relatively small then such problems can still be considered "tractable" despite their traditional classification as "intractable". The existence of efficient, exact, and deterministic solving algorithms for NP-complete, or otherwise NP-hard, problems is considered unlikely, if input parameters are not fixed; all known solving algorithms for these problems require time that is exponential (so in particular super-polynomial) in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a fixed-parameter tractable (FPT) algorithm, because the problem can be solved efficiently (i.e., in polynomial time) for constant values of the fixed parameter. Problems in which some parameter k is fixed are called parameterized problems. A parameterized problem that allows for such an FPT algorithm is said to be a fixed-parameter tractable problem and belongs to the class FPT, and the early name of the theory of parameterized complexity was fixed-parameter tractability. Many problems have the following form: given an object x and a nonnegative integer k, does x have some property that depends on k? For instance, for the vertex cover problem, the parameter can be the number of vertices in the cover. In many applications, for example when modelling error correction, one can assume the parameter to be "small" compared to the total input size. Then it is challenging to find an algorithm that is exponential "only" in k, and not in the input size. In this way, parameterized complexity can be seen as "two-dimensional" complexity theory. This concept is formalized as follows: A "parameterized problem" is a language formula_0, where formula_1 is a finite alphabet. The second component is called the "parameter" of the problem. A parameterized problem L is "fixed-parameter tractable" if the question "formula_2?" can be decided in running time formula_3, where f is an arbitrary function depending only on k. The corresponding complexity class is called FPT. For example, there is an algorithm that solves the vertex cover problem in formula_4 time, where n is the number of vertices and k is the size of the vertex cover. This means that vertex cover is fixed-parameter tractable with the size of the solution as the parameter. Complexity classes. FPT. 
FPT contains the "fixed parameter tractable" problems, which are those that can be solved in time formula_5 for some computable function f. Typically, this function is thought of as single exponential, such as formula_6, but the definition admits functions that grow even faster. This is essential for a large part of the early history of this class. The crucial part of the definition is to exclude functions of the form formula_7, such as formula_8. The class FPL (fixed parameter linear) is the class of problems solvable in time formula_9 for some computable function f. FPL is thus a subclass of FPT. An example is the Boolean satisfiability problem, parameterised by the number of variables. A given formula of size m with k variables can be checked by brute force in time formula_10. A vertex cover of size k in a graph of order n can be found in time formula_11, so the vertex cover problem is also in FPL. An example of a problem that is thought not to be in FPT is graph coloring parameterised by the number of colors. It is known that 3-coloring is NP-hard, and an algorithm for graph k-coloring in time formula_12 for formula_13 would run in polynomial time in the size of the input. Thus, if graph coloring parameterised by the number of colors were in FPT, then P = NP. There are a number of alternative definitions of FPT. For example, the running-time requirement can be replaced by formula_14. Also, a parameterised problem is in FPT if it has a so-called kernel. Kernelization is a preprocessing technique that reduces the original instance to its "hard kernel", a possibly much smaller instance that is equivalent to the original instance but has a size that is bounded by a function in the parameter. FPT is closed under a parameterised notion of reductions called fpt-reductions. Such reductions transform an instance formula_15 of some problem into an equivalent instance formula_16 of another problem (with formula_17) and can be computed in time formula_18 where formula_19 is a polynomial. Obviously, FPT contains all polynomial-time computable problems. Moreover, it contains all optimisation problems in NP that allow an efficient polynomial-time approximation scheme (EPTAS). "W" hierarchy. The "W" hierarchy is a collection of computational complexity classes. A parameterized problem is in the class "W"["i"], if every instance formula_20 can be transformed (in fpt-time) to a combinatorial circuit that has weft at most "i", such that formula_21 if and only if there is a satisfying assignment to the inputs that assigns 1 to exactly "k" inputs. The weft is the largest number of logical units with fan-in greater than two on any path from an input to the output. The total number of logical units on the paths (known as depth) must be limited by a constant that holds for all instances of the problem. Note that formula_22 and formula_23 for all formula_24. The classes in the "W" hierarchy are also closed under fpt-reduction. A complete problem for "W"["i"] is Weighted "i"-Normalized Satisfiability: given a Boolean formula written as an AND of ORs of ANDs of ... of possibly negated variables, with formula_25 layers of ANDs or ORs (and "i" alternations between AND and OR), can it be satisfied by setting exactly "k" variables to 1? Many natural computational problems occupy the lower levels, "W"[1] and "W"[2]. "W"[1]. Examples of "W"[1]-complete problems include "W"[2]. Examples of "W"[2]-complete problems include "W"["t"]. 
formula_26 can be defined using the family of Weighted Weft-t-Depth-d SAT problems for formula_27: formula_28 is the class of parameterized problems that fpt-reduce to this problem, and formula_29. Here, Weighted Weft-t-Depth-d SAT is the following problem: It can be shown that for formula_30 the problem Weighted t-Normalize SAT is complete for formula_26 under fpt-reductions. Here, Weighted t-Normalize SAT is the following problem: "W"["P"]. "W"["P"] is the class of problems that can be decided by a nondeterministic formula_31-time Turing machine that makes at most formula_32 nondeterministic choices in the computation on formula_15 (a "k"-restricted Turing machine). It is known that FPT is contained in W[P], and the inclusion is believed to be strict. However, resolving this issue would imply a solution to the P versus NP problem. Other connections to unparameterised computational complexity are that FPT equals "W"["P"] if and only if circuit satisfiability can be decided in time formula_33, or if and only if there is a computable, nondecreasing, unbounded function f such that all languages recognised by a nondeterministic polynomial-time Turing machine using &amp;NoBreak;&amp;NoBreak; nondeterministic choices are in "P". "W"["P"] can be loosely thought of as the class of problems where we have a set S of n items, and we want to find a subset formula_34 of size k such that a certain property holds. We can encode a choice as a list of k integers, stored in binary. Since the highest any of these numbers can be is n, formula_35 bits are needed for each number. Therefore formula_36 total bits are needed to encode a choice. Therefore we can select a subset formula_37 with formula_38 nondeterministic choices. XP. XP is the class of parameterized problems that can be solved in time formula_39 for some computable function f. These problems are called slicewise polynomial, in the sense that each "slice" of fixed k has a polynomial algorithm, although possibly with a different exponent for each k. Compare this with FPT, which merely allows a different constant prefactor for each value of k. XP contains FPT, and it is known that this containment is strict by diagonalization. para-NP. para-NP is the class of parameterized problems that can be solved by a nondeterministic algorithm in time formula_3 for some computable function f. It is known that formula_40 if and only if formula_41. A problem is para-NP-hard if it is formula_42-hard already for a constant value of the parameter. That is, there is a "slice" of fixed k that is formula_42-hard. A parameterized problem that is formula_43-hard cannot belong to the class formula_44, unless formula_41. A classic example of a formula_43-hard parameterized problem is graph coloring, parameterized by the number k of colors, which is already formula_42-hard for formula_13 (see Graph coloring#Computational complexity). A hierarchy. The A hierarchy is a collection of computational complexity classes similar to the W hierarchy. However, while the W hierarchy is a hierarchy contained in NP, the A hierarchy more closely mimics the polynomial-time hierarchy from classical complexity. It is known that A[1] = W[1] holds.
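To illustrate how an FPT algorithm confines the exponential blow-up to the parameter, here is a minimal Python sketch of the classic bounded search tree for vertex cover, which runs in roughly O(2^k · |E|) time. It is a simpler illustration, not the faster O(kn + 1.274^k) algorithm mentioned above, and the edge-list encoding of the graph is an illustrative choice.

```python
def has_vertex_cover(edges, k):
    """Decide whether the graph given as an edge list has a vertex cover of size <= k.

    Pick any uncovered edge (u, v); one of its endpoints must be in the
    cover, so branch on including u or including v. The recursion depth
    is at most k, so the search tree has at most 2**k leaves.
    """
    if not edges:
        return True            # every edge is covered
    if k == 0:
        return False           # edges remain but no budget left
    u, v = edges[0]            # some endpoint of this edge must be chosen
    return (has_vertex_cover([e for e in edges if u not in e], k - 1)
            or has_vertex_cover([e for e in edges if v not in e], k - 1))

# A 4-cycle needs 2 vertices to cover all edges.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_vertex_cover(cycle, 2))  # True
print(has_vertex_cover(cycle, 1))  # False
```

Each of the at most 2^k leaves of the recursion does only polynomial work, which is exactly the f(k)·|x|^O(1) running-time shape required for membership in FPT.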
[ { "math_id": 0, "text": "L \\subseteq \\Sigma^* \\times \\N" }, { "math_id": 1, "text": "\\Sigma" }, { "math_id": 2, "text": "(x, k) \\in L" }, { "math_id": 3, "text": "f(k) \\cdot |x|^{O(1)}" }, { "math_id": 4, "text": "O(kn + 1.274^k)" }, { "math_id": 5, "text": "f(k) \\cdot {|x|}^{O(1)}" }, { "math_id": 6, "text": "2^{O(k)}" }, { "math_id": 7, "text": "f(n,k)" }, { "math_id": 8, "text": "k^n" }, { "math_id": 9, "text": "f(k) \\cdot |x|" }, { "math_id": 10, "text": "O(2^km)" }, { "math_id": 11, "text": "O(2^kn)" }, { "math_id": 12, "text": "f(k)n^{O(1)}" }, { "math_id": 13, "text": "k=3" }, { "math_id": 14, "text": "f(k) + |x|^{O(1)}" }, { "math_id": 15, "text": "(x,k)" }, { "math_id": 16, "text": "(x',k')" }, { "math_id": 17, "text": "k' \\leq g(k)" }, { "math_id": 18, "text": "f(k)\\cdot p(|x|)" }, { "math_id": 19, "text": "p" }, { "math_id": 20, "text": "(x, k)" }, { "math_id": 21, "text": "(x, k)\\in L" }, { "math_id": 22, "text": "\\mathsf{FPT} = W[0]" }, { "math_id": 23, "text": "W[i] \\subseteq W[j]" }, { "math_id": 24, "text": "i\\le j" }, { "math_id": 25, "text": "i+1" }, { "math_id": 26, "text": "W[t]" }, { "math_id": 27, "text": "d\\geq t" }, { "math_id": 28, "text": "W[t,d]" }, { "math_id": 29, "text": "W[t] = \\bigcup_{d\\geq t} W[t,d]" }, { "math_id": 30, "text": "t\\geq2" }, { "math_id": 31, "text": "h(k) \\cdot {|x|}^{O(1)}" }, { "math_id": 32, "text": "O(f(k)\\cdot \\log n)" }, { "math_id": 33, "text": "\\exp(o(n))m^{O(1)}" }, { "math_id": 34, "text": "T \\subset S" }, { "math_id": 35, "text": "\\lceil\\log_2 n\\rceil" }, { "math_id": 36, "text": "k \\cdot \\lceil\\log_2 n\\rceil " }, { "math_id": 37, "text": "T\\subset S" }, { "math_id": 38, "text": "O(k\\cdot \\log n)" }, { "math_id": 39, "text": "n^{f(k)}" }, { "math_id": 40, "text": "\\textsf{FPT}=\\textsf{para-NP}" }, { "math_id": 41, "text": "\\textsf{P}=\\textsf{NP}" }, { "math_id": 42, "text": "\\textsf{NP}" }, { "math_id": 43, "text": "\\textsf{para-NP}" }, { "math_id": 44, "text": "\\textsf{XP}" } ]
https://en.wikipedia.org/wiki?curid=603026
6030383
Seizure types
Classifications of epileptic seizures In the field of neurology, seizure types are categories of seizures defined by seizure behavior, symptoms, and diagnostic tests. The International League Against Epilepsy (ILAE) 2017 classification of seizures is the internationally recognized standard for identifying seizure types. The ILAE 2017 classification of seizures is a revision of the prior ILAE 1981 classification of seizures. Distinguishing between seizure types is important since different types of seizures may have different causes, outcomes, and treatments. History. In ~2500 B.C., the Sumerians provided the first writings about seizures. Later in ~1050 B.C., the Babylonian scholars developed the first seizure classification, inscribing their medical knowledge in the stone tablets called Sakikku or in English "All Diseases." This early classification identified febrile seizures, absence seizures, generalized tonic-clonic seizures, focal seizures, impaired awareness seizures, and status epilepticus. Samuel-Auguste Tissot (1728–1797) authored Traité de l’Epilepsie, a book describing grand état (generalized tonic-clonic seizures) and petit état (absence seizures). Jean-Étienne Dominique Esquirol (1772–1840) later introduced grand mal (generalized tonic-clonic seizures) and petit mal to describe these seizures. In 1937, Gibbs and Lennox introduced psychomotor seizures, seizures with "mental, emotional, motor, and autonomic phenomena." Henri Gastaut led the effort to develop the ILAE 1969 classification of seizures based on clinical seizure type, electroencephalogram (EEG), anatomical substrate, etiology, and age of onset. The ILAE 1981 classification of seizure included information from EEG-video seizure recordings, but excluded anatomical substrate, etiology, and age factors as these factors were "historical or speculative" rather than directly observed. The ILAE 2017 classification of seizures closely reflects clinical practice, using observed seizure behavior and additional data to identify seizure types. Focal vs. generalized seizure onset. A seizure is a paroxysmal episode of symptoms or altered behavior arising from abnormal excessive or synchronous brain neuronal activity. A focal onset seizure arises from a biological neural network within one cerebral hemisphere, while a generalized onset seizure arises from within the cerebral hemispheres rapidly involving both hemispheres. Seizure symptoms, seizure behavior, neuroimaging, seizure etiology, EEG, and video recordings help distinguish focal from generalized onset seizures. Unknown onset seizures occur if the available information is insufficient to distinguish focal from generalized onset seizures with a formula_0 confidence. Focal to bilateral tonic-clonic seizure indicates that the seizure begins as focal seizure then later evolves to a bilateral tonic-clonic seizure. Aware vs. impaired awareness. The classification distinguishes focal aware seizures from focal impaired awareness seizures. Aware means aware of self and surroundings during the seizure, verified when a person can recall events having occurred during the seizure. Impaired awareness occurs even if the recall of events is only partially impaired. Impaired awareness may occur at any time during the seizure. If the level of awareness cannot be determined, the level of awareness is unspecified; this usually occurs for atonic seizures and epileptic spasm seizures. Motor seizures. 
A motor seizure has prominent movement, increased muscle contraction, or decreased muscle contraction as the initial predominant seizure feature. Atonic seizures are brief 0.5-2 second lapses in muscle tone commonly leading to a fall. Epileptic spasm seizures are brief 1-2 second proximal limb and truncal flexion or extension movements, often repeated. Hyperkinetic seizures occur as high amplitude truncal and limb movements such as pedaling, thrashing, and rocking movements. Myoclonic seizures are brief jerks of limbs or body lasting milliseconds. Tonic seizures are abrupt increases in muscle tone greater than 2 seconds in duration. Clonic seizures occur as rhythmic body jerks. Myoclonic-atonic seizures begin with one or more jerks (myoclonic phase) followed by a loss of muscle tone (atonic phase). Myoclonic-tonic-clonic seizures begin with one or more jerks (myoclonic phase), then body stiffening (tonic phase), then rhythmic jerks (clonic phase). Tonic-clonic seizures begin as symmetrical bilateral body stiffening (tonic phase) followed by rhythmic jerks (clonic phase). Myoclonic, atonic, tonic, and myoclonic-atonic seizures may cause abrupt falls, called drop attacks, similar to cataplexy. Automatism seizures occur with repetitive stereotyped behaviors. Non-motor seizures. A non-motor seizure may begin with a sensory, cognitive, autonomic, or emotional symptom, behavioral arrest of activity, or impaired awareness with minor motor activity as the initial predominant seizure feature. Sensory seizures occur with somatosensory, olfactory, visual, gustatory, vestibular, or thermal sensations. Cognitive seizures occur with language impairment (e.g. aphasia, dysphasia, anomia), memory impairments (deja vu, jamais vu), hallucinations, persistent thought (forced thinking), and neglect. Autonomic seizures occur with palpitations, heart rate changes, nausea, vomiting, piloerection, lacrimation, pupil size changes or urge to urinate or defecate. Emotional seizures occur with fear, anxiety, laughing, crying, pleasure, or anger sensations. These initial symptoms are seizure auras. Behavioral arrest seizures occur as an abrupt cessation of movement. Absence seizures occur with a sudden brief impairment in awareness, commonly less than 45 seconds. Typical absence seizures may be accompanied by rhythmic 3 per second facial movements. Atypical absence seizures occur with a less sudden impairment in awareness, often accompanied by a gradual head, limb, or truncal slumping. Myoclonic absence seizures occur with myoclonic jerks of arms and shoulders. Absence with eyelid myoclonia seizures occur with 4-6 per second eyelid myoclonic jerks and upward eye movement. Descriptors. Descriptors are additional seizure behaviors or symptoms that are appended to the seizure diagnosis. Descriptors may be a non-predominant or non-initial seizure feature. Descriptors provide a more complete description of the seizure. ILAE 2017 classification of seizure types. 1 - Classify focal seizures as focal aware, focal impaired awareness, or focal unspecified awareness. Examples. Focal onset seizure. During the typical 1 minute seizure, a person experiences a familiar (déjà vu) sensation, followed by picking and fumbling hand movements. After this seizure, the person cannot recall what was said during the seizure. Brain magnetic resonance imaging (MRI) shows left hippocampal sclerosis, a brain abnormality associated with focal seizures. This is a focal impaired awareness cognitive seizure with déjà vu. 
Appending a descriptor, this is a focal impaired awareness cognitive seizure with déjà vu followed by hand automatisms. Generalized onset seizure. During the typical 10 second seizure, a child abruptly stops and stares with 3 per second rhythmic eye fluttering movements. After the seizure, the child cannot recall what occurred during the seizure. An EEG test shows 3 per second spike-wave pattern, an EEG pattern indicating a generalized onset seizure. This generalized onset non-motor (absence) seizure is a typical absence seizure. Appending a descriptor, this is a typical absence seizure with 3 per second eye fluttering movements. Comparison of ILAE 2017 and ILAE 1981 classifications. Dyscognitive, simple partial, complex partial, psychic, and secondarily generalized are terms that apply only to the ILAE 1981 classification of seizures. The ILAE 2017 classification relies on intact awareness of self and surroundings, but the ILAE 1981 classification relies on intact consciousness, defined as a normal response to an external stimulus due to intact awareness and intact ability to respond. Unlike the ILAE 2017 classification, the ILAE 1981 classification specifies specific EEG patterns for each seizure type. ILAE 1981 classification of seizure types. The associated EEG patterns are not included. Partial seizures. Simple partial seizures: consciousness is not impaired. * With motor signs * With somatosensory or special-sensory symptoms * With autonomic symptoms or signs * With psychic symptoms Complex partial seizures: consciousness is impaired. * Simple partial onset, followed by impairment of consciousness * With impairment of consciousness at onset Partial seizures evolving to secondarily generalized seizures * Simple partial seizures evolving to generalized seizures * Complex partial seizures evolving to generalized seizures * Simple partial seizures evolving to complex partial seizures evolving to generalized seizures Generalized seizures. Absence seizures * Absence seizures * Atypical absence seizures Myoclonic seizures Clonic seizures Tonic seizures Tonic-clonic seizures Atonic seizures Unclassified epileptic seizures Continuous and subclinical seizures. Status epilepticus is a seizure "lasting longer than 30 minutes or a series of seizures without return to the baseline level of alertness between seizures." Epilepsia partialis continua is a rare type of focal motor seizure, commonly involving the hands or face, which recurs with intervals of seconds or minutes, lasting for extended periods of days or years. Common causes are strokes in adults, and focal cortical inflammation in children: Rasmussen's encephalitis, chronic viral infections, or autoimmune encephalitis. Subclinical seizures cause no symptoms and either no altered behavior or very minimal behavioral changes; the clinician recognizes these seizures as an evolving seizure pattern on an EEG recording. Based on unusual symptoms. Some forms of seizures have been defined based on symptoms, such as ecstatic seizures and orgasmic seizures. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ge 80%" } ]
https://en.wikipedia.org/wiki?curid=6030383
60304188
Jeremiah 45
Book of Jeremiah, chapter 45 Jeremiah 45 is the forty-fifth chapter of the Book of Jeremiah in the Hebrew Bible or the Old Testament of the Christian Bible. This book contains prophecies attributed to the prophet Jeremiah, and is one of the Books of the Prophets. This chapter closes the section comprising chapters 26–44 with the message that the prophetic word will survive through Baruch. In the New Revised Standard Version, this chapter is described as "a word of comfort to Baruch". Biblical commentator A. W. Streane calls it "a rebuke and a promise to Baruch". Text. The original text was written in Hebrew. This chapter, the shortest in the Book of Jeremiah, is divided into 5 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), the Petersburg Codex of the Prophets (916), Aleppo Codex (10th century), Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint (with a different chapter and verse numbering), made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century). Parashot. The "parashah" sections listed here are based on the Aleppo Codex. Jeremiah 45 is a part of the "Sixteenth prophecy (Jeremiah 40-45)" in the section of "Prophecies interwoven with narratives about the prophet's life (Jeremiah 26-45)". {P}: open "parashah"; {S}: closed "parashah". {S} 45:1-5 {P} Verse numbering. The order of chapters and verses of the Book of Jeremiah in the English Bibles, Masoretic Text (Hebrew), and Vulgate (Latin), in some places differs from that in the Septuagint (LXX, the Greek Bible used in the Eastern Orthodox Church and others) according to Rahlfs or Brenton. The following table is taken with minor adjustments from "Brenton's Septuagint", page 971. The order of Computer Assisted Tools for Septuagint/Scriptural Study (CATSS) based on "Alfred Rahlfs' Septuaginta" (1935) differs in some details from Joseph Ziegler's critical edition (1957) in "Göttingen LXX". "Swete's Introduction" mostly agrees with Rahlfs' edition (=CATSS). "The word that Jeremiah the prophet spoke to Baruch the son of Neriah, when he had written these words in a book at the instruction of Jeremiah, in the fourth year of Jehoiakim the son of Josiah, king of Judah, saying," (NKJV) Verse 1. "The fourth year of Jehoiakim" is 605 BCE. This part should have followed chapter 36. Volz found the trace of Baruch's involvement in forming chapter 1-45 from three major sections: 1-25, 26–36, and 37–45, as each section concludes with "a reference to the dictation of a scroll" (; chapter 36; chapter 45). Verse 5. New King James Version: "I will give your life to you as a prize in all places, wherever you go." The New International Version offers, as a more manageable translation: "Wherever you go I will let you escape with your life." Streane suggests that "the age is one in which [Baruch] must not expect great things for himself, but must be content if he escapes with his bare life." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=60304188
60311833
Resolution (chromatography)
In chromatography, resolution is a measure of the separation of two peaks with different retention times "t" in a chromatogram. Expression. Chromatographic peak resolution is given by formula_0 where "t"R is the retention time and "w"b is the peak width at baseline; here compound 1 elutes before compound 2. If the two peaks have the same width, this reduces to formula_1. Plate number. The theoretical plate height is given by formula_2 where "L" is the column length and "N" is the number of theoretical plates. The relation between the plate number and the peak width at the base is given by formula_3. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
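As an illustrative sketch (not part of the article), these expressions can be evaluated directly; the retention times, peak widths and column length below are made-up values chosen only to show the arithmetic, with all times in the same units.

def resolution(t_r1, t_r2, w_b1, w_b2):
    # Peak resolution R_s = 2*(t_R2 - t_R1)/(w_b1 + w_b2); compound 1 elutes first.
    return 2.0 * (t_r2 - t_r1) / (w_b1 + w_b2)

def plate_number(t_r, w_b):
    # Number of theoretical plates N = 16*(t_R/w_b)^2, estimated from a single peak.
    return 16.0 * (t_r / w_b) ** 2

def plate_height(column_length, n_plates):
    # Theoretical plate height H = L/N.
    return column_length / n_plates

# Made-up example: retention times in minutes, column length in metres.
r_s = resolution(t_r1=4.2, t_r2=5.0, w_b1=0.35, w_b2=0.40)   # about 2.1
n = plate_number(t_r=5.0, w_b=0.40)                          # 2500 plates
h = plate_height(column_length=0.25, n_plates=n)             # 1e-4 m per plate
print(r_s, n, h)

A resolution above about 1.5 is conventionally regarded as baseline separation, so the made-up pair above would count as well resolved.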
[ { "math_id": 0, "text": "R_s = 2\\cfrac{t_{R2}-t_{R1}}{w_{b1}+w_{b2}} " }, { "math_id": 1, "text": "R_s = \\cfrac{t_{R2}-t_{R1}}{w_b} " }, { "math_id": 2, "text": "H = \\frac{L}{N} " }, { "math_id": 3, "text": "N = 16 \\cdot \\left(\\frac{t_R}{W_b}\\right)^2 \\," } ]
https://en.wikipedia.org/wiki?curid=60311833
603121
Gravestone
Stele or marker, usually stone, placed over a grave A gravestone or tombstone is a marker, usually stone, that is placed over a grave. A marker set at the head of the grave may be called a headstone. An especially old or elaborate stone slab may be called a funeral stele, stela, or slab. The use of such markers is traditional for Chinese, Jewish, Christian, and Islamic burials, as well as other traditions. In East Asia, the tomb's spirit tablet is the focus for ancestral veneration and may be removable for greater protection between rituals. Ancient grave markers typically incorporated funerary art, especially details in stone relief. With greater literacy, more markers began to include inscriptions of the deceased's name, date of birth, and date of death, often along with a personal message or prayer. The presence of a frame for photographs of the deceased is also increasingly common. Use. The stele (plural: stelae), as it is called in an archaeological context, is one of the oldest forms of funerary art. Originally, a tombstone was the stone lid of a stone coffin, or the coffin itself, and a gravestone was the stone slab (or ledger stone) that was laid flat over a grave. Now, all three terms ("stele", "tombstone" or "gravestone") are also used for markers set (usually upright) at the head of the grave. Some graves in the 18th century also contained footstones to demarcate the foot end of the grave. This sometimes developed into full kerb sets that marked the whole perimeter of the grave. Footstones were rarely annotated with more than the deceased's initials and year of death, and sometimes a memorial mason and plot reference number. Many cemeteries and churchyards have removed those extra stones to ease grass cutting by machine mower. In some UK cemeteries, the principal, and indeed only, marker is placed at the foot of the grave. Owing to soil movement and downhill creep on gentle slopes, older headstones and footstones can often be found tilted at an angle. Over time, this movement can result in the stones being sited several metres away from their original location. Graves and any related memorials are a focus for mourning and remembrance. The names of relatives are often added to a gravestone over the years, so that one marker may chronicle the passing of an entire family spread over decades. Since gravestones and a plot in a cemetery or churchyard cost money, they are also a symbol of wealth or prominence in a community. Some gravestones were even commissioned and erected to their own memory by people who were still living, as a testament to their wealth and status. In a Christian context, the very wealthy often erected elaborate memorials within churches rather than having simply external gravestones. Crematoria frequently offer similar alternatives to families who do not have a grave to mark, but who want a focus for their mourning and for remembrance. Carved or cast commemorative plaques inside the crematorium for example may serve this purpose. Materials. A cemetery may follow national codes of practice or independently prescribe the size and use of certain materials, especially in a conservation area. Some may limit the placing of a wooden memorial to six months after burial, after which a more permanent memorial must be placed. Others may require stones of a certain shape or position to facilitate grass-cutting. Headstones of granite, marble and other kinds of stone are usually created, installed, and repaired by monumental masons. 
Cemeteries require regular inspection and maintenance, as stones may settle, topple and, on rare occasions, fall and injure people; or graves may simply become overgrown and their markers lost or vandalised. Restoration is a specialized job for a monumental mason. Even overgrowth removal requires care to avoid damaging the carving. For example, ivy should only be cut at the base roots and left to naturally die off, never pulled off forcefully. Many materials have been used as markers. Inscriptions. Markers sometimes bear inscriptions. The information on the headstone generally includes the name of the deceased and their date of birth and death. Such information can be useful to genealogists and local historians. Larger cemeteries may require a discreet reference code as well to help accurately fix the location for maintenance. The cemetery owner, church, or, as in the UK, national guidelines might encourage the use of 'tasteful' and accurate wording in inscriptions. The placement of inscriptions is traditionally placed on the forward-facing side of the memorial but can also be seen in some cases on the reverse and around the edges of the stone itself. Some families request that an inscription be made on the portion of the memorial that will be underground. In addition, some gravestones also bear epitaphs in praise of the deceased or quotations from religious texts, such as "requiescat in pace". In a few instances the inscription is in the form of a plea, admonishment, testament of faith, claim to fame or even a curse – William Shakespeare's inscription famously declares &lt;poem&gt;Good friend, for Jesus' sake forbear, To dig the dust enclosèd here. Blest be the man that spares these stones, And cursed be he that moves my bones.&lt;/poem&gt; Or a warning about mortality, such as this Persian poetry carved on an ancient tombstone in the Tajiki capital of Dushanbe. &lt;poem&gt;I heard that mighty Jamshed the King Carved on a stone near a spring of water these words: "Many – like us – sat here by this spring And left this life in the blink of an eye. We captured the whole world through our courage and strength, Yet could take nothing with us to our grave."&lt;/poem&gt; Or a simpler warning of inevitability of death: &lt;poem&gt;Remember me as you pass by, As you are now, so once was I, As I am now, so you will be, Prepare for death and follow me.&lt;/poem&gt; Headstone engravers faced their own "year 2000 problem" when still-living people, as many as 500,000 in the United States alone, pre-purchased headstones with pre-carved death years beginning with 19–. Bas-relief carvings of a religious nature or of a profile of the deceased can be seen on some headstones, especially up to the 19th century. Since the invention of photography, a gravestone might include a framed photograph or cameo of the deceased; photographic images or artwork (showing the loved one, or some other image relevant to their life, interests or achievements) are sometimes now engraved onto smooth stone surfaces. Some headstones use lettering made of white metal fixed into the stone, which is easy to read but can be damaged by ivy or frost. Deep carvings on a hard-wearing stone may weather many centuries exposed in graveyards and still remain legible. Those fixed on the inside of churches, on the walls, or on the floor (often as near the altar as possible) may last much longer: such memorials were often embellished with a monumental brass. 
The choice of language and/or script on gravestones has been studied by sociolinguists as indicators of language choices and language loyalty. For example, by studying cemeteries used by immigrant communities, some languages were found to be carved "long after the language ceased to be spoken" in the communities. In other cases, a language used in the inscription may indicate a religious affiliation. Marker inscriptions have also been used for political purposes, such as the grave marker installed in January 2008 at Cave Hill Cemetery in Louisville, Kentucky by Mathew Prescott, an employee of PETA. The grave marker is located near the grave of KFC founder Harland Sanders and bears the acrostic message "KFC tortures birds". The group placed its grave marker to promote its contention that KFC is cruel to chickens. Form and decoration. Gravestones may be simple upright slabs with semi-circular, rounded, gabled, pointed-arched, pedimental, square or other shaped tops. During the 18th century, they were often decorated with "memento mori" (symbolic reminders of death) such as skulls or winged skulls, winged cherub heads, heavenly crowns, or the picks and shovels of the gravedigger. Somewhat unusual were more elaborate allegorical figures, such as Old Father Time, or emblems of trade or status, or even some event from the life of the deceased (particularly how they died). Large tomb chests, false sarcophagi as the actual remains were in the earth below, or smaller coped chests were commonly used by the gentry as a means of commemorating a number of members of the same family. In the 19th century, headstone styles became very diverse, ranging from plain to highly decorated, and often using crosses on a base or other shapes differing from the traditional slab. By this time popular designs were shifting from symbols of death like Winged heads and Skulls to Urns and Willow trees. Marble also became overwhelmingly popular as a grave material during the 1800s in the United States. More elaborately carved markers, such as crosses or angels also became popular during this time. Simple curb surrounds, sometimes filled with glass chippings, were popular during the mid-20th century. Islamic headstones are traditionally more a rectangular upright shaft, often topped with a carved topknot symbolic of a turban; but in Western countries more local styles are often used. Some form of simple decoration may be employed. Special emblems on tombstones indicate several familiar themes in many faiths. Some examples are: &lt;templatestyles src="Div col/styles.css"/&gt; Greek letters might also be used: Safety. Over time a headstone may settle or its fixings weaken. After several instances where unstable stones have fallen in dangerous circumstances, some burial authorities "topple test" headstones by firm pressure to check for stability. They may then tape them off or flatten them. This procedure has proved controversial in the UK, where an authority's duty of care to protect visitors is complicated because it often does not have any ownership rights over the dangerous marker. Authorities that have knocked over stones during testing or have unilaterally lifted and laid flat any potentially hazardous stones have been criticised, after grieving relatives have discovered that their relative's marker has been moved. Since 2007 Consistory Court and local authority guidance now restricts the force used in a topple test and requires an authority to consult relatives before moving a stone. 
In addition, before laying a stone flat, it must be recorded for posterity. Gravestone cleaning. Gravestone cleaning is a practice that both professionals and volunteers can undertake to preserve gravestones and increase their life spans. Before cleaning any gravestone, permission must be given to the cleaner by a "descendant, the sexton, cemetery superintendent or the town, in that order. If unsure who to ask, go to your town cemetery keeper and inquire." A gravestone can be cleaned to remove human vandalism and graffiti, biological growth such as algae or lichen, and mineral deposits, soiling, or staining. One of the most important tenets of gravestone cleaning is "do no harm." In the United States, the National Park Service has published a list of guidelines outlining the best practices of gravestone cleaning. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\alpha \\omega" }, { "math_id": 1, "text": "\\chi \\rho" } ]
https://en.wikipedia.org/wiki?curid=603121
60312102
Shivaji Sondhi
Indian-born theoretical physicist Shivaji Lal Sondhi is an Indian-born theoretical physicist who is currently the Wykeham Professor of Physics in the Rudolf Peierls Centre for Theoretical Physics at the University of Oxford, known for contributions to the field of quantum condensed matter. He is son of former Lok Sabha MP Manohar Lal Sondhi. Early life and career. Sondhi was brought up in Delhi, India, where he was educated through high school at Sardar Patel Vidyalaya. He received a B.Sc. in physics from Hindu College, University of Delhi in 1984. He enrolled in the doctoral program in physics at the State University of New York at Stony Brook and began working under the supervision of Steven Kivelson. Around 1988–89, Sondhi moved with his advisor to the University of California, Los Angeles, where he received his PhD in 1992. He spent three years as a postdoctoral researcher at the University of Illinois, Urbana-Champaign (formally under the joint supervision of Gordon Baym, Eduardo Fradkin, Paul Goldbart, and Michael Stone at what is now the Institute for Condensed Matter Theory), before taking up an assistant professorship at Princeton in 1995. At Princeton, Sondhi was promoted to associate professor in 2001, and to professor of physics in 2005. He served as a Senior Fellow of the Princeton Center for Theoretical Science (which he co-founded) from 2006–08. Sondhi remained at Princeton until 2021, when he was appointed to the Wykeham Professorship at the University of Oxford, succeeding David Sherrington. Research. Sondhi has worked extensively across a wide range of topics in theoretical condensed matter physics, notably in the areas of topological phases of matter, strongly correlated electrons, and quantum magnetism. His recent research activity focuses on the study of many-body quantum dynamics. Sondhi's most significant contributions include the discovery of skyrmions in the quantum Hall effect (with A. Karlhede, S. Kivelson and E. Rezayi), the identification of a resonating valence bond liquid phase in the triangular lattice quantum dimer model (with R. Moessner), the theoretical prediction of magnetic monopoles in spin ice (with C. Castelnovo and R. Moessner), and for proposing the formula_0-spin glass/time crystal state of periodically driven (Floquet) systems (with V. Khemani, A. Lazarides and R. Moessner). Awards and honors. In 1996, Sondhi was awarded the William L. McMillan Award in condensed matter physics from the University of Illinois. He is a recipient of both the Alfred P. Sloan Fellowship (1996) and of a David and Lucile Packard Fellowship (1998), and was elected a Fellow of the American Physical Society in 2008. He also received a Humboldt Research Award from the Alexander von Humboldt Foundation in 2015. Sondhi was awarded a 2020 Leverhulme International Professorship to be held at the University of Oxford. In 2012, Sondhi shared the EPS Europhysics Prize with Steven T. Bramwell, Claudio Castelnovo, Santiago Grigera, Roderich Moessner, and Alan Tennant, for the prediction and experimental observation of magnetic monopoles in spin ice. Other activities. Sondhi also directed a program on India and the World at the Center for International Security Studies at the Woodrow Wilson School of Princeton University. Previously, he co-founded and co-directed a program on Oil, Energy and the Middle East at Princeton. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=60312102
60313447
Schneider flow
Schneider flow describes the axisymmetric outer flow induced by a laminar or turbulent jet having a large jet Reynolds number or by a laminar plume with a large Grashof number, in the case where the fluid domain is bounded by a wall. When the jet Reynolds number or the plume Grashof number is large, the full flow field constitutes two regions of different extent: a thin boundary-layer flow that may be identified as the jet or as the plume and a slowly moving fluid in the large outer region encompassing the jet or the plume. The Schneider flow describing the latter motion is an exact solution of the Navier-Stokes equations, discovered by Wilhelm Schneider in 1981. The solution was also discovered by A. A. Golubinskii and V. V. Sychev in 1979; however, it was never applied to flows entrained by jets. The solution is an extension of Taylor's potential flow solution to arbitrary Reynolds number. Mathematical description. For laminar or turbulent jets and for laminar plumes, the volumetric entrainment rate per unit axial length is constant, as can be seen from the solutions of the Schlichting jet and the Yih plume. Thus, the jet or plume can be considered as a line sink that drives the motion in the outer region, as was first done by G. I. Taylor. Prior to Schneider, it was assumed that this outer fluid motion is also a large Reynolds number flow, hence the outer fluid motion was assumed to be a potential flow solution, which was solved by G. I. Taylor in 1958. For the turbulent plume, the entrainment is not constant; nevertheless, the outer fluid is still governed by Taylor's solution. Though Taylor's solution is still true for the turbulent jet, for the laminar jet or laminar plume the effective Reynolds number of the outer fluid is found to be of order unity, since the entrainment by the sink in these cases is such that the flow is not inviscid. In this case, the full Navier-Stokes equations have to be solved for the outer fluid motion and, at the same time, since the fluid is bounded from below by a solid wall, the solution has to satisfy the no-slip condition. Schneider obtained a self-similar solution for this outer fluid motion, which naturally reduces to Taylor's potential flow solution as the entrainment rate by the line sink is increased. Suppose a conical wall of semi-angle formula_2 with the polar axis along the cone axis, and assume the vertex of the solid cone sits at the origin of the spherical coordinates formula_3 and extends along the negative axis. Now, put the line sink along the positive side of the polar axis. Set up this way, formula_1 represents the common case of a flat wall with a jet or plume emerging from the origin. The case formula_4 corresponds to a jet/plume issuing from a thin injector. The flow is axisymmetric with zero azimuthal motion, i.e., the velocity components are formula_5. The usual technique to study the flow is to introduce the Stokes stream function formula_6 such that formula_7 Introducing formula_8 as the replacement for formula_9 and introducing the self-similar form formula_10 into the axisymmetric Navier-Stokes equations, we obtain formula_11 where the constant formula_0 is such that the volumetric entrainment rate per unit axial length is equal to formula_12. For the laminar jet, formula_13, and for the laminar plume it depends on the Prandtl number formula_14; for example, with formula_15 we have formula_16 and with formula_17 we have formula_13. For the turbulent jet, this constant is of the order of the jet Reynolds number, which is a large number. 
The above equation can easily be reduced to a Riccati equation by integrating thrice, a procedure that is the same as in the Landau–Squire jet (the main difference between the Landau–Squire jet and the current problem lies in the boundary conditions). The boundary conditions on the conical wall formula_18 become formula_19 and along the line sink formula_20, we have formula_21 From here, the problem has been solved numerically. The numerical solution also provides the values formula_22 (the radial velocity at the axis), which must be accounted for in the first-order boundary analysis of the inner jet problem at the axis. Taylor's potential flow. For the turbulent jet, formula_23, the linear terms in the equation can be neglected everywhere except near a small boundary layer along the wall. Then, neglecting the no-slip conditions (formula_24) at the wall, the solution, which was provided by G. I. Taylor in 1958, is given by formula_25 In the case of axisymmetric turbulent plumes, where the entrainment rate per unit axial length of the plume increases like formula_26, Taylor's solution is given by formula_27 where formula_28 is a constant, formula_29 is the specific buoyancy flux and formula_30 in which formula_31 denotes the associated Legendre function of the first kind with degree formula_32 and order formula_33. Composite solution for laminar jets and plumes. The Schneider flow describes the outer motion driven by the jets or plumes, and it becomes invalid in a thin region encompassing the axis where the jet or plume resides. For laminar jets, the inner solution is described by the Schlichting jet and for laminar plumes, the inner solution is prescribed by the Yih plume. A composite solution, stitching together the inner thin Schlichting solution and the outer Schneider solution, can be constructed by the method of matched asymptotic expansions. For the laminar jet, the composite solution is given by formula_34 in which the first term represents the Schlichting jet (with a characteristic jet thickness formula_35), the second term represents the Schneider flow and the third term is the subtraction of the matching conditions. Here formula_36 is the Reynolds number of the jet and formula_37 is the kinematic momentum flux of the jet. A similar composite solution can be constructed for the laminar plumes. Other considerations. The exact solution of the Navier–Stokes equations was verified experimentally by Zauner in 1985. Further analysis showed that, unlike in the Schlichting jet solution, the axial momentum flux decays slowly along the axis, and the Schneider flow becomes invalid when the distance from the origin grows to the order of the exponential of the square of the jet Reynolds number; thus the domain of validity of the Schneider solution increases with increasing jet Reynolds number. Presence of swirl. The presence of swirling motion, i.e., formula_38, is shown not to influence the axial motion given by formula_39 provided formula_40. If formula_0 is very large, the presence of swirl completely alters the motion on the axial plane. For formula_40, the azimuthal solution can be solved in terms of the circulation formula_41, where formula_42. The solution can be described in terms of a self-similar solution of the second kind, formula_43, where formula_44 is an unknown constant and formula_45 is an eigenvalue. The function formula_46 satisfies formula_47 subject to the boundary conditions formula_48 and formula_49 as formula_50. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
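As a minimal illustration (not part of the article), the closed-form Taylor limit formula_25 can be evaluated directly: with the self-similar form formula_10 and the velocity components defined through the Stokes stream function (formula_7), the outer-flow velocities follow from simple differentiation. The Python sketch below assumes the large-formula_0 (turbulent jet) limit over a flat wall and uses made-up parameter values for demonstration only.

import numpy as np

def taylor_outer_flow(r, theta, K, nu, alpha):
    # Outer flow entrained by a turbulent jet in the large-K limit:
    # psi = K*nu*r*f(xi), f(xi) = (xi - xi_w)/(1 - xi_w), xi = cos(theta), xi_w = cos(alpha).
    xi, xi_w = np.cos(theta), np.cos(alpha)
    f = (xi - xi_w) / (1.0 - xi_w)
    dfdxi = 1.0 / (1.0 - xi_w)
    v_r = -K * nu * dfdxi / r                    # v_r = (1/(r^2 sin(theta))) d(psi)/d(theta)
    v_theta = -K * nu * f / (r * np.sin(theta))  # v_theta = -(1/(r sin(theta))) d(psi)/dr
    return v_r, v_theta

# Made-up values: flat wall (alpha = pi/2), air-like kinematic viscosity, K of the order
# of the jet Reynolds number; the negative v_r indicates entrainment toward the origin.
print(taylor_outer_flow(r=0.1, theta=np.pi / 4, K=1.0e3, nu=1.5e-5, alpha=np.pi / 2))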
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": "\\alpha=\\pi/2" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "(r,\\theta,\\phi)" }, { "math_id": 4, "text": "\\alpha=\\pi" }, { "math_id": 5, "text": "(v_r,v_\\theta,0)" }, { "math_id": 6, "text": "\\psi" }, { "math_id": 7, "text": "v_r = \\frac{1}{r^2\\sin\\theta}\\frac{\\partial\\psi}{\\partial\\theta}, \\quad v_\\theta = - \\frac{1}{r\\sin\\theta}\\frac{\\partial\\psi}{\\partial r}." }, { "math_id": 8, "text": "\\xi =\\cos\\theta" }, { "math_id": 9, "text": "\\theta" }, { "math_id": 10, "text": "\\psi = K\\nu r f(\\xi)" }, { "math_id": 11, "text": "K^{-1}[(1-\\xi^2)f'''' - 4\\xi f'''] - ff''' - 3f'f'' = 0." }, { "math_id": 12, "text": "2\\pi K\\nu" }, { "math_id": 13, "text": "K=4" }, { "math_id": 14, "text": "Pr" }, { "math_id": 15, "text": "Pr=1" }, { "math_id": 16, "text": "K=6" }, { "math_id": 17, "text": "Pr=2" }, { "math_id": 18, "text": "\\xi=\\xi_w=\\cos\\alpha" }, { "math_id": 19, "text": "f(\\xi_w)=f'(\\xi_w)=0" }, { "math_id": 20, "text": "\\xi=1" }, { "math_id": 21, "text": "f(1)=1, \\quad \\lim_{\\xi\\rightarrow 1} (1-\\xi)^{1/2} f''\\rightarrow 0." }, { "math_id": 22, "text": "f'(1)" }, { "math_id": 23, "text": "K\\gg 1" }, { "math_id": 24, "text": "f'(\\xi_w)=0" }, { "math_id": 25, "text": "f= \\frac{\\xi-\\xi_w}{1-\\xi_w}." }, { "math_id": 26, "text": "r^{2/3}" }, { "math_id": 27, "text": "\\psi = C B^{1/3} r^{5/3}g(\\xi)" }, { "math_id": 28, "text": "C" }, { "math_id": 29, "text": "B" }, { "math_id": 30, "text": "g=\\frac{\\pi}{\\sqrt 3} \\sqrt{1-\\xi^2}\\left[\\frac{P_{2/3}^1(-\\xi_w)}{P_{2/3}^1(\\xi_w)}P_{2/3}^1(\\xi)-P_{2/3}^1(-\\xi)\\right]" }, { "math_id": 31, "text": "P_{2/3}^1" }, { "math_id": 32, "text": "2/3" }, { "math_id": 33, "text": "1" }, { "math_id": 34, "text": "\\psi = 4\\nu r\\left[\\frac{(Re\\, \\theta)^2}{32/3+ (Re\\, \\theta)^2}+ f(\\cos\\theta) -1\\right]" }, { "math_id": 35, "text": "Re\\,\\theta" }, { "math_id": 36, "text": "Re= \\nu^{-1}(J/2\\pi \\rho)^{1/2}" }, { "math_id": 37, "text": "J/\\rho" }, { "math_id": 38, "text": "v_\\phi\\neq 0" }, { "math_id": 39, "text": "\\psi=K\\nu r f(\\xi)" }, { "math_id": 40, "text": "K\\sim O(1)" }, { "math_id": 41, "text": "2\\pi \\Gamma" }, { "math_id": 42, "text": "\\Gamma=r\\sin\\theta v_\\phi" }, { "math_id": 43, "text": "\\Gamma=Ar^\\lambda \\Lambda(\\xi)" }, { "math_id": 44, "text": "A" }, { "math_id": 45, "text": "\\lambda" }, { "math_id": 46, "text": "\\Lambda(\\xi)" }, { "math_id": 47, "text": "K^{-1}[(1-\\xi^2)\\Lambda''+\\lambda(\\lambda-1)\\Lambda]-f\\Lambda'+\\lambda f'\\Lambda =0" }, { "math_id": 48, "text": "\\Lambda(\\xi_w)=0" }, { "math_id": 49, "text": "(1-\\xi)^{1/2}\\Lambda'\\rightarrow 0" }, { "math_id": 50, "text": "\\xi\\rightarrow 1" } ]
https://en.wikipedia.org/wiki?curid=60313447
60314915
Particle method
Class of numerical methods in scientific computing Particle methods are a widely used class of numerical algorithms in scientific computing. Their applications range from computational fluid dynamics (CFD) and molecular dynamics (MD) to discrete element methods. History. One of the earliest particle methods is smoothed particle hydrodynamics, presented in 1977. Libersky "et al." were the first to apply SPH in solid mechanics. The main drawbacks of SPH are inaccurate results near boundaries and a tension instability that was first investigated by Swegle. In the 1990s a new class of particle methods emerged. The reproducing kernel particle method (RKPM) emerged, with an approximation motivated in part by the wish to correct the kernel estimate in SPH: to give accuracy near boundaries and in non-uniform discretizations, and higher-order accuracy in general. Notably, in a parallel development, the material point method was developed around the same time, offering similar capabilities. During the 1990s and thereafter, several other varieties were developed, including those listed below. List of methods and acronyms. The following numerical methods are generally considered to fall within the general class of "particle" methods. Acronyms are provided in parentheses. Definition. The mathematical definition of particle methods captures the structural commonalities of all particle methods. It therefore allows for formal reasoning across application domains. The definition is structured into three parts: first, the particle method algorithm structure, including its structural components, namely data structures and functions; second, the definition of a particle method instance, which describes a specific problem or setting that can be solved or simulated using the particle method algorithm; third, the definition of the particle state transition function, which describes how a particle method proceeds from the instance to the final state using the data structures and functions of the particle method algorithm. A particle method algorithm is a 7-tuple formula_0, consisting of the two data structures formula_1 such that formula_2 is the state space of the particle method, and five functions: formula_3 An initial state defines a particle method instance for a given particle method algorithm formula_0: formula_4 The instance consists of an initial value for the global variable formula_5 and an initial tuple of particles formula_6. In a specific particle method, the elements of the tuple formula_0 need to be specified. Given a specific starting point defined by an instance formula_7, the algorithm proceeds in iterations. Each iteration corresponds to one state transition step formula_8 that advances the current state of the particle method formula_9 to the next state formula_10. The state transition uses the functions formula_11 to determine the next state. The state transition function formula_12 generates a series of state transition steps until the stopping function formula_13 is formula_14. The final state calculated in this way is the result of the state transition function. The state transition function is identical for every particle method and is defined as formula_15 with formula_16. 
The following pseudo-code illustrates the particle method state transition function: 1 formula_17 2 while formula_18 3 for formula_19 to formula_20 4 formula_21 5 for formula_22 to formula_23 6 formula_24 7 formula_25 8 for formula_19 to formula_20 9 formula_26 10 formula_27 11 formula_28 12 formula_29 13 formula_30 The bold symbols denote tuples: formula_31 are particle tuples and formula_32 is an index tuple, while formula_33 is the empty tuple. The operator formula_34 is the concatenation of particle tuples, e.g. formula_35, and formula_36 is the number of elements in the tuple formula_37, e.g. formula_38. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
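As an illustration (not part of the article), the pseudo-code above can be transliterated almost line by line into Python, with the problem-specific functions passed in as callables; lists stand in for the tuples, and the neighbourhood function is read as returning the indices of the interaction partners of particle j.

def particle_method_run(g, particles, u, f, i, e, e_global):
    # Generic particle-method state transition function.
    #   u(state, j)  -- neighbourhood function: indices of interaction partners of particle j
    #   f(g)         -- stopping condition on the global variable
    #   i(g, p, q)   -- interact function, returns the updated pair (p, q)
    #   e(g, p)      -- evolve function, returns (g, list of replacement particles)
    #   e_global(g)  -- evolve function of the global variable
    p = list(particles)
    while not f(g):
        # interaction phase: every particle interacts with its neighbours
        for j in range(len(p)):
            for k in u((g, p), j):
                p[j], p[k] = i(g, p[j], p[k])
        # evolution phase: each particle is replaced by zero or more new particles
        q = []
        for j in range(len(p)):
            g, new_particles = e(g, p[j])
            q.extend(new_particles)
        p = q
        g = e_global(g)
    return g, p

With, for example, an empty neighbourhood function, an evolve function that advances each particle by one explicit Euler step, and a global variable that simply counts time, this loop reduces to plain time integration; richer choices of the neighbourhood, interact and evolve functions give SPH-like or discrete-element-like schemes.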
[ { "math_id": 0, "text": "(P, G, u, f, i, e, \\overset{\\circ}{e})" }, { "math_id": 1, "text": "\n\\begin{align}\n &P := A_1 \\times A_2 \\times ... \\times A_n \n &&\\text{the particle space,}\\\\\n &G := B_1 \\times B_2 \\times ... \\times B_m \n &&\\text{the global variable space,}\n\\end{align}\n" }, { "math_id": 2, "text": "[G\\times P^*]" }, { "math_id": 3, "text": "\n\\begin{align}\n &u: [G \\times P^*] \\times \\mathbb N \\rightarrow \\mathbb N^*\n &&\\text{the neighborhood function,}\\\\\n &f: G \\rightarrow \\{ \\top,\\bot \\} \n &&\\text{the stopping condition,}\\\\\n &i: G \\times P \\times P \\rightarrow P\\times P \n &&\\text{the interact function,}\\\\\n &e: G \\times P\\rightarrow G \\times P^* \\ \n &&\\text{the evolve function,} \\\\\n &\\overset{\\circ}{e} : G \\rightarrow G \n &&\\text{the evolve function of the global variable.}\n\\end{align}\n" }, { "math_id": 4, "text": "\n[g^1,\\mathbf{p}^1] \\in [G\\times P^*].\n" }, { "math_id": 5, "text": "g^1 \\in G" }, { "math_id": 6, "text": "\\mathbf p^1 \\in P^*" }, { "math_id": 7, "text": "[g^{1},\\mathbf{p}^{1}]" }, { "math_id": 8, "text": "s" }, { "math_id": 9, "text": "[g^{t},\\mathbf{p}^{t}]" }, { "math_id": 10, "text": "[g^{t+1},\\mathbf{p}^{t+1}]" }, { "math_id": 11, "text": "u, i, e, \\overset{\\circ}{e}" }, { "math_id": 12, "text": "S" }, { "math_id": 13, "text": "f" }, { "math_id": 14, "text": "true" }, { "math_id": 15, "text": "\n\tS : [G\\times P^*] \\rightarrow [G\\times P^*]\n" }, { "math_id": 16, "text": "\n\t[g^T, \\mathbf p^T]:=S([g^1, \\mathbf p^1])\n" }, { "math_id": 17, "text": "[g, \\mathbf p] = [g^1, \\mathbf p^1]" }, { "math_id": 18, "text": "f(g)=false" }, { "math_id": 19, "text": "j = 1" }, { "math_id": 20, "text": "|\\mathbf p|" }, { "math_id": 21, "text": "\\mathbf k=u([g,\\mathbf p],j)" }, { "math_id": 22, "text": "l = 1" }, { "math_id": 23, "text": "|\\mathbf k|" }, { "math_id": 24, "text": "(p_j,p_{k_j})=i(g,p_j,p_{k_j})" }, { "math_id": 25, "text": "\\mathbf q = ()" }, { "math_id": 26, "text": "(g,\\overline{\\mathbf q})=e(g,p_j)" }, { "math_id": 27, "text": "\\mathbf q=\\mathbf q\\circ\\overline{\\mathbf q}" }, { "math_id": 28, "text": "\\mathbf p=\\mathbf q" }, { "math_id": 29, "text": "g=\\overset{\\circ}{e}(g)" }, { "math_id": 30, "text": "[g^T, \\mathbf p^T] = [g, \\mathbf p]" }, { "math_id": 31, "text": "\\mathbf p, \\mathbf q" }, { "math_id": 32, "text": " \\mathbf k" }, { "math_id": 33, "text": " ()" }, { "math_id": 34, "text": "\\circ" }, { "math_id": 35, "text": "(p_1,p_2)\\circ(p_3,p_4,p_5)=(p_1,p_2,p_3,p_4,p_5)" }, { "math_id": 36, "text": " |\\mathbf p|" }, { "math_id": 37, "text": " \\mathbf p" }, { "math_id": 38, "text": "|(p_1,p_2)|=2" } ]
https://en.wikipedia.org/wiki?curid=60314915
60316348
Finiteness properties of groups
Mathematical property In mathematics, finiteness properties of a group are a collection of properties that allow the use of various algebraic and topological tools, for example group cohomology, to study the group. It is mostly of interest for the study of infinite groups. Special cases of groups with finiteness properties are finitely generated and finitely presented groups. Topological finiteness properties. Given an integer "n" ≥ 1, a group formula_0 is said to be "of type" "F""n" if there exists an aspherical CW-complex whose fundamental group is isomorphic to formula_0 (a classifying space for formula_0) and whose "n"-skeleton is finite. A group is said to be of type "F"∞ if it is of type "F""n" for every "n". It is of type "F" if there exists a finite aspherical CW-complex of which it is the fundamental group. For small values of "n" these conditions have more classical interpretations: It is known that for every "n" ≥ 1 there are groups of type "F""n" which are not of type "F""n"+1. Finite groups are of type "F"∞ but not of type "F". Thompson's group formula_1 is an example of a torsion-free group which is of type "F"∞ but not of type "F". A reformulation of the "F""n" property is that a group has it if and only if it acts properly discontinuously, freely and cocompactly on a CW-complex whose homotopy groups formula_2 vanish. Another finiteness property can be formulated by replacing homotopy with homology: a group is said to be of type "FH"n if it acts as above on a CW-complex whose first "n" homology groups vanish. Algebraic finiteness properties. Let formula_0 be a group and formula_3 its group ring. The group formula_0 is said to be of type FP"n" if there exists a resolution of the trivial formula_3-module formula_4 such that the first "n" terms are finitely generated projective formula_3-modules. The types "FP"∞ and "FP" are defined in the obvious way. The same statement with projective modules replaced by free modules defines the classes "FL""n" for "n" ≥ 1, "FL"∞ and "FL". It is also possible to define classes "FP""n"("R") and "FL""n"("R") for any commutative ring "R", by replacing the group ring formula_3 by formula_5 in the definitions above. Either of the conditions "F""n" or "FH""n" implies "FP""n" and "FL""n" (over any commutative ring). A group is of type "FP"1 if and only if it is finitely generated, but for any "n" ≥ 2 there exist groups which are of type "FP""n" but not "F""n". Group cohomology. If a group is of type "FP""n" then its cohomology groups formula_6 are finitely generated for formula_7. If it is of type "FP" then it is of finite cohomological dimension. Thus finiteness properties play an important role in the cohomology theory of groups. Examples. Finite groups. A finite cyclic group formula_8 acts freely on the unit sphere in formula_9, preserving a CW-complex structure with finitely many cells in each dimension. Since this unit sphere is contractible, every finite cyclic group is of type F∞. The standard resolution for a group formula_8 gives rise to a contractible CW-complex with a free formula_8-action in which the cells of dimension formula_10 correspond to formula_11-tuples of elements of formula_8. This shows that every finite group is of type F∞. A non-trivial finite group is never of type "F" because it has infinite cohomological dimension. This also implies that a group with a non-trivial torsion subgroup is never of type "F". Nilpotent groups. If formula_0 is a torsion-free, finitely generated nilpotent group then it is of type F. 
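As a concrete illustration of the statement about finite groups (a standard computation, not part of the article text): for the finite cyclic group "G" = Z/"n" with generator "t", the group ring Z"G" contains the norm element "N" = 1 + "t" + ... + "t""n"−1, and multiplication by "t" − 1 and by "N" alternate to give a 2-periodic free resolution of the trivial module Z, sketched in LaTeX below.

% 2-periodic free resolution of Z over ZG for the cyclic group G = Z/n,
% with generator t and norm element N = 1 + t + ... + t^{n-1}:
\cdots \longrightarrow \mathbb{Z}G \xrightarrow{\;N\;} \mathbb{Z}G
  \xrightarrow{\;t-1\;} \mathbb{Z}G \xrightarrow{\;N\;} \mathbb{Z}G
  \xrightarrow{\;t-1\;} \mathbb{Z}G \xrightarrow{\;\varepsilon\;} \mathbb{Z}
  \longrightarrow 0,
\qquad
H^{2k}(G;\mathbb{Z}) \cong \mathbb{Z}/n \quad (k \ge 1), \qquad H^{2k+1}(G;\mathbb{Z}) = 0.

Every term is a finitely generated free Z"G"-module, so "G" is of type "FP"∞ (and "FL"∞), while the cohomology is non-zero in infinitely many degrees, so "G" has infinite cohomological dimension and is therefore not of type "FP"; this matches the statement above that non-trivial finite groups are of type "F"∞ but not of type "F".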
Geometric conditions for finiteness properties. Negatively curved groups (hyperbolic or CAT(0) groups) are always of type "F"∞. Such a group is of type "F" if and only if it is torsion-free. As an example, cocompact S-arithmetic groups in algebraic groups over number fields are of type F∞. The Borel–Serre compactification shows that this is also the case for non-cocompact arithmetic groups. Arithmetic groups over function fields have very different finiteness properties: if formula_0 is an arithmetic group in a simple algebraic group of rank formula_12 over a global function field (such as formula_13) then it is of type Fr but not of type Fr+1. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Gamma" }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "\\pi_0, \\ldots, \\pi_{n-1}" }, { "math_id": 3, "text": "\\mathbb Z\\Gamma" }, { "math_id": 4, "text": "\\mathbb Z" }, { "math_id": 5, "text": "R\\Gamma" }, { "math_id": 6, "text": "H^i(\\Gamma)" }, { "math_id": 7, "text": "0 \\le i \\le n" }, { "math_id": 8, "text": "G" }, { "math_id": 9, "text": "\\mathbb R^{\\mathbb N}" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "(n+1)" }, { "math_id": 12, "text": "r" }, { "math_id": 13, "text": "\\mathbb F_q(t)" } ]
https://en.wikipedia.org/wiki?curid=60316348
60320598
Inceptionv3
Convolutional Neural Network Inception v3 is a convolutional neural network (CNN) for assisting in image analysis and object detection, and got its start as a module for GoogLeNet. It is the third edition of Google's Inception convolutional neural network, originally introduced during the ImageNet Recognition Challenge. The design of Inception v3 was intended to allow deeper networks while keeping the number of parameters from growing too large: it has "under 25 million parameters", compared with 60 million for AlexNet. Just as ImageNet can be thought of as a database of classified visual objects, Inception helps with the classification of objects in the world of computer vision. The Inception v3 architecture has been reused in many different applications, often "pre-trained" on ImageNet. One such use is in the life sciences, where it aids research on leukemia. Version history. Inception v1. In 2014, a team at Google developed GoogLeNet, which won the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The name came from the LeNet of 1998, since both LeNet and GoogLeNet are CNNs. They also called it "Inception" after the "we need to go deeper" internet meme, a phrase from the film "Inception" (2010). Because more versions were released later, the original Inception architecture was retroactively renamed "Inception v1". The models and the code were released under the Apache 2.0 license on GitHub. The Inception v1 architecture is a deep CNN composed of 22 layers. Most of these layers were "Inception modules". The original paper stated that Inception modules are a "logical culmination" of prior work. Since Inception v1 is deep, it suffered from the vanishing gradient problem. The team mitigated this by using two "auxiliary classifiers", which are linear-softmax classifiers inserted at 1/3 and 2/3 of the network's depth, with the loss function being a weighted sum of all three: formula_0 These were removed after training was complete. The vanishing gradient problem was later addressed more generally by the ResNet architecture. Inception v2. Inception v2 was released in 2015. It improves on Inception v1 by using factorized convolutions. For example, a single 5×5 convolution can be factored into a 3×3 convolution stacked on top of another 3×3 convolution. Both have a receptive field of size 5×5. The 5×5 convolution kernel has 25 parameters, compared to just 18 in the factorized version. Thus, the 5×5 convolution is strictly more powerful than the factorized version. However, this power is not necessarily needed. Empirically, the research team found that factorized convolutions help. Inception v3. Inception v3 was also released in 2015, in the same report as Inception v2. It improves on Inception v2 with further refinements, such as factorized 7×7 convolutions, batch normalization in the auxiliary classifiers, label-smoothing regularization, and the RMSProp optimizer. Inception v4. In 2016 the team released Inception v4, Inception ResNet v1, and Inception ResNet v2. Inception v4 is an incremental update with even more factorized convolutions, and other complications that were empirically found to improve benchmarks. Inception ResNet v1 and v2 are both modifications of Inception v4, where residual connections are added to each Inception module. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
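As an illustrative back-of-the-envelope sketch (not taken from the article or the original papers), the two ideas above can be written out directly: the parameter count of a factorized convolution versus a single 5×5 convolution, and the weighted auxiliary-classifier loss of Inception v1. The channel count of 192 is an arbitrary example value.

def conv_params(k, c_in, c_out):
    # Weight count of a single k x k convolution layer (biases ignored).
    return k * k * c_in * c_out

# Per-channel-pair counts quoted in the text: 25 weights versus 2 * 9 = 18 weights.
print(conv_params(5, 1, 1), 2 * conv_params(3, 1, 1))

# The same ~28% saving holds for realistic channel counts, e.g. 192 -> 192 feature maps.
print(conv_params(5, 192, 192), 2 * conv_params(3, 192, 192))

def inception_v1_loss(loss_main, loss_aux1, loss_aux2):
    # Training loss with the two auxiliary classifiers weighted by 0.3 each.
    return loss_main + 0.3 * loss_aux1 + 0.3 * loss_aux2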
[ { "math_id": 0, "text": "L = 0.3 L_{aux, 1} + 0.3 L_{aux, 2} + L_{real}" } ]
https://en.wikipedia.org/wiki?curid=60320598
60325344
Absorption heat transformer
Device which transfers heat from an intermediate temperature to a high temperature An absorption heat transformer (AHT) is a device which transfers heat from an intermediate temperature level to a high temperature level by means of an absorption process. It is driven by the temperature difference between the intermediate temperature and a low temperature level. The absorption heat transformer splits a heat flow formula_0 at an intermediate temperature level formula_1 into two heat flows, formula_2 at a higher (revaluated) temperature level formula_3 and formula_4 at a lower temperature level formula_5 (rejection heat). Such a device is also called a type II absorption heat pump or booster heat pump. Absorption heat transformers are especially suitable for heat recovery from industrial processes, their main advantage being the capacity to upgrade to a usable level the temperature of waste heat streams using only negligible quantities of electrical energy and no additional primary energy. formula_6 Definition of the thermal coefficient of performance. An AHT revalues approximately 50% of the driving heat flow. The temperature lift from the intermediate to the high temperature level is up to 50 K. The simplest construction, a single effect absorption heat transformer, consists of one condenser, one evaporator, one absorber and one generator. In contrast to a type I absorption heat pump, an absorption heat transformer operates in reverse. The difference from an absorption heat pump is that the absorber and evaporator now operate at high pressure and the condenser and generator at low pressure. The most common working pairs are water/lithium bromide (refrigerant = water, absorbent = LiBr) and ammonia/water (refrigerant = ammonia, absorbent = water). Process. A single absorption heat transformer consists of an absorber, generator, evaporator and condenser. In addition, there are a refrigerant pump, solution pump, solution throttle and solution heat exchanger for internal heat recovery. At the evaporator the refrigerant evaporates, driven by the heat input at the intermediate temperature level. The refrigerant vapour is absorbed in the absorber. Due to the released heat of absorption, the process delivers heat at the high temperature level. This heat is the revalued heat. Consequently, the absorbent is diluted while absorbing the refrigerant vapour. The diluted absorbent flows through the throttle to the generator. In the generator the refrigerant desorbs from the diluted solution. This process is driven by the heat input at the intermediate temperature level. The refrigerant vapour is condensed in the condenser and is pumped by the refrigerant pump into the evaporator. The heat at the condenser is rejected at the low temperature level. The concentrated absorbent is pumped into the absorber by the solution pump. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
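As a minimal sketch (not part of the article), the first-law bookkeeping implied by the definition of the thermal coefficient of performance can be written out, assuming steady state, negligible pump work, and a driving heat flow that splits into the revalued heat flow and the rejected heat flow; the 100 kW figure below is a made-up example.

def aht_split(q_driving_kw, cop_th=0.5):
    # Steady-state energy balance with pump work neglected:
    #   Q1 = Q2 + Q0,   COP_th = Q2 / Q1
    q_revalued_kw = cop_th * q_driving_kw          # Q2, delivered at the high temperature level
    q_rejected_kw = q_driving_kw - q_revalued_kw   # Q0, rejected at the low temperature level
    return q_revalued_kw, q_rejected_kw

# Example: 100 kW of waste heat supplied at the intermediate level with COP_th ~ 0.5
print(aht_split(100.0))   # -> (50.0, 50.0)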
[ { "math_id": 0, "text": "\\dot Q_1 " }, { "math_id": 1, "text": "T_1 " }, { "math_id": 2, "text": "\\dot Q_2 " }, { "math_id": 3, "text": "T_2 " }, { "math_id": 4, "text": "\\dot Q_0 " }, { "math_id": 5, "text": "T_0 " }, { "math_id": 6, "text": "COP_{th} = \\frac{\\dot Q_2}{\\dot Q_1} = \\frac{revalued \\ heat}{total \\ driving \\ heat}" } ]
https://en.wikipedia.org/wiki?curid=60325344
60325936
Historical models of the Solar System
Historical models of the Solar System first appeared during prehistoric periods and are being updated to this day. The models of the Solar System throughout history were first represented in the early form of cave markings and drawings, calendars and astronomical symbols. Then books and written records became the main source of information that expressed the way the people of the time thought of the Solar System. New models of the Solar System are usually built on previous models, thus, the early models are kept track of by intellectuals in astronomy, an extended progress from trying to perfect the geocentric model eventually using the heliocentric model of the Solar System. The use of the Solar System model began as a resource to signify particular periods during the year which was exploited by many leaders from the past. Astronomers and great thinkers of the past were able to record observations and attempt to formulate a model that accurately interprets the recordings. This scientific method of deriving a model of the Solar System is what enabled progress towards more accurate models to have a better understanding of the Solar System that we are in. Early astronomy. Navagraha. Around the 3rd millennium BC, Hindu astrology is developed, based on the Navagraha, nine heavenly bodies and deities that influence human life on Earth: the Sun, the Moon, the then known five planets, plus the ascending and descending nodes of the Moon. The Nebra Sky Disc. The Nebra Sky Disc is a bronze dish with symbols that are interpreted generally as the Sun or full moon, a lunar crescent, and stars (including a cluster of seven stars interpreted as the Pleiades). The disc has been attributed to a site in present-day Germany near Nebra, Saxony-Anhalt, and was originally dated by archaeologists to c. 1600 BCE, based on the provenance provided by the looters who found it. Researchers initially suggested the disc is an artefact from the Bronze Age Unetice culture, although a later dating to the Iron Age has also been proposed. Firmament. The ancient Hebrews, like all the ancient peoples of the Near East, believed the sky was a solid dome with the Sun, Moon, planets and stars embedded in it. In biblical cosmology, the firmament is the vast solid dome created by God during his creation of the world to divide the primal sea into upper and lower portions so that the dry land could appear. Babylonian interpretation. Babylonians thought the universe revolved around heaven and Earth. They used methodological observations of the patterns of planets and stars movements to predict future possibilities such as eclipses. Babylonians were able to make use of periodic appearances of the Moon to generate a time source - a calendar. This was developed as the appearance of the full moon was visible every month. The 12 months came about by dividing the ecliptic into 12 equal segments of 30 degrees and were given zodiacal constellation names which were later used by the Greeks. Chinese theories. The Chinese had multiple theories of the structure of the universe. The first theory is the "Gaitian" (celestial lid) theory, mentioned in an old mathematical text called "Zhou bei suan jing" in 100 BCE, in which the Earth is within the heaven, where the heaven acts as a dome or a lid. The second theory is the "Huntian" (Celestial sphere) theory during 100 BCE. This theory claims that the Earth floats on the water that the Heaven contains, which was accepted as the default theory until 200 AD. 
The "Xuanye" (Ubiquitous darkness) theory attempts to simplify the structure by implying that the Sun, Moon and the stars are just a highly dense vapour that floats freely in space with no periodic motion. Greek astronomy. Since 600 BCE, Greek thinkers noticed the periodic fashion of the Solar System (then regarded as the "whole universe") but, like their contemporaries, they were puzzled about the forward and retrograde motion of the planets, the "wanderer stars", long taken as heavenly deities. Many theories were announced during this period, mostly purely speculative, but progressively supported by geometry. Thales of Miletus alleged to have predicted the solar eclipse of 586 BCE. Around 475 BCE, Parmenides claimed that the universe is spherical and moonlight is a reflection of sunlight. Shortly after, circa 450 BCE, Anaxagoras was the first philosopher to consider the Sun as a huge object (larger than the land of Peloponnesus), and consequently, to realize how far from Earth it might be. He also suggested that the Moon is rocky, thus opaque, and closer to the Earth than the Sun, giving a correct explanation of eclipses. To him, comets are formed by collisions of planets and that the motion of planets is controlled by the "nous" (mind). Anaximander cosmology. Anaximander, around 560 BCE, was the first to conceive a mechanical model of the world. In his model, the Earth floats very still in the centre of the infinite, not supported by anything. Its curious shape is that of a cylinder with a height one-third of its diameter. The flat top forms the inhabited world, which is surrounded by a circular oceanic mass. At the origin, after the separation of hot and cold, a ball of flame appeared that surrounded Earth like bark on a tree. This ball broke apart to form the rest of the Universe. It resembled a system of hollow concentric wheels, filled with fire, with the rims pierced by holes like those of a flute. Consequently, the Sun was the fire that one could see through a hole the same size as the Earth on the farthest wheel, and an eclipse corresponded with the occlusion of that hole. The diameter of the solar wheel was twenty-seven times that of the Earth (or twenty-eight, depending on the sources) and the lunar wheel, whose fire was less intense, eighteen (or nineteen) times. Its hole could change shape, thus explaining lunar phases. The stars and the planets, located closer, followed the same model. Anaximander was the first philosopher to present a system where the celestial bodies turned at different distances. Pythagorean astronomical system. Around 400 BCE, Pythagoras' students believed the motion of planets is caused by an out-of-sight "fire" at the centre of the universe (not the Sun) that powers them, and Sun and Earth orbit that "Central Fire" at different distances. The Earth's inhabited side is always opposite to the Central Fire, rendering it invisible to people. So, the Earth rotates around itself synchronously with a daily orbit around that Central Fire, while the Sun revolves it yearly in a higher orbit. That way, the inhabited side of Earth faces the Sun once every 24 hours. They also claimed that the Moon and the planets orbit the Earth. This model is usually attributed to Philolaus. This model is the first one that depicts a moving Earth, simultaneously self-rotating and orbiting around an external point (but not around the Sun), thus not being geocentrical, contrary to common intuition. 
Due to philosophical concerns about the number 10 (a "perfect number" for the Pythagoreans), they also added a tenth "hidden body" or Counter-Earth ("Antichthon"), always on the opposite side of the invisible Central Fire and therefore also invisible from Earth. Platonic geocentrism. Around 360 BCE, Plato wrote in his "Timaeus" his idea to account for the motions. He claimed that circles and spheres were the preferred shape of the universe and that the Earth was at the centre, with the stars forming the outermost shell, followed by the planets, the Sun and the Moon. This is the so-called geocentric model. In Plato's cosmogony, the demiurge gave the primacy to the motion of Sameness and left it undivided; but he divided the motion of Difference into six parts, to have seven unequal circles. He prescribed these circles to move in opposite directions, three of them with equal speeds, the others with unequal speeds, but always in proportion. These circles are the orbits of the heavenly bodies: the three moving at equal speeds are the Sun, Venus and Mercury, while the four moving at unequal speeds are the Moon, Mars, Jupiter and Saturn. The complicated pattern of these movements is bound to be repeated again after a period called a 'complete' or 'perfect' year. So, Plato arranged these celestial orbs in the order (outwards from the center): Moon, Sun, Venus, Mercury, Mars, Jupiter, Saturn, and fixed stars, with the fixed stars located on the celestial sphere. However, this did not suffice to explain the observed planetary motion. Concentric spheres. Eudoxus of Cnidus, a student of Plato, in around 380 BCE introduced a technique to describe the motion of the planets called the "method of exhaustion". Eudoxus reasoned that since the distances of the stars, the Moon, the Sun and all known planets do not appear to be changing, they are fixed in a sphere in which the bodies move with a constant radius, with the Earth at the centre of the sphere. To explain the complexity of the movements of the planets, Eudoxus thought they move as if they were attached to a number of concentric, invisible spheres, each of them rotating around its own, different axis and at a different pace. Eudoxus' model had twenty-seven homocentric spheres, with each sphere explaining a type of observable motion for each celestial object. Eudoxus assigns one sphere to the fixed stars, which is supposed to explain their daily movement. He assigns three spheres to both the Sun and the Moon, with the first sphere moving in the same manner as the sphere of the fixed stars. The second sphere explains the movement of the Sun and the Moon on the ecliptic plane. The third sphere was supposed to move on a "latitudinally inclined" circle and explain the latitudinal motion of the Sun and the Moon in the cosmos. Four spheres were assigned to Mercury, Venus, Mars, Jupiter, and Saturn, the only known planets at that time. The first and second spheres of the planets moved exactly like the first two spheres of the Sun and the Moon. According to Simplicius, the third and fourth spheres of the planets were supposed to move in a way that created a curve known as a hippopede. The hippopede was a way to try and explain the retrograde motions of planets. Eudoxus emphasised that this is a purely mathematical construct of the model in the sense that the spheres of each celestial body do not exist; the model just shows the possible positions of the bodies. 
Around 350 BCE Aristotle, in his chief cosmological treatise "De Caelo" (On the Heavens), modified Eudoxus' model by supposing the spheres were material and crystalline. He was able to articulate the spheres for most planets; however, the spheres for Jupiter and Saturn crossed each other. Aristotle solved this complication by introducing an unrolled sphere in between, increasing the number of spheres needed well above Eudoxus'. Historians are unsure about how many spheres Aristotle thought there were in the cosmos, with theories ranging from 43 up to 55. Aristotle also tried to determine whether the Earth moves and concluded that all the celestial bodies fall towards Earth by natural tendency, and since Earth is the centre of that tendency, it is stationary. Incipient heliocentrism. By 330 BCE, Heraclides of Pontus said that the rotation of the Earth on its axis, from west to east, once every 24 hours, explained the apparent daily motion of the celestial sphere. Simplicius says that Heraclides proposed that the irregular movements of the planets can be explained if the Earth moves while the Sun stays still, but these statements are disputed. Around 280 BCE, Aristarchus of Samos offers the first definite discussion of the possibility of a heliocentric cosmos, and uses the size of the Earth's shadow on the Moon to estimate the Moon's orbital radius at 60 Earth radii, and its physical radius as one-third that of the Earth. He made an inaccurate attempt to measure the distance to the Sun, but sufficient to assert that the Sun is bigger than Earth and that it is further away than the Moon. So the minor body, the Earth, must orbit the major one, the Sun, and not the opposite. Following the heliocentric ideas of Aristarchus (but not explicitly supporting them), around 250 BCE Archimedes in his work "The Sand Reckoner" computes the diameter of the universe centered around the Sun to be about 10^14 stadia (in modern units, about 2 light years). In Archimedes' own words: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;His [Aristarchus'] hypotheses are that the fixed stars and the Sun remain unmoved, that the Earth revolves about the Sun on the circumference of a circle, the Sun lying in the middle of the orbit, and that the sphere of fixed stars, situated about the same center as the Sun, is so great that the circle in which he supposes the Earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface. However, Aristarchus' views were not widely adopted, and the geocentric notion would remain for centuries. Further developments. Around 210 BCE, Apollonius of Perga shows the equivalence of two descriptions of the apparent retrograde planet motions (assuming the geocentric model), one using eccentrics and another using deferents and epicycles. The latter would be a key feature for future models. The epicycle is described as a small orbit within a greater one, called the "deferent": as a planet orbits the Earth, it also orbits the original orbit, so its trajectory resembles an epitrochoid. This could explain how the planet seems to move as viewed from Earth. In the following century, measurements of the sizes and distances of the Earth and the Moon are improved. Around 200 BCE Eratosthenes determines that the radius of the Earth is roughly 6,400 km. Circa 150 BCE Hipparchus uses parallax to determine that the distance to the Moon is roughly 380,000 km. 
Hipparchus' work on the Earth-Moon system was so accurate that he could forecast solar and lunar eclipses for the next six centuries. He also discovered the precession of the equinoxes and compiled a star catalog of about 850 entries. During the period 127 to 141 AD, Ptolemy deduced that the Earth is spherical, based on the facts that observers at different places do not record a solar eclipse at the same time and that observers in the North cannot see the southern stars. Ptolemy attempted to resolve the dilemma of planetary motion, in which the observations were not consistent with perfect circular orbits of the bodies. He adopted Apollonius' epicycles as a solution, while emphasising that the epicyclic motion does not apply to the Sun. His main contribution to the model was the equant point. He also re-arranged the heavenly spheres in a different order than Plato did (from Earth outward): Moon, Mercury, Venus, Sun, Mars, Jupiter, Saturn and fixed stars, following a long astrological tradition and the ordering by orbital period. Ptolemy's work "Almagest" cemented the geocentric model in the West, and it remained the most authoritative text on astronomy for more than 1,500 years. His methods were accurate enough to remain largely undisputed. Medieval astronomy. Capella's astronomical model. Around 420 AD Martianus Capella described a modified geocentric model, in which the Earth is at rest in the center of the universe and circled by the Moon, the Sun, three planets and the stars, while Mercury and Venus circle the Sun. His model was not widely accepted, despite his authority; he was one of the earliest developers of the system of the seven liberal arts, the trivium (grammar, logic, and rhetoric) and the quadrivium (arithmetic, geometry, music, astronomy), that structured early medieval education. Nonetheless, his single encyclopedic work, "De nuptiis Philologiae et Mercurii" ("On the Marriage of Philology and Mercury"), also called "De septem disciplinis" ("On the seven disciplines"), was read, taught, and commented upon throughout the early Middle Ages and shaped European education during the early medieval period and the Carolingian Renaissance. This model would imply some knowledge of the transits of Mercury and Venus in front of the Sun, and of the fact that they also pass behind it periodically, which cannot be explained with Ptolemy's model. But it is unclear how that knowledge could have been obtained at that time, given how difficult these transits are to see with the naked eye; indeed, there is no evidence that any ancient culture knew of the transits. Alternatively, as seen from Earth, Mercury never departs from the Sun, either east or west, by more than 28°, and Venus by no more than 47°, facts known since antiquity that also could not be explained by Ptolemy. From this it could be inferred that they orbit the Sun, and hence that such transits should exist. Islamic astronomy. During the Islamic Golden Age in Baghdad, astronomers picking up from Ptolemy's work took more accurate measurements and offered new interpretations. In 1021 AD, Ibn al-Haytham adjusted Ptolemy's geocentric model in the light of his specialty in optics in his book "Al-shukuk 'ata Batlamyus", which translates to "Doubts about Ptolemy". Ibn al-Haytham argued that the epicycles Ptolemy introduced moved on inclined planes rather than in a single flat plane, which settled some further disputes. However, Ibn al-Haytham agreed that the Earth is at the centre of the Solar System, in a fixed position. 
Nasir al-Din al-Tusi, during the 13th century, was able to combine two possible methods for describing a planet's orbit and, as a result, derived a rotational aspect of the planets within their orbits. Copernicus arrived at the same conclusion in the 16th century. Ibn al-Shatir, during the 14th century, in an attempt to resolve Ptolemy's inconsistent lunar theory, applied a double epicycle model to the Moon, which reduced the predicted variation of the Moon's distance from the Earth. Copernicus also arrived at the same conclusion in the 16th century. Chinese astronomy. In 1051, Shen Kua, a Chinese scholar of applied mathematics, rejected circular planetary motion. He substituted it with a different motion described by the term 'willow-leaf': the planet follows a circular orbit, but then traces a small circular loop inside or outside the original orbit before returning to its original path. Renaissance. Copernicus's heliocentric model. During the 16th century Nicolaus Copernicus, reflecting on Ptolemy's and Aristotle's interpretations of the Solar System, believed that all the orbits of the planets and the Moon must be perfect uniform circular motions, despite the observations showing complex retrograde motion. Copernicus introduced a new model, consistent with the observations, that allowed for perfect circular motion. This is known as the heliocentric model, in which the Sun is placed at the centre of the universe (and hence of the Solar System) and the Earth, like all the other planets, orbits it. The heliocentric model also resolved the problem of the varying brightness of the planets. Copernicus also supported the theory of a spherical Earth with the idea that nature prefers spherical limits, which are seen in the Moon, the Sun, and also in the orbits of the planets; he furthermore believed that the universe had a spherical limit. Copernicus contributed further to practical astronomy by producing advanced techniques of observation and measurement and by providing instructional procedures. The heliocentric model implies that the Earth is also a planet, the third from the Sun after Mercury and Venus, and before Mars, Jupiter and Saturn; implicitly, it also means that planets are "worlds", like the Earth, and not "stars". The Moon, however, still orbits the Earth. The Tychonic system. The heliocentric model was not immediately adopted: conservatism, along with numerous observational, philosophical and religious concerns, prevented its acceptance for more than a century. In 1588, Tycho Brahe published his own Tychonic system, a blend of Ptolemy's classical geocentric model and Copernicus' heliocentric model, in which the Sun and the Moon revolve around the Earth, at the center of the universe, while all the other planets revolve around the Sun. It was an attempt to reconcile his religious beliefs with heliocentrism. This was the so-called geoheliocentric model, and it was adopted by some astronomers during the disputes between geocentrism and heliocentrism. Kepler's model. In 1609, Johannes Kepler, an advocate of the heliocentric model, using his patron Tycho Brahe's accurate measurements, noticed the inconsistencies of a heliocentric model in which the Sun sits exactly at the centre. Instead, Kepler developed a more accurate and consistent model in which the Sun is located not at the centre but at one of the two foci of an elliptical orbit. Kepler derived the three laws of planetary motion, which changed the model of the Solar System and the description of the orbital paths of the planets. 
These three laws of planetary motion are: (1) every planet moves along an ellipse with the Sun at one of its two foci; (2) the line joining a planet and the Sun sweeps out equal areas in equal intervals of time; and (3) the square of a planet's orbital period is proportional to the cube of the semi-major axis of its orbit. In modern notation, the third law reads formula_0 where "a" is the semi-major axis of the orbit, "T" is the period, "G" is the gravitational constant and "M" is the mass of the Sun. The third law thus relates a planet's orbital period to its distance from the Sun. Along with unprecedented accuracy, the Keplerian model also made it possible to put the Solar System to scale: if a reliable measurement of the distance between any two planetary bodies could be taken, the size of the whole system could be computed. By this time, the Solar System started to be conceived as something much smaller than the rest of the universe. (Yet, as late as 1596 Kepler himself still believed in the sphere of fixed stars, as illustrated in his book "Mysterium Cosmographicum".) Galileo's discoveries. With the help of the telescope, which provided a closer look into the sky, Galileo Galilei confirmed most of the heliocentric model of the Solar System. By observing the phases of Venus with the telescope he was able to support Copernicus's heliocentric model, of which he was an advocate. Galileo also claimed that the Solar System is made up not only of the Sun, the Moon and the planets, but also of comets. By observing points of light moving around Jupiter, Galileo initially thought that these were stars. However, after a week of observation, he noticed changes in their pattern of motion, from which he concluded that they were moons, four of them. Shortly afterwards, Kepler himself showed that Jupiter's moons move around the planet in the same way the planets orbit the Sun, thus making Kepler's laws universal. Enlightenment to Victorian Era. Newton's interpretation. After all these theories, people still did not know what made the planets orbit the Sun, nor why the Moon tracks the Earth, until, in the 17th century, Isaac Newton introduced the law of universal gravitation. He stated that between any two masses there is an attractive force proportional to the product of the masses and to the inverse of the square of the distance between them: formula_1 where m1 is the mass of the Sun, m2 is the mass of the planet, G is the gravitational constant and r is the distance between them. This theory made it possible to calculate the force exerted by the Sun on each planet, which consequently explained the planets' elliptical motion. The term "Solar System" entered the English language by 1704, when John Locke used it to refer to the Sun, planets, and comets as a whole. By then it had been established beyond doubt that planets are other worlds, and stars are other distant suns, so the whole Solar System is actually only a small part of an immensely large universe, and definitively something distinct. Derived measurements. In 1672 Jean Richer and Giovanni Domenico Cassini measured the astronomical unit (AU), the mean Earth-Sun distance, to be about 138,370,000 km (later refined by others up to the current value of 149,597,870 km). This gave, for the first time, a well-estimated size of the then-known Solar System (that is, out to Saturn), following the scale derived from Kepler's third law. In 1798 Henry Cavendish accurately measured the gravitational constant in the laboratory, which allowed the mass of the Earth, and hence the masses of all bodies in the Solar System, to be derived through Newton's law of universal gravitation. 
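The relation formula_0 above lends itself to a quick numerical check: once the astronomical unit and the length of the year are known, the mass of the Sun follows. The short Python sketch below is only an illustrative calculation using standard modern values for the gravitational constant and the astronomical unit; it is not part of the historical account, and the function name is chosen just for this example.

```python
import math

# Kepler's third law in Newtonian form: a^3 / T^2 = G*M / (4*pi^2),
# hence M = 4*pi^2 * a^3 / (G * T^2) for the central body.
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
AU = 1.495978707e11        # astronomical unit (mean Earth-Sun distance), m
YEAR = 365.25 * 24 * 3600  # Earth's orbital period, s

def central_mass(a, T):
    """Mass of the central body from a satellite's semi-major axis a and period T."""
    return 4 * math.pi**2 * a**3 / (G * T**2)

print(f"Estimated mass of the Sun: {central_mass(AU, YEAR):.2e} kg")  # about 2e30 kg
```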
New members of the Solar System. Telescopic observations found new moons around Jupiter and Saturn, as well as an impressive ring system around the latter. In 1705 Edmond Halley asserted that the comet of 1682 was periodic, with a highly elongated elliptical orbit around the Sun, and predicted its return in 1757. Johann Palitzsch observed in 1758 the return of the comet that Halley had anticipated; the perturbation by Jupiter had slowed the return by 618 days. The Parisian astronomer La Caille suggested it should be named "Halley's Comet". Comets became a popular target for astronomers, and were recognized as members of the Solar System. In 1766 Johann Titius found a numerical progression for the planetary distances, published in 1772 by Johann Bode: the so-called Titius-Bode rule. When William Herschel discovered a new planet, Uranus, in 1781, it was found to lie at a distance beyond Saturn that approximately matches the one predicted by the Titius-Bode rule. The rule also pointed to a gap between Mars and Jupiter void of any known planet. In 1801 Giuseppe Piazzi discovered Ceres, a body that filled the gap and was regarded as a new planet, and in 1802 Heinrich Wilhelm Olbers discovered Pallas, at roughly the same distance from the Sun as Ceres. Olbers proposed that the two objects were the remnants of a destroyed planet, and predicted that more of these pieces would be found. Due to their star-like appearance, William Herschel suggested that Ceres and Pallas, and similar objects if found, be placed into a separate category, named asteroids, although they were still counted among the planets for some decades. In 1804 Karl Ludwig Harding discovered the asteroid Juno, and in 1807 Olbers discovered the asteroid Vesta. In 1845 Karl Ludwig Hencke discovered a fifth body between Mars and Jupiter, Astraea, and in 1849 Annibale de Gasparis discovered the asteroid Hygiea, the fourth largest asteroid in the Solar System by both volume and mass. As new objects of that kind were found there at an accelerating rate, counting them among the planets became increasingly cumbersome. Eventually, they were dropped from the planet list (as first suggested by Alexander von Humboldt in the early 1850s) and Herschel's coinage, "asteroids", gradually came into common use. Since then, the region they occupy between Mars and Jupiter has been known as the asteroid belt. Alexis Bouvard detected irregularities in the orbit of Uranus in 1821. Later, between 1845 and 1846, John Couch Adams and Urbain Le Verrier separately predicted the existence and location of a new planet from those irregularities. The new planet was finally found by Johann Galle, following the predicted position given to him by Le Verrier, and was eventually named Neptune. This discovery marked the climax of Newtonian mechanics applied to astronomy; however, Neptune's orbit does not fit the Titius-Bode rule, which has been regarded as discredited ever since. Eventually, new moons were also discovered around Uranus (starting in 1787, by Herschel), around Neptune (starting in 1846, by William Lassell) and around Mars (in 1877, by Asaph Hall). 20th century add-ons. In 1919 Arthur Stanley Eddington used a solar eclipse to successfully test Albert Einstein's general theory of relativity, which in turn explained the observed irregularities in the orbital motion of Mercury and disproved the existence of the hypothesized inner planet Vulcan. The general theory of relativity superseded Newton's celestial mechanics: instead of forces of attraction, gravity is seen as a curvature of the space-time continuum produced by the masses of the bodies. Clyde Tombaugh discovered Pluto in 1930; it was regarded for decades as the ninth planet of the Solar System. 
In 1978 James Christy discovered Charon, the large moon of Pluto. Earlier, in 1950, Jan Oort had suggested the presence of a cometary reservoir in the outer limits of the Solar System, the Oort cloud, and in 1951 Gerard Kuiper argued for an annular reservoir of comets between 40 and 100 astronomical units from the Sun, formed early in the Solar System's evolution; however, he did not think that such a belt still existed today. Decades later, this region was named after him: the Kuiper belt. New asteroid populations were also discovered, such as the Trojans (from 1906, by Max Wolf) and the Centaurs (from 1977, by Charles Kowal), among many others. From 1957 on, technology has allowed for direct space exploration of the Solar System's bodies. To date, all of its known main bodies have been visited at least once by robotic spacecraft, providing firsthand scientific data and close-up imaging. In some instances, robotic probes and rovers have landed on satellites, planets, asteroids and comets, and some samples have even been returned. Current model. The Sun is a lone G-type main-sequence star within the Milky Way galaxy, surrounded by eight major planets that orbit it under the influence of gravity, most of them with a cohort of satellites, or moons, orbiting them in turn. The biggest planets also have rings, consisting of a multitude of tiny solid objects and dust. The planets are, in order of distance from the Sun: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. There are three main reservoirs of minor bodies: the asteroid belt between Mars and Jupiter, the Kuiper belt beyond Neptune, and the far more distant Oort cloud. The biggest of these minor bodies are regarded as dwarf planets: Ceres in the asteroid belt, and Pluto, Eris, Haumea, Makemake, Gonggong, Quaoar, Sedna, and Orcus (along with other candidates) in the Kuiper belt. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{a^3}{T^2}=\\frac{GM}{4\\pi^2}" }, { "math_id": 1, "text": "F = G \\frac{m_1 m_2}{r^2}" } ]
https://en.wikipedia.org/wiki?curid=60325936
60327286
Whitehead's algorithm
Whitehead's algorithm is a mathematical algorithm in group theory for solving the automorphic equivalence problem in the finite rank free group "Fn". The algorithm is based on a classic 1936 paper of J. H. C. Whitehead. It is still unknown (except for the case "n" = 2) whether Whitehead's algorithm has polynomial time complexity. Statement of the problem. Let formula_0 be a free group of rank formula_1 with a free basis formula_2. The automorphism problem, or the automorphic equivalence problem, for formula_3 asks, given two freely reduced words formula_4, whether there exists an automorphism formula_5 such that formula_6. Thus the automorphism problem asks, for formula_4, whether formula_7. For formula_4 one has formula_7 if and only if formula_8, where formula_9 are the conjugacy classes in formula_3 of formula_10 respectively. Therefore, the automorphism problem for formula_3 is often formulated in terms of formula_11-equivalence of conjugacy classes of elements of formula_3. For an element formula_12, formula_13 denotes the freely reduced length of formula_14 with respect to formula_15, and formula_16 denotes the cyclically reduced length of formula_14 with respect to formula_15. For the automorphism problem, the length of an input formula_14 is measured as formula_13 or as formula_16, depending on whether one views formula_14 as an element of formula_3 or as defining the corresponding conjugacy class formula_17 in formula_3. History. The automorphism problem for formula_3 was algorithmically solved by J. H. C. Whitehead in a classic 1936 paper, and his solution came to be known as Whitehead's algorithm. Whitehead used a topological approach in his paper. Namely, consider the 3-manifold formula_18, the connected sum of formula_19 copies of formula_20. Then formula_21, and, moreover, up to a quotient by a finite normal subgroup isomorphic to formula_22, the mapping class group of formula_23 is equal to formula_11. Different free bases of formula_3 can be represented by isotopy classes of "sphere systems" in formula_23, and the cyclically reduced form of an element formula_24, as well as the Whitehead graph of formula_17, can be "read off" from how a loop in general position representing formula_17 intersects the spheres in the system. Whitehead moves can be represented by certain kinds of topological "swapping" moves modifying the sphere system. Subsequently, Rapaport, and later, based on her work, Higgins and Lyndon, gave a purely combinatorial and algebraic re-interpretation of Whitehead's work and of Whitehead's algorithm. The exposition of Whitehead's algorithm in the book of Lyndon and Schupp is based on this combinatorial approach. Culler and Vogtmann, in their 1986 paper that introduced the Outer space, gave a hybrid approach to Whitehead's algorithm, presented in combinatorial terms but closely following Whitehead's original ideas. Whitehead's algorithm. Our exposition of Whitehead's algorithm mostly follows Ch. I.4 in the book of Lyndon and Schupp. Overview. The automorphism group formula_25 has a particularly useful finite generating set formula_26 of Whitehead automorphisms, or Whitehead moves. Given formula_27, the first part of Whitehead's algorithm consists of iteratively applying Whitehead moves to formula_28 to take each of them to an "automorphically minimal" form, where the cyclically reduced length strictly decreases at each step. Once we find these automorphically minimal forms formula_29 of formula_28, we check whether formula_30. 
If formula_31 then formula_28 are not automorphically equivalent in formula_3. If formula_30, we check if there exists a finite chain of Whitehead moves taking formula_32 to formula_33 so that the cyclically reduced length remains constant throughout this chain. The elements formula_28 are not automorphically equivalent in formula_3 if and only if such a chain exists. Whitehead's algorithm also solves the "search automorphism problem" for formula_3. Namely, given formula_27, if Whitehead's algorithm concludes that formula_7, the algorithm also outputs an automorphism formula_34 such that formula_35. Such an element formula_34 is produced as the composition of a chain of Whitehead moves arising from the above procedure and taking formula_14 to formula_36. Whitehead automorphisms. A Whitehead automorphism, or Whitehead move, of formula_3 is an automorphism formula_37 of formula_3 of one of the following two types: (i) There is a permutation formula_38 of formula_39 such that for formula_40 formula_41 Such formula_42 is called a Whitehead automorphism of the first kind. (ii) There is an element formula_43, called the multiplier, such that for every formula_44 formula_45 Such formula_42 is called a Whitehead automorphism of the second kind. Since formula_42 is an automorphism of formula_3, it follows that formula_46 in this case. Often, for a Whitehead automorphism formula_47, the corresponding outer automorphism in formula_11 is also called a Whitehead automorphism or a Whitehead move. Examples. Let formula_48. Let formula_49 be a homomorphism such that formula_50 Then formula_42 is actually an automorphism of formula_51, and, moreover, formula_42 is a Whitehead automorphism of the second kind, with the multiplier formula_52. Let formula_53 be a homomorphism such that formula_54 Then formula_55 is actually an inner automorphism of formula_51 given by conjugation by formula_56, and, moreover, formula_55is a Whitehead automorphism of the second kind, with the multiplier formula_57. Automorphically minimal and Whitehead minimal elements. For formula_24, the conjugacy class formula_17 is called automorphically minimal if for every formula_34 we have formula_58. Also, a conjugacy class formula_17 is called Whitehead minimal if for every Whitehead move formula_37 we have formula_59. Thus, by definition, if formula_17 is automorphically minimal then it is also Whitehead minimal. It turns out that the converse is also true. Whitehead's "Peak Reduction Lemma". The following statement is referred to as Whitehead's "Peak Reduction Lemma", see Proposition 4.20 in and Proposition 1.2 in: Let formula_24. Then the following hold: (1) If formula_17 is not automorphically minimal, then there exists a Whitehead automorphism formula_37 such that formula_60. (2) Suppose that formula_17 is automorphically minimal, and that another conjugacy class formula_61 is also automorphically minimal. Then formula_7 if and only if formula_62 and there exists a finite sequence of Whitehead moves formula_63 such that formula_64 and formula_65 Part (1) of the Peak Reduction Lemma implies that a conjugacy class formula_17 is Whitehead minimal if and only if it is automorphically minimal. The automorphism graph. The automorphism graph formula_66 of formula_3 is a graph with the vertex set being the set of conjugacy classes formula_67 of elements formula_68. Two distinct vertices formula_69 are adjacent in formula_66 if formula_70 and there exists a Whitehead automorphism formula_42 such that formula_71. 
For a vertex formula_67 of formula_66, the connected component of formula_67 in formula_66 is denoted formula_72. Whitehead graph. For formula_73 with cyclically reduced form formula_32, the Whitehead graph formula_74 is a labelled graph with the vertex set formula_75, where for formula_76 there is an edge joining formula_77 and formula_78 with the label or "weight" formula_79 which is equal to the number of distinct occurrences of subwords formula_80 read cyclically in formula_32. (In some versions of the Whitehead graph one only includes the edges with formula_81.) If formula_47 is a Whitehead automorphism, then the length change formula_82 can be expressed as a linear combination, with integer coefficients determined by formula_42, of the weights formula_79 in the Whitehead graph formula_74. See Proposition 4.16 in Ch. I of. This fact plays a key role in the proof of Whitehead's peak reduction result. Whitehead's minimization algorithm. Whitehead's minimization algorithm, given a freely reduced word formula_24, finds an automorphically minimal formula_83 such that formula_84 This algorithm proceeds as follows. Given formula_24, put formula_85. If formula_86 is already constructed, check if there exists a Whitehead automorphism formula_47 such that formula_87. (This condition can be checked since the set of Whitehead automorphisms of formula_3 is finite.) If such formula_42 exists, put formula_88 and go to the next step. If no such formula_42 exists, declare that formula_89 is automorphically minimal, with formula_90, and terminate the algorithm. Part (1) of the Peak Reduction Lemma implies that the Whitehead's minimization algorithm terminates with some formula_91, where formula_92, and that then formula_93 is indeed automorphically minimal and satisfies formula_94. Whitehead's algorithm for the automorphic equivalence problem. Whitehead's algorithm for the automorphic equivalence problem, given formula_95 decides whether or not formula_7. The algorithm proceeds as follows. Given formula_95, first apply the Whitehead minimization algorithm to each of formula_28 to find automorphically minimal formula_96 such that formula_97 and formula_98. If formula_99, declare that formula_100 and terminate the algorithm. Suppose now that formula_101. Then check if there exists a finite sequence of Whitehead moves formula_63 such that formula_102 and formula_103 This condition can be checked since the number of cyclically reduced words of length formula_104 in formula_3 is finite. More specifically, using the breadth-first approach, one constructs the connected components formula_105 of the automorphism graph and checks if formula_106. If such a sequence exists, declare that formula_7, and terminate the algorithm. If no such sequence exists, declare that formula_107 and terminate the algorithm. The Peak Reduction Lemma implies that Whitehead's algorithm correctly solves the automorphic equivalence problem in formula_108. Moreover, if formula_7, the algorithm actually produces (as a composition of Whitehead moves) an automorphism formula_34 such that formula_35. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
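To make the word-level notions used above concrete, the following Python sketch implements free reduction and cyclic reduction, which give the lengths formula_13 and formula_16 on which the algorithm's length-decrease arguments rely. The string-based representation (lowercase letters for basis elements, uppercase for their inverses) and the helper names are choices made only for this illustration; the sketch does not implement the Whitehead moves themselves.

```python
def inv(ch):
    """Inverse of a single generator: 'a' <-> 'A', 'b' <-> 'B', ..."""
    return ch.lower() if ch.isupper() else ch.upper()

def free_reduce(word):
    """Cancel adjacent pairs x x^{-1} until the word is freely reduced."""
    out = []
    for ch in word:
        if out and out[-1] == inv(ch):
            out.pop()
        else:
            out.append(ch)
    return "".join(out)

def cyclic_reduce(word):
    """Freely reduce, then strip matching first/last letters (conjugation)."""
    w = free_reduce(word)
    while len(w) >= 2 and w[0] == inv(w[-1]):
        w = w[1:-1]
    return w

w = "Baab"  # b^{-1} a a b, a conjugate of a^2
print(free_reduce(w), len(free_reduce(w)))      # 'Baab' 4  (freely reduced length)
print(cyclic_reduce(w), len(cyclic_reduce(w)))  # 'aa'   2  (cyclically reduced length)
```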
[ { "math_id": 0, "text": " F_n=F(x_1,\\dots, x_n)" }, { "math_id": 1, "text": "n\\ge 2" }, { "math_id": 2, "text": " X=\\{x_1,\\dots, x_n\\}" }, { "math_id": 3, "text": "F_n" }, { "math_id": 4, "text": " w, w'\\in F_n" }, { "math_id": 5, "text": "\\varphi\\in \\operatorname{Aut}(F_n)" }, { "math_id": 6, "text": " \\varphi(w)=w'" }, { "math_id": 7, "text": "\\operatorname{Aut}(F_n)w=\\operatorname{Aut}(F_n)w'" }, { "math_id": 8, "text": "\\operatorname{Out}(F_n)[w]=\\operatorname{Out}(F_n)[w']" }, { "math_id": 9, "text": "[w], [w']" }, { "math_id": 10, "text": " w, w'" }, { "math_id": 11, "text": "\\operatorname{Out}(F_n)" }, { "math_id": 12, "text": " w\\in F_n" }, { "math_id": 13, "text": "|w|_X" }, { "math_id": 14, "text": "w" }, { "math_id": 15, "text": "X" }, { "math_id": 16, "text": "\\|w\\|_X" }, { "math_id": 17, "text": "[w]" }, { "math_id": 18, "text": "M_n=\\#_{i=1}^n \\mathbb S^2\\times \\mathbb S^1" }, { "math_id": 19, "text": "n" }, { "math_id": 20, "text": "\\mathbb S^2\\times \\mathbb S^1" }, { "math_id": 21, "text": "\\pi_1(M_n)\\cong F_n" }, { "math_id": 22, "text": "\\mathbb Z_2^n" }, { "math_id": 23, "text": "M_n" }, { "math_id": 24, "text": "w\\in F_n" }, { "math_id": 25, "text": "\\operatorname{Aut}(F_n)" }, { "math_id": 26, "text": " \\mathcal W" }, { "math_id": 27, "text": "w,w'\\in F_n " }, { "math_id": 28, "text": "w,w'" }, { "math_id": 29, "text": "u,u'" }, { "math_id": 30, "text": "\\|u\\|_X=\\|u'\\|_X" }, { "math_id": 31, "text": "\\|u\\|_X\\ne \\|u'\\|_X" }, { "math_id": 32, "text": "u" }, { "math_id": 33, "text": "u'" }, { "math_id": 34, "text": "\\varphi\\in\\operatorname{Aut}(F_n)" }, { "math_id": 35, "text": "\\varphi(w)=w'" }, { "math_id": 36, "text": "w'" }, { "math_id": 37, "text": "\\tau\\in \\operatorname{Aut}(F_n)" }, { "math_id": 38, "text": "\\sigma\\in S_n" }, { "math_id": 39, "text": "\\{1,2,\\dots, n\\}" }, { "math_id": 40, "text": "i=1,\\dots, n" }, { "math_id": 41, "text": " \\tau(x_i)=x_{\\sigma(i)}^{\\pm 1}" }, { "math_id": 42, "text": "\\tau" }, { "math_id": 43, "text": "a\\in X^{\\pm 1}" }, { "math_id": 44, "text": "x\\in X^{\\pm 1}" }, { "math_id": 45, "text": "\\tau(x)\\in \\{x, xa, a^{-1}x, a^{-1}xa\\}." }, { "math_id": 46, "text": "\\tau(a)=a" }, { "math_id": 47, "text": "\\tau\\in\\operatorname{Aut}(F_n)" }, { "math_id": 48, "text": "F_4=F(x_1,x_2,x_3,x_4)" }, { "math_id": 49, "text": "\\tau: F_4\\to F_4" }, { "math_id": 50, "text": " \\tau(x_1)=x_2x_1, \\quad \\tau(x_2)=x_2, \\quad\\tau(x_3)=x_2x_3x_2^{-1},\\quad \\tau(x_4)=x_4" }, { "math_id": 51, "text": "F_4" }, { "math_id": 52, "text": "a=x_2^{-1}" }, { "math_id": 53, "text": "\\tau': F_4\\to F_4" }, { "math_id": 54, "text": " \\tau'(x_1)=x_1, \\quad \\tau'(x_2)=x_1^{-1}x_2x_1, \\quad\\tau'(x_3)=x_1^{-1}x_3x_1,\\quad \\tau'(x_4)=x_1^{-1}x_4x_1" }, { "math_id": 55, "text": "\\tau'" }, { "math_id": 56, "text": "x_1" }, { "math_id": 57, "text": "a=x_1" }, { "math_id": 58, "text": "\\|w\\|_X\\le \\|\\varphi(w)\\|_X" }, { "math_id": 59, "text": "\\|w\\|_X\\le \\|\\tau(w)\\|_X" }, { "math_id": 60, "text": "\\|\\tau(w)\\|_X< \\|w\\|_X" }, { "math_id": 61, "text": "[w']" }, { "math_id": 62, "text": "\\|w\\|_X=\\|w'\\|_X" }, { "math_id": 63, "text": "\\tau_1,\\dots, \\tau_k\\in\\operatorname{Aut}(F_n)" }, { "math_id": 64, "text": "\\tau_k\\cdots \\tau_1(w)=w'" }, { "math_id": 65, "text": " \\|\\tau_i\\cdots\\tau_1(w)\\|_X=\\|w\\|_X \\text{ for } i=1,\\dots, k." 
}, { "math_id": 66, "text": "\\mathcal A" }, { "math_id": 67, "text": "[u]" }, { "math_id": 68, "text": "u\\in F_n" }, { "math_id": 69, "text": "[u], [v]" }, { "math_id": 70, "text": "\\|u\\|_X=\\|v\\|_X" }, { "math_id": 71, "text": "[\\tau(u)]=[v]" }, { "math_id": 72, "text": "\\mathcal A[u]" }, { "math_id": 73, "text": "1\\ne w\\in F_n" }, { "math_id": 74, "text": "\\Gamma_{[w]}" }, { "math_id": 75, "text": "X^{\\pm 1}" }, { "math_id": 76, "text": "x,y\\in X^{\\pm 1}, x\\ne y" }, { "math_id": 77, "text": "x" }, { "math_id": 78, "text": "y" }, { "math_id": 79, "text": "n(\\{x,y\\};[w])" }, { "math_id": 80, "text": "x^{-1}y, y^{-1}x" }, { "math_id": 81, "text": "n(\\{x,y\\};[w])>0" }, { "math_id": 82, "text": "\\|\\tau(w)\\|_X-\\|w\\|_X" }, { "math_id": 83, "text": "[v]" }, { "math_id": 84, "text": "\\operatorname{Aut}(F_n)w=\\operatorname{Aut}(F_n)v." }, { "math_id": 85, "text": "w_1=w" }, { "math_id": 86, "text": "w_i" }, { "math_id": 87, "text": "\\|\\tau(w_i)\\|_X<\\|w_i\\|_X" }, { "math_id": 88, "text": "w_{i+1}=\\tau(w_i)" }, { "math_id": 89, "text": "[w_i]" }, { "math_id": 90, "text": "\\operatorname{Aut}(F_n)w=\\operatorname{Aut}(F_n)w_i" }, { "math_id": 91, "text": "w_m" }, { "math_id": 92, "text": "m\\le\\|w\\|_X" }, { "math_id": 93, "text": "[w_m]" }, { "math_id": 94, "text": "\\operatorname{Aut}(F_n)w=\\operatorname{Aut}(F_n)w_m" }, { "math_id": 95, "text": "w,w'\\in F_n" }, { "math_id": 96, "text": "[v],[v']" }, { "math_id": 97, "text": "\\operatorname{Aut}(F_n)w=\\operatorname{Aut}(F_n)v" }, { "math_id": 98, "text": "\\operatorname{Aut}(F_n)w'=\\operatorname{Aut}(F_n)v'" }, { "math_id": 99, "text": "\\|v\\|_X\\ne \\|v'\\|_X" }, { "math_id": 100, "text": "\\operatorname{Aut}(F_n)w\\ne\\operatorname{Aut}(F_n)w'" }, { "math_id": 101, "text": "\\|v\\|_X=\\|v'\\|_X=t\\ge 0" }, { "math_id": 102, "text": "\\tau_k\\dots \\tau_1(v)=v'" }, { "math_id": 103, "text": " \\|\\tau_i\\dots\\tau_1(v)\\|_X=\\|v\\|_X=t \\text{ for } i=1,\\dots, k." 
}, { "math_id": 104, "text": "t" }, { "math_id": 105, "text": "\\mathcal A[v], \\mathcal A[v']" }, { "math_id": 106, "text": "\\mathcal A[v]\\cap \\mathcal A[v']=\\varnothing" }, { "math_id": 107, "text": "\\operatorname{Aut}(F_n)w\\neq\\operatorname{Aut}(F_n)w'" }, { "math_id": 108, "text": " F_n" }, { "math_id": 109, "text": "O(|w|_X^2)" }, { "math_id": 110, "text": "\\operatorname{Aut}(F_n)w=\\operatorname{Aut}(F_n)u" }, { "math_id": 111, "text": "O(|w|_X^2 n^3)" }, { "math_id": 112, "text": "n\\ge 3" }, { "math_id": 113, "text": "O\\left(\\|u\\|_X\\cdot \\#V\\mathcal A[u]\\right)" }, { "math_id": 114, "text": "|u|_X)" }, { "math_id": 115, "text": "\\max\\{|w|_X,|w'|_X\\}" }, { "math_id": 116, "text": "n=2" }, { "math_id": 117, "text": "u\\in F_2" }, { "math_id": 118, "text": "O\\left(\\|u\\|_X\\right)" }, { "math_id": 119, "text": "|u|_X" }, { "math_id": 120, "text": "F_2" }, { "math_id": 121, "text": "w,w'\\in F_2" }, { "math_id": 122, "text": "m\\ge 1" }, { "math_id": 123, "text": " \\operatorname{Stab}_{\\operatorname{Out}(F_n)}([w])" }, { "math_id": 124, "text": " S,S'\\subseteq F_n" }, { "math_id": 125, "text": "H=\\langle S\\rangle, H'=\\langle S'\\rangle\\le F_n" }, { "math_id": 126, "text": "\\varphi(H)=H'" }, { "math_id": 127, "text": "\\operatorname{Aut}(G)" }, { "math_id": 128, "text": "G=\\ast_{i=1}^m G_i" }, { "math_id": 129, "text": "G=\\pi_1(S)" }, { "math_id": 130, "text": "S" }, { "math_id": 131, "text": "w\\in F_n=F(X)" }, { "math_id": 132, "text": "F(X)" }, { "math_id": 133, "text": "m\\to\\infty" }, { "math_id": 134, "text": "\\#V\\mathcal A[w]=O\\left(\\|w\\|_X\\right)=O(m)" }, { "math_id": 135, "text": "u\\in F_n=F(X)" }, { "math_id": 136, "text": "x_i,x_j, i<j" }, { "math_id": 137, "text": "u^{-1}" }, { "math_id": 138, "text": "x_i^{\\pm 1}" }, { "math_id": 139, "text": "\\#V\\mathcal A[u]" }, { "math_id": 140, "text": "2n-3" }, { "math_id": 141, "text": "w,w'\\in F_n, n\\ge 3" }, { "math_id": 142, "text": "O\\left(\\max\\{|w|_X^{2n-3},|w'|_X^2\\}\\right)" }, { "math_id": 143, "text": "Z\\subseteq F_n" }, { "math_id": 144, "text": "H=\\langle Z\\rangle\\le F_n" }, { "math_id": 145, "text": "F_n," }, { "math_id": 146, "text": "F_n." } ]
https://en.wikipedia.org/wiki?curid=60327286
603273
Magnification
Process of enlarging the apparent size of something Magnification is the process of enlarging the apparent size, not the physical size, of something. This enlargement is quantified by a size ratio called optical magnification. When this number is less than one, it refers to a reduction in size, sometimes called de-magnification. Typically, magnification is related to scaling up visuals or images to be able to see more detail or increase resolution, using a microscope, printing techniques, or digital processing. In all cases, the magnification of the image does not change the perspective of the image. Examples of magnification. Some optical instruments provide visual aid by magnifying small or distant subjects. Size ratio (optical magnification). Optical magnification is the ratio between the apparent size of an object (or its size in an image) and its true size, and thus it is a dimensionless number. Optical magnification is sometimes referred to as "power" (for example "10× power"), although this can lead to confusion with optical power. Linear or transverse magnification. For real images, such as images projected on a screen, "size" means a linear dimension (measured, for example, in millimeters or inches). Angular magnification. For optical instruments with an eyepiece, the linear dimension of the image seen in the eyepiece (a virtual image at infinite distance) cannot be given, so "size" means the angle subtended by the object at the focal point (angular size). Strictly speaking, one should take the tangent of that angle (in practice, this makes a difference only if the angle is larger than a few degrees). Thus, angular magnification is given by: formula_0 where formula_1 is the angle subtended by the object at the front focal point of the objective and formula_2 is the angle subtended by the image at the rear focal point of the eyepiece. For example, the mean angular size of the Moon's disk as viewed from Earth's surface is about 0.52°. Thus, through binoculars with 10× magnification, the Moon appears to subtend an angle of about 5.2°. By convention, for magnifying glasses and optical microscopes, where the size of the object is a linear dimension and the apparent size is an angle, the magnification is the ratio between the apparent (angular) size as seen in the eyepiece and the angular size of the object when placed at the conventional closest distance of distinct vision, about 25 cm from the eye. By instrument. Single lens. The linear magnification of a thin lens is formula_3 where formula_4 is the focal length and formula_5 is the distance from the lens to the object. For real images, formula_6 is negative and the image is inverted. For virtual images, formula_6 is positive and the image is upright. With formula_7 being the distance from the lens to the image, formula_8 the height of the image and formula_9 the height of the object, the magnification can also be written as: formula_10 Note again that a negative magnification implies an inverted image. Photography. The image recorded by a photographic film or image sensor is always a real image and is usually inverted. When measuring the height of an inverted image using the Cartesian sign convention (where the x-axis is the optical axis) the value for "h"i will be negative, and as a result M will also be negative. However, the traditional sign convention used in photography is "real is positive, virtual is negative". Therefore, in photography: Object height and distance are always real and positive. 
When the focal length is positive the image's height, distance and magnification are real and positive. Only if the focal length is negative, the image's height, distance and magnification are virtual and negative. Therefore, the "&lt;dfn &gt;photographic magnification&lt;/dfn&gt;" formulae are traditionally presented as formula_11 Magnifying glass. The maximum angular magnification (compared to the naked eye) of a magnifying glass depends on how the glass and the object are held, relative to the eye. If the lens is held at a distance from the object such that its front focal point is on the object being viewed, the relaxed eye (focused to infinity) can view the image with angular magnification formula_12 Here, formula_4 is the focal length of the lens in centimeters. The constant 25 cm is an estimate of the "near point" distance of the eye—the closest distance at which the healthy naked eye can focus. In this case the angular magnification is independent from the distance kept between the eye and the magnifying glass. If instead the lens is held very close to the eye and the object is placed closer to the lens than its focal point so that the observer focuses on the near point, a larger angular magnification can be obtained, approaching formula_13 A different interpretation of the working of the latter case is that the magnifying glass changes the diopter of the eye (making it myopic) so that the object can be placed closer to the eye resulting in a larger angular magnification. Microscope. The angular magnification of a microscope is given by formula_14 where formula_15 is the magnification of the objective and formula_16 the magnification of the eyepiece. The magnification of the objective depends on its focal length formula_17 and on the distance formula_18 between objective back focal plane and the focal plane of the eyepiece (called the tube length): formula_19 The magnification of the eyepiece depends upon its focal length formula_20 and is calculated by the same equation as that of a magnifying glass (above). Note that both astronomical telescopes as well as simple microscopes produce an inverted image, thus the equation for the magnification of a telescope or microscope is often given with a minus sign. Telescope. The angular magnification of an optical telescope is given by formula_21 in which formula_17 is the focal length of the objective lens in a refractor or of the primary mirror in a reflector, and formula_20 is the focal length of the eyepiece. Measurement of telescope magnification. Measuring the actual angular magnification of a telescope is difficult, but it is possible to use the reciprocal relationship between the linear magnification and the angular magnification, since the linear magnification is constant for all objects. The telescope is focused correctly for viewing objects at the distance for which the angular magnification is to be determined and then the object glass is used as an object the image of which is known as the exit pupil. The diameter of this may be measured using an instrument known as a Ramsden dynameter which consists of a Ramsden eyepiece with micrometer hairs in the back focal plane. This is mounted in front of the telescope eyepiece and used to evaluate the diameter of the exit pupil. This will be much smaller than the object glass diameter, which gives the linear magnification (actually a reduction), the angular magnification can be determined from formula_22 Maximum usable magnification. 
With any telescope, microscope, or lens, a maximum magnification exists beyond which the image looks bigger but shows no more detail. It occurs when the finest detail the instrument can resolve is magnified to match the finest detail the eye can see. Magnification beyond this maximum is sometimes called "empty magnification". For a good quality telescope operating in good atmospheric conditions, the maximum usable magnification is limited by diffraction. In practice it is considered to be 2× the aperture in millimetres or 50× the aperture in inches; so, for example, a telescope with a 60 mm aperture has a maximum usable magnification of 120×. With an optical microscope having a high numerical aperture and using oil immersion, the best possible resolution corresponds to a magnification of around 1200×. Without oil immersion, the maximum usable magnification is around 800×. For details, see limitations of optical microscopes. Small, cheap telescopes and microscopes are sometimes supplied with eyepieces that give magnification far higher than is usable. The maximum relative to the minimum magnification of an optical system is known as the zoom ratio. "Magnification" of displayed images. Magnification figures on pictures displayed in print or online can be misleading. Editors of journals and magazines routinely resize images to fit the page, making any magnification number provided in the figure legend incorrect. Images displayed on a computer screen change size based on the size of the screen. A scale bar (or micron bar) is a bar of stated length superimposed on a picture. When the picture is resized the bar will be resized in proportion. If a picture has a scale bar, the actual magnification can easily be calculated. Where the scale (magnification) of an image is important or relevant, including a scale bar is preferable to stating magnification. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
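As a small numerical illustration of the single-lens and telescope formulas above, the sketch below evaluates formula_3 and formula_21 for example values of the focal lengths and object distance; the numbers are assumptions chosen only for the example.

```python
def thin_lens_magnification(f, d_o):
    """Linear magnification of a thin lens: M = f / (f - d_o).
    Negative M means a real, inverted image; positive M a virtual, upright one."""
    return f / (f - d_o)

def telescope_magnification(f_objective, f_eyepiece):
    """Angular magnification of a telescope: M_A = f_o / f_e."""
    return f_objective / f_eyepiece

# A 50 mm lens with the object 200 mm away: real, inverted image, |M| = 1/3.
print(thin_lens_magnification(50, 200))   # -0.333...

# A 1200 mm objective with a 10 mm eyepiece: 120x angular magnification.
print(telescope_magnification(1200, 10))  # 120.0
```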
[ { "math_id": 0, "text": "M_A=\\frac{\\tan \\varepsilon}{\\tan \\varepsilon_0}\\approx \\frac{\\varepsilon}{ \\varepsilon_0}" }, { "math_id": 1, "text": "\\varepsilon_0" }, { "math_id": 2, "text": "\\varepsilon" }, { "math_id": 3, "text": "M = {f \\over f-d_\\mathrm{o}}" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "d_\\mathrm{o}" }, { "math_id": 6, "text": "M" }, { "math_id": 7, "text": "d_\\mathrm{i}" }, { "math_id": 8, "text": "h_\\mathrm{i}" }, { "math_id": 9, "text": "h_\\mathrm{o}" }, { "math_id": 10, "text": "M = -{d_\\mathrm{i} \\over d_\\mathrm{o}} = {h_\\mathrm{i} \\over h_\\mathrm{o}}" }, { "math_id": 11, "text": "\\begin{align}\nM &= {d_\\mathrm{i} \\over d_\\mathrm{o}} = {h_\\mathrm{i} \\over h_\\mathrm{o}} \\\\\n &= {f \\over d_\\mathrm{o}-f} = {d_\\mathrm{i}-f \\over f}\n\\end{align}" }, { "math_id": 12, "text": "M_\\mathrm{A}={25\\ \\mathrm{cm}\\over f}" }, { "math_id": 13, "text": "M_\\mathrm{A}={25\\ \\mathrm{cm}\\over f}+1" }, { "math_id": 14, "text": "M_\\mathrm{A} = M_\\mathrm{o} \\times M_\\mathrm{e}" }, { "math_id": 15, "text": "M_\\mathrm{o}" }, { "math_id": 16, "text": "M_\\mathrm{e}" }, { "math_id": 17, "text": "f_\\mathrm{o}" }, { "math_id": 18, "text": "d" }, { "math_id": 19, "text": "M_\\mathrm{o}={d \\over f_\\mathrm{o}}" }, { "math_id": 20, "text": "f_\\mathrm{e}" }, { "math_id": 21, "text": "M_\\mathrm{A}= {f_\\mathrm{o} \\over f_\\mathrm{e}}" }, { "math_id": 22, "text": "M_\\mathrm{A} = {1 \\over M} = {D_{\\mathrm{Objective}} \\over {D_\\mathrm{Ramsden}}}\\,." } ]
https://en.wikipedia.org/wiki?curid=603273
60330183
Structural reliability
Structural reliability is about applying reliability engineering theories to buildings and, more generally, to structural analysis. Reliability is also used as a probabilistic measure of structural safety. The reliability of a structure is defined as the complement of its probability of failure, formula_0. Failure occurs when the total applied load is larger than the total resistance of the structure. Structural reliability has become known as a design philosophy in the twenty-first century, and it might replace traditional deterministic ways of design and maintenance. Theory. In structural reliability studies, both loads and resistances are modeled as probabilistic variables. Using this approach the probability of failure of a structure is calculated. When load and resistance are each described by an explicit, independent distribution, the probability of failure can be formulated as formula_4 where formula_1 is the probability of failure, formula_2 is the cumulative distribution function of the resistance (R), and formula_3 is the probability density of the load (S). However, in most cases, the distributions of loads and resistances are not independent and the probability of failure is defined via the following more general formula: formula_5 where X is the vector of the basic variables, the integrand is their joint probability density, and G(X), called the limit state function, is negative or zero exactly when failure occurs; the integral is taken over the failure domain, which is bounded by the limit state surface G(X) = 0 (a line, surface or hypersurface, depending on the number of variables). Solution approaches. Analytical solutions. In some cases, when load and resistance are explicitly expressed (as in equation (1) above) and both are normally distributed, the integral of equation (1) has the closed-form solution formula_6 where Φ is the standard normal cumulative distribution function and β, known as the reliability index, depends only on the means and standard deviations of the resistance and the load. Simulation. In most cases load and resistance are not normally distributed. Therefore, solving the integrals of equations (1) and (2) analytically is impossible. Using Monte Carlo simulation is an approach that could be used in such cases.
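As an illustration of the simulation approach mentioned above, the following Python sketch estimates the probability of failure for the simple case of independent, normally distributed load and resistance, and compares the Monte Carlo estimate with the closed-form result for that case. The means and standard deviations are arbitrary values assumed only for the example.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed (illustrative) statistics for resistance R and load S, both normal.
mu_R, sigma_R = 20.0, 2.0
mu_S, sigma_S = 14.0, 3.0

# Monte Carlo estimate: failure occurs when the load exceeds the resistance.
n = 1_000_000
R = rng.normal(mu_R, sigma_R, n)
S = rng.normal(mu_S, sigma_S, n)
pf_mc = np.mean(R < S)

# Closed-form solution for independent normal variables: P_f = Phi(-beta).
beta = (mu_R - mu_S) / np.sqrt(sigma_R**2 + sigma_S**2)
pf_exact = norm.cdf(-beta)

print(f"Monte Carlo estimate : {pf_mc:.5f}")
print(f"Closed-form solution : {pf_exact:.5f}")
```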
[ { "math_id": 0, "text": "(\text{Reliability} = 1 - \text{Probability of Failure})" }, { "math_id": 1, "text": "P_f" }, { "math_id": 2, "text": "F_R(s)" }, { "math_id": 3, "text": "f_s(s)" }, { "math_id": 4, "text": "P_f = P(R<S) = \int_{-\infty}^{\infty} F_R(s)\, f_s(s)\, ds \qquad (1)" }, { "math_id": 5, "text": "P_f = \int_{G(X) \le 0} f_X(x)\, dx \qquad (2)" }, { "math_id": 6, "text": "P_f = \Phi(-\beta), \qquad \beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}" } ]
https://en.wikipedia.org/wiki?curid=60330183
60330511
Quasi-unmixed ring
Noetherian ring in algebra In algebra, specifically in the theory of commutative rings, a quasi-unmixed ring (also called a formally equidimensional ring in EGA) is a Noetherian ring formula_0 such that for each prime ideal "p", the completion of the localization "Ap" is equidimensional, i.e. for each minimal prime ideal "q" in the completion formula_1, formula_2 = the Krull dimension of "Ap". Equivalent conditions. A Noetherian integral domain is quasi-unmixed if and only if it satisfies Nagata's altitude formula. (See also the section on formally catenary rings below.) Precisely, a quasi-unmixed ring is a ring in which the unmixed theorem, which characterizes a Cohen–Macaulay ring, holds for the integral closure of an ideal; specifically, for a Noetherian ring formula_0, quasi-unmixedness is equivalent to several conditions phrased in terms of the integral closure formula_3 of an ideal and the integral closures formula_4 of its powers. Formally catenary ring. A Noetherian local ring formula_0 is said to be formally catenary if for every prime ideal formula_5, formula_6 is quasi-unmixed. As it turns out, this notion is redundant: Ratliff has shown that a Noetherian local ring is formally catenary if and only if it is universally catenary. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\widehat{A_p}" }, { "math_id": 2, "text": "\\dim \\widehat{A_p}/q = \\dim A_p" }, { "math_id": 3, "text": "\\overline{I}" }, { "math_id": 4, "text": "\\overline{I^n}" }, { "math_id": 5, "text": "\\mathfrak{p}" }, { "math_id": 6, "text": "A/\\mathfrak{p}" } ]
https://en.wikipedia.org/wiki?curid=60330511
60336113
Kerr–Schild perturbations
Concept in general relativity Kerr–Schild perturbations are a special type of perturbation to a spacetime metric which only appear linearly in the Einstein field equations which describe general relativity. They were found by Roy Kerr and Alfred Schild in 1965. Form. A generalised Kerr–Schild perturbation has the form formula_0, where formula_1 is a scalar and formula_2 is a null vector with respect to the background spacetime. It can be shown that any perturbation of this form will only appear quadratically in the Einstein equations, and only linearly if the condition formula_3, where formula_4 is a scalar, is imposed. This condition is equivalent to requiring that the orbits of formula_5 are geodesics. Applications. While the form of the perturbation may appear very restrictive, there are several black hole metrics which can be written in Kerr–Schild form, such as Schwarzschild (stationary black hole), Kerr (rotating), Reissner–Nordström (charged) and Kerr–Newman (both charged and rotating). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
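As a concrete example of the form described above, the Schwarzschild metric can be written in Kerr–Schild form over a flat background. The expression below is the standard textbook version in Cartesian-type coordinates with G = c = 1, reproduced here only as an illustration of the structure rather than taken from this article's references.

```latex
% Schwarzschild solution in Kerr--Schild form: flat metric plus V l_a l_b
% with V = 2M/r and a null, geodesic vector l_a.
g_{ab} = \eta_{ab} + \frac{2M}{r}\, l_a l_b,
\qquad
l_a = \left(1,\ \frac{x}{r},\ \frac{y}{r},\ \frac{z}{r}\right),
\qquad
r^2 = x^2 + y^2 + z^2.
```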
[ { "math_id": 0, "text": "h_{ab}=V l_a l_b" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "l_a" }, { "math_id": 3, "text": "l^a \\nabla_a l_b =\\phi l_b" }, { "math_id": 4, "text": "\\phi" }, { "math_id": 5, "text": "l^a" } ]
https://en.wikipedia.org/wiki?curid=60336113
60342612
Noise-induced order
Mathematical phenomenon Noise-induced order is a mathematical phenomenon appearing in the Matsumoto-Tsuda model of the Belousov-Zhabotinsky reaction. In this model, adding noise to the system causes a transition from "chaotic" behaviour to more "ordered" behaviour; the original paper by Matsumoto and Tsuda was seminal in the area, attracted a large number of citations, and gave birth to a line of research in applied mathematics and physics. The phenomenon was later observed experimentally in the Belousov-Zhabotinsky reaction. Mathematical background. Interpolating experimental data from the Belousov-Zhabotinsky reaction, Matsumoto and Tsuda introduced a one-dimensional model, a random dynamical system with uniform additive noise, driven by the map: formula_0 where the constant formula_1 guarantees that formula_2, the constant formula_3 is determined by a condition involving formula_4, and the constant formula_5 guarantees that formula_6, i.e. that the map is continuous at 0.3. This random dynamical system is simulated with different noise amplitudes using floating-point arithmetic and the Lyapunov exponent along the simulated orbits is computed; the Lyapunov exponent of this simulated system was found to transition from positive to negative as the noise amplitude grows. The behavior of the floating-point system and of the original system may differ; therefore, this is not a rigorous mathematical proof of the phenomenon. A computer-assisted proof of noise-induced order for the Matsumoto-Tsuda map with the parameters above was given in 2017. In 2020 a sufficient condition for noise-induced order was given for one-dimensional maps: the Lyapunov exponent for small noise sizes is positive, while the average of the logarithm of the derivative with respect to the Lebesgue measure is negative. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
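The transition described above can be explored numerically. The following Python sketch iterates the map formula_0 with uniform additive noise and estimates the Lyapunov exponent as the average of the logarithm of the derivative along the orbit. The noise convention (uniform in [-ε, ε]), the clipping of the orbit to [0, 1], the finite-difference derivative and the truncated value of the constant b are all simplifying assumptions made for this illustration; the sketch shows the procedure only and is not the computer-assisted proof mentioned above.

```python
import numpy as np

# Matsumoto-Tsuda map constants (b is truncated here for simplicity).
a = (19 / 42) * (7 / 5) ** (1 / 3)
b = 0.02328852830307032
c = 20 / (3**20 * 7) * (7 / 5) ** (1 / 3) * np.exp(18.7)

def T(x):
    if x <= 0.3:
        return (a + np.cbrt(x - 0.125)) * np.exp(-x) + b
    return c * (10 * x * np.exp(-10 * x / 3)) ** 19 + b

def lyapunov(noise, n=200_000, x0=0.2, h=1e-7, seed=0):
    """Average of log|T'(x)| along a noisy orbit, with T' by finite differences."""
    rng = np.random.default_rng(seed)
    x, total = x0, 0.0
    for _ in range(n):
        total += np.log(abs((T(x + h) - T(x - h)) / (2 * h)))
        x = T(x) + rng.uniform(-noise, noise)
        x = min(max(x, 0.0), 1.0)  # keep the orbit inside [0, 1] (an assumption)
    return total / n

for eps in (0.0, 1e-3, 1e-2):
    print(f"noise amplitude {eps:g}: estimated Lyapunov exponent {lyapunov(eps):+.3f}")
```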
[ { "math_id": 0, "text": "\nT(x)=\\begin{cases} \n(a+(x-\\frac{1}{8})^{\\frac{1}{3}})e^{-x}+b, & 0\\leq x\\leq 0.3 \\\\ \nc(10xe^{\\frac{-10x}{3}})^{19}+b & 0.3\\leq x\\leq 1\n\\end{cases}\n" }, { "math_id": 1, "text": "a=\\frac{19}{42}\\cdot\\bigg(\\frac{7}{5}\\bigg)^{1/3}" }, { "math_id": 2, "text": "T'(0.3^-)=0" }, { "math_id": 3, "text": "b=0.02328852830307032054478158044023918735669943648088852646123182739831022528_{158}^{213}" }, { "math_id": 4, "text": "T^5 (0.3)" }, { "math_id": 5, "text": "c=\\frac{20}{3^{20}\\cdot 7}\\cdot\\bigg(\\frac{7}{5}\\bigg)^{1/3}\\cdot e^{187/10}" }, { "math_id": 6, "text": "T(0.3^-)=T(0.3^+)" } ]
https://en.wikipedia.org/wiki?curid=60342612
60343
Sediment
Particulate solid matter that is deposited on the surface of land Sediment is a naturally occurring material that is broken down by processes of weathering and erosion, and is subsequently transported by the action of wind, water, or ice or by the force of gravity acting on the particles. For example, sand and silt can be carried in suspension in river water and on reaching the sea bed deposited by sedimentation; if buried, they may eventually become sandstone and siltstone (sedimentary rocks) through lithification. Sediments are most often transported by water (fluvial processes), but also wind (aeolian processes) and glaciers. Beach sands and river channel deposits are examples of fluvial transport and deposition, though sediment also often settles out of slow-moving or standing water in lakes and oceans. Desert sand dunes and loess are examples of aeolian transport and deposition. Glacial moraine deposits and till are ice-transported sediments. Classification. Sediment can be classified based on its grain size, grain shape, and composition. Grain size. Sediment size is measured on a log base 2 scale, called the "Phi" scale, which classifies particles by size from "colloid" to "boulder". Shape. The shape of particles can be defined in terms of three parameters. The "form" is the overall shape of the particle, with common descriptions being spherical, platy, or rodlike. The "roundness" is a measure of how sharp grain corners are. This varies from well-rounded grains with smooth corners and edges to poorly rounded grains with sharp corners and edges. Finally, "surface texture" describes small-scale features such as scratches, pits, or ridges on the surface of the grain. Form. Form (also called "sphericity") is determined by measuring the size of the particle on its major axes. William C. Krumbein proposed formulas for converting these numbers to a single measure of form, such as formula_0 where formula_1, formula_2, and formula_3 are the long, intermediate, and short axis lengths of the particle. The form formula_4 varies from 1 for a perfectly spherical particle to very small values for a platelike or rodlike particle. An alternate measure was proposed by Sneed and Folk: formula_5 which, again, varies from 0 to 1 with increasing sphericity. Roundness. Roundness describes how sharp the edges and corners of particle are. Complex mathematical formulas have been devised for its precise measurement, but these are difficult to apply, and most geologists estimate roundness from comparison charts. Common descriptive terms range from very angular to angular to subangular to subrounded to rounded to very rounded, with increasing degree of roundness. Surface texture. Surface texture describes the small-scale features of a grain, such as pits, fractures, ridges, and scratches. These are most commonly evaluated on quartz grains, because these retain their surface markings for long periods of time. Surface texture varies from polished to frosted, and can reveal the history of transport of the grain; for example, frosted grains are particularly characteristic of aeolian sediments, transported by wind. Evaluation of these features often requires the use of a scanning electron microscope. Composition. Composition of sediment can be measured in terms of: This leads to an ambiguity in which clay can be used as both a size-range and a composition (see clay minerals). Sediment transport. Sediment is transported based on the strength of the flow that carries it and its own size, volume, density, and shape. 
Stronger flows will increase the lift and drag on the particle, causing it to rise, while larger or denser particles will be more likely to fall through the flow. Aeolian processes: wind. Wind results in the transportation of fine sediment and the formation of sand dune fields and soils from airborne dust. Glacial processes. Glaciers carry a wide range of sediment sizes, and deposit it in moraines. Mass balance. The overall balance between sediment in transport and sediment being deposited on the bed is given by the Exner equation. This expression states that the rate of increase in bed elevation due to deposition is proportional to the amount of sediment that falls out of the flow. This equation is important in that changes in the power of the flow change the ability of the flow to carry sediment, and this is reflected in the patterns of erosion and deposition observed throughout a stream. This can be localized, and simply due to small obstacles; examples are scour holes behind boulders, where flow accelerates, and deposition on the inside of meander bends. Erosion and deposition can also be regional; erosion can occur due to dam removal and base level fall. Deposition can occur due to dam emplacement that causes the river to pool and deposit its entire load, or due to base level rise. Shores and shallow seas. Seas, oceans, and lakes accumulate sediment over time. The sediment can consist of "terrigenous" material, which originates on land, but may be deposited in either terrestrial, marine, or lacustrine (lake) environments, or of sediments (often biological) originating in the body of water. Terrigenous material is often supplied by nearby rivers and streams or reworked marine sediment (e.g. sand). In the mid-ocean, the exoskeletons of dead organisms are primarily responsible for sediment accumulation. Deposited sediments are the source of sedimentary rocks, which can contain fossils of the inhabitants of the body of water that were, upon death, covered by accumulating sediment. Lake bed sediments that have not solidified into rock can be used to determine past climatic conditions. Key marine depositional environments. The major areas for deposition of sediments in the marine environment include: One other depositional environment which is a mixture of fluvial and marine is the turbidite system, which is a major source of sediment to the deep sedimentary and abyssal basins as well as the deep oceanic trenches. Any depression in a marine environment where sediments accumulate over time is known as a sediment trap. The null point theory explains how sediment deposition undergoes a hydrodynamic sorting process within the marine environment leading to a seaward fining of sediment grain size. Environmental issues. Erosion and agricultural sediment delivery to rivers. One cause of high sediment loads is slash and burn and shifting cultivation of tropical forests. When the ground surface is stripped of vegetation and then seared of all living organisms, the upper soils are vulnerable to both wind and water erosion. In a number of regions of the earth, entire sectors of a country have become erodible. For example, on the Madagascar high central plateau, which constitutes approximately ten percent of that country's land area, most of the land area is devegetated, and gullies have eroded into the underlying soil to form distinctive gulleys called "lavakas". These are typically wide, long and deep. 
Some areas have as many as 150 lavakas/square kilometer, and lavakas may account for 84% of all sediments carried off by rivers. This siltation results in discoloration of rivers to a dark red-brown color and leads to fish kills. In addition, sedimentation of river basins implies sediment management and siltation costs. The cost of removing an estimated 135 million m3 of accumulated sediments due to water erosion alone likely exceeds 2.3 billion euro (€) annually in the EU and UK, with large regional differences between countries. Erosion is also an issue in areas of modern farming, where the removal of native vegetation for the cultivation and harvesting of a single type of crop has left the soil unsupported. Many of these regions are near rivers and drainages. Loss of soil due to erosion removes useful farmland, adds to sediment loads, and can help transport anthropogenic fertilizers into the river system, which leads to eutrophication. The Sediment Delivery Ratio (SDR) is the fraction of gross erosion (interrill, rill, gully and stream erosion) that is expected to be delivered to the outlet of the river. Sediment transfer and deposition can be modelled with sediment distribution models such as WaTEM/SEDEM. In Europe, according to WaTEM/SEDEM model estimates, the Sediment Delivery Ratio is about 15%. Coastal development and sedimentation near coral reefs. Watershed development near coral reefs is a primary cause of sediment-related coral stress. The stripping of natural vegetation in the watershed for development exposes soil to increased wind and rainfall, and as a result, can cause exposed sediment to become more susceptible to erosion and delivery to the marine environment during rainfall events. Sediment can negatively affect corals in many ways, such as by physically smothering them, abrading their surfaces, causing corals to expend energy during sediment removal, and causing algal blooms that can ultimately lead to less space on the seafloor where juvenile corals (polyps) can settle. When sediments are introduced into the coastal regions of the ocean, the proportion of land-, marine- and organically-derived sediment that characterizes the seafloor near sources of sediment output is altered. In addition, because the source of sediment (i.e. land, marine, or organic) is often correlated with the average coarseness of the grain sizes that characterize an area, the grain size distribution of the sediment will shift according to the relative input of land-derived (typically fine), marine (typically coarse), and organically derived (variable with age) sediment. These alterations in marine sediment affect the amount of sediment suspended in the water column at any given time and the degree of sediment-related coral stress. Biological considerations. In July 2020, marine biologists reported that mainly aerobic microorganisms, in "quasi-suspended animation", were found in organically poor sediments, up to 101.5 million years old, 250 feet below the seafloor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"), and could be the longest-living life forms ever found. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\psi_l = \\sqrt[3]{\\frac{D_S D_I}{D_L^2}}" }, { "math_id": 1, "text": "D_L" }, { "math_id": 2, "text": "D_I" }, { "math_id": 3, "text": "D_S" }, { "math_id": 4, "text": "\\psi_l" }, { "math_id": 5, "text": "\\psi_p = \\sqrt[3]{\\frac{D_S^2}{D_L D_I}}" } ]
https://en.wikipedia.org/wiki?curid=60343
60358
Internal rate of return
Method of calculating an investment’s rate of return Internal rate of return (IRR) is a method of calculating an investment's rate of return. The term "internal" refers to the fact that the calculation excludes external factors, such as the risk-free rate, inflation, the cost of capital, or financial risk. The method may be applied either ex-post or ex-ante. Applied ex-ante, the IRR is an estimate of a future annual rate of return. Applied ex-post, it measures the actual achieved investment return of a historical investment. It is also called the discounted cash flow rate of return (DCFROR) or yield rate. Definition (IRR). The IRR of an investment or project is the "annualized effective compounded return rate" or rate of return that sets the net present value (NPV) of all cash flows (both positive and negative) from the investment equal to zero. Equivalently, it is the interest rate at which the net present value of the future cash flows is equal to the initial investment, and it is also the interest rate at which the total present value of costs (negative cash flows) equals the total present value of the benefits (positive cash flows). IRR represents the return on investment achieved when a project reaches its breakeven point, meaning that the project is only marginally justified as valuable. When NPV demonstrates a positive value, it indicates that the project is expected to generate value. Conversely, if NPV shows a negative value, the project is expected to lose value. In essence, IRR signifies the rate of return attained when the NPV of the project reaches a neutral state, precisely at the point where NPV breaks even. IRR accounts for the time preference of money and investments. A given return on investment received at a given time is worth more than the same return received at a later time, so the latter would yield a lower IRR than the former, if all other factors are equal. A fixed income investment in which money is deposited once, interest on this deposit is paid to the investor at a specified interest rate every time period, and the original deposit neither increases nor decreases, would have an IRR equal to the specified interest rate. An investment which has the same total returns as the preceding investment, but delays returns for one or more time periods, would have a lower IRR. Uses. Savings and loans. In the context of savings and loans, the IRR is also called the effective interest rate. Profitability of an investment. The IRR is an indicator of the profitability, efficiency, quality, or yield of an investment. This is in contrast with the NPV, which is an indicator of the net value or magnitude added by making an investment. To maximize the value of a business, an investment should be made only if its profitability, as measured by the internal rate of return, is greater than a minimum acceptable rate of return. If the estimated IRR of a project or investment - for example, the construction of a new factory - exceeds the firm's cost of capital invested in that project, the investment is profitable. If the estimated IRR is less than the cost of capital, the proposed project should not be undertaken. The selection of investments may be subject to budget constraints. There may be mutually exclusive competing projects, or limits on a firm's ability to manage multiple projects. For these reasons, corporations use IRR in capital budgeting to compare the profitability of a set of alternative capital projects. 
For example, a corporation will compare an investment in a new plant versus an extension of an existing plant based on the IRR of each project. To maximize returns, the higher a project's IRR, the more desirable it is to undertake the project. There are at least two different ways to measure the IRR for an investment: the project IRR and the equity IRR. The project IRR assumes that the cash flows directly benefit the project, whereas equity IRR considers the returns for the shareholders of the company after the debt has been serviced. Even though IRR is one of the most popular metrics used to test the viability of an investment and compare returns of alternative projects, looking at the IRR in isolation might not be the best approach for an investment decision. Certain assumptions made during IRR calculations are not always applicable to the investment. In particular, IRR assumes that the project will have either no interim cash flows or the interim cash flows are reinvested into the project which is not always the case. This discrepancy leads to overestimation of the rate of return which might be an incorrect representation of the value of the project. Fixed income. IRR is used to evaluate investments in fixed income securities, using metrics such as the yield to maturity and yield to call. Liabilities. Both IRR and net present value can be applied to liabilities as well as investments. For a liability, a lower IRR is preferable to a higher one. Capital management. Corporations use IRR to evaluate share issues and stock buyback programs. A share repurchase proceeds if returning capital to shareholders has a higher IRR than candidate capital investment projects or acquisition projects at current market prices. Funding new projects by raising new debt may also involve measuring the cost of the new debt in terms of the yield to maturity (internal rate of return). Private equity. IRR is also used for private equity, from the limited partners' perspective, as a measure of the general partner's performance as investment manager. This is because it is the general partner who controls the cash flows, including the limited partners' draw-downs of committed capital. Calculation. Given a collection of pairs (time, cash flow) representing a project, the NPV is a function of the rate of return. The internal rate of return is a rate for which this function is zero, i.e. the internal rate of return is a solution to the equation NPV = 0 (assuming no arbitrage conditions exist). Given the (period, cash flow) pairs (formula_0, formula_1) where formula_0 is a non-negative integer, the total number of periods formula_2, and the formula_3, (net present value); the internal rate of return is given by formula_4 in: formula_5 This rational polynomial can be converted to an ordinary polynomial having the same roots by substituting "g" (gain) for formula_6 and multiplying by formula_7 to yield the equivalent but simpler condition formula_8 The possible IRR's are the real values of "r" satisfying the first condition, and 1 less than the real roots of the second condition (that is, formula_9 for each root "g"). Note that in both formulas, formula_10 is the negation of the initial investment at the start of the project while formula_11 is the cash value of the project at the end, equivalently the cash withdrawn if the project were to be liquidated and paid out so as to reduce the value of the project to zero. 
In the second condition formula_10 is the leading coefficient of the ordinary polynomial in "g" while formula_11 is the constant term. The period formula_0 is usually given in years, but the calculation may be made simpler if formula_4 is calculated using the period in which the majority of the problem is defined (e.g., using months if most of the cash flows occur at monthly intervals) and converted to a yearly period thereafter. Any fixed time can be used in place of the present (e.g., the end of one interval of an annuity); the value obtained is zero if and only if the NPV is zero. In the case that the cash flows are random variables, such as in the case of a life annuity, the expected values are put into the above formula. Often, the value of formula_4 that satisfies the above equation cannot be found analytically. In this case, numerical methods or graphical methods must be used. Example. If an investment is given by the sequence of cash flows −123,400 at the start, followed by inflows of 36,200, 54,800 and 48,100 at the ends of the next three periods, then the IRR formula_4 is given by formula_12 In this case, the answer is 5.96% (in the calculation, that is, r = .0596). Numerical solution. Since the above is a manifestation of the general problem of finding the roots of the equation formula_13, there are many numerical methods that can be used to estimate formula_4. For example, using the secant method, formula_4 is given by formula_14 where formula_15 is considered the formula_0th approximation of the IRR. This formula_4 can be found to an arbitrary degree of accuracy. Different accounting packages may provide functions for different accuracy levels. For example, Microsoft Excel and Google Sheets have built-in functions to calculate IRR for both fixed and variable time-intervals; "=IRR(...)" and "=XIRR(...)". Regarding convergence behaviour, having formula_18 when formula_19, or formula_20 when formula_21, may speed up convergence of formula_15 to formula_4. Numerical solution for single outflow and multiple inflows. Of particular interest is the case where the stream of payments consists of a single outflow, followed by multiple inflows occurring at equal periods. In the above notation, this corresponds to: formula_22 In this case the NPV of the payment stream is a convex, strictly decreasing function of the interest rate, so there is always a single unique solution for the IRR. Given two estimates formula_23 and formula_24 for the IRR, the secant method equation (see above) with formula_25 always produces an improved estimate formula_26. This is sometimes referred to as the Hit and Trial (or Trial and Error) method. More accurate interpolation formulas can also be obtained: for instance the secant formula with correction formula_27 (which is most accurate when formula_28) has been shown to be almost 10 times more accurate than the secant formula for a wide range of interest rates and initial guesses. For example, using the stream of payments {−4000, 1200, 1410, 1875, 1050} and initial guesses formula_29 and formula_30 the secant formula with correction gives an IRR estimate of 14.2% (0.7% error) as compared to IRR = 13.2% (7% error) from the secant method. If applied iteratively, either the secant method or the improved formula always converges to the correct solution. Both the secant method and the improved formula rely on initial guesses for IRR. The following initial guesses may be used: formula_31 formula_32 where formula_33 formula_34 Here, formula_35 refers to the NPV of the inflows only (that is, set formula_36 and compute NPV).
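As an illustration of the iteration just described, the following Python sketch (not part of the original article) applies the plain secant recurrence to the payment stream {−4000, 1200, 1410, 1875, 1050} quoted above. The function names, starting guesses, tolerance and iteration cap are choices made for this example only; run to convergence, the iteration settles near 14.3%, which is consistent with the one-step error figures quoted above.

```python
# Illustrative sketch, not from the article: plain secant iteration for the IRR
# of a single outflow followed by several inflows.  Helper names, the starting
# guesses and the tolerance are assumptions made for this example.

def npv(rate, cash_flows):
    """Net present value when cash_flows[n] arrives at the end of period n."""
    return sum(c / (1.0 + rate) ** n for n, c in enumerate(cash_flows))

def irr_secant(cash_flows, r0=0.25, r1=0.20, tol=1e-9, max_iter=100):
    """Iterate r_{n+1} = r_n - NPV_n * (r_n - r_{n-1}) / (NPV_n - NPV_{n-1})."""
    f0, f1 = npv(r0, cash_flows), npv(r1, cash_flows)
    for _ in range(max_iter):
        if f1 == f0:                 # flat secant; stop rather than divide by zero
            break
        r2 = r1 - f1 * (r1 - r0) / (f1 - f0)
        if abs(r2 - r1) < tol:
            return r2
        r0, f0 = r1, f1
        r1, f1 = r2, npv(r2, cash_flows)
    return r1

stream = [-4000, 1200, 1410, 1875, 1050]   # the payment stream quoted above
print(round(irr_secant(stream), 4))        # roughly 0.143, i.e. about 14.3%
```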
Exact dates of cash flows. A cash flow formula_1 may occur at any time formula_37 years after the beginning of the project. formula_37 may not be a whole number. The cash flow should still be discounted by a factor formula_38. The formula is then formula_39 For a numerical solution we can use Newton's method formula_40 where formula_41 is the derivative of formula_3 and is given by formula_42 An initial value formula_23 can be given by formula_43 Problems with use. Comparison with NPV investment selection criterion. As a tool applied to making an investment decision on whether a project adds value or not, comparing the IRR of a single project with the required rate of return, in isolation from any other projects, is equivalent to the NPV method. If the appropriate IRR (if such can be found correctly) is greater than the required rate of return, using the required rate of return to discount cash flows to their present value, the NPV of that project will be positive, and vice versa. However, using IRR to sort projects in order of preference does not result in the same order as using NPV. Maximizing NPV. One possible investment objective is to maximize the total NPV of projects. When the objective is to maximize total value, the calculated IRR should not be used to choose between mutually exclusive projects. In cases where one project has a higher initial investment than a second mutually exclusive project, the first project may have a lower IRR (expected return), but a higher NPV (increase in shareholders' wealth) and should thus be accepted over the second project (assuming no capital constraints). When the objective is to maximize total value, IRR should not be used to compare projects of different duration. For example, the NPV added by a project with longer duration but lower IRR could be greater than that of a project of similar size, in terms of total net cash flows, but with shorter duration and higher IRR. Practitioner preference for IRR over NPV. Despite a strong academic preference for NPV, surveys indicate that executives prefer IRR over NPV. Apparently, managers prefer to compare investments of different sizes in terms of forecast investment performance, using IRR, rather than maximize value to the firm, in terms of NPV. This preference makes a difference when comparing mutually exclusive projects. Maximizing long-term return. Maximizing total value is not the only conceivable investment objective. An alternative objective would, for example, be to maximize long-term return. Such an objective would rationally lead to accepting first those new projects within the capital budget which have the highest IRR, because adding such projects would tend to maximize overall long-term return. Example. To see this, consider two investors, Max Value and Max Return. Max Value wishes her net worth to grow as large as possible, and will invest every last cent available to achieve this, whereas Max Return wants to maximize his rate of return over the long term, and would prefer to choose projects with smaller capital outlay but higher returns. Max Value and Max Return can each raise "up to" 100,000 US dollars from their bank at an annual interest rate of 10 percent paid at the end of the year. Investors Max Value and Max Return are presented with two possible projects to invest in, called Big-Is-Best and Small-Is-Beautiful. Big-Is-Best requires a capital investment of 100,000 US dollars today, and the lucky investor will be repaid 132,000 US dollars in a year's time.
Small-Is-Beautiful only requires 10,000 US dollars capital to be invested today, and will repay the investor 13,750 US dollars in a year's time. Solution. The cost of capital for both investors is 10 percent. Both Big-Is-Best and Small-Is-Beautiful have positive NPVs: formula_44 formula_45 and the IRR of each is (of course) greater than the cost of capital: formula_46 so the IRR of Big-Is-Best is 32 percent, and formula_47 so the IRR of Small-Is-Beautiful is 37.5 percent. Both investments would be acceptable to both investors, but the twist in the tale is that these are mutually exclusive projects for both investors, because their capital budget is limited to 100,000 US dollars. How will the investors choose rationally between the two? The happy outcome is that Max Value chooses Big-Is-Best, which has the higher NPV of 20,000 US dollars, over Small-Is-Beautiful, which only has a modest NPV of 2,500, whereas Max Return chooses Small-Is-Beautiful, for its superior 37.5 percent return, over the attractive (but not as attractive) return of 32 percent offered on Big-Is-Best. So there is no squabbling over who gets which project, they are each happy to choose different projects. How can this be rational for both investors? The answer lies in the fact that the investors do not have to invest the full 100,000 US dollars. Max Return is content to invest only 10,000 US dollars for now. After all, Max Return may rationalize the outcome by thinking that maybe tomorrow there will be new opportunities available to invest the remaining 90,000 US dollars the bank is willing to lend Max Return, at even higher IRRs. Even if only seven more projects come along which are identical to Small-Is-Beautiful, Max Return would be able to match the NPV of Big-Is-Best, on a total investment of only 80,000 US dollars, with 20,000 US dollars left in the budget to spare for truly unmissable opportunities. Max Value is also happy, because she has filled her capital budget straight away, and decides she can take the rest of the year off investing. Multiple IRRs. When the sign of the cash flows changes more than once, for example when positive cash flows are followed by negative ones and then by positive ones (+ + − − − +), the IRR may have multiple real values. In a series of cash flows like (−10, 21, −11), one initially invests money, so a high rate of return is best, but then receives more than one possesses, so then one owes money, so now a low rate of return is best. In this case, it is not even clear whether a high or a low IRR is better. There may even be multiple real IRRs for a single project, like in the example 0% as well as 10%. Examples of this type of project are strip mines and nuclear power plants, where there is usually a large cash outflow at the end of the project. The IRR satisfies a polynomial equation. Sturm's theorem can be used to determine if that equation has a unique real solution. In general the IRR equation cannot be solved analytically but only by iteration. With multiple internal rates of return, the IRR approach can still be interpreted in a way that is consistent with the present value approach if the underlying investment stream is correctly identified as net investment or net borrowing. See for a way of identifying the relevant IRR from a set of multiple IRR solutions. Limitations in the context of private equity. 
In the context of survivorship bias which makes the high IRR of large private equity firms a poor representation of the average, according to Ludovic Phalippou, "...a headline figure that is often shown prominently as a rate of return in presentations and documents is, in fact, an IRR. IRRs are not rates of return. Something large PE firms have in common is that their early investments did well. These early winners have set up those firms' since-inception IRR at an artificially sticky and high level. The mathematics of IRR means that their IRRs will stay at this level forever, as long as the firms avoid major disasters. In passing, this generates some stark injustice because it is easier to game IRRs on LBOs in Western countries than in any other PE investments. That means that the rest of the PE industry (e.g. emerging market growth capital) is sentenced to look relatively bad forever, for no reason other than the use of a game-able performance metric." Also, "Another problem with the presentation of pension fund performance is that for PE, time-weighted returns...are not the most pertinent measure of performance. Asking how much pension funds gave and got back in dollar terms from PE, i.e. MoM, would be more pertinent. I went through the largest 15 funds websites to collect information on their performance. Few of them post their PE fund returns online. In most cases, they post information on their past performance in PE, but nothing that enables any meaningful benchmarking. "E.g.", CalSTRS [a California public pension fund] provide only the net IRR for each fund they invest in. As IRR is often misleading and can never be aggregated or compared to stock-market returns, such information is basically useless for gauging performance." Modified internal rate of return (MIRR). Modified Internal Rate of Return (MIRR) considers cost of capital, and is intended to provide a better indication of a project's probable return. It applies a discount rate for borrowing cash, and the IRR is calculated for the investment cash flows. This applies in real life for example when a customer makes a deposit before a specific machine is built. When a project has multiple IRRs it may be more convenient to compute the IRR of the project with the benefits reinvested. Accordingly, MIRR is used, which has an assumed reinvestment rate, usually equal to the project's cost of capital. Average internal rate of return (AIRR). Magni (2010) introduced a new approach, named AIRR approach, based on the intuitive notion of mean, that solves the problems of the IRR. However, the above-mentioned difficulties are only some of the many flaws incurred by the IRR. Magni (2013) provided a detailed list of 18 flaws of the IRR and showed how the AIRR approach does not incur the IRR problems. Mathematics. Mathematically, the value of the investment is assumed to undergo exponential growth or decay according to some rate of return (any value greater than −100%), with discontinuities for cash flows, and the IRR of a series of cash flows is defined as any rate of return that results in a NPV of zero (or equivalently, a rate of return that results in the correct value of zero after the last cash flow). Thus, internal rate(s) of return follow from the NPV as a function of the rate of return. This function is continuous. Towards a rate of return of −100% the NPV approaches infinity with the sign of the last cash flow, and towards a rate of return of positive infinity the NPV approaches the first cash flow (the one at the present). 
Therefore, if the first and last cash flow have a different sign there exists an IRR. Examples of time series without an IRR: In the case of a series of exclusively negative cash flows followed by a series of exclusively positive ones, the resulting function of the rate of return is continuous and monotonically decreasing from positive infinity (when the rate of return approaches -100%) to the value of the first cash flow (when the rate of return approaches infinity), so there is a unique rate of return for which it is zero. Hence, the IRR is also unique (and equal). Although the NPV-function itself is not necessarily monotonically decreasing on its whole domain, it "is" at the IRR. Similarly, in the case of a series of exclusively positive cash flows followed by a series of exclusively negative ones the IRR is also unique. Finally, by Descartes' rule of signs, the number of internal rates of return can never be more than the number of changes in sign of cash flow. The reinvestment debate. It is often stated that IRR assumes reinvestment of all cash flows until the very end of the project. This assertion has been a matter of debate in the literature. Sources stating that there is such a hidden assumption have been cited below. Other sources have argued that there is no IRR reinvestment assumption. To understand the source of this confusion let's consider an example with a 3-year bond of $1000 face value and coupon rate of 5% (or $50). As can be seen, although the total return is different the IRR is still the same. Put in other words, IRR is neutral to reinvestments made at the same rate. No matter whether the cash is taken out early or reinvested at the same rate and taken out late - the rate is the same. To understand why, we need to calculate the present value (PV) of our future cash flows, effectively reproducing IRR calculations manually: In personal finance. The IRR can be used to measure the money-weighted performance of financial investments such as an individual investor's brokerage account. For this scenario, an equivalent, more intuitive definition of the IRR is, "The IRR is the annual interest rate of the fixed rate account (like a somewhat idealized savings account) which, when subjected to the same deposits and withdrawals as the actual investment, has the same ending balance as the actual investment." This fixed rate account is also called the "replicating fixed rate account" for the investment. There are examples where the replicating fixed rate account encounters negative balances despite the fact that the actual investment did not. In those cases, the IRR calculation assumes that the same interest rate that is paid on positive balances is charged on negative balances. It has been shown that this way of charging interest is the root cause of the IRR's multiple solutions problem. If the model is modified so that, as is the case in real life, an externally supplied cost of borrowing (possibly varying over time) is charged on negative balances, the multiple solutions issue disappears. The resulting rate is called the "fixed rate equivalent" ("FREQ"). Unannualized internal rate of return. In the context of investment performance measurement, there is sometimes ambiguity in terminology between the periodic rate of return, such as the IRR as defined above, and a holding period return. 
The term "internal rate of return" ("IRR)" or "Since Inception Internal Rate of Return" ("SI-IRR)" is in some contexts used to refer to the unannualized return over the period, particularly for periods of less than a year. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "C_n" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "\\operatorname{NPV}" }, { "math_id": 4, "text": "r" }, { "math_id": 5, "text": "\\operatorname{NPV} = \\sum_{n=0}^N \\frac{C_n}{(1+r)^n} = 0" }, { "math_id": 6, "text": "1+r" }, { "math_id": 7, "text": "g^N" }, { "math_id": 8, "text": "\\sum_{n=0}^N C_ng^{N-n} = 0" }, { "math_id": 9, "text": "r = g-1" }, { "math_id": 10, "text": "C_0" }, { "math_id": 11, "text": "C_N" }, { "math_id": 12, "text": "\\operatorname{NPV} = -123400+\\frac{36200}{(1+r)^1} + \\frac{54800}{(1+r)^2} + \\frac{48100}{(1+r)^3} = 0." }, { "math_id": 13, "text": "\\operatorname{NPV}(r)=0" }, { "math_id": 14, "text": "r_{n+1} = r_n-\\operatorname{NPV}_n \\cdot \\left(\\frac{r_n-r_{n-1}}{\\operatorname{NPV}_n-\\operatorname{NPV}_{n-1}}\\right)." }, { "math_id": 15, "text": "r_n" }, { "math_id": 16, "text": "\\operatorname{NPV}(i)" }, { "math_id": 17, "text": "\\scriptstyle r_1,r_2,\\dots,r_n" }, { "math_id": 18, "text": "\\scriptstyle{r_1 > r_0}" }, { "math_id": 19, "text": "\\operatorname{NPV}_0>0" }, { "math_id": 20, "text": "\\scriptstyle{r_1 < r_0}" }, { "math_id": 21, "text": "\\operatorname{NPV}_0<0" }, { "math_id": 22, "text": " C_0<0,\\quad C_n\\ge 0\\text{ for }n\\ge 1. \\, " }, { "math_id": 23, "text": "r_1" }, { "math_id": 24, "text": "r_2" }, { "math_id": 25, "text": "n=2" }, { "math_id": 26, "text": "r_3" }, { "math_id": 27, "text": "r_{n+1} = r_n-\\operatorname{NPV}_n\\left(\\frac{r_n-r_{n-1}}{\\operatorname{NPV}_n-\\operatorname{NPV}_{n-1}}\\right)\\left(1 - 1.4 \\frac{\\operatorname{NPV}_{n-1}}{\\operatorname{NPV}_{n-1} - 3\\operatorname{NPV}_n + 2C_0} \\right)," }, { "math_id": 28, "text": " 0 > \\operatorname{NPV}_n > \\operatorname{NPV}_{n-1} " }, { "math_id": 29, "text": "r_1 = 0.25" }, { "math_id": 30, "text": "r_2 = 0.2" }, { "math_id": 31, "text": "r_1 = \\left( A / |C_0| \\right) ^{2/(N+1)} - 1 \\, " }, { "math_id": 32, "text": "r_2 = (1 + r_{1})^p - 1 \\, " }, { "math_id": 33, "text": " A = \\text{ sum of inflows } = C_1 + \\cdots + C_N \\, " }, { "math_id": 34, "text": "p = \\frac{\\log(\\mathrm{A} / |C_0|)}{\\log(\\mathrm{A} / \\operatorname{NPV}_{1,in})}." }, { "math_id": 35, "text": "\\operatorname{NPV}_{1,in}" }, { "math_id": 36, "text": "\\mathrm{C}_0 = 0" }, { "math_id": 37, "text": "t_n" }, { "math_id": 38, "text": "\\frac{1}{(1+r)^{t_{n}}}" }, { "math_id": 39, "text": "\\operatorname{NPV} = C_0 + \\sum_{n=1}^N \\frac{C_n}{(1+r)^{t_n}} = 0 " }, { "math_id": 40, "text": "r_{k+1} = r_k - \\frac{\\operatorname{NPV}_k}{\\operatorname{NPV}'_k}" }, { "math_id": 41, "text": "\\operatorname{NPV}'" }, { "math_id": 42, "text": "\\operatorname{NPV}' = -\\sum_{n=1}^N \\frac{C_n \\cdot t_n}{(1+r)^{t_n+1}}" }, { "math_id": 43, "text": "r_1 = \\frac{-1}{C_0} \\sum_{n=1}^N C_n - 1" }, { "math_id": 44, "text": "\\operatorname{NPV}(\\text{Big-Is-Best}) = \\frac{132,000}{1.1} - 100,000 = 20,000" }, { "math_id": 45, "text": "\\operatorname{NPV}(\\text{Small-Is-Beautiful}) = \\frac{13,750}{1.1} - 10,000 = 2,500" }, { "math_id": 46, "text": "\\mathit{NPV}(\\text{Big-Is-Best}) = \\frac{132,000}{1.32} - 100,000 = 0" }, { "math_id": 47, "text": "\\operatorname{NPV}(\\text{Small-Is-Beautiful}) = \\frac{13,750}{1.375} - 10,000 = 0" } ]
https://en.wikipedia.org/wiki?curid=60358
60360425
Hartle–Thorne metric
Approximate solution to Einstein's field equations The Hartle–Thorne metric is an approximate solution of the vacuum Einstein field equations of general relativity that describes the exterior of a slowly and rigidly rotating, stationary and axially symmetric body. The metric was found by James Hartle and Kip Thorne in the 1960s to study the spacetime outside neutron stars, white dwarfs and supermassive stars. It can be shown that it is an approximation to the Kerr metric (which describes a rotating black hole) when the quadrupole moment is set as formula_0, which is the correct value for a black hole but not, in general, for other astrophysical objects. Metric. Up to second order in the angular momentum formula_1, mass formula_2 and quadrupole moment formula_3, the metric in spherical coordinates is given by formula_4 where formula_5 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "q=-a^2aM^3" }, { "math_id": 1, "text": "J" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "q" }, { "math_id": 4, "text": "\\begin{align}g_{tt} &= - \\left(1-\\frac{2M}{r}+\\frac{2q}{r^3} P_2 +\\frac{2Mq}{r^4} P_2 +\\frac{2q^2}{r^6} P^2_2\n-\\frac{2}{3} \\frac{J^2}{r^4} (2P_2+1)\\right), \\\\\ng_{t\\phi} &= -\\frac{2J}{r}\\sin^2\\theta, \\\\\ng_{rr} &= 1 + \\frac{2M}{r} +\\frac{4M^2}{r^2} -\\frac{2qP_2}{r^3} -\\frac{10MqP_2}{r^4}\n+ \\frac{1}{12} \\frac{q^2\\left(8P_2^2-16P_2+77\\right)}{r^6} +\\frac{2J^2(8P_2-1)}{r^4},\\\\\ng_{\\theta\\theta} &=r^2 \\left(1-\\frac{2qP_2}{r^3} -\\frac{5MqP_2}{r^4} +\\frac{1}{36}\\frac{q^2\\left(44P_2^2 +8P_2 -43\\right)}{r^6}\n+\\frac{J^2P_2}{r^4}\\right),\\\\\ng_{\\phi\\phi}&=r^2\\sin^2\\theta\\left(1-\\frac{2qP_2}{r^3} -\\frac{5MqP_2}{r^4} +\\frac{1}{36}\\frac{q^2\\left(44P_2^2 +8P_2 -43\\right)}{r^6}\n+\\frac{J^2P_2}{r^4}\\right),\n\\end{align} " }, { "math_id": 5, "text": "P_2=\\frac{3\\cos^2\\theta-1}{2}." } ]
https://en.wikipedia.org/wiki?curid=60360425
603690
Voltage clamp
The voltage clamp is an experimental method used by electrophysiologists to measure the ion currents through the membranes of excitable cells, such as neurons, while holding the membrane voltage at a set level. A basic voltage clamp will iteratively measure the membrane potential, and then change the membrane potential (voltage) to a desired value by adding the necessary current. This "clamps" the cell membrane at a desired constant voltage, allowing the voltage clamp to record what currents are delivered. Because the currents applied to the cell must be equal to (and opposite in charge to) the current going across the cell membrane at the set voltage, the recorded currents indicate how the cell reacts to changes in membrane potential. Cell membranes of excitable cells contain many different kinds of ion channels, some of which are voltage-gated. The voltage clamp allows the membrane voltage to be manipulated independently of the ionic currents, allowing the current–voltage relationships of membrane channels to be studied. History. The concept of the voltage clamp is attributed to Kenneth Cole and George Marmont in the spring of 1947. They inserted an internal electrode into the giant axon of a squid and began to apply a current. Cole discovered that it was possible to use two electrodes and a feedback circuit to keep the cell's membrane potential at a level set by the experimenter. Cole developed the voltage clamp technique before the era of microelectrodes, so his two electrodes consisted of fine wires twisted around an insulating rod. Because this type of electrode could be inserted into only the largest cells, early electrophysiological experiments were conducted almost exclusively on squid axons. Squids squirt jets of water when they need to move quickly, as when escaping a predator. To make this escape as fast as possible, they have an axon that can reach 1 mm in diameter (signals propagate more quickly down large axons). The squid giant axon was the first preparation that could be used to voltage clamp a transmembrane current, and it was the basis of Hodgkin and Huxley's pioneering experiments on the properties of the action potential. Alan Hodgkin realized that, to understand ion flux across the membrane, it was necessary to eliminate differences in membrane potential. Using experiments with the voltage clamp, Hodgkin and Andrew Huxley published 5 papers in the summer of 1952 describing how ionic currents give rise to the action potential. The final paper proposed the Hodgkin–Huxley model which mathematically describes the action potential. The use of voltage clamps in their experiments to study and model the action potential in detail has laid the foundation for electrophysiology; for which they shared the 1963 Nobel Prize in Physiology or Medicine. Technique. The voltage clamp is a current generator. Transmembrane voltage is recorded through a "voltage electrode", relative to ground, and a "current electrode" passes current into the cell. The experimenter sets a "holding voltage", or "command potential", and the voltage clamp uses negative feedback to maintain the cell at this voltage. The electrodes are connected to an amplifier, which measures membrane potential and feeds the signal into a feedback amplifier. This amplifier also gets an input from the signal generator that determines the command potential, and it subtracts the membrane potential from the command potential (Vcommand – Vm), magnifies any difference, and sends an output to the current electrode. 
Whenever the cell deviates from the holding voltage, the operational amplifier generates an "error signal", that is the difference between the command potential and the actual voltage of the cell. The feedback circuit passes current into the cell to reduce the error signal to zero. Thus, the clamp circuit produces a current equal and opposite to the ionic current. Variations of the voltage clamp technique. Two-electrode voltage clamp using microelectrodes. The two-electrode voltage clamp (TEVC) technique is used to study properties of membrane proteins, especially ion channels. Researchers use this method most commonly to investigate membrane structures expressed in "Xenopus" oocytes. The large size of these oocytes allows for easy handling and manipulability. The TEVC method utilizes two low-resistance pipettes, one sensing voltage and the other injecting current. The microelectrodes are filled with conductive solution and inserted into the cell to artificially control membrane potential. The membrane acts as a dielectric as well as a resistor, while the fluids on either side of the membrane function as capacitors. The microelectrodes compare the membrane potential against a command voltage, giving an accurate reproduction of the currents flowing across the membrane. Current readings can be used to analyze the electrical response of the cell to different applications. This technique is favored over single-microelectrode clamp or other voltage clamp techniques when conditions call for resolving large currents. The high current-passing capacity of the two-electrode clamp makes it possible to clamp large currents that are impossible to control with single-electrode patch techniques. The two-electrode system is also desirable for its fast clamp settling time and low noise. However, TEVC is limited in use with regard to cell size. It is effective in larger-diameter oocytes, but more difficult to use with small cells. Additionally, TEVC method is limited in that the transmitter of current must be contained in the pipette. It is not possible to manipulate the intracellular fluid while clamping, which is possible using patch clamp techniques. Another disadvantage involves "space clamp" issues. Cole's voltage clamp used a long wire that clamped the squid axon uniformly along its entire length. TEVC microelectrodes can provide only a spatial point source of current that may not uniformly affect all parts of an irregularly shaped cell. Dual-cell voltage clamp. The dual-cell voltage clamp technique is a specialized variation of the two electrode voltage clamp, and is only used in the study of gap junction channels. Gap junctions are pores that directly link two cells through which ions and small molecules flow freely. When two cells in which gap junction proteins, typically connexins or innexins, are expressed, either endogenously or via injection of mRNA, a junction channel will form between the cells. Since two cells are present in the system, two sets of electrodes are used. A recording electrode and a current injecting electrode are inserted into each cell, and each cell is clamped individually (each set of electrodes is attached to a separate apparatus, and integration of data is performed by computer). To record junctional conductance, the current is varied in the first cell while the recording electrode in the second cell records any changes in Vm for the second cell only. (The process can be reversed with the stimulus occurring in the second cell and recording occurring in the first cell.) 
Since no variation in current is being induced by the electrode in the recorded cell, any change in voltage must be induced by current crossing into the recorded cell, through the gap junction channels, from the cell in which the current was varied. Single-electrode voltage clamp. This category describes a set of techniques in which one electrode is used for voltage clamp. Continuous single-electrode clamp (SEVC-c) technique is often used with patch-clamp recording. Discontinuous single-electrode voltage-clamp (SEVC-d) technique is used with penetrating intracellular recording. This single electrode carries out the functions of both current injection and voltage recording. Continuous single-electrode clamp (SEVC-c). The "patch-clamp" technique allows the study of individual ion channels. It uses an electrode with a relatively large tip (&gt; 1 micrometer) that has a smooth surface (rather than a sharp tip). This is a "patch-clamp electrode" (as distinct from a "sharp electrode" used to impale cells). This electrode is pressed against a cell membrane and suction is applied to pull the cell's membrane inside the electrode tip. The suction causes the cell to form a tight seal with the electrode (a "gigaohm seal", as the resistance is more than a gigaohm). SEV-c has the advantage that you can record from small cells that would be impossible to impale with two electrodes. However: Discontinuous single-electrode voltage-clamp (SEVC-d). A single-electrode voltage clamp — discontinuous, or SEVC-d, has some advantages over SEVC-c for whole-cell recording. In this, a different approach is taken for passing current and recording voltage. A SEVC-d amplifier operates on a "time-sharing" basis, so the electrode regularly and frequently switches between passing current and measuring voltage. In effect, there are two electrodes, but each is in operation for only half of the time it is on. The oscillation between the two functions of the single electrode is termed a duty cycle. During each cycle, the amplifier measures the membrane potential and compares it with the holding potential. An operational amplifier measures the difference, and generates an error signal. This current is a mirror image of the current generated by the cell. The amplifier outputs feature sample and hold circuits, so each briefly sampled voltage is then held on the output until the next measurement in the next cycle. To be specific, the amplifier measures voltage in the first few microseconds of the cycle, generates the error signal, and spends the rest of the cycle passing current to reduce that error. At the start of the next cycle, voltage is measured again, a new error signal generated, current passed etc. The experimenter sets the cycle length, and it is possible to sample with periods as low as about 15 microseconds, corresponding to a 67 kHz switching frequency. Switching frequencies lower than about 10 kHz are not sufficient when working with action potentials that are less than 1 millisecond wide. Note that not all discontinuous voltage-clamp amplifier support switching frequencies higher than 10 kHz. For this to work, the cell capacitance must be higher than the electrode capacitance by at least an order of magnitude. Capacitance slows the kinetics (the rise and fall times) of currents. If the electrode capacitance is much less than that of the cell, then when current is passed through the electrode, the electrode voltage will change faster than the cell voltage. 
Thus, when current is injected and then turned off (at the end of a duty cycle), the electrode voltage will decay faster than the cell voltage. As soon as the electrode voltage asymptotes to the cell voltage, the voltage can be sampled (again) and the next amount of charge applied. Thus, the frequency of the duty cycle is limited to the speed at which the electrode voltage rises and decays while passing current. The lower the electrode capacitance, the faster one can cycle. SEVC-d has a major advantage over SEVC-c in allowing the experimenter to measure membrane potential, and, as it obviates passing current and measuring voltage at the same time, there is never a series resistance error. The main disadvantages are that the time resolution is limited and the amplifier is unstable. If it passes too much current, so that the goal voltage is over-shot, it reverses the polarity of the current in the next duty cycle. This causes it to undershoot the target voltage, so the next cycle reverses the polarity of the injected current again. This error can grow with each cycle until the amplifier oscillates out of control (“ringing”); this usually results in the destruction of the cell being recorded. The investigator wants a short duty cycle to improve temporal resolution; the amplifier has adjustable compensators that make the electrode voltage decay faster, but, if these are set too high, the amplifier will ring. The investigator is therefore always trying to “tune” the amplifier as close to the edge of uncontrolled oscillation as possible, which means that small changes in recording conditions can cause ringing. There are two solutions: to “back off” the amplifier settings into a safe range, or to be alert for signs that the amplifier is about to ring. Mathematical modeling. From the point of view of control theory, the voltage clamp experiment can be described in terms of the application of a high-gain output feedback control law to the neuronal membrane. Mathematically, the membrane voltage can be modeled by a conductance-based model with an input given by the applied current formula_0 and an output given by the membrane voltage formula_1. Hodgkin and Huxley's original conductance-based model, which represents a neuronal membrane containing sodium and potassium ion currents, as well as a leak current, is given by the system of ordinary differential equations formula_2 formula_3 where formula_4 is the membrane capacitance, formula_5, formula_6 and formula_7 are maximal conductances, formula_8, formula_9 and formula_10 are reversal potentials, formula_11 and formula_12 are ion channel voltage-dependent rate constants, and the state variables formula_13, formula_14, and formula_15 are ion channel gating variables. It is possible to rigorously show that the feedback law formula_16 drives the membrane voltage formula_1 arbitrarily close to the reference voltage formula_17 as the gain formula_18 is increased to an arbitrarily large value. This fact, which is by no means a general property of dynamical systems (a high gain can, in general, lead to instability), is a consequence of the structure and the properties of the conductance-based model above. In particular, the dynamics of each gating variable formula_19, which are driven by formula_20, verify the strong stability property of exponential contraction.
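As a concrete illustration of this high-gain feedback view, the following Python sketch (not part of the original article) integrates the conductance-based model above with a simple forward-Euler scheme while the applied current follows the feedback law formula_16. The maximal conductances, reversal potentials and rate functions are the standard squid-axon values from Hodgkin and Huxley's 1952 work, which are not listed in this article; the gain, time step and clamp protocol are arbitrary choices made for this demonstration. The recorded clamp current mirrors the ionic current: stepping the command potential from rest to 0 mV produces an early inward (negative) sodium current followed by a sustained outward potassium current.

```python
# Illustrative sketch, not from the article: a voltage clamp simulated as
# high-gain feedback, I_app = k * (V_ref - V), applied to the Hodgkin-Huxley
# membrane model.  Conductances and rate functions are the standard 1952
# squid-axon values (not given in the article); gain, time step and the
# clamp protocol are arbitrary choices for this demonstration.
import math

C_m = 1.0                             # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials, mV

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))

def clamp(V_ref, k=1000.0, dt=0.001, t_end=10.0, V0=-65.0):
    """Return the clamp-current trace while clamping the membrane towards V_ref."""
    V, m, h, n = V0, 0.05, 0.6, 0.32          # approximate resting gating values
    currents = []
    for i in range(int(t_end / dt)):
        I_app = k * (V_ref - V)               # high-gain feedback law
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        dV = (-(I_Na + I_K + I_L) + I_app) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        V += dt * dV
        currents.append(I_app)                # the clamp current mirrors the ionic current
    return currents

Is = clamp(V_ref=0.0)                         # step the command potential to 0 mV
print("peak inward current :", round(min(Is), 1), "uA/cm^2")
print("late outward current:", round(Is[-1], 1), "uA/cm^2")
```

References. &lt;templatestyles src="Reflist/styles.css" /&gt;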
[ { "math_id": 0, "text": "I_{app}(t)" }, { "math_id": 1, "text": "V(t)" }, { "math_id": 2, "text": "\tC_m \\frac{dV}{dt} = -\\bar{g}_{\\text{K}} n^4 (V-V_\\text{K})\n\t -\\bar{g}_{\\text{Na}} m^3 h (V-V_\\text{Na}) \n\t - \\bar{g}_{\\text{L}} (V-V_L) + I_{app}" }, { "math_id": 3, "text": "\t \t\\frac{dp}{dt} = \\alpha_p(V)(1-p) - \\beta_p(V)p, \\quad p=m,n,h" }, { "math_id": 4, "text": "C_m" }, { "math_id": 5, "text": "\\bar{g}_{\\text{Na}}" }, { "math_id": 6, "text": "\\bar{g}_{\\text{K}}" }, { "math_id": 7, "text": "\\bar{g}_{\\text{L}}" }, { "math_id": 8, "text": "V_{\\text{Na}}" }, { "math_id": 9, "text": "V_{\\text{K}}" }, { "math_id": 10, "text": "V_{\\text{L}}" }, { "math_id": 11, "text": "\\alpha_p" }, { "math_id": 12, "text": "\\beta_p" }, { "math_id": 13, "text": "m" }, { "math_id": 14, "text": "h" }, { "math_id": 15, "text": "n" }, { "math_id": 16, "text": "I_{app}(t) = k(V_{\\text{ref}} - V(t))" }, { "math_id": 17, "text": "V_{\\text{ref}}" }, { "math_id": 18, "text": "k>0" }, { "math_id": 19, "text": "p=m,h,n" }, { "math_id": 20, "text": "V" } ]
https://en.wikipedia.org/wiki?curid=603690
60373549
Phase separation
Creation of two phases of matter from a single homogeneous mixture Phase separation is the creation of two distinct phases from a single homogeneous mixture. The most common type of phase separation is between two immiscible liquids, such as oil and water. This type of phase separation is known as liquid-liquid equilibrium. Colloids are formed by phase separation, though not all phase separations form colloids - for example, oil and water can form separated layers under gravity rather than remaining as microscopic droplets in suspension. A common form of spontaneous phase separation is termed spinodal decomposition; it is described by the Cahn–Hilliard equation. Regions of a phase diagram in which phase separation occurs are called miscibility gaps. There are two boundary curves of note: the binodal coexistence curve and the spinodal curve. On one side of the binodal, mixtures are absolutely stable. In between the binodal and the spinodal, mixtures may be metastable: staying mixed (or unmixed) absent some large disturbance. The region beyond the spinodal curve is absolutely unstable, and (if starting from a mixed state) will spontaneously phase-separate. The upper critical solution temperature (UCST) and the lower critical solution temperature (LCST) are two critical temperatures, above which or below which the components of a mixture are miscible in all proportions. It is rare for systems to have both, but some exist: the nicotine-water system has an LCST of 61 °C, and also a UCST of 210 °C at pressures high enough for liquid water to exist at that temperature. The components are therefore miscible in all proportions below 61 °C and above 210 °C (at high pressure), and partially miscible in the interval from 61 to 210 °C. Physical basis. Mixing is governed by the Gibbs free energy, with phase separation or mixing occurring for whichever case lowers the Gibbs free energy. The free energy formula_0 can be decomposed into two parts: formula_1, with formula_2 the enthalpy, formula_3 the temperature, and formula_4 the entropy. Thus, the change of the free energy on mixing is the enthalpy of mixing minus the temperature times the entropy of mixing. The enthalpy of mixing is zero for ideal mixtures, and ideal mixtures are sufficient to describe many common solutions. Thus, in many cases, mixing (or phase separation) is driven primarily by the entropy of mixing. It is generally the case that the entropy will increase whenever a particle (an atom, a molecule) has a larger space to explore, and thus the entropy of mixing is generally positive: the components of the mixture can increase their entropy by sharing a larger common volume. Phase separation is then driven by several distinct processes. In one case, the enthalpy of mixing is positive, and the temperature is low: the increase in entropy is insufficient to lower the free energy. In another, considerably rarer case, the entropy of mixing is "unfavorable", that is to say, it is negative. In this case, even if the change in enthalpy is negative, phase separation will occur unless the temperature is low enough. It is this second case which gives rise to the idea of the lower critical solution temperature.
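As a concrete numerical illustration of this competition between enthalpy and entropy, the following Python sketch (not part of the original article) uses the textbook regular-solution model, in which the ideal entropy of mixing is combined with an enthalpy of mixing proportional to x(1 − x). The interaction parameter Omega and the temperatures are arbitrary values chosen for the example, and the model itself is standard thermodynamics rather than something taken from this article. Wherever the curvature of the mixing free energy is negative, the mixture lies inside the spinodal region and separates spontaneously; above the critical temperature the entropy term dominates and the components mix in all proportions.

```python
# Illustrative sketch, not from the article: free energy of mixing for a
# regular-solution model, G_mix(x) = R*T*(x ln x + (1-x) ln(1-x)) + Omega*x*(1-x).
# Omega and the two temperatures below are arbitrary values for illustration.
import math

R = 8.314        # gas constant, J/(mol K)
Omega = 12000.0  # assumed positive enthalpy-of-mixing parameter, J/mol

def g_mix(x, T):
    """Free energy of mixing per mole at composition x (0 < x < 1)."""
    entropy_term = R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))
    enthalpy_term = Omega * x * (1 - x)
    return entropy_term + enthalpy_term

def inside_spinodal(x, T):
    """True where d2G/dx2 = R*T/(x*(1-x)) - 2*Omega < 0, i.e. locally unstable."""
    return R * T / (x * (1 - x)) < 2 * Omega

# For this symmetric model the spinodal region disappears above T_c = Omega/(2R).
T_c = Omega / (2 * R)
for T in (0.5 * T_c, 1.5 * T_c):
    print(f"T = {T:7.1f} K: G_mix(0.5) = {g_mix(0.5, T):8.1f} J/mol,"
          f" inside spinodal: {inside_spinodal(0.5, T)}")
```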
Phase separation in cold gases. A mixture of two helium isotopes (helium-3 and helium-4) in a certain range of temperatures and concentrations separates into two phases. The initial mix of the two isotopes spontaneously separates into &lt;chem&gt;^{4}He&lt;/chem&gt;-rich and &lt;chem&gt;{}^3He&lt;/chem&gt;-rich regions. Phase separation also exists in ultracold gas systems. It has been demonstrated experimentally in a two-component ultracold Fermi gas. The phase separation can compete with other phenomena such as vortex lattice formation or an exotic Fulde-Ferrell-Larkin-Ovchinnikov phase. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "G=H-TS" }, { "math_id": 2, "text": "H" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "S" } ]
https://en.wikipedia.org/wiki?curid=60373549
60376830
Hypercube (communication pattern)
A formula_0-dimensional hypercube is a network topology for parallel computers with formula_1 processing elements. The topology allows for an efficient implementation of some basic communication primitives such as Broadcast, All-Reduce, and Prefix sum. The processing elements are numbered formula_2 through formula_3. Each processing element is adjacent to processing elements whose numbers differ in one and only one bit. The algorithms described on this page utilize this structure efficiently. Algorithm outline. Most of the communication primitives presented in this article share a common template. Initially, each processing element possesses one message that must reach every other processing element during the course of the algorithm. The following pseudo code sketches the communication steps necessary. Here, Initialization, Operation, and Output are placeholders that depend on the given communication primitive (see next section). Input: message formula_4. Output: depends on Initialization, Operation and Output. Initialization formula_5 for formula_6 do formula_7 Send formula_8 to formula_9 Receive formula_4 from formula_9 Operationformula_10 endfor Output Each processing element iterates over its neighbors (the expression formula_11 negates the formula_12-th bit in formula_13's binary representation, therefore obtaining the numbers of its neighbors). In each iteration, each processing element exchanges a message with the neighbor and processes the received message afterwards. The processing operation depends on the communication primitive. Communication primitives. Prefix sum. At the beginning of a prefix sum operation, each processing element formula_13 owns a message formula_14. The goal is to compute formula_15, where formula_16 is an associative operation. The following pseudo code describes the algorithm. Input: message formula_14 of processor formula_13. Output: prefix sum formula_15 of processor formula_13. formula_17 formula_18 for formula_19 do formula_7 Send formula_20 to formula_9 Receive formula_4 from formula_9 formula_21 if bit formula_12 in formula_13 is set then formula_22 endfor The algorithm works as follows. Observe that hypercubes of dimension formula_0 can be split into two hypercubes of dimension formula_23. Refer to the sub cube containing nodes with a leading 0 as the 0-sub cube and the sub cube consisting of nodes with a leading 1 as the 1-sub cube. Once both sub cubes have calculated the prefix sum, the sum over all elements in the 0-sub cube has to be added to every element in the 1-sub cube, since every processing element in the 0-sub cube has a lower rank than the processing elements in the 1-sub cube. The pseudo code stores the prefix sum in variable formula_24 and the sum over all nodes in a sub cube in variable formula_20. This makes it possible for all nodes in the 1-sub cube to receive the sum over the 0-sub cube in every step. This results in a factor of formula_25 for formula_26 and a factor of formula_27 for formula_28: formula_29.
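The following Python sketch (not part of the original article) simulates this prefix-sum pattern sequentially: it keeps one pair of variables per simulated processing element and, in each of the formula_0 rounds, pairs every element with the neighbor obtained by flipping bit formula_12 of its number. The function names are invented for the example, and integer addition stands in for the associative operation formula_16.

```python
# Illustrative sketch, not from the article: a sequential simulation of the
# hypercube prefix-sum pattern for p = 2**d processing elements, using integer
# addition as the associative operation.  Variable names follow the pseudo code
# above (x holds the prefix sum, sigma the sum over the current sub cube).

def hypercube_prefix_sum(values):
    p = len(values)
    d = p.bit_length() - 1
    assert p == 1 << d, "the number of processing elements must be a power of two"
    x = list(values)          # x[i]     : prefix sum accumulated at element i
    sigma = list(values)      # sigma[i] : sum over i's current sub cube
    for k in range(d):
        received = [sigma[i ^ (1 << k)] for i in range(p)]  # simultaneous exchange
        for i in range(p):
            m = received[i]
            sigma[i] = sigma[i] + m
            if i & (1 << k):              # bit k set: the neighbor has lower rank
                x[i] = x[i] + m
    return x

values = [3, 1, 4, 1, 5, 9, 2, 6]          # one message per processing element
print(hypercube_prefix_sum(values))        # [3, 4, 8, 9, 14, 23, 25, 31]
```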
All-gather / all-reduce. All-gather operations start with each processing element having a message formula_14. The goal of the operation is for each processing element to know the messages of all other processing elements, i.e. formula_30 where formula_31 is concatenation. The operation can be implemented following the algorithm template. Input: message formula_17 at processing unit formula_13. Output: all messages formula_32. formula_17 for formula_33 do formula_7 Send formula_24 to formula_9 Receive formula_34 from formula_9 formula_35 endfor With each iteration, the transferred message doubles in length. This leads to a runtime of formula_36. The same principle can be applied to the All-Reduce operation, but instead of concatenating the messages, it performs a reduction operation on the two messages. The result is thus a Reduce operation in which every processing unit knows the result. Compared to a normal reduce operation followed by a broadcast, All-Reduce in hypercubes reduces the number of communication steps. All-to-all. Here every processing element has a unique message for all other processing elements. Input: message formula_37 at processing element formula_13 to processing element formula_38. for formula_39 do Receive from processing element formula_11: all messages for my formula_12-dimensional sub cube Send to processing element formula_11: all messages for its formula_12-dimensional sub cube endfor With each iteration, a message comes closer to its destination by one dimension, if it has not arrived yet. Hence, all messages have reached their target after at most formula_40 steps. In every step, formula_41 messages are sent: in the first iteration, half of the messages are not meant for the sender's own sub cube. In every following step, the sub cube is only half the size it was before, but in the previous step exactly the same number of messages arrived from another processing element. This results in a run-time of formula_42. ESBT-broadcast. The ESBT-broadcast (Edge-disjoint Spanning Binomial Tree) algorithm is a pipelined broadcast algorithm with optimal runtime for clusters with hypercube network topology. The algorithm embeds formula_0 edge-disjoint binomial trees in the hypercube, such that each neighbor of processing element formula_2 is the root of a spanning binomial tree on formula_3 nodes. To broadcast a message, the source node splits its message into formula_12 chunks of equal size and cyclically sends them to the roots of the binomial trees. Upon receiving a chunk, the binomial trees broadcast it. Runtime. In each step, the source node sends one of its formula_12 chunks to a binomial tree. Broadcasting the chunk within the binomial tree takes formula_0 steps. Thus, it takes formula_12 steps to distribute all chunks and additionally formula_0 steps until the last binomial tree broadcast has finished, resulting in formula_43 steps overall. Therefore, the runtime for a message of length formula_44 is formula_45. With the optimal chunk size formula_46, the optimal runtime of the algorithm is formula_47. Construction of the binomial trees. This section describes how to construct the binomial trees systematically. First, construct a single binomial spanning tree on formula_1 nodes as follows. Number the nodes from formula_2 to formula_3 and consider their binary representation. Then the children of each node are obtained by negating single leading zeroes. This results in a single binomial spanning tree. To obtain formula_0 edge-disjoint copies of the tree, translate and rotate the nodes: for the formula_12-th copy of the tree, apply an XOR operation with formula_48 to each node. Subsequently, right-rotate all nodes by formula_12 digits. The resulting binomial trees are edge-disjoint and therefore fulfill the requirements for the ESBT-broadcasting algorithm. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "2^d" }, { "math_id": 2, "text": "0" }, { "math_id": 3, "text": "2^d - 1" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": "s := m" }, { "math_id": 6, "text": "0 \\leq k < d" }, { "math_id": 7, "text": "y := i \\text{ XOR } 2^k" }, { "math_id": 8, "text": "s" }, { "math_id": 9, "text": "y" }, { "math_id": 10, "text": "(s, m)" }, { "math_id": 11, "text": "i \\text{ XOR } 2^k" }, { "math_id": 12, "text": "k" }, { "math_id": 13, "text": "i" }, { "math_id": 14, "text": "m_i" }, { "math_id": 15, "text": "\\bigoplus_{0 \\le j \\le i} m_j" }, { "math_id": 16, "text": "\\oplus" }, { "math_id": 17, "text": "x := m_i" }, { "math_id": 18, "text": "\\sigma := m_i" }, { "math_id": 19, "text": "0 \\le k \\le d - 1" }, { "math_id": 20, "text": "\\sigma" }, { "math_id": 21, "text": "\\sigma := \\sigma \\oplus m" }, { "math_id": 22, "text": "x := x \\oplus m" }, { "math_id": 23, "text": "d - 1" }, { "math_id": 24, "text": "x" }, { "math_id": 25, "text": "\\log p" }, { "math_id": 26, "text": "T_\\text{start}" }, { "math_id": 27, "text": "n\\log p" }, { "math_id": 28, "text": "T_\\text{byte}" }, { "math_id": 29, "text": "T(n,p) = (T_\\text{start} + nT_\\text{byte})\\log p" }, { "math_id": 30, "text": "x := m_0 \\cdot m_1 \\dots m_p" }, { "math_id": 31, "text": "\\cdot" }, { "math_id": 32, "text": "m_1 \\cdot m_2 \\dots m_p" }, { "math_id": 33, "text": "0 \\le k < d" }, { "math_id": 34, "text": "x'" }, { "math_id": 35, "text": "x := x \\cdot x'" }, { "math_id": 36, "text": "T(n,p) \\approx \\sum_{j=0}^{d-1}(T_\\text{start} + n \\cdot 2^jT_\\text{byte})= \\log(p) T_\\text{start} + (p-1)nT_\\text{byte}" }, { "math_id": 37, "text": "m_{ij}" }, { "math_id": 38, "text": "j" }, { "math_id": 39, "text": "d > k \\geq 0" }, { "math_id": 40, "text": "d = \\log{p}" }, { "math_id": 41, "text": "p / 2" }, { "math_id": 42, "text": "T(n,p) \\approx \\log{p} (T_\\text{start} + \\frac{p}{2}nT_\\text{byte})" }, { "math_id": 43, "text": "k + d" }, { "math_id": 44, "text": "n" }, { "math_id": 45, "text": "T(n, p, k) = \\left(\\frac{n}{k} T_\\text{byte} + T_\\text{start} \\right) (k + d)" }, { "math_id": 46, "text": "k^* = \\sqrt{\\frac{nd \\cdot T_\\text{byte}}{T_\\text{start}}}" }, { "math_id": 47, "text": "T^*(n, p) = n \\cdot T_\\text{byte} + \\log(p) \\cdot T_\\text{start} + \\sqrt{n \\log(p) \\cdot T_\\text{start} \\cdot T_\\text{byte}}" }, { "math_id": 48, "text": "2^k" } ]
https://en.wikipedia.org/wiki?curid=60376830
603780
Coherent sheaf
Generalization of vector bundles In mathematics, especially in algebraic geometry and the theory of complex manifolds, coherent sheaves are a class of sheaves closely linked to the geometric properties of the underlying space. The definition of coherent sheaves is made with reference to a sheaf of rings that codifies this geometric information. Coherent sheaves can be seen as a generalization of vector bundles. Unlike vector bundles, they form an abelian category, and so they are closed under operations such as taking kernels, images, and cokernels. The quasi-coherent sheaves are a generalization of coherent sheaves and include the locally free sheaves of infinite rank. Coherent sheaf cohomology is a powerful technique, in particular for studying the sections of a given coherent sheaf. Definitions. A quasi-coherent sheaf on a ringed space formula_0 is a sheaf formula_1 of formula_2-modules that has a local presentation, that is, every point in formula_3 has an open neighborhood formula_4 in which there is an exact sequence formula_5 for some (possibly infinite) sets formula_6 and formula_7. A coherent sheaf on a ringed space formula_0 is a sheaf formula_1 of formula_2-modules satisfying the following two properties: first, formula_1 is of finite type over formula_2, that is, every point in formula_3 has an open neighborhood formula_4 such that there is a surjective morphism formula_8 for some natural number formula_9; second, for every open set formula_10, every natural number formula_9, and every morphism formula_11 of formula_2-modules, the kernel of formula_12 is of finite type. Morphisms between (quasi-)coherent sheaves are the same as morphisms of sheaves of formula_2-modules. The case of schemes. When formula_3 is a scheme, the general definitions above are equivalent to more explicit ones. A sheaf formula_1 of formula_2-modules is quasi-coherent if and only if over each open affine subscheme formula_13 the restriction formula_14 is isomorphic to the sheaf formula_15 associated to the module formula_16 over formula_17. When formula_3 is a locally Noetherian scheme, formula_1 is coherent if and only if it is quasi-coherent and the modules formula_18 above can be taken to be finitely generated. On an affine scheme formula_19, there is an equivalence of categories from formula_17-modules to quasi-coherent sheaves, taking a module formula_18 to the associated sheaf formula_15. The inverse equivalence takes a quasi-coherent sheaf formula_1 on formula_4 to the formula_17-module formula_20 of global sections of formula_1. Here are several further characterizations of quasi-coherent sheaves on a scheme. Properties. On an arbitrary ringed space, quasi-coherent sheaves do not necessarily form an abelian category. On the other hand, the quasi-coherent sheaves on any scheme form an abelian category, and they are extremely useful in that context. On any ringed space formula_3, the coherent sheaves form an abelian category, a full subcategory of the category of formula_2-modules. (Analogously, the category of coherent modules over any ring formula_17 is a full abelian subcategory of the category of all formula_17-modules.) So the kernel, image, and cokernel of any map of coherent sheaves are coherent. The direct sum of two coherent sheaves is coherent; more generally, an formula_2-module that is an extension of two coherent sheaves is coherent. A submodule of a coherent sheaf is coherent if it is of finite type. A coherent sheaf is always an formula_2-module of "finite presentation", meaning that each point formula_25 in formula_3 has an open neighborhood formula_4 such that the restriction formula_14 of formula_1 to formula_4 is isomorphic to the cokernel of a morphism formula_26 for some natural numbers formula_9 and formula_27. If formula_2 is coherent, then, conversely, every sheaf of finite presentation over formula_2 is coherent.
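For instance, a basic affine illustration of these notions (a standard example, with the specific module chosen here for concreteness): over the ring $A = k[x]$, the finitely presented module $M = k[x]/(x)$ gives the coherent sheaf $\tilde M$ on $\operatorname{Spec} k[x]$, a skyscraper sheaf supported at the origin, with presentation
$$k[x] \xrightarrow{\ \cdot x\ } k[x] \longrightarrow M \longrightarrow 0,$$
so that $\tilde M$ is the cokernel of the corresponding morphism of structure sheaves; it is coherent but not locally free, since its fiber is one-dimensional at the origin and zero at every other point.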
The sheaf of rings formula_2 is called coherent if it is coherent considered as a sheaf of modules over itself. In particular, the Oka coherence theorem states that the sheaf of holomorphic functions on a complex analytic space formula_3 is a coherent sheaf of rings. The main part of the proof is the case formula_28. Likewise, on a locally Noetherian scheme formula_3, the structure sheaf formula_2 is a coherent sheaf of rings. Vector bundles in this sheaf-theoretic sense over a scheme formula_3 are equivalent to vector bundles defined in a more geometric way, as a scheme formula_30 with a morphism formula_31 and with a covering of formula_3 by open sets formula_23 with given isomorphisms formula_32 over formula_23 such that the two isomorphisms over an intersection formula_33 differ by a linear automorphism. (The analogous equivalence also holds for complex analytic spaces.) For example, given a vector bundle formula_30 in this geometric sense, the corresponding sheaf formula_1 is defined by: over an open set formula_4 of formula_3, the formula_22-module formula_20 is the set of sections of the morphism formula_34. The sheaf-theoretic interpretation of vector bundles has the advantage that vector bundles (on a locally Noetherian scheme) are included in the abelian category of coherent sheaves. Not every coherent sheaf is a vector bundle: for example, on formula_51 the cokernel formula_53 in the presentation formula_52 is coherent but not locally free; this is because formula_53 restricted to the vanishing locus of the two polynomials has two-dimensional fibers, and has one-dimensional fibers elsewhere. If formula_54 is a closed subscheme of a locally Noetherian scheme formula_3 with inclusion formula_58, then the ideal sheaf formula_55 and the pushforward formula_57 of the structure sheaf formula_56 are coherent sheaves on formula_3, and they fit into the exact sequence formula_60 A non-example is given by the extension by zero formula_64 along the open immersion formula_65. Since this sheaf has non-trivial stalks, but zero global sections, it cannot be a quasi-coherent sheaf. This is because quasi-coherent sheaves on an affine scheme are equivalent to the category of modules over the underlying ring, and the adjunction comes from taking global sections. Functoriality. Let formula_66 be a morphism of ringed spaces (for example, a morphism of schemes). If formula_1 is a quasi-coherent sheaf on formula_67, then the inverse image formula_2-module (or pullback) formula_68 is quasi-coherent on formula_3. For a morphism of schemes formula_66 and a coherent sheaf formula_1 on formula_67, the pullback formula_68 is not coherent in full generality (for example, formula_69, which might not be coherent), but pullbacks of coherent sheaves are coherent if formula_3 is locally Noetherian. An important special case is the pullback of a vector bundle, which is a vector bundle. If formula_66 is a quasi-compact quasi-separated morphism of schemes and formula_1 is a quasi-coherent sheaf on formula_3, then the direct image sheaf (or pushforward) formula_70 is quasi-coherent on formula_67. The direct image of a coherent sheaf is often not coherent. For example, for a field formula_71, let formula_3 be the affine line over formula_71, and consider the morphism formula_72; then the direct image formula_73 is the sheaf on formula_74 associated to the polynomial ring formula_75, which is not coherent because formula_75 has infinite dimension as a formula_71-vector space. On the other hand, the direct image of a coherent sheaf under a proper morphism is coherent, by results of Grauert and Grothendieck. Local behavior of coherent sheaves. An important feature of coherent sheaves formula_1 is that the properties of formula_1 at a point formula_25 control the behavior of formula_1 in a neighborhood of formula_25, more than would be true for an arbitrary sheaf.
For example, Nakayama's lemma says (in geometric language) that if formula_1 is a coherent sheaf on a scheme formula_3, then the fiber formula_76 of formula_77 at a point formula_25 (a vector space over the residue field formula_78) is zero if and only if the sheaf formula_1 is zero on some open neighborhood of formula_25. A related fact is that the dimension of the fibers of a coherent sheaf is upper-semicontinuous. Thus a coherent sheaf has constant rank on an open set, while the rank can jump up on a lower-dimensional closed subset. In the same spirit: a coherent sheaf formula_1 on a scheme formula_3 is a vector bundle if and only if its stalk formula_79 is a free module over the local ring formula_80 for every point formula_25 in formula_3. On a general scheme, one cannot determine whether a coherent sheaf is a vector bundle just from its fibers (as opposed to its stalks). On a reduced locally Noetherian scheme, however, a coherent sheaf is a vector bundle if and only if its rank is locally constant. Examples of vector bundles. For a morphism of schemes formula_81, let formula_82 be the diagonal morphism, which is a closed immersion if formula_3 is separated over formula_67. Let formula_83 be the ideal sheaf of formula_3 in formula_84. Then the sheaf of differentials formula_85 can be defined as the pullback formula_86 of formula_83 to formula_3. Sections of this sheaf are called 1-forms on formula_3 over formula_67, and they can be written locally on formula_3 as finite sums formula_87 for regular functions formula_88 and formula_89. If formula_3 is locally of finite type over a field formula_71, then formula_90 is a coherent sheaf on formula_3. If formula_3 is smooth over formula_71, then formula_91 (meaning formula_90) is a vector bundle over formula_3, called the cotangent bundle of formula_3. Then the tangent bundle formula_92 is defined to be the dual bundle formula_93. For formula_3 smooth over formula_71 of dimension formula_9 everywhere, the tangent bundle has rank formula_9. If formula_67 is a smooth closed subscheme of a smooth scheme formula_3 over formula_71, then there is a short exact sequence of vector bundles on formula_67: formula_94 which can be used as a definition of the normal bundle formula_95 to formula_67 in formula_3. For a smooth scheme formula_3 over a field formula_71 and a natural number formula_96, the vector bundle formula_97 of "i"-forms on formula_3 is defined as the formula_96-th exterior power of the cotangent bundle, formula_98. For a smooth variety formula_3 of dimension formula_9 over formula_71, the canonical bundle formula_99 means the line bundle formula_100. Thus sections of the canonical bundle are algebro-geometric analogs of volume forms on formula_3. For example, a section of the canonical bundle of affine space formula_101 over formula_71 can be written as formula_102 where formula_24 is a polynomial with coefficients in formula_71. Let formula_36 be a commutative ring and formula_9 a natural number. For each integer formula_103, there is an important example of a line bundle on projective space formula_104 over formula_36, called formula_105. To define this, consider the morphism of formula_36-schemes formula_106 given in coordinates by formula_107. (That is, thinking of projective space as the space of 1-dimensional linear subspaces of affine space, send a nonzero point in affine space to the line that it spans.) 
Then a section of formula_105 over an open subset formula_4 of formula_104 is defined to be a regular function formula_24 on formula_108 that is homogeneous of degree formula_103, meaning that formula_109 as regular functions on (formula_110. For all integers formula_96 and formula_103, there is an isomorphism formula_111 of line bundles on formula_104. In particular, every homogeneous polynomial in formula_112 of degree formula_103 over formula_36 can be viewed as a global section of formula_105 over formula_104. Note that every closed subscheme of projective space can be defined as the zero set of some collection of homogeneous polynomials, hence as the zero set of some sections of the line bundles formula_105. This contrasts with the simpler case of affine space, where a closed subscheme is simply the zero set of some collection of regular functions. The regular functions on projective space formula_104 over formula_36 are just the "constants" (the ring formula_36), and so it is essential to work with the line bundles formula_105. Serre gave an algebraic description of all coherent sheaves on projective space, more subtle than what happens for affine space. Namely, let formula_36 be a Noetherian ring (for example, a field), and consider the polynomial ring formula_113 as a graded ring with each formula_114 having degree 1. Then every finitely generated graded formula_115-module formula_18 has an associated coherent sheaf formula_21 on formula_104 over formula_36. Every coherent sheaf on formula_104 arises in this way from a finitely generated graded formula_115-module formula_18. (For example, the line bundle formula_105 is the sheaf associated to the formula_115-module formula_115 with its grading lowered by formula_103.) But the formula_115-module formula_18 that yields a given coherent sheaf on formula_104 is not unique; it is only unique up to changing formula_18 by graded modules that are nonzero in only finitely many degrees. More precisely, the abelian category of coherent sheaves on formula_104 is the quotient of the category of finitely generated graded formula_115-modules by the Serre subcategory of modules that are nonzero in only finitely many degrees. The tangent bundle of projective space formula_104 over a field formula_71 can be described in terms of the line bundle formula_116. Namely, there is a short exact sequence, the Euler sequence: formula_117 It follows that the canonical bundle formula_118 (the dual of the determinant line bundle of the tangent bundle) is isomorphic to formula_119. This is a fundamental calculation for algebraic geometry. For example, the fact that the canonical bundle is a negative multiple of the ample line bundle formula_116 means that projective space is a Fano variety. Over the complex numbers, this means that projective space has a Kähler metric with positive Ricci curvature. Vector bundles on a hypersurface. Consider a smooth degree-formula_120 hypersurface formula_121 defined by the homogeneous polynomial formula_24 of degree formula_120. Then, there is an exact sequence formula_122 where the second map is the pullback of differential forms, and the first map sends formula_123 Note that this sequence tells us that formula_124 is the conormal sheaf of formula_3 in formula_104. Dualizing this yields the exact sequence formula_125 hence formula_126 is the normal bundle of formula_3 in formula_104. 
Using the fact that, given an exact sequence formula_127 of vector bundles with ranks formula_128, formula_129, formula_130, there is an isomorphism formula_131 of line bundles, we obtain the isomorphism formula_132, showing that formula_133. Serre construction and vector bundles. One useful technique for constructing rank 2 vector bundles is the Serre construction, which establishes a correspondence between rank 2 vector bundles formula_53 on a smooth projective variety formula_3 and codimension 2 subvarieties formula_67 using a certain formula_134-group calculated on formula_3. This is given by a cohomological condition on the line bundle formula_135 (see below). The correspondence in one direction is given as follows: for a section formula_136 we can associate the vanishing locus formula_137. If formula_138 is a codimension 2 subvariety, then it is locally cut out by two equations, and the adjunction formula identifies formula_145 with formula_144. In the other direction, for a codimension 2 subvariety formula_146 and a line bundle formula_147 such that formula_148 and formula_149, there is a canonical isomorphism formula_150, which is functorial with respect to inclusion of codimension formula_151 subvarieties. Moreover, any isomorphism given on the left corresponds to a locally free sheaf in the middle of the extension on the right. That is, for formula_152 that is an isomorphism, there is a corresponding locally free sheaf formula_53 of rank 2 that fits into a short exact sequence formula_153 This vector bundle can then be further studied using cohomological invariants to determine if it is stable or not. This forms the basis for studying moduli of stable vector bundles in many specific cases, such as on principally polarized abelian varieties and K3 surfaces. Chern classes and algebraic "K"-theory. A vector bundle formula_30 on a smooth variety formula_3 over a field has Chern classes in the Chow ring of formula_3, formula_154 in formula_155 for formula_156. These satisfy the same formal properties as Chern classes in topology. For example, for any short exact sequence formula_157 of vector bundles on formula_3, the Chern classes of formula_158 are given by formula_159 It follows that the Chern classes of a vector bundle formula_30 depend only on the class of formula_30 in the Grothendieck group formula_160. By definition, for a scheme formula_3, formula_160 is the quotient of the free abelian group on the set of isomorphism classes of vector bundles on formula_3 by the relation that formula_161 for any short exact sequence as above. Although formula_160 is hard to compute in general, algebraic K-theory provides many tools for studying it, including a sequence of related groups formula_162 for integers formula_163. A variant is the group formula_164 (or formula_165), the Grothendieck group of coherent sheaves on formula_3. (In topological terms, "G"-theory has the formal properties of a Borel–Moore homology theory for schemes, while "K"-theory is the corresponding cohomology theory.) The natural homomorphism formula_166 is an isomorphism if formula_3 is a regular separated Noetherian scheme, using that every coherent sheaf has a finite resolution by vector bundles in that case. For example, that gives a definition of the Chern classes of a coherent sheaf on a smooth variety over a field. More generally, a Noetherian scheme formula_3 is said to have the resolution property if every coherent sheaf on formula_3 has a surjection from some vector bundle on formula_3. For example, every quasi-projective scheme over a Noetherian ring has the resolution property. Applications of resolution property.
Since the resolution property states that a coherent sheaf formula_167 on a Noetherian scheme is quasi-isomorphic in the derived category to a complex of vector bundles formula_168, we can compute the total Chern class of formula_167 as formula_169. For example, this formula is useful for finding the Chern classes of the sheaf representing a subscheme of formula_3. If we take the projective scheme formula_54 associated to the ideal formula_170, then formula_171 since there is the resolution formula_172 over formula_173. Bundle homomorphism vs. sheaf homomorphism. When vector bundles and locally free sheaves of finite constant rank are used interchangeably, care must be taken to distinguish between bundle homomorphisms and sheaf homomorphisms. Specifically, given vector bundles formula_174, by definition, a bundle homomorphism formula_175 is a scheme morphism over formula_3 (i.e., formula_176) such that, for each geometric point formula_25 in formula_3, formula_177 is a linear map of rank independent of formula_25. Thus, it induces the sheaf homomorphism formula_178 of constant rank between the corresponding locally free formula_2-modules (sheaves of dual sections). But there may be formula_2-module homomorphisms that do not arise this way, namely those not having constant rank. In particular, a subbundle formula_179 is a subsheaf (i.e., formula_167 is a subsheaf of formula_1). But the converse can fail; for example, for an effective Cartier divisor formula_180 on formula_3, formula_181 is a subsheaf but typically not a subbundle (since any line bundle has only two subbundles). The category of quasi-coherent sheaves. The quasi-coherent sheaves on any fixed scheme form an abelian category. Gabber showed that, in fact, the quasi-coherent sheaves on any scheme form a particularly well-behaved abelian category, a Grothendieck category. A quasi-compact quasi-separated scheme formula_3 (such as an algebraic variety over a field) is determined up to isomorphism by the abelian category of quasi-coherent sheaves on formula_3, by Rosenberg, generalizing a result of Gabriel. Coherent cohomology. The fundamental technical tool in algebraic geometry is the cohomology theory of coherent sheaves. Although it was introduced only in the 1950s, many earlier techniques of algebraic geometry are clarified by the language of sheaf cohomology applied to coherent sheaves. Broadly speaking, coherent sheaf cohomology can be viewed as a tool for producing functions with specified properties; sections of line bundles or of more general sheaves can be viewed as generalized functions. In complex analytic geometry, coherent sheaf cohomology also plays a foundational role. Among the core results of coherent sheaf cohomology are results on finite-dimensionality of cohomology, results on the vanishing of cohomology in various cases, duality theorems such as Serre duality, relations between topology and algebraic geometry such as Hodge theory, and formulas for Euler characteristics of coherent sheaves such as the Riemann–Roch theorem. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
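As one standard instance of these finiteness and duality statements (a classical computation recorded here for illustration, with $k$ an arbitrary base field): on projective space the cohomology of the line bundles formula_105 is completely known. For $j \ge 0$,
$$\dim_k H^0(\mathbb P^n_k, \mathcal O(j)) = \binom{n+j}{n}, \qquad H^i(\mathbb P^n_k, \mathcal O(j)) = 0 \quad \text{for } 0 < i < n,$$
the global sections being exactly the homogeneous polynomials of degree $j$ described earlier, and Serre duality identifies $H^n(\mathbb P^n_k, \mathcal O(j))$ with the dual of $H^0(\mathbb P^n_k, \mathcal O(-j-n-1))$, using the canonical bundle $\mathcal O(-n-1)$.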
[ { "math_id": 0, "text": "(X, \\mathcal O_X)" }, { "math_id": 1, "text": "\\mathcal F" }, { "math_id": 2, "text": "\\mathcal O_X" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "U" }, { "math_id": 5, "text": "\\mathcal{O}_X^{\\oplus I}|_{U} \\to \\mathcal{O}_X^{\\oplus J}|_{U} \\to \\mathcal{F}|_{U} \\to 0" }, { "math_id": 6, "text": "I" }, { "math_id": 7, "text": "J" }, { "math_id": 8, "text": "\\mathcal{O}_X^n|_{U} \\to \\mathcal{F}|_{U} " }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "U\\subseteq X" }, { "math_id": 11, "text": "\\varphi: \\mathcal{O}_X^n|_{U} \\to \\mathcal{F}|_{U} " }, { "math_id": 12, "text": "\\varphi" }, { "math_id": 13, "text": "U=\\operatorname{Spec} A" }, { "math_id": 14, "text": "\\mathcal F|_U" }, { "math_id": 15, "text": "\\tilde{M}" }, { "math_id": 16, "text": "M=\\Gamma(U, \\mathcal F)" }, { "math_id": 17, "text": "A" }, { "math_id": 18, "text": "M" }, { "math_id": 19, "text": "U = \\operatorname{Spec} A" }, { "math_id": 20, "text": "\\mathcal F(U)" }, { "math_id": 21, "text": "\\tilde M" }, { "math_id": 22, "text": "\\mathcal O(U)" }, { "math_id": 23, "text": "U_\\alpha" }, { "math_id": 24, "text": "f" }, { "math_id": 25, "text": "x" }, { "math_id": 26, "text": "\\mathcal O_X^n|_U \\to \\mathcal O_X^m|_U" }, { "math_id": 27, "text": "m" }, { "math_id": 28, "text": "X = \\mathbf C^n" }, { "math_id": 29, "text": "\\mathcal O_X|_U" }, { "math_id": 30, "text": "E" }, { "math_id": 31, "text": "\\pi: E\\to X" }, { "math_id": 32, "text": "\\pi^{-1}(U_\\alpha) \\cong \\mathbb A^n \\times U_\\alpha" }, { "math_id": 33, "text": "U_\\alpha \\cap U_\\beta" }, { "math_id": 34, "text": "\\pi^{-1}(U) \\to U" }, { "math_id": 35, "text": "X = \\operatorname{Spec}(R)" }, { "math_id": 36, "text": "R" }, { "math_id": 37, "text": "X = \\operatorname{Proj}(R)" }, { "math_id": 38, "text": "\\N" }, { "math_id": 39, "text": "R_0" }, { "math_id": 40, "text": "\\Z" }, { "math_id": 41, "text": "\\mathcal F|_{\\{ f \\ne 0 \\}}" }, { "math_id": 42, "text": "R[f^{-1}]_0" }, { "math_id": 43, "text": "M[f^{-1}]_0" }, { "math_id": 44, "text": "\\{f \\ne 0 \\} = \\operatorname{Spec} R[f^{-1}]_0" }, { "math_id": 45, "text": "R(n)" }, { "math_id": 46, "text": "R(n)_l =R_{n+l}" }, { "math_id": 47, "text": "\\mathcal O_X(n)" }, { "math_id": 48, "text": "R_1" }, { "math_id": 49, "text": "\\mathcal O_X(1)" }, { "math_id": 50, "text": "\\mathcal O_{\\mathbb{P}^n}(-1)" }, { "math_id": 51, "text": "\\mathbb{P}^2" }, { "math_id": 52, "text": "\\mathcal{O}(1) \\xrightarrow{\\cdot (x^2-yz,y^3 + xy^2 - xyz)} \\mathcal{O}(3)\\oplus \\mathcal{O}(4) \\to \\mathcal{E} \\to 0" }, { "math_id": 53, "text": "\\mathcal{E}" }, { "math_id": 54, "text": "Z" }, { "math_id": 55, "text": "\\mathcal I_{Z/X}" }, { "math_id": 56, "text": "\\mathcal O_Z" }, { "math_id": 57, "text": "i_*\\mathcal O_Z" }, { "math_id": 58, "text": "i: Z \\to X" }, { "math_id": 59, "text": "X-Z" }, { "math_id": 60, "text": "0\\to \\mathcal I_{Z/X} \\to \\mathcal O_X \\to i_*\\mathcal O_Z \\to 0." 
}, { "math_id": 61, "text": "\\mathcal G" }, { "math_id": 62, "text": "\\mathcal F \\otimes_{\\mathcal O_X}\\mathcal G" }, { "math_id": 63, "text": "\\mathcal Hom_{\\mathcal O_X}(\\mathcal F, \\mathcal G)" }, { "math_id": 64, "text": "i_!\\mathcal{O}_X" }, { "math_id": 65, "text": "X = \\operatorname{Spec}(\\Complex[x,x^{-1}]) \\xrightarrow{i} \\operatorname{Spec}(\\Complex[x])=Y" }, { "math_id": 66, "text": "f: X\\to Y" }, { "math_id": 67, "text": "Y" }, { "math_id": 68, "text": "f^*\\mathcal F" }, { "math_id": 69, "text": "f^*\\mathcal O_Y = \\mathcal O_X" }, { "math_id": 70, "text": "f_*\\mathcal F" }, { "math_id": 71, "text": "k" }, { "math_id": 72, "text": "f: X\\to \\operatorname{Spec}(k)" }, { "math_id": 73, "text": "f_*\\mathcal O_X" }, { "math_id": 74, "text": "\\operatorname{Spec}(k)" }, { "math_id": 75, "text": "k[x]" }, { "math_id": 76, "text": "\\mathcal F_x\\otimes_{\\mathcal O_{X,x}} k(x)" }, { "math_id": 77, "text": " F" }, { "math_id": 78, "text": "k(x)" }, { "math_id": 79, "text": "\\mathcal F_x" }, { "math_id": 80, "text": "\\mathcal O_{X,x}" }, { "math_id": 81, "text": "X\\to Y" }, { "math_id": 82, "text": "\\Delta: X\\to X\\times_Y X" }, { "math_id": 83, "text": "\\mathcal I" }, { "math_id": 84, "text": "X\\times_Y X" }, { "math_id": 85, "text": "\\Omega^1_{X/Y}" }, { "math_id": 86, "text": "\\Delta^*\\mathcal I" }, { "math_id": 87, "text": "\\textstyle\\sum f_j\\, dg_j" }, { "math_id": 88, "text": "f_j" }, { "math_id": 89, "text": "g_j" }, { "math_id": 90, "text": "\\Omega^1_{X/k}" }, { "math_id": 91, "text": "\\Omega^1" }, { "math_id": 92, "text": "TX" }, { "math_id": 93, "text": "(\\Omega^1)^*" }, { "math_id": 94, "text": "0\\to TY \\to TX|_Y \\to N_{Y/X}\\to 0," }, { "math_id": 95, "text": "N_{Y/X}" }, { "math_id": 96, "text": "i" }, { "math_id": 97, "text": "\\Omega^i" }, { "math_id": 98, "text": "\\Omega^i = \\Lambda^i \\Omega^1" }, { "math_id": 99, "text": "K_X" }, { "math_id": 100, "text": "\\Omega^n" }, { "math_id": 101, "text": "\\mathbb A^n" }, { "math_id": 102, "text": "f(x_1,\\ldots,x_n) \\; dx_1 \\wedge\\cdots\\wedge dx_n," }, { "math_id": 103, "text": "j" }, { "math_id": 104, "text": "\\mathbb P^n" }, { "math_id": 105, "text": "\\mathcal O(j)" }, { "math_id": 106, "text": "\\pi: \\mathbb A^{n+1}-0\\to \\mathbb P^n" }, { "math_id": 107, "text": "(x_0,\\ldots,x_n) \\mapsto [x_0,\\ldots,x_n]" }, { "math_id": 108, "text": "\\pi^{-1}(U)" }, { "math_id": 109, "text": "f(ax)=a^jf(x)" }, { "math_id": 110, "text": "\\mathbb A^{1} - 0) \\times \\pi^{-1}(U)" }, { "math_id": 111, "text": "\\mathcal O(i) \\otimes \\mathcal O(j) \\cong \\mathcal O(i+j)" }, { "math_id": 112, "text": "x_0,\\ldots,x_n" }, { "math_id": 113, "text": "S = R[x_0,\\ldots,x_n]" }, { "math_id": 114, "text": "x_i" }, { "math_id": 115, "text": "S" }, { "math_id": 116, "text": "\\mathcal O(1)" }, { "math_id": 117, "text": " 0\\to \\mathcal O_{\\mathbb P^n}\\to \\mathcal O(1)^{\\oplus \\; n+1}\\to T\\mathbb P^n\\to 0." 
}, { "math_id": 118, "text": "K_{\\mathbb P^n}" }, { "math_id": 119, "text": "\\mathcal O(-n-1)" }, { "math_id": 120, "text": "d" }, { "math_id": 121, "text": "X \\subseteq \\mathbb{P}^n" }, { "math_id": 122, "text": "0 \\to \\mathcal O_X(-d) \\to i^*\\Omega_{\\mathbb{P}^n} \\to \\Omega_X \\to 0 " }, { "math_id": 123, "text": " \\phi \\mapsto d(f\\cdot \\phi)" }, { "math_id": 124, "text": "\\mathcal O(-d)" }, { "math_id": 125, "text": " 0 \\to T_X \\to i^*T_{\\mathbb{P}^n} \\to \\mathcal O(d) \\to 0" }, { "math_id": 126, "text": "\\mathcal O(d)" }, { "math_id": 127, "text": "0 \\to \\mathcal E_1 \\to \\mathcal E_2 \\to \\mathcal E_3 \\to 0" }, { "math_id": 128, "text": "r_1" }, { "math_id": 129, "text": "r_2" }, { "math_id": 130, "text": "r_3" }, { "math_id": 131, "text": "\\Lambda^{r_2}\\mathcal E_2 \\cong \\Lambda^{r_1}\\mathcal E_1\\otimes \\Lambda^{r_3}\\mathcal E_3" }, { "math_id": 132, "text": "i^*\\omega_{\\mathbb P^n} \\cong \\omega_X\\otimes \\mathcal O_X(-d)" }, { "math_id": 133, "text": "\\omega_X \\cong \\mathcal O_X(d - n -1)" }, { "math_id": 134, "text": "\\text{Ext}^1" }, { "math_id": 135, "text": "\\wedge^2\\mathcal{E}" }, { "math_id": 136, "text": "s \\in \\Gamma(X,\\mathcal{E})" }, { "math_id": 137, "text": "V(s) \\subseteq X" }, { "math_id": 138, "text": "V(s)" }, { "math_id": 139, "text": "U_i \\subseteq X" }, { "math_id": 140, "text": "s|_{U_i} \\in \\Gamma(U_i,\\mathcal{E})" }, { "math_id": 141, "text": "s_i:U_i \\to \\mathbb{A}^2" }, { "math_id": 142, "text": "s_i(p) = (s_i^1(p), s_i^2(p))" }, { "math_id": 143, "text": "V(s)\\cap U_i = V(s_i^1,s_i^2)" }, { "math_id": 144, "text": "\\omega_X\\otimes \\wedge^2\\mathcal{E}|_{V(s)}" }, { "math_id": 145, "text": "\\omega_{V(s)}" }, { "math_id": 146, "text": "Y \\subseteq X" }, { "math_id": 147, "text": "\\mathcal{L} \\to X" }, { "math_id": 148, "text": "H^1(X,\\mathcal{L}) = H^2(X,\\mathcal{L}) = 0" }, { "math_id": 149, "text": "\\omega_Y \\cong (\\omega_X\\otimes\\mathcal{L})|_Y" }, { "math_id": 150, "text": "\\text{Hom}((\\omega_X\\otimes\\mathcal{L})|_Y,\\omega_Y) \\cong \\text{Ext}^1(\\mathcal{I}_Y\\otimes\\mathcal{L}, \\mathcal{O}_X)" }, { "math_id": 151, "text": "2" }, { "math_id": 152, "text": "s \\in \\text{Hom}((\\omega_X\\otimes\\mathcal{L})|_Y,\\omega_Y)" }, { "math_id": 153, "text": "0 \\to \\mathcal{O}_X \\to \\mathcal{E} \\to \\mathcal{I}_Y\\otimes\\mathcal{L} \\to 0" }, { "math_id": 154, "text": "c_i(E)" }, { "math_id": 155, "text": "CH^i(X)" }, { "math_id": 156, "text": "i\\geq 0" }, { "math_id": 157, "text": "0\\to A \\to B \\to C \\to 0" }, { "math_id": 158, "text": "B" }, { "math_id": 159, "text": "c_i(B) = c_i(A)+c_1(A)c_{i-1}(C)+\\cdots+c_{i-1}(A)c_1(C)+c_i(C)." 
}, { "math_id": 160, "text": "K_0(X)" }, { "math_id": 161, "text": "[B] = [A] + [C]" }, { "math_id": 162, "text": "K_i(X)" }, { "math_id": 163, "text": "i>0" }, { "math_id": 164, "text": "G_0(X)" }, { "math_id": 165, "text": "K_0'(X)" }, { "math_id": 166, "text": "K_0(X)\\to G_0(X)" }, { "math_id": 167, "text": "\\mathcal E" }, { "math_id": 168, "text": "\\mathcal E_k \\to \\cdots \\to \\mathcal E_1 \\to \\mathcal E_0" }, { "math_id": 169, "text": "c(\\mathcal E) = c(\\mathcal E_0)c(\\mathcal E_1)^{-1} \\cdots c(\\mathcal E_k)^{(-1)^k}" }, { "math_id": 170, "text": "(xy,xz) \\subseteq \\mathbb C[x,y,z,w]" }, { "math_id": 171, "text": "c(\\mathcal O_Z) = \\frac{c(\\mathcal O)c(\\mathcal O(-3))}{c(\\mathcal O(-2)\\oplus \\mathcal O(-2))}" }, { "math_id": 172, "text": "0 \\to \\mathcal O(-3) \\to \\mathcal O(-2)\\oplus\\mathcal O(-2) \\to \\mathcal O \\to \\mathcal O_Z \\to 0" }, { "math_id": 173, "text": "\\mathbb{CP}^3" }, { "math_id": 174, "text": "p: E \\to X, \\, q: F \\to X" }, { "math_id": 175, "text": "\\varphi: E \\to F" }, { "math_id": 176, "text": "p = q \\circ \\varphi" }, { "math_id": 177, "text": "\\varphi_x: p^{-1}(x) \\to q^{-1}(x)" }, { "math_id": 178, "text": "\\widetilde{\\varphi}: \\mathcal E \\to \\mathcal F" }, { "math_id": 179, "text": "E \\subseteq F" }, { "math_id": 180, "text": "D" }, { "math_id": 181, "text": "\\mathcal O_X(-D) \\subseteq \\mathcal O_X" } ]
https://en.wikipedia.org/wiki?curid=603780
603782
Proper morphism
In algebraic geometry, a proper morphism between schemes is an analog of a proper map between complex analytic spaces. Some authors call a proper variety over a field "k" a complete variety. For example, every projective variety over a field "k" is proper over "k". A scheme "X" of finite type over the complex numbers (for example, a variety) is proper over C if and only if the space "X"(C) of complex points with the classical (Euclidean) topology is compact and Hausdorff. A closed immersion is proper. A morphism is finite if and only if it is proper and quasi-finite. Definition. A morphism "f": "X" → "Y" of schemes is called universally closed if for every scheme "Z" with a morphism "Z" → "Y", the projection from the fiber product formula_0 is a closed map of the underlying topological spaces. A morphism of schemes is called proper if it is separated, of finite type, and universally closed ([EGA] II, 5.4.1 ). One also says that "X" is proper over "Y". In particular, a variety "X" over a field "k" is said to be proper over "k" if the morphism "X" → Spec("k") is proper. Examples. For any natural number "n", projective space P"n" over a commutative ring "R" is proper over "R". Projective morphisms are proper, but not all proper morphisms are projective. For example, there is a smooth proper complex variety of dimension 3 which is not projective over C. Affine varieties of positive dimension over a field "k" are never proper over "k". More generally, a proper affine morphism of schemes must be finite. For example, it is not hard to see that the affine line "A"1 over a field "k" is not proper over "k", because the morphism "A"1 → Spec("k") is not universally closed. Indeed, the pulled-back morphism formula_1 (given by ("x","y") ↦ "y") is not closed, because the image of the closed subset "xy" = 1 in "A"1 × "A"1 = "A"2 is "A"1 − 0, which is not closed in "A"1. Properties and characterizations of proper morphisms. In the following, let "f": "X" → "Y" be a morphism of schemes. Valuative criterion of properness. There is a very intuitive criterion for properness which goes back to Chevalley. It is commonly called the valuative criterion of properness. Let "f": "X" → "Y" be a morphism of finite type of noetherian schemes. Then "f" is proper if and only if for all discrete valuation rings "R" with fraction field "K" and for any "K"-valued point "x" ∈ "X"("K") that maps to a point "f"("x") that is defined over "R", there is a unique lift of "x" to formula_7. (EGA II, 7.3.8). More generally, a quasi-separated morphism "f": "X" → "Y" of finite type (note: finite type includes quasi-compact) of 'any' schemes "X", "Y" is proper if and only if for all valuation rings "R" with fraction field "K" and for any "K"-valued point "x" ∈ "X"("K") that maps to a point "f"("x") that is defined over "R", there is a unique lift of "x" to formula_7. (Stacks project Tags 01KF and 01KY). Noting that "Spec K" is the generic point of "Spec R" and discrete valuation rings are precisely the regular local one-dimensional rings, one may rephrase the criterion: given a regular curve on "Y" (corresponding to the morphism "s": Spec "R" → "Y") and given a lift of the generic point of this curve to "X", "f" is proper if and only if there is exactly one way to complete the curve. Similarly, "f" is separated if and only if in every such diagram, there is at most one lift formula_7. For example, given the valuative criterion, it becomes easy to check that projective space P"n" is proper over a field (or even over Z). 
One simply observes that for a discrete valuation ring "R" with fraction field "K", every "K"-point ["x"0, ..., "x""n"] of projective space comes from an "R"-point, by scaling the coordinates so that all lie in "R" and at least one is a unit in "R" (this scaling is written out explicitly below). Geometric interpretation with disks. One of the motivating examples for the valuative criterion of properness is the interpretation of formula_8 as an infinitesimal disk, or complex-analytically, as the disk formula_9. This comes from the fact that every power series formula_10 converges in some disk of radius formula_11 around the origin. Using a change of coordinates, this can be expressed as a power series on the unit disk. If we then invert formula_12, we obtain the ring formula_13, whose elements are power series that may have a pole at the origin. This is represented topologically as the open disk formula_14 with the origin removed. For a morphism of schemes over formula_15, this is given by the commutative diagram formula_16 Then the valuative criterion for properness amounts to filling in the point formula_17 in the image of formula_18. Example. It is instructive to look at a counter-example to see why the valuative criterion of properness should hold on spaces analogous to closed compact manifolds. If we take formula_19 and formula_20, then a morphism formula_21 factors through an affine chart of formula_22, reducing the diagram to formula_23 where formula_24 is the chart centered around formula_25 on formula_22. This gives the commutative diagram of commutative algebras formula_26 Then a lifting of the diagram of schemes, formula_27, would imply that there is a morphism formula_28 sending formula_29, coming from the commutative diagram of algebras. This, of course, cannot happen. Therefore formula_22 is not proper over formula_30. Geometric interpretation with curves. There is another similar example of the valuative criterion of properness which captures some of the intuition for why this theorem should hold. Consider a curve formula_31 and the complement of a point formula_32. Then the valuative criterion for properness would read as a diagram formula_33 with a lifting of formula_34. Geometrically this means every curve in the scheme formula_22 can be completed to a compact curve. This intuition aligns with the scheme-theoretic interpretation of a morphism of topological spaces with compact fibers: a sequence in one of the fibers must converge. Because this geometric situation is a local problem, the diagram is replaced by looking at the local ring formula_35, which is a DVR, and its fraction field formula_36. The lifting problem then gives the commutative diagram formula_37 where the scheme formula_38 represents a local disk around formula_39 with the closed point formula_39 removed. Proper morphism of formal schemes. Let formula_40 be a morphism between locally noetherian formal schemes. We say "f" is proper or formula_41 is proper over formula_42 if (i) "f" is an adic morphism (i.e., maps the ideal of definition to the ideal of definition) and (ii) the induced map formula_43 is proper, where formula_44 and "K" is the ideal of definition of formula_42. The definition is independent of the choice of "K". For example, if "g": "Y" → "Z" is a proper morphism of locally noetherian schemes, "Z"0 is a closed subset of "Z", and "Y"0 is a closed subset of "Y" such that "g"("Y"0) ⊂ "Z"0, then the morphism formula_45 on formal completions is a proper morphism of formal schemes. Grothendieck proved the coherence theorem in this setting.
Namely, let formula_40 be a proper morphism of locally noetherian formal schemes. If "F" is a coherent sheaf on formula_41, then the higher direct images formula_6 are coherent. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
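To spell out the scaling argument for projective space in the lowest-dimensional case (a routine verification, with the notation chosen here for concreteness): let "R" be a discrete valuation ring with uniformizer $t$, valuation $v$ and fraction field "K", and let $[x_0 : x_1]$ be a "K"-point of $\mathbb P^1$ with $x_0, x_1 \in K$ not both zero. Setting $N = \min(v(x_0), v(x_1))$,
$$[x_0 : x_1] = [t^{-N} x_0 : t^{-N} x_1],$$
and the rescaled coordinates lie in "R" with at least one of them a unit, so the "K"-point extends to an "R"-point of $\mathbb P^1$; since projective space is separated, this extension is unique, as the valuative criterion requires.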
[ { "math_id": 0, "text": "X \\times_Y Z \\to Z" }, { "math_id": 1, "text": "\\mathbb{A}^1 \\times_k \\mathbb{A}^1 \\to \\mathbb{A}^1" }, { "math_id": 2, "text": "f\\colon X \\to S" }, { "math_id": 3, "text": "F" }, { "math_id": 4, "text": "\\mathcal{O}_X" }, { "math_id": 5, "text": "i \\ge 0" }, { "math_id": 6, "text": "R^i f_* F" }, { "math_id": 7, "text": "\\overline{x} \\in X(R)" }, { "math_id": 8, "text": "\\text{Spec}(\\mathbb{C}[[t]])" }, { "math_id": 9, "text": "\\Delta = \\{x \\in \\mathbb{C} : |x| < 1 \\}" }, { "math_id": 10, "text": "f(t) = \\sum_{n=0}^\\infty a_nt^n" }, { "math_id": 11, "text": "r" }, { "math_id": 12, "text": "t" }, { "math_id": 13, "text": "\\mathbb{C}[[t]][t^{-1}] = \\mathbb{C}((t))" }, { "math_id": 14, "text": "\\Delta^* = \\{x \\in \\mathbb{C} : 0<|x| < 1 \\}" }, { "math_id": 15, "text": "\\text{Spec}(\\mathbb{C})" }, { "math_id": 16, "text": "\\begin{matrix}\n\\Delta^* & \\to & X \\\\\n\\downarrow & & \\downarrow \\\\\n\\Delta & \\to & Y\n\\end{matrix}" }, { "math_id": 17, "text": "0 \\in \\Delta" }, { "math_id": 18, "text": "\\Delta^*" }, { "math_id": 19, "text": "X = \\mathbb{P}^1 - \\{x \\}" }, { "math_id": 20, "text": "Y = \\text{Spec}(\\mathbb{C})" }, { "math_id": 21, "text": "\\text{Spec}(\\mathbb{C}((t))) \\to X" }, { "math_id": 22, "text": "X" }, { "math_id": 23, "text": "\\begin{matrix}\n\\text{Spec}(\\mathbb{C}((t))) & \\to & \\text{Spec}(\\mathbb{C}[t,t^{-1}]) \\\\\n\\downarrow & & \\downarrow \\\\\n\\text{Spec}(\\mathbb{C}[[t]]) & \\to & \\text{Spec}(\\mathbb{C})\n\\end{matrix}" }, { "math_id": 24, "text": "\\text{Spec}(\\mathbb{C}[t,t^{-1}]) = \\mathbb{A}^1 - \\{0\\}" }, { "math_id": 25, "text": "\\{x \\}" }, { "math_id": 26, "text": "\\begin{matrix}\n\\mathbb{C}((t)) & \\leftarrow & \\mathbb{C}[t,t^{-1}] \\\\\n\\uparrow & & \\uparrow \\\\\n\\mathbb{C}[[t]] & \\leftarrow & \\mathbb{C}\n\\end{matrix}" }, { "math_id": 27, "text": "\\text{Spec}(\\mathbb{C}[[t]]) \\to \\text{Spec}(\\mathbb{C}[t,t^{-1}])" }, { "math_id": 28, "text": "\\mathbb{C}[t,t^{-1}] \\to \\mathbb{C}[[t]]" }, { "math_id": 29, "text": "t \\mapsto t" }, { "math_id": 30, "text": "Y" }, { "math_id": 31, "text": "C" }, { "math_id": 32, "text": "C-\\{p\\}" }, { "math_id": 33, "text": "\\begin{matrix}\nC-\\{p\\} & \\rightarrow & X \\\\\n\\downarrow & & \\downarrow \\\\\nC & \\rightarrow & Y\n\\end{matrix}" }, { "math_id": 34, "text": "C \\to X" }, { "math_id": 35, "text": "\\mathcal{O}_{C,\\mathfrak{p}}" }, { "math_id": 36, "text": "\\text{Frac}(\\mathcal{O}_{C,\\mathfrak{p}})" }, { "math_id": 37, "text": "\\begin{matrix}\n\\text{Spec}(\\text{Frac}(\\mathcal{O}_{C,\\mathfrak{p}})\n) & \\rightarrow & X \\\\\n\\downarrow & & \\downarrow \\\\\n\\text{Spec}(\\mathcal{O}_{C,\\mathfrak{p}}\n) & \\rightarrow & Y\n\\end{matrix}" }, { "math_id": 38, "text": "\\text{Spec}(\\text{Frac}(\\mathcal{O}_{C,\\mathfrak{p}}))" }, { "math_id": 39, "text": "\\mathfrak{p}" }, { "math_id": 40, "text": "f\\colon \\mathfrak{X} \\to \\mathfrak{S}" }, { "math_id": 41, "text": "\\mathfrak{X}" }, { "math_id": 42, "text": "\\mathfrak{S}" }, { "math_id": 43, "text": "f_0\\colon X_0 \\to S_0" }, { "math_id": 44, "text": "X_0 = (\\mathfrak{X}, \\mathcal{O}_\\mathfrak{X}/I), S_0 = (\\mathfrak{S}, \\mathcal{O}_\\mathfrak{S}/K), I = f^*(K) \\mathcal{O}_\\mathfrak{X}" }, { "math_id": 45, "text": "\\widehat{g}\\colon Y_{/Y_0} \\to Z_{/Z_0}" } ]
https://en.wikipedia.org/wiki?curid=603782
60378307
Broadcast (parallel pattern)
Broadcast is a collective communication primitive in parallel programming to distribute programming instructions or data to nodes in a cluster. It is the reverse operation of reduction. The broadcast operation is widely used in parallel algorithms, such as matrix-vector multiplication, Gaussian elimination and shortest paths. The Message Passing Interface implements broadcast in codice_0. Definition. A message formula_0 of length formula_1 should be distributed from one node to all other formula_2 nodes. formula_3 is the time it takes to send one byte. formula_4 is the time it takes for a message to travel to another node, independent of its length. Therefore, the time to send a package from one node to another is formula_5. formula_6 is the number of nodes and the number of processors. Binomial Tree Broadcast. With Binomial Tree Broadcast the whole message is sent at once. Each node that has already received the message sends it on further. The number of sending nodes doubles in each time step, so the spread grows exponentially. The algorithm is ideal for short messages but falls short for longer ones, since during the first transfer only one node is busy. Sending a message to all nodes takes formula_7 time, which results in a runtime of formula_8 Message M id := node number p := number of nodes if id > 0 blocking_receive M for (i := ceil(log_2(p)) - 1; i >= 0; i--) if (id % 2^(i+1) == 0 && id + 2^i < p) send M to node id + 2^i Linear Pipeline Broadcast. The message is split up into formula_9 packages and sent piecewise from node formula_10 to node formula_11. The time needed to distribute the first message piece is formula_12, where formula_13 is the time needed to send a package from one processor to another. Sending a whole message takes formula_14. The optimal choice is formula_15, resulting in a runtime of approximately formula_16 The runtime depends not only on the message length but also on the number of processors. This approach shines when the length of the message is much larger than the number of processors. Message M := [m_1, m_2, ..., m_n] id := node number for (i := 1; i <= n; i++) in parallel if (id != 0) blocking_receive m_i if (id != n) send m_i to node id + 1 Pipelined Binary Tree Broadcast. This algorithm combines Binomial Tree Broadcast and Linear Pipeline Broadcast, which makes the algorithm work well for both short and long messages. The aim is to have as many nodes work as possible while maintaining the ability to send short messages quickly. A good approach is to use Fibonacci trees for splitting up the tree; they are a good choice because a message cannot be sent to both children at the same time. This results in a binary tree structure. We will assume in the following that communication is full-duplex. The Fibonacci tree structure has a depth of about formula_17, where formula_18 is the golden ratio. The resulting runtime is formula_19. The optimal choice is formula_20. This results in a runtime of formula_21. Message M := [m_1, m_2, ..., m_k] for i = 1 to k if (hasParent()) blocking_receive m_i if (hasChild(left_child)) send m_i to left_child if (hasChild(right_child)) send m_i to right_child Two Tree Broadcast (23-Broadcast). Definition. This algorithm aims to improve on some disadvantages of tree structure models with pipelines. Normally in tree structure models with pipelines (see above methods), leaves only receive data and cannot contribute to sending and spreading data.
The algorithm concurrently uses two binary trees to communicate over. Those trees will be called tree A and tree B. Structurally, binary trees have relatively more leaf nodes than inner nodes. The basic idea of this algorithm is to make a leaf node of tree A an inner node of tree B, and vice versa from tree B to tree A. This means that two packets are sent and received by inner nodes and leaves in different steps. Tree construction. The number of steps needed to construct two parallel-working binary trees depends on the number of processors. As with other structures, one processor can be the root node that sends messages to both trees. It is not necessary to fix a root node, because it is not hard to recognize that messages in a binary tree are normally sent from top to bottom. There is no limitation on the number of processors to build two binary trees. Let the height of the combined tree be "h" = ⌈log("p" + 2)⌉. Tree A and B can have a height of formula_22. In particular, if the number of processors corresponds to formula_23, we can build both side trees and a root node. To construct this model efficiently and easily with a fully built tree, we can use two methods called "Shifting" and "Mirroring" to get the second tree. Assume that tree A has already been modeled and that tree B is to be constructed based on tree A. We assume that we have formula_24 processors ordered from 0 to formula_25. Shifting. The "Shifting" method first copies tree A and moves every node one position to the left to get tree B. The node that would be located at position -1 becomes a child of processor formula_26. Mirroring. "Mirroring" is ideal for an even number of processors. With this method tree B can be constructed more easily from tree A, because no structural transformations are needed to create the new tree. In addition, a symmetric process makes this approach simple. This method can also handle an odd number of processors; in this case, we can set processor formula_25 as the root node for both trees. For the remaining processors "Mirroring" can be used. Coloring. We need to find a schedule in order to make sure that no processor has to send or receive two messages from two trees in a step. An edge is a communication connection between two nodes and can be labelled either 0 or 1, so that every processor can alternate between 0-labelled and 1-labelled edges. The edges of A and B can be colored with two colors (0 and 1) such that no processor has to send or receive two messages in the same step. In every even step the edges labelled 0 are activated, and the edges labelled 1 are activated in every odd step. Time complexity. In this case the number of packets k is divided in half between the two trees. Both trees work together on the total number of packets formula_27 (upper tree + bottom tree). In each binary tree, sending a message to another node takes formula_28 steps, until a processor has at least one packet in step formula_29. Therefore, we can calculate all steps as formula_30. The resulting run time is formula_31. (The optimal choice is formula_32.) This results in a run time of formula_33. ESBT-Broadcasting (Edge-disjoint Spanning Binomial Trees). In this section, another broadcasting algorithm with an underlying telephone communication model will be introduced. A hypercube creates a network system with formula_34. Every node is represented by a string over formula_35 whose length depends on the number of dimensions.
Fundamentally, ESBT (Edge-disjoint Spanning Binomial Trees) is based on hypercube graphs, pipelining (the message formula_36 is divided into formula_37 packets) and binomial trees. The processor formula_38 cyclically distributes the packets to the roots of the ESBTs. The roots of the ESBTs broadcast the data through their binomial trees. For all formula_37 packets to leave formula_39, formula_37 steps are required, because all packets are distributed by formula_40. It takes another d steps until the last leaf node receives its packet. In total, formula_41 steps are necessary to broadcast the message formula_36 through the ESBTs. The resulting run time is formula_42. With the optimal number of packets formula_43, this results in a run time of formula_44.
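For concreteness, the following is a minimal sequential simulation of the binomial tree broadcast pseudo code above, written in plain Python rather than as message-passing code; the function name and the returned schedule format are illustrative choices. It only computes which (sender, receiver) pairs communicate in each step, so the doubling of concurrent transfers per step can be inspected directly.

from math import ceil, log2

# Compute the communication schedule of the binomial tree broadcast for p nodes
# numbered 0..p-1, with node 0 as the source (mirroring the pseudo code above).
def binomial_tree_schedule(p):
    steps = []
    has_msg = [False] * p
    has_msg[0] = True                      # node 0 starts with the message
    for i in range(ceil(log2(p)) - 1, -1, -1):
        sends = []
        for node in range(p):
            # a node forwards only after it has received the message
            if has_msg[node] and node % 2 ** (i + 1) == 0 and node + 2 ** i < p:
                sends.append((node, node + 2 ** i))
        for _, dst in sends:
            has_msg[dst] = True
        steps.append(sends)
    return steps

# Example with 8 nodes: three steps with 1, 2 and 4 concurrent transfers.
for step, sends in enumerate(binomial_tree_schedule(8)):
    print(step, sends)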
[ { "math_id": 0, "text": "M [1 .. m]" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "p-1" }, { "math_id": 3, "text": "T_\\text{byte}" }, { "math_id": 4, "text": "T_\\text{start}" }, { "math_id": 5, "text": "t = \\mathrm{size} \\times T_\\text{byte} + T_\\text{start}" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "\\log_2(p) t" }, { "math_id": 8, "text": "\\log_2(p) ( m T_\\text{byte} + T_\\text{start})" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "n+1" }, { "math_id": 12, "text": "p t = \\frac{m}{k} T_\\text{byte} + T_\\text{start}" }, { "math_id": 13, "text": "t" }, { "math_id": 14, "text": "( p + k )\\left( \\frac{mT_\\text{byte}}{k} + T_\\text{start} \\right) = (p + k) t = p t + k t" }, { "math_id": 15, "text": "k = \\sqrt{ \\frac{ m (p-2) T_\\text{byte} }{T_\\text{start} } }\n\n" }, { "math_id": 16, "text": "m T_\\text{byte} + p T_\\text{start} + \\sqrt{m p T_\\text{start} T_\\text{byte}}" }, { "math_id": 17, "text": "d \\approx \\log_\\Phi(p)" }, { "math_id": 18, "text": "\\Phi = \\frac{1+\\sqrt{5}}{2}" }, { "math_id": 19, "text": "(\\frac{m}{k}T_\\text{byte} + T_\\text{start})(d + 2k - 2)" }, { "math_id": 20, "text": "k = \\sqrt{\\frac{n (d-2) T_\\text{byte} }{ 3T_\\text{start}}}" }, { "math_id": 21, "text": "2mT_\\text{byte} + T_\\text{start} \\log_\\Phi(p) + \\sqrt{2m \\log_\\Phi(p) T_\\text{start} T_\\text{byte}}" }, { "math_id": 22, "text": " h -1 " }, { "math_id": 23, "text": " p = 2^h - 1 " }, { "math_id": 24, "text": " p " }, { "math_id": 25, "text": " p-1 " }, { "math_id": 26, "text": " p-2 " }, { "math_id": 27, "text": " k = k/2 + k/2 " }, { "math_id": 28, "text": " 2i " }, { "math_id": 29, "text": "i" }, { "math_id": 30, "text": " d := \\log_2 (p+1) \\Rightarrow \\log_2(p+1) \\approx \\log_2(p) " }, { "math_id": 31, "text": "T(m,p,k) \\approx (\\frac{m}{k}T_\\text{byte} + T_\\text{start})(2d + k - 1)" }, { "math_id": 32, "text": "k = \\sqrt{{m (2d-1) T_\\text{byte} }/{ T_\\text{start}}}" }, { "math_id": 33, "text": "T(m,p) \\approx mT_\\text{byte} + T_\\text{start} \\cdot 2\\log_2 (p) + \\sqrt{m \\cdot 2\\log_2 (p) T_\\text{start} T_\\text{byte}}" }, { "math_id": 34, "text": " p = 2^d (d = 0,1,2,3,...) " }, { "math_id": 35, "text": " {0,1} " }, { "math_id": 36, "text": " m " }, { "math_id": 37, "text": " k " }, { "math_id": 38, "text": " 0^d " }, { "math_id": 39, "text": "p_0" }, { "math_id": 40, "text": " p_0 " }, { "math_id": 41, "text": " d+ k " }, { "math_id": 42, "text": "T(m,p,k) = (\\frac{m}{k}T_\\text{byte} + T_\\text{start})(k+d)" }, { "math_id": 43, "text": " (k = \\sqrt{{m d T_\\text{byte} }/{ T_\\text{start}}})" }, { "math_id": 44, "text": "T(m,p) := mT_\\text{byte} + dT_\\text{start} + \\sqrt{mdT_\\text{start} T_\\text{byte}}" } ]
https://en.wikipedia.org/wiki?curid=60378307
60381359
Alternating timed automaton
In automata theory, an alternating timed automaton (ATA) is a mix of both timed automata and alternating finite automata. That is, it is a kind of automaton which can measure time and in which there exist universal and existential transitions. ATAs are more expressive than timed automata. A one clock alternating timed automaton (OCATA) is the restriction of an ATA allowing the use of a single clock. OCATAs allow expressing timed languages which cannot be expressed using timed automata. Definition. An alternating timed automaton is defined like a timed automaton, except that its transitions are more complex. Difference from a timed automaton. Given a set formula_0, let formula_1 be the set of positive Boolean combinations of elements of formula_0. That is, the set containing the elements of formula_0, and containing formula_2 and formula_3, for formula_4. For each letter formula_5 and location formula_6, let formula_7 be a set of clock constraints such that their zones partition formula_8, with formula_9 the number of clocks. Given a clock valuation formula_10, let formula_11 be the only clock constraint of formula_7 which is satisfied by formula_10. An alternating timed automaton contains a transition function, which associates to a 3-tuple formula_12, with formula_13, an element of formula_14. For example, formula_15 is an element of formula_14. Intuitively, it means that the run may either continue by moving to location formula_16 and resetting no clock, or by moving to location formula_17 in two copies of the run, one in which formula_18 is reset and one in which formula_19 is reset, both of which must be successful. Formal definition. Formally, an alternating timed automaton is a tuple formula_20 that consists of the following components: a finite alphabet, a finite set of locations formula_22, a set of initial locations formula_24, a finite set of clocks formula_23, a set of accepting locations formula_25, and a transition function formula_26. Any Boolean expression can be rewritten into an equivalent expression in disjunctive normal form. In the representation of an ATA, each disjunction is represented by a different arrow. Each conjunct of a disjunction is represented by a set of arrows with the same tail and multiple heads. The tail is labelled by the letter and each head is labelled by the set of clocks it resets. Run. We now define a run of an alternating timed automaton over a timed word formula_27. There are two equivalent ways to define a run, either as a tree or as a game. Run as a tree. In this definition of a run, a run is no longer a list of pairs, but a rooted tree. The nodes of the rooted tree are labelled by pairs consisting of a location and a clock valuation. The tree is defined as follows: The definition of an accepting run differs depending on whether the timed word is finite or infinite. If the timed word is finite, then the run is accepting if the label of each leaf contains an accepting location. If the timed word is infinite, then a run is accepting if each branch contains an infinite number of accepting locations. Run as a game. A run can also be defined as a two-player game formula_39. Let us call the two players "player" and "opponent". The goal of the player is to create an accepting run and the goal of the opponent is to create a rejecting (non-accepting) run. Each state of the game is a tuple composed of a location, a clock valuation, a position in the word, and potentially an element of formula_14. Intuitively, a tuple formula_40 means that the run has read formula_31 letters, is in location formula_6, with clock value formula_10, and that the transition will be as described by formula_41.
The run is defined as follows: The set of successive states starting in a state of the form formula_43 and ending just before the next such state is called a phase. The definition of an accepting run is the same as for timed automata. Subclass of ATA. One-clock alternating timed automaton. A one-clock alternating timed automaton (OCATA) is an alternating timed automaton using a single clock. The expressive powers of OCATAs and of timed automata are incomparable. For example, the language over the alphabet formula_51 such that there is never exactly one time unit between two letters cannot be recognized by a timed automaton. However, the OCATA pictured nearby accepts it. In this alternating timed automaton, two branches are started. One branch restarts the clock formula_18 and ensures that, each time a letter is emitted in the future, the value of the clock formula_18 is distinct from 1. This ensures that the time elapsed between this letter and the following ones is never one. The second branch only waits for other letters to be emitted and performs the same check. Purely-Universal and Purely-Existential ATA. An ATA is said to be purely-universal (respectively, purely-existential) if its transition function does not use disjunction (respectively, conjunction). Purely-existential ATAs are as expressive as non-deterministic timed automata. Closure. The class of languages accepted by ATAs and by OCATAs is closed under complement. The construction is explained for the case where there is a single initial location. Given an ATA formula_52 accepting a timed language formula_22, its complement language formula_53 is accepted by an automaton formula_54 which is essentially formula_55, where formula_56 is defined as formula_57 with disjunctions and conjunctions reversed, and formula_58 simulates the runs from each of the locations of formula_59 simultaneously. It follows that the classes of languages accepted by ATAs and by OCATAs are also closed under union and intersection. The union of two languages is obtained by taking disjoint copies of the automata accepting both languages. The intersection can be constructed from union and complementation. Complexity. The emptiness problem, the universality problem and the containment problem for OCATAs are decidable, albeit with nonelementary complexity. Those three problems are undecidable over ATAs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
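The complementation construction just described, reversing conjunctions and disjunctions and complementing the set of accepting locations, can be illustrated with a small program. The following sketch is only illustrative: it assumes a hypothetical encoding of positive Boolean combinations as nested tuples, which is not notation taken from the literature on ATAs.

def dual(formula):
    # formula is either an atom (location, clocks_to_reset), or a tuple
    # ('and', f1, f2) or ('or', f1, f2); this encoding is an assumption of the sketch
    if isinstance(formula, tuple) and formula[0] == 'and':
        return ('or', dual(formula[1]), dual(formula[2]))
    if isinstance(formula, tuple) and formula[0] == 'or':
        return ('and', dual(formula[1]), dual(formula[2]))
    return formula  # atoms are left unchanged

def complement(locations, accepting, delta):
    # delta maps (location, letter, clock constraint) to a positive Boolean combination
    new_accepting = set(locations) - set(accepting)
    new_delta = {key: dual(formula) for key, formula in delta.items()}
    return new_accepting, new_delta

For example, dual(('or', ('l1', frozenset()), ('and', ('l2', frozenset({'x'})), ('l2', frozenset({'y'}))))) turns the outer disjunction into a conjunction and the inner conjunction into a disjunction, as required by the construction.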
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\mathcal B^+(X)" }, { "math_id": 2, "text": "\\phi\\land\\psi" }, { "math_id": 3, "text": "\\phi\\lor\\psi" }, { "math_id": 4, "text": "\\phi,\\psi\\in\\mathcal B^+(X)" }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "\\ell" }, { "math_id": 7, "text": "\\mathcal B_{a,\\ell}" }, { "math_id": 8, "text": "\\mathbb R_{\\ge 0}^{|X|}" }, { "math_id": 9, "text": "|X|" }, { "math_id": 10, "text": "\\nu" }, { "math_id": 11, "text": "c(a,\\ell,\\nu)" }, { "math_id": 12, "text": "(\\ell,a,c)" }, { "math_id": 13, "text": "c\\in\\mathcal B_{a,\\ell}" }, { "math_id": 14, "text": "\\mathcal B^+(L\\times\\mathcal P(C))" }, { "math_id": 15, "text": "(\\ell_1,\\emptyset)\\lor((\\ell_2,\\{x\\})\\land(\\ell_2,\\{y\\}))" }, { "math_id": 16, "text": "\\ell_1" }, { "math_id": 17, "text": "\\ell_2" }, { "math_id": 18, "text": "x" }, { "math_id": 19, "text": "y" }, { "math_id": 20, "text": "\\mathcal A=\\langle \\Sigma,L,L_0,C,F,E\\rangle" }, { "math_id": 21, "text": "\\mathcal A" }, { "math_id": 22, "text": "L" }, { "math_id": 23, "text": "C" }, { "math_id": 24, "text": "L_0\\subseteq L" }, { "math_id": 25, "text": "F\\subseteq L" }, { "math_id": 26, "text": "E\\subseteq L\\times\\Sigma\\times\\mathcal B(C)\\to\\mathcal B^+(L\\times \\mathcal P(C))" }, { "math_id": 27, "text": "w=(\\sigma_1,t_1),(\\sigma_2,t_2),\\dots," }, { "math_id": 28, "text": "(\\ell_0,\\nu_0)" }, { "math_id": 29, "text": "\\ell_0\\in L_0" }, { "math_id": 30, "text": "n" }, { "math_id": 31, "text": "i" }, { "math_id": 32, "text": "(\\ell,\\nu)" }, { "math_id": 33, "text": "E(a_{i+1},\\ell,c(a,\\ell,\\nu))" }, { "math_id": 34, "text": "\\bigvee_{i=1}^n\\bigwedge_{j=1}^{m_i}(\\ell_{i,j}, r_{i,j})" }, { "math_id": 35, "text": "m_i" }, { "math_id": 36, "text": "1\\le i\\le n" }, { "math_id": 37, "text": "j" }, { "math_id": 38, "text": "(\\ell_{i,j},(\\nu+t_{i+1}-t_{i})[r_{i,j}\\to 0]" }, { "math_id": 39, "text": "G_{A,w}" }, { "math_id": 40, "text": "(\\ell,\\nu,i,b)" }, { "math_id": 41, "text": "b" }, { "math_id": 42, "text": "(\\ell_0,\\nu_0, 0)" }, { "math_id": 43, "text": "(\\ell,\\nu,i)" }, { "math_id": 44, "text": "(\\ell,\\nu,i,c(a_{i+1},\\ell,\\nu))" }, { "math_id": 45, "text": "(\\ell,\\nu,i,(\\ell',r))" }, { "math_id": 46, "text": "(\\ell',\\nu+t_{i}-t_{i-1}[r\\to0],i+1)" }, { "math_id": 47, "text": "(\\ell,\\nu,i,\\phi\\lor\\psi)" }, { "math_id": 48, "text": "(\\ell,\\nu,i,\\phi)" }, { "math_id": 49, "text": "(\\ell,\\nu,i,\\psi)" }, { "math_id": 50, "text": "(\\ell,\\nu,i,\\phi\\land\\psi)" }, { "math_id": 51, "text": "\\{a\\}" }, { "math_id": 52, "text": "\\mathcal A=\\langle \\Sigma,L,\\{q_0\\},C,F,E\\rangle" }, { "math_id": 53, "text": "L^c" }, { "math_id": 54, "text": "A^c" }, { "math_id": 55, "text": "\\langle \\Sigma,L,\\{q_0\\},C,L\\setminus F,E'\\rangle" }, { "math_id": 56, "text": "E'(\\ell,a,c)" }, { "math_id": 57, "text": "E(\\ell,a,c))" }, { "math_id": 58, "text": "E'(q_0,a,c)" }, { "math_id": 59, "text": "L_0" } ]
https://en.wikipedia.org/wiki?curid=60381359
60381369
All-to-all (parallel pattern)
Collective operation in parallel computing In parallel computing, all-to-all (also known as "index operation" or "total exchange") is a collective operation, where each processor sends an individual message to every other processor. Initially, each processor holds "p" messages of size m each, and the goal is to exchange the i-th message of processor j with the j-th message of processor i. The number of communication rounds and the overall communication volume are measures used to evaluate the quality of an all-to-all algorithm. We consider a single-ported full-duplex machine throughout this article. On such a machine, an all-to-all algorithm requires at least formula_0 communication rounds. Furthermore, a minimum of formula_1 units of data is transferred. The optimum for both of these measures cannot be achieved simultaneously. Depending on the network topology (fully connected, hypercube, ring), different all-to-all algorithms are required. All-to-all algorithms based on topology. We consider a single-ported machine. The way the data is routed through the network depends on its underlying topology. We take a look at all-to-all algorithms for common network topologies. Hypercube. A hypercube is a network topology where two processors share a link if the Hamming distance of their indices is one. The idea of an all-to-all algorithm is to combine messages belonging to the same subcube, and then distribute them. Ring. An all-to-all algorithm in a ring topology is very intuitive. Initially, a processor sends a message of size m(p-1) to one of its neighbors. Communication is performed in the same direction on all processors. When a processor receives a message, it extracts the part that belongs to it and forwards the remainder of the message to the next neighbor. After (p-1) communication rounds, every message is distributed to its destination. The time taken by this algorithm is formula_2. Here formula_3 is the startup cost for a communication, and formula_4 is the cost of transmitting a unit of data. This time can be further improved by sending half of the messages in one direction and the other half in the other direction. This way, messages arrive earlier at their destinations. Mesh. For a mesh, we look at a formula_5 mesh. The algorithm is easily adaptable to any mesh. An all-to-all algorithm in a mesh consists of two communication phases. First, each processor groups its messages into formula_6 groups, each containing formula_6 messages. Messages are in the same group if their destination processors share the same row. Next, an all-to-all operation among rows is performed. Each processor then holds all relevant information for the processors in its column. The messages then need to be rearranged again. After another all-to-all operation, this time with respect to columns, each processor ends up with its messages. The overall communication time of this algorithm is formula_7. Additionally, the time for the local rearrangement of messages adds to the overall runtime of the algorithm. 1-factor algorithm. Again, we consider a single-ported machine. A trivial algorithm is to have each processor send (p-1) asynchronous messages into the network. The performance of this algorithm is poor, which is due to congestion arising because of the bisection width of the network. More sophisticated algorithms combine messages to reduce the number of send operations and try to control congestion. For large messages, the cost of a startup is small compared to the cost of transmitting the payload. 
It is faster to send messages directly to their destination. In the following algorithm, an all-to-all exchange is performed using (p-1) one-to-one routings.

// p odd:
// PE index formula_8
for i := 0 to p-1 do
    exchange data with PE formula_9

// p even:
// PE index formula_8
for i := 0 to p-2 do
    idle := formula_10
    if j = p-1 then
        exchange data with PE idle
    else if j = idle then
        exchange data with PE p-1
    else
        exchange data with PE formula_11

The algorithm behaves differently depending on whether p is odd or even. If p is odd, one processor is idle in each iteration. If p is even, the processor that would otherwise be idle communicates with the processor with index p-1. The total time taken is formula_12 for an even p, and formula_13 for an odd p, respectively. Instead of pairing processor j with processor formula_14 in iteration i, we can also use the exclusive-or of j and i to determine the pairing. This approach requires p to be a power of two. Depending on the underlying topology of the network, one approach may be superior to the other. The exclusive-or approach is superior when performing pairwise one-to-one routings in a hypercube or fat-tree. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
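The pairing used by the 1-factor algorithm for an even number of processors can be checked with a few lines of code. The sketch below is only an illustration of the pairing rule from the pseudocode above; the function name and the test are choices made for this example.

def one_factor_partner(i, j, p):
    # p is assumed even; round i ranges over 0..p-2 and processor j over 0..p-1
    idle = (p // 2 * i) % (p - 1)
    if j == p - 1:
        return idle
    if j == idle:
        return p - 1
    return (i - j) % (p - 1)

p = 8
for i in range(p - 1):
    partners = [one_factor_partner(i, j, p) for j in range(p)]
    # in every round, the pairing is a perfect matching of the p processors
    assert all(partners[partners[j]] == j and partners[j] != j for j in range(p))

Running the check confirms that each round pairs every processor with exactly one partner, so the p-1 rounds together realize the (p-1) one-to-one routings mentioned above.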
[ { "math_id": 0, "text": "\\lceil \\log_2 n\\rceil" }, { "math_id": 1, "text": "\\left\\lceil m(n-1)\\right\\rceil" }, { "math_id": 2, "text": "(t_s+\\frac{1}{2}t_wmp)(p-1)" }, { "math_id": 3, "text": "t_s" }, { "math_id": 4, "text": "t_w" }, { "math_id": 5, "text": "\\sqrt{p}\\times\\sqrt{p}" }, { "math_id": 6, "text": "\\sqrt{p}" }, { "math_id": 7, "text": "(2t_s+t_wmp)(\\sqrt{p}-1)" }, { "math_id": 8, "text": "j\\in\\{0,\\dots,p-1\\}" }, { "math_id": 9, "text": "(i-j)\\,\\mod\\, p" }, { "math_id": 10, "text": "\\frac{p}{2}i\\,\\mod\\,(p-1)" }, { "math_id": 11, "text": "(i-j) mod (p-1)" }, { "math_id": 12, "text": "(p-1)(t_s+t_wm)" }, { "math_id": 13, "text": "p(t_s+t_wm)" }, { "math_id": 14, "text": "(i-j) \\bmod p" } ]
https://en.wikipedia.org/wiki?curid=60381369
6038197
Antiderivative (complex analysis)
Concept in complex analysis In complex analysis, a branch of mathematics, the antiderivative, or primitive, of a complex-valued function "g" is a function whose complex derivative is "g". More precisely, given an open set formula_0 in the complex plane and a function formula_1 the antiderivative of formula_2 is a function formula_3 that satisfies formula_4. As such, this concept is the complex-variable version of the antiderivative of a real-valued function. Uniqueness. The derivative of a constant function is the zero function. Therefore, any constant function is an antiderivative of the zero function. If formula_0 is a connected set, then the constant functions are the only antiderivatives of the zero function. Otherwise, a function is an antiderivative of the zero function if and only if it is constant on each connected component of formula_0 (those constants need not be equal). This observation implies that if a function formula_5 has an antiderivative, then that antiderivative is unique up to addition of a function which is constant on each connected component of formula_0. Existence. By Cauchy's integral formula, which shows that a differentiable function is in fact infinitely differentiable, a function formula_6 must itself be differentiable if it has an antiderivative formula_7, because if formula_8 then formula_7 is differentiable and so formula_9 exists. One can characterize the existence of antiderivatives via path integrals in the complex plane, much as in the case of functions of a real variable. Perhaps not surprisingly, "g" has an antiderivative "f" if and only if, for every path γ from "a" to "b", the path integral formula_10 Equivalently, formula_11 for any closed path γ. However, this formal similarity notwithstanding, possessing a complex antiderivative is a much more restrictive condition than its real counterpart. While it is possible for a discontinuous real function to have an antiderivative, antiderivatives can fail to exist even for "holomorphic" functions of a complex variable. For example, consider the reciprocal function, "g"("z") = 1/"z", which is holomorphic on the punctured plane C\{0}. A direct calculation shows that the integral of "g" along any circle enclosing the origin is non-zero. So "g" fails the condition cited above. This is similar to the existence of potential functions for conservative vector fields, in that Green's theorem is only able to guarantee path independence when the function in question is defined on a "simply connected" region, as in the case of the Cauchy integral theorem. In fact, holomorphy is characterized by having an antiderivative "locally", that is, "g" is holomorphic if for every "z" in its domain, there is some neighborhood "U" of "z" such that "g" has an antiderivative on "U". Furthermore, holomorphy is a necessary condition for a function to have an antiderivative, since the derivative of any holomorphic function is holomorphic. Various versions of the Cauchy integral theorem, an underpinning result of Cauchy function theory, which makes heavy use of path integrals, give sufficient conditions under which, for a holomorphic "g", formula_12 vanishes for any closed path γ (which may be, for instance, that the domain of "g" be simply connected or star-convex). Necessity. First we show that if "f" is an antiderivative of "g" on "U", then "g" has the path integral property given above. 
Given any piecewise "C"1 path γ : ["a", "b"] → "U", one can express the path integral of "g" over γ as formula_13 By the chain rule and the fundamental theorem of calculus one then has formula_14 Therefore, the integral of "g" over γ does "not" depend on the actual path γ, but only on its endpoints, which is what we wanted to show. Sufficiency. Next we show that if "g" is holomorphic, and the integral of "g" over any path depends only on the endpoints, then "g" has an antiderivative. We will do so by finding an anti-derivative explicitly. Without loss of generality, we can assume that the domain "U" of "g" is connected, as otherwise one can prove the existence of an antiderivative on each connected component. With this assumption, fix a point "z"0 in "U" and for any "z" in "U" define the function formula_15 where γ is any path joining "z"0 to "z". Such a path exists since "U" is assumed to be an open connected set. The function "f" is well-defined because the integral depends only on the endpoints of γ. That this "f" is an antiderivative of "g" can be argued in the same way as the real case. We have, for a given "z" in "U", that there must exist a disk centred on "z" and contained entirely within "U". Then for every "w" other than "z" within this disk formula_16 where ["z", "w"] denotes the line segment between "z" and "w". By continuity of "g", the final expression goes to zero as "w" approaches "z". In other words, "f′" = "g".
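The fact that 1/z has no antiderivative on the punctured plane, used in the example above, can be checked numerically: the integral of g(z) = 1/z over the unit circle is 2πi rather than 0. The short script below is only an illustration of this computation; the parametrisation and the number of steps are arbitrary choices.

import cmath

def circle_integral(g, n=100000):
    # approximate the path integral of g over the unit circle by a Riemann sum
    total = 0.0
    dt = 2 * cmath.pi / n
    for k in range(n):
        t = k * dt
        z = cmath.exp(1j * t)        # point on the unit circle
        dz = 1j * cmath.exp(1j * t)  # derivative of the parametrisation
        total += g(z) * dz * dt
    return total

print(circle_integral(lambda z: 1 / z))  # approximately 2*pi*i, not 0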
[ { "math_id": 0, "text": "U" }, { "math_id": 1, "text": "g:U\\to \\mathbb C," }, { "math_id": 2, "text": "g" }, { "math_id": 3, "text": "f:U\\to \\mathbb C" }, { "math_id": 4, "text": "\\frac{df}{dz}=g" }, { "math_id": 5, "text": "g:U\\to \\mathbb C" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "F" }, { "math_id": 8, "text": "F'=f" }, { "math_id": 9, "text": "F''=f'" }, { "math_id": 10, "text": " \\int_{\\gamma} g(\\zeta) \\, d \\zeta = f(b) - f(a)." }, { "math_id": 11, "text": " \\oint_{\\gamma} g(\\zeta) \\, d \\zeta = 0," }, { "math_id": 12, "text": " \\oint_{\\gamma} g(\\zeta) \\, d \\zeta" }, { "math_id": 13, "text": "\\int_\\gamma g(z)\\,dz=\\int_a^b g(\\gamma(t))\\gamma'(t)\\, dt=\\int_a^b f'(\\gamma(t))\\gamma'(t)\\,dt." }, { "math_id": 14, "text": "\\int_\\gamma g(z)\\,dz=\\int_a^b \\frac{d}{dt}f\\left(\\gamma(t)\\right)\\,dt=f\\left(\\gamma(b)\\right)-f\\left(\\gamma(a)\\right)." }, { "math_id": 15, "text": "f(z)=\\int_{\\gamma}\\! g(\\zeta)\\, d\\zeta" }, { "math_id": 16, "text": "\\begin{align} \n\\left| \\frac{f(w) - f(z)}{ w-z } - g(z) \\right|&= \\left| \\int_z^w \\frac{ g(\\zeta) \\,d\\zeta}{w -z} - \\int_z^w \\frac{ g(z) \\,d\\zeta}{w -z} \\right|\\\\\n&\\leq \\int_z^w \\frac{ | g(\\zeta) - g (z) |}{|w -z|} \\,d\\zeta \\\\\n&\\leq \\sup_{ \\zeta \\in [w, z]} | g(\\zeta) - g(z) |,\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=6038197
6038251
Shubnikov–de Haas effect
An oscillation in the conductivity of a material that occurs at low temperatures in the presence of very intense magnetic fields, the Shubnikov–de Haas effect (SdH) is a macroscopic manifestation of the inherent quantum mechanical nature of matter. It is often used to determine the effective mass of charge carriers (electrons and electron holes), allowing investigators to distinguish among majority and minority carrier populations. The effect is named after Wander Johannes de Haas and Lev Shubnikov. Physical process. At sufficiently low temperatures and high magnetic fields, the free electrons in the conduction band of a metal, semimetal, or narrow band gap semiconductor will behave like simple harmonic oscillators. When the magnetic field strength is changed, the oscillation period of the simple harmonic oscillators changes proportionally. The resulting energy spectrum is made up of Landau levels separated by the cyclotron energy. These Landau levels are further split by the Zeeman energy. In each Landau level the cyclotron and Zeeman energies and the number of electron states ("eB"/"h") all increase linearly with increasing magnetic field. Thus, as the magnetic field increases, the spin-split Landau levels move to higher energy. As each energy level passes through the Fermi energy, it depopulates as the electrons become free to flow as current. This causes the material's transport and thermodynamic properties to oscillate periodically, producing a measurable oscillation in the material's conductivity. Since the transition across the Fermi 'edge' spans a small range of energies, the waveform is square rather than sinusoidal, with the shape becoming ever more square as the temperature is lowered. Theory. Consider a two-dimensional quantum gas of electrons confined in a sample with given width and with edges. In the presence of a magnetic flux density "B", the energy eigenvalues of this system are described by Landau levels. As shown in Fig 1, these levels are equidistant along the vertical axis. Each energy level is substantially flat inside a sample (see Fig 1). At the edges of a sample, the work function bends levels upwards. Fig 1 shows the Fermi energy "E"F located in between two Landau levels. Electrons become mobile as their energy levels cross the Fermi energy "E"F. With the Fermi energy "E"F in between two Landau levels, scattering of electrons will occur only at the edges of a sample where the levels are bent. The corresponding electron states are commonly referred to as edge channels. The Landauer–Büttiker approach is used to describe transport of electrons in this particular sample. The Landauer–Büttiker approach allows calculation of net currents "Im" flowing between a number of contacts 1 ≤ "m" ≤ "n". In its simplified form, the net current "Im" of contact "m" with chemical potential "μm" reads where "e" denotes the electron charge, "h" denotes the Planck constant, and "i" stands for the number of edge channels. The matrix "Tml" denotes the probability of transmission of a negatively charged particle (i.e. of an electron) from a contact "l" ≠ "m" to another contact "m". The net current "Im" in relationship (1) is made up of the currents towards contact "m" and of the current transmitted from the contact "m" to all other contacts "l" ≠ "m". That current equals the voltage "μ""m" / "e" of contact "m" multiplied with the Hall conductivity of 2"e"2 / "h" per edge channel. Fig 2 shows a sample with four contacts. 
To drive a current through the sample, a voltage is applied between the contacts 1 and 4. A voltage is measured between the contacts 2 and 3. Suppose electrons leave the 1st contact, then are transmitted from contact 1 to contact 2, then from contact 2 to contact 3, then from contact 3 to contact 4, and finally from contact 4 back to contact 1. A negative charge (i.e. an electron) transmitted from contact 1 to contact 2 will result in a current from contact 2 to contact 1. An electron transmitted from contact 2 to contact 3 will result in a current from contact 3 to contact 2, etc. Suppose also that no electrons are transmitted along any further paths. The probabilities of transmission of ideal contacts then read formula_0 and formula_1 otherwise. With these probabilities, the currents "I"1 ... "I"4 through the four contacts and their chemical potentials "μ"1 ... "μ"4, equation (1) can be rewritten as formula_2 A voltage is measured between contacts 2 and 3. The voltage measurement should ideally not involve a flow of current through the meter, so "I"2 = "I"3 = 0. It follows that formula_3 formula_4 In other words, the chemical potentials "μ"2 and "μ"3 and their respective voltages "μ"2 / "e" and "μ"3 / "e" are the same. As a consequence of no drop of voltage between the contacts 2 and 3, the current "I"1 experiences zero resistivity "R"SdH in between contacts 2 and 3 formula_5 The result of zero resistivity between the contacts 2 and 3 is a consequence of the electrons being mobile only in the edge channels of the sample. The situation would be different if a Landau level came close to the Fermi energy "E"F. Any electrons in that level would become mobile as their energy approaches the Fermi energy "E"F. Consequently, scattering would lead to "R"SdH &gt; 0. In other words, the above approach yields zero resistivity whenever the Landau levels are positioned such that the Fermi energy "E"F is in between two levels. Applications. Shubnikov–De Haas oscillations can be used to determine the two-dimensional electron density of a sample. For a given magnetic flux formula_6, the maximum number "D" of electrons with spin "S" = 1/2 per Landau level is "D" = 2Φ/Φ0 (2). Upon insertion of the expressions for the flux quantum Φ0 = "h" / "e" and for the magnetic flux Φ = "BA", relationship (2) reads formula_7 Let "N" denote the maximum number of states per unit area, so "D" = "NA" and formula_8 Now let each Landau level correspond to an edge channel of the above sample. For a given number "i" of edge channels each filled with "N" electrons per unit area, the overall number "n" of electrons per unit area will read formula_9 The overall number "n" of electrons per unit area is commonly referred to as the electron density of a sample. No electrons disappear from the sample into the unknown, so the electron density "n" is constant. It follows that formula_10 formula_11 For a given sample, all factors including the electron density "n" on the right hand side of relationship (3) are constant. When plotting the index "i" of an edge channel versus the reciprocal of its magnetic flux density 1/"B""i", one obtains a straight line with slope 2"e"/("nh"). Since the electron charge "e" is known and also the Planck constant "h", one can derive the electron density "n" of a sample from this plot. Shubnikov–De Haas oscillations are observed in highly doped Bi2Se3. Fig 3 shows the reciprocal magnetic flux density 1/"B""i" of the 10th to 14th minima of a Bi2Se3 sample. 
The slope of 0.00618/T as obtained from a linear fit yields the electron density "n" formula_12 Shubnikov–de Haas oscillations can be used to map the Fermi surface of electrons in a sample, by determining the periods of oscillation for various applied field directions. Related physical process. The effect is related to the De Haas–Van Alphen effect, which is the name given to the corresponding oscillations in magnetization. The signature of each effect is a periodic waveform when plotted as a function of inverse magnetic field. The "frequency" of the magnetoresistance oscillations indicate areas of extremal orbits around the Fermi surface. The area of the Fermi surface is expressed in teslas. More accurately, the period in inverse Teslas is inversely proportional to the area of the extremal orbit of the Fermi surface in inverse m/cm. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
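Relationship (3) can be illustrated with a short numerical sketch. Assuming some electron density n, the script generates the reciprocal fields 1/"B"i at which the Fermi energy lies between two Landau levels and then recovers n from the slope of 1/"B"i versus i; the chosen density is an arbitrary illustrative value, not data from the measurement discussed above.

e = 1.602176634e-19  # electron charge in C
h = 6.62607015e-34   # Planck constant in J s

n = 5.0e15           # assumed electron density in 1/m^2 (illustrative value only)
channels = range(10, 15)
inv_B = [2 * e * i / (n * h) for i in channels]  # 1/B_i from relationship (3)

# slope of the straight line 1/B_i versus i, taken from its two end points
slope = (inv_B[-1] - inv_B[0]) / (channels[-1] - channels[0])
print(2 * e / (slope * h))  # recovers the assumed density n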
[ { "math_id": 0, "text": "T_{21} = T_{32} = T_{43} = T_{14} = 1," }, { "math_id": 1, "text": "T_{ml} = 0" }, { "math_id": 2, "text": "\\left(\\begin{matrix} I_1 \\\\ I_2 \\\\ I_3 \\\\ I_4 \\end{matrix}\\right)=\\frac{2e\\cdot i}{h}\\left(\\begin{matrix} 1 & 0 & 0 & -1 \\\\ -1 & 1 & 0 & 0 \\\\ 0 & -1 & 1 & 0 \\\\ 0 & 0 & -1 & 1 \\end{matrix}\\right)\\left(\\begin{matrix} \\mu_1 \\\\ \\mu_2 \\\\ \\mu_3 \\\\ \\mu_4 \\end{matrix}\\right)." }, { "math_id": 3, "text": "I_3 = 0 = \\frac{2e\\cdot i}{h} \\left(-\\mu_2 + \\mu_3\\right)," }, { "math_id": 4, "text": "\\mu_2 = \\mu_3." }, { "math_id": 5, "text": "R_{\\mathrm{SdH}} = \\frac{\\mu_2 - \\mu_3}{e\\cdot I_1}=0." }, { "math_id": 6, "text": "\\Phi" }, { "math_id": 7, "text": " D = 2 \\frac{e B A}{h}" }, { "math_id": 8, "text": " N = 2 \\frac{e B}{h}." }, { "math_id": 9, "text": " n = i N = 2 i \\frac{e B}{h}." }, { "math_id": 10, "text": " B_i = \\frac{n h}{2 e i}," }, { "math_id": 11, "text": " \\frac{1}{B_i} = \\frac{2 e i}{n h}," }, { "math_id": 12, "text": " n = \\frac{2 e}{0.00618/\\mathrm{T}\\cdot h} \\approx 7.82 \\times 10^{14}/\\mathrm{m}^2." } ]
https://en.wikipedia.org/wiki?curid=6038251
60388496
Channel system (computer science)
Finite-state machine with fifo buffers for memory In computer science, a channel system is a finite state machine similar to a communicating finite-state machine in which there is a single system communicating with itself instead of many systems communicating with each other. A channel system is similar to a pushdown automaton where a queue is used instead of a stack. Those queues are called channels. Intuitively, each channel represents a sequence of messages to be sent, and to be read in the order in which they are sent. Definition. Channel system. Formally, a channel system (or perfect channel system) formula_0 is defined as a tuple formula_1 with: Depending on the author, a channel system may have no initial state and may have an empty alphabet. Configuration. A configuration or global state of the channel system is a formula_11-tuple belonging to formula_12. Intuitively, a configuration formula_13 represents that a run is in state formula_14 and that its formula_15-th channel contains the word formula_16. The initial configuration is formula_17, with formula_18 the empty word. Step. Intuitively, a transition formula_19 means that the system may go from control state formula_14 to formula_20 by appending formula_21 to the end of the channel formula_22. Similarly, formula_23 means that the system may go from control state formula_14 to formula_20 by removing the formula_21 at the start of the content of channel formula_22. Formally, given a configuration formula_24 and a transition formula_25, there is a perfect step formula_26, where the step appends formula_21 to the end of the formula_15-th word. Similarly, given a transition formula_27, there is a perfect step formula_28, where the formula_15-th word starts with formula_21, which is removed during the step. Run. A perfect run is a sequence of perfect steps, of the form formula_29. We let formula_30 denote that there is a perfect run starting at formula_31 and ending at formula_32. Languages. Given a perfect or a lossy channel system formula_0, multiple languages may be defined. A word over formula_33 is accepted by formula_0 if it is the concatenation of the labels of a run of formula_0. The language defined by formula_0 is the set of words accepted by formula_0. The set of reachable configurations of formula_0, denoted formula_34, is defined as the set of configurations reachable from the initial state, i.e. as the set of configurations formula_32 such that formula_35. Given a channel formula_36, the channel of formula_36 is the set of tuples formula_37 such that formula_38. Channel system and Turing machine. Most problems related to perfect channel systems are undecidable92. This is due to the fact that such a machine may simulate the run of a Turing machine. This simulation is now sketched. Given a Turing machine formula_39, there exists a perfect channel system formula_0 such that any run of formula_39 of length formula_40 can be simulated by a run of formula_0 of length formula_41. Intuitively, this simulation consists simply in keeping the entire tape of the simulated Turing machine in a channel. The content of the channel is then entirely read and immediately rewritten into the channel, with one exception: the part of the content representing the head of the Turing machine is changed, in order to simulate a step of the Turing machine computation. Variants. Multiple variants of channel systems have been introduced. 
The two variants introduced below do not allow a Turing machine to be simulated, and thus multiple problems of interest become decidable over them. One channel machine. A one-channel machine is a channel system using a single channel. The same definition also applies to all variants of channel systems. Counter machine. When the alphabet of a channel system contains a single message, then each channel is essentially a counter. It follows that those systems are essentially Minsky machines. We call such systems counter machines. This same definition applies to all variants of channel systems. Completely specified protocol. A completely specified protocol (CSP) is defined exactly as a channel system. However, the notions of step and of run are defined differently. A CSP admits two kinds of steps: perfect steps, as defined above, and message loss transition steps. We denote a message loss transition step by formula_42. Looseness. A lossy channel system, or machine capable of lossiness error, is an extension of completely specified protocols in which letters may disappear anywhere. A lossy channel system admits two kinds of steps: perfect steps, as defined above, and lossy steps. We denote a lossy step by formula_43. A run in which channels are emptied as soon as messages are sent into them is a valid run according to this definition. For this reason, some fairness conditions may be imposed on those systems. Channel fairness. Given a channel formula_36, a run is said to be channel fair with respect to formula_36 if, assuming there are infinitely many steps in which a letter is sent to formula_36, then there are infinitely many steps in which a letter is read from formula_36. 88 A computation is said to be channel fair if it is channel fair with respect to each channel formula_36. Impartiality. The impartiality condition is a strengthening of the channel fairness condition in which both the channel and the letter are considered. Given a message formula_44 and a channel formula_36, a run is said to be impartial with respect to formula_36 and formula_44 if, assuming there are infinitely many steps in which formula_44 is sent to formula_36, then there are infinitely many steps in which formula_44 is read from formula_36. 83 A computation is said to be impartial with respect to a channel formula_36 if it is impartial with respect to formula_36 and every message formula_44. It is said to be impartial if it is impartial with respect to every channel formula_36. Message fairness. The message fairness property is similar to impartiality, but the condition only has to hold if there are infinitely many steps at which formula_44 may be read. Formally, a run is said to be message fair with respect to formula_36 and formula_44 if, assuming there are infinitely many steps in which formula_44 is sent to formula_36, and infinitely many steps formula_15 occurring in a state formula_14 such that there exists a transition formula_45, then there are infinitely many steps in which formula_44 is read from formula_36. 88 Boundedness. A run is said to have bounded lossiness if the number of letters removed between two perfect steps is bounded.339 Insertion of errors. A machine capable of insertion of error is an extension of channel systems in which letters may appear anywhere. A machine capable of insertion of error admits two kinds of steps: perfect steps, as defined above, and insertion steps. We denote an insertion step by formula_46.25 Duplication errors. 
A machine capable of duplication error is an extension of a machine capable of insertion of error in which the inserted letter is a copy of the previous letter. A machine capable of duplication error admits two kinds of steps: perfect steps, as defined above, and duplication steps. We denote a duplication step by formula_47.26 A non-duplicate machine capable of duplication error is a machine which ensures that, in each channel, the letters alternate between a special new letter # and a regular letter from the message alphabet. If this is not the case, it means a duplication occurred and the run rejects. This process allows any channel system to be encoded into a machine capable of duplication error, while forcing it not to have errors. Since channel systems can simulate Turing machines, it follows that machines capable of duplication error can simulate Turing machines. Properties. The set of reachable configurations is recognizable for lossy channel machines23 and machines capable of insertions of errors26. It is recursively enumerable for machines capable of duplication error27. Problems and their complexity. This section contains a list of problems over channel systems, and their decidability or complexity over variants of such systems. Termination problem. The termination problem consists in deciding, given a channel system formula_0 and an initial configuration formula_31, whether all runs of formula_0 starting at formula_31 are finite. This problem is undecidable over perfect channel systems, even when the system is a counter machine or when it is a one-channel machine26. This problem is decidable but nonprimitive recursive over lossy channel systems.10 This problem is trivially decidable over machines capable of insertion of errors26. Reachability problem. The reachability problem consists in deciding, given a channel system formula_0 and two configurations formula_31 and formula_32, whether there is a run of formula_0 from formula_31 to formula_32. This problem is undecidable over perfect channel systems and decidable but nonprimitive recursive over lossy channel systems.10 This problem is decidable over machines capable of insertion of errors.26 Deadlock problem. The deadlock problem consists in deciding whether there is a reachable configuration without successor. This problem is decidable over lossy channel systems10 and trivially decidable over machines capable of insertion of errors26. It is also decidable over counter machines. Model checking problem. The model checking problem consists in deciding, given a system formula_0 and a CTL*-formula or an LTL-formula formula_48, whether the language defined by formula_0 satisfies formula_48. This problem is undecidable over lossy channel systems.23 Recurrent state problem. The recurrent state problem consists in deciding, given a channel system formula_0, an initial configuration formula_31 and a state formula_49, whether there exists a run of formula_0, starting at formula_31, going infinitely often through state formula_49. This problem is undecidable over lossy channel systems, even with a single channel.2380 Equivalent finite state machine. Given a system formula_0, there is no algorithm which computes a finite state machine representing formula_34 for the class of lossy channel systems.24 This problem is decidable over machines capable of insertion of errors.26 Boundedness problem. The boundedness problem consists in deciding whether the set of reachable configurations is finite, i.e. 
the length of the content of each channel is bounded. This problem is trivially decidable over machines capable of insertion of errors26. It is also decidable over counter machines. Eventuality properties. The eventuality property problem, or inevitability property problem, consists in deciding, given a channel system formula_0 and a set formula_50 of configurations, whether all runs of formula_0 starting at formula_31 go through a configuration of formula_50. This problem is undecidable for lossy channel systems with impartiality84 and with the two other fairness constraints.87 Safety property. The safety property problem consists in deciding, given a channel system formula_0 and a regular set formula_51, whether Structural termination. The structural termination problem consists in deciding, given a channel system formula_0, whether the termination problem holds for formula_0 for every initial configuration. This problem is undecidable even over counter machines.342 Communicating Hierarchical State Machine. Hierarchical state machines are finite state machines whose states themselves can be other machines. Since a communicating finite state machine is characterized by concurrency, the most notable trait in a communicating hierarchical state machine is the coexistence of hierarchy and concurrency. This had been considered highly suitable as it signifies stronger interaction inside the machine. However, it was proved that the coexistence of hierarchy and concurrency intrinsically costs language inclusion, language equivalence, and all of universality.
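As a small illustration of the step semantics used throughout this article, the sketch below enumerates the successors of a configuration of a one-channel lossy system: perfect steps follow the write and read transitions, and lossy steps drop an arbitrary letter from the channel. The encoding of transitions as tuples is a hypothetical choice made for this example, not notation from the literature.

def perfect_successors(config, transitions):
    # config = (state, channel_word); a transition is a tuple
    # (state, '!', letter, new_state) for a write or (state, '?', letter, new_state) for a read
    state, word = config
    result = set()
    for (q, op, letter, q2) in transitions:
        if q != state:
            continue
        if op == '!':
            result.add((q2, word + letter))       # append to the end of the channel
        elif op == '?' and word.startswith(letter):
            result.add((q2, word[len(letter):]))  # remove from the front of the channel
    return result

def lossy_successors(config):
    # a lossy step drops one letter anywhere in the channel
    state, word = config
    return {(state, word[:k] + word[k + 1:]) for k in range(len(word))}

For instance, perfect_successors(('q', 'ab'), [('q', '?', 'a', 'r')]) yields {('r', 'b')}, and lossy_successors(('q', 'ab')) yields {('q', 'a'), ('q', 'b')}.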
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "\\langle Q,C,\\Sigma,\\Delta \\rangle" }, { "math_id": 2, "text": "Q=\\{q_1,\\dots,q_m\\}" }, { "math_id": 3, "text": "q_0\\in Q" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "\\epsilon \\in A" }, { "math_id": 6, "text": "C=\\{c_1,\\dots,c_n\\}" }, { "math_id": 7, "text": "\\Sigma=\\{a_1,\\dots,a_p\\}" }, { "math_id": 8, "text": "\\Delta\\subseteq Q\\times C\\times\\{?,!\\}\\times\\Sigma^*\\times A \\times Q" }, { "math_id": 9, "text": "\\Sigma^*" }, { "math_id": 10, "text": "\\Sigma" }, { "math_id": 11, "text": "n+1" }, { "math_id": 12, "text": "Q\\times\\prod_{i=1}^n(\\Sigma^*)" }, { "math_id": 13, "text": "\\gamma=(q,w_1,\\dots,w_m)" }, { "math_id": 14, "text": "q" }, { "math_id": 15, "text": "i" }, { "math_id": 16, "text": "w_i" }, { "math_id": 17, "text": "(q_0,\\epsilon,\\dots,\\epsilon)" }, { "math_id": 18, "text": "\\epsilon" }, { "math_id": 19, "text": "(q,c_i,!,u,a,q')" }, { "math_id": 20, "text": "q'" }, { "math_id": 21, "text": "u" }, { "math_id": 22, "text": "c_i" }, { "math_id": 23, "text": "(q,c_i,?,u,a,q')" }, { "math_id": 24, "text": "(q,w_1,\\dots,w_m)" }, { "math_id": 25, "text": "(q,c_i,!,u,q')" }, { "math_id": 26, "text": "(q,w_1,\\dots, w_{i-1}, w_i, w_{i+1},\\dots,w_m)\\xrightarrow{a}_{\\mathtt{perf}}(q',w_1,\\dots,w_{i-1},w_iu,w_{i+1},\\dots,w_m)" }, { "math_id": 27, "text": "(q,c_i,?,u,q')" }, { "math_id": 28, "text": "(q,w_1,\\dots,w_{i-1},uw_i,w_{i+1},w_m)\\xrightarrow{a}_{\\mathtt{perf}}(q',w_1,\\dots,w_{i-1}, w_i, w_{i+1},\\dots,w_m)" }, { "math_id": 29, "text": "\\gamma_0\\xrightarrow{a_0}_{\\mathtt{perf}}\\gamma_1\\dots" }, { "math_id": 30, "text": "\\gamma\\xrightarrow[\\mathtt{perf}]{*}\\gamma'" }, { "math_id": 31, "text": "\\gamma" }, { "math_id": 32, "text": "\\gamma'" }, { "math_id": 33, "text": "A^*" }, { "math_id": 34, "text": "R(S)" }, { "math_id": 35, "text": "(q_0,\\epsilon,\\dots,\\epsilon)\\xrightarrow{*}\\gamma'" }, { "math_id": 36, "text": "c" }, { "math_id": 37, "text": "(w_1,\\dots w_m)" }, { "math_id": 38, "text": "(c,w_1,\\dots,w_m)\\in R(S)" }, { "math_id": 39, "text": "M" }, { "math_id": 40, "text": "n" }, { "math_id": 41, "text": "O(n^2)" }, { "math_id": 42, "text": "(q,w_1,\\dots,a\\cdot w_i,\\dots,w_m)\\rightsquigarrow(q,w_1,\\dots,w_i,\\dots,w_m)" }, { "math_id": 43, "text": "(q,w_1,\\dots,w_i\\cdot a\\cdot w'_i\\dots,,w_m)\\rightsquigarrow(q,w_1,\\dots,w_i\\cdot w'_i,w'_m)" }, { "math_id": 44, "text": "m" }, { "math_id": 45, "text": "(q,m,q')" }, { "math_id": 46, "text": "(q,w_1,\\dots,w_i\\cdot w'_i\\dots,,w_m)\\rightsquigarrow(q,w_1,\\dots,w_i\\cdot a\\cdot w'_i,w'_m)" }, { "math_id": 47, "text": "(q,w_1,\\dots,w_i\\cdot a\\cdot w'_i\\dots,,w_m)\\rightsquigarrow(q,w_1,\\dots,w_i\\cdot a\\cdot a\\cdot w'_i,w'_m)" }, { "math_id": 48, "text": "\\phi" }, { "math_id": 49, "text": "s" }, { "math_id": 50, "text": "\\Gamma" }, { "math_id": 51, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=60388496
603916
Kähler differential
Differential form in commutative algebra In mathematics, Kähler differentials provide an adaptation of differential forms to arbitrary commutative rings or schemes. The notion was introduced by Erich Kähler in the 1930s. It was adopted as standard in commutative algebra and algebraic geometry somewhat later, once the need was felt to adapt methods from calculus and geometry over the complex numbers to contexts where such methods are not available. Definition. Let "R" and "S" be commutative rings and "φ" : "R" → "S" be a ring homomorphism. An important example is for "R" a field and "S" a unital algebra over "R" (such as the coordinate ring of an affine variety). Kähler differentials formalize the observation that the derivatives of polynomials are again polynomial. In this sense, differentiation is a notion which can be expressed in purely algebraic terms. This observation can be turned into a definition of the module formula_0 of differentials in different, but equivalent ways. Definition using derivations. An "R"-linear "derivation" on "S" is an "R"-module homomorphism formula_1 to an "S"-module "M" satisfying the Leibniz rule formula_2 (it automatically follows from this definition that the image of "R" is in the kernel of "d" ). The module of Kähler differentials is defined as the "S"-module formula_0 for which there is a universal derivation formula_3. As with other universal properties, this means that "d" is the "best possible" derivation in the sense that any other derivation may be obtained from it by composition with an "S"-module homomorphism. In other words, the composition with "d" provides, for every M, an "S"-module isomorphism formula_4 One construction of Ω"S"/"R" and "d" proceeds by constructing a free "S"-module with one formal generator "ds" for each "s" in "S", and imposing the relations for all "r" in "R" and all "s" and "t" in "S". The universal derivation sends "s" to "ds". The relations imply that the universal derivation is a homomorphism of "R"-modules. Definition using the augmentation ideal. Another construction proceeds by letting "I" be the ideal in the tensor product formula_5 defined as the kernel of the multiplication map formula_6 Then the module of Kähler differentials of "S" can be equivalently defined by formula_7 and the universal derivation is the homomorphism "d" defined by formula_8 This construction is equivalent to the previous one because "I" is the kernel of the projection formula_9 Thus we have: formula_10 Then formula_11 may be identified with "I" by the map induced by the complementary projection formula_12 This identifies "I" with the "S"-module generated by the formal generators "ds" for "s" in "S", subject to "d" being a homomorphism of "R"-modules which sends each element of "R" to zero. Taking the quotient by "I"2 precisely imposes the Leibniz rule. Examples and basic facts. 
For any commutative ring "R", the Kähler differentials of the polynomial ring formula_13 are a free "S"-module of rank "n" generated by the differentials of the variables: formula_14 Kähler differentials are compatible with extension of scalars, in the sense that for a second "R"-algebra "R"′ and for formula_15, there is an isomorphism formula_16 As a particular case of this, Kähler differentials are compatible with localizations, meaning that if "W" is a multiplicative set in "S", then there is an isomorphism formula_17 Given two ring homomorphisms formula_18, there is a short exact sequence of "T"-modules formula_19 If formula_20 for some ideal "I", the term formula_21 vanishes and the sequence can be continued at the left as follows: formula_22 A generalization of these two short exact sequences is provided by the cotangent complex. The latter sequence and the above computation for the polynomial ring allows the computation of the Kähler differentials of finitely generated "R"-algebras formula_23. Briefly, these are generated by the differentials of the variables and have relations coming from the differentials of the equations. For example, for a single polynomial in a single variable, formula_24 Kähler differentials for schemes. Because Kähler differentials are compatible with localization, they may be constructed on a general scheme by performing either of the two definitions above on affine open subschemes and gluing. However, the second definition has a geometric interpretation that globalizes immediately. In this interpretation, "I" represents the "ideal defining the diagonal" in the fiber product of Spec("S") with itself over Spec("S") → Spec("R"). This construction therefore has a more geometric flavor, in the sense that the notion of "first infinitesimal neighbourhood" of the diagonal is thereby captured, via functions vanishing modulo functions vanishing at least to second order (see cotangent space for related notions). Moreover, it extends to a general morphism of schemes formula_25 by setting formula_26 to be the ideal of the diagonal in the fiber product formula_27. The "cotangent sheaf" formula_28, together with the derivation formula_29 defined analogously to before, is universal among formula_30-linear derivations of formula_31-modules. If "U" is an open affine subscheme of "X" whose image in "Y" is contained in an open affine subscheme "V", then the cotangent sheaf restricts to a sheaf on "U" which is similarly universal. It is therefore the sheaf associated to the module of Kähler differentials for the rings underlying "U" and "V". Similar to the commutative algebra case, there exist exact sequences associated to morphisms of schemes. Given morphisms formula_32 and formula_33 of schemes there is an exact sequence of sheaves on formula_34 formula_35 Also, if formula_36 is a closed subscheme given by the ideal sheaf formula_26, then formula_37 and there is an exact sequence of sheaves on formula_34 formula_38 Examples. Finite separable field extensions. If formula_39 is a finite field extension, then formula_40 if and only if formula_39 is separable. Consequently, if formula_39 is a finite separable field extension and formula_41 is a smooth variety (or scheme), then the relative cotangent sequence formula_42 proves formula_43. Cotangent modules of a projective variety. Given a projective scheme formula_44, its cotangent sheaf can be computed from the sheafification of the cotangent module on the underlying graded algebra. 
For example, consider the complex curve formula_45 then we can compute the cotangent module as formula_46 Then, formula_47 Morphisms of schemes. Consider the morphism formula_48 in formula_49. Then, using the first sequence we see that formula_50 hence formula_51 Higher differential forms and algebraic de Rham cohomology. de Rham complex. As before, fix a map formula_52. Differential forms of higher degree are defined as the exterior powers (over formula_53), formula_54 The derivation formula_55 extends in a natural way to a sequence of maps formula_56 satisfying formula_57 This is a cochain complex known as the "de Rham complex". The de Rham complex enjoys an additional multiplicative structure, the wedge product formula_58 This turns the de Rham complex into a commutative differential graded algebra. It also has a coalgebra structure inherited from the one on the exterior algebra. de Rham cohomology. The hypercohomology of the de Rham complex of sheaves is called the "algebraic de Rham cohomology" of "X" over "Y" and is denoted by formula_59 or just formula_60 if "Y" is clear from the context. (In many situations, "Y" is the spectrum of a field of characteristic zero.) Algebraic de Rham cohomology was introduced by . It is closely related to crystalline cohomology. As is familiar from coherent cohomology of other quasi-coherent sheaves, the computation of de Rham cohomology is simplified when "X" = Spec "S" and "Y" = Spec "R" are affine schemes. In this case, because affine schemes have no higher cohomology, formula_59 can be computed as the cohomology of the complex of abelian groups formula_61 which is, termwise, the global sections of the sheaves formula_62. To take a very particular example, suppose that formula_63 is the multiplicative group over formula_64 Because this is an affine scheme, hypercohomology reduces to ordinary cohomology. The algebraic de Rham complex is formula_65 The differential "d" obeys the usual rules of calculus, meaning formula_66 The kernel and cokernel compute algebraic de Rham cohomology, so formula_67 and all other algebraic de Rham cohomology groups are zero. By way of comparison, the algebraic de Rham cohomology groups of formula_68 are much larger, namely, formula_69 Since the Betti numbers of these cohomology groups are not what is expected, crystalline cohomology was developed to remedy this issue; it defines a Weil cohomology theory over finite fields. Grothendieck's comparison theorem. If "X" is a smooth complex algebraic variety, there is a natural comparison map of complexes of sheaves formula_70 between the algebraic de Rham complex and the smooth de Rham complex defined in terms of (complex-valued) differential forms on formula_71, the complex manifold associated to "X". Here, formula_72 denotes the complex analytification functor. This map is far from being an isomorphism. Nonetheless, showed that the comparison map induces an isomorphism formula_73 from algebraic to smooth de Rham cohomology (and thus to singular cohomology formula_74 by de Rham's theorem). In particular, if "X" is a smooth affine algebraic variety embedded in formula_75, then the inclusion of the subcomplex of algebraic differential forms into that of all smooth forms on "X" is a quasi-isomorphism. For example, if formula_76, then as shown above, the computation of algebraic de Rham cohomology gives explicit generators formula_77 for formula_78 and formula_79, respectively, while all other cohomology groups vanish. 
Since "X" is homotopy equivalent to a circle, this is as predicted by Grothendieck's theorem. Counter-examples in the singular case can be found with non-Du Bois singularities such as the graded ring formula_80 with formula_81 where formula_82 and formula_83. Other counterexamples can be found in algebraic plane curves with isolated singularities whose Milnor and Tjurina numbers are non-equal. A proof of Grothendieck's theorem using the concept of a mixed Weil cohomology theory was given by . Applications. Canonical divisor. If "X" is a smooth variety over a field "k", then formula_84 is a vector bundle (i.e., a locally free formula_53-module) of rank equal to the dimension of "X". This implies, in particular, that formula_85 is a line bundle or, equivalently, a divisor. It is referred to as the "canonical divisor". The canonical divisor is, as it turns out, a dualizing complex and therefore appears in various important theorems in algebraic geometry such as Serre duality or Verdier duality. Classification of algebraic curves. The geometric genus of a smooth algebraic variety "X" of dimension "d" over a field "k" is defined as the dimension formula_86 For curves, this purely algebraic definition agrees with the topological definition (for formula_87) as the "number of handles" of the Riemann surface associated to "X". There is a rather sharp trichotomy of geometric and arithmetic properties depending on the genus of a curve, for "g" being 0 (rational curves), 1 (elliptic curves), and greater than 1 (hyperbolic Riemann surfaces, including hyperelliptic curves), respectively. Tangent bundle and Riemann–Roch theorem. The tangent bundle of a smooth variety "X" is, by definition, the dual of the cotangent sheaf formula_84. The Riemann–Roch theorem and its far-reaching generalization, the Grothendieck–Riemann–Roch theorem, contain as a crucial ingredient the Todd class of the tangent bundle. Unramified and smooth morphisms. The sheaf of differentials is related to various algebro-geometric notions. A morphism formula_88 of schemes is unramified if and only if formula_89 is zero. A special case of this assertion is that for a field "k", formula_90 is separable over "k" iff formula_91, which can also be read off the above computation. A morphism "f" of finite type is a smooth morphism if it is flat and if formula_89 is a locally free formula_53-module of appropriate rank. The computation of formula_92 above shows that the projection from affine space formula_93 is smooth. Periods. "Periods" are, broadly speaking, integrals of certain arithmetically defined differential forms. The simplest example of a period is formula_94, which arises as formula_95 Algebraic de Rham cohomology is used to construct periods as follows: For an algebraic variety "X" defined over formula_96 the above-mentioned compatibility with base-change yields a natural isomorphism formula_97 On the other hand, the right hand cohomology group is isomorphic to de Rham cohomology of the complex manifold formula_71 associated to "X", denoted here formula_98 Yet another classical result, de Rham's theorem, asserts an isomorphism of the latter cohomology group with singular cohomology (or sheaf cohomology) with complex coefficients, formula_99, which by the universal coefficient theorem is in its turn isomorphic to formula_100 Composing these isomorphisms yields two "rational" vector spaces which, after tensoring with formula_101 become isomorphic. 
Choosing bases of these rational subspaces (also called lattices), the determinant of the base-change matrix is a complex number, well defined up to multiplication by a rational number. Such numbers are "periods". Algebraic number theory. In algebraic number theory, Kähler differentials may be used to study the ramification in an extension of algebraic number fields. If "L" / "K" is a finite extension with rings of integers "R" and "S" respectively then the different ideal δ"L" / "K", which encodes the ramification data, is the annihilator of the "R"-module Ω"R"/"S": formula_102 Related notions. Hochschild homology is a homology theory for associative rings that turns out to be closely related to Kähler differentials. This is because of the Hochschild-Kostant-Rosenberg theorem which states that the Hochschild homology formula_103 of an algebra of a smooth variety is isomorphic to the de-Rham complex formula_104 for formula_105 a field of characteristic formula_106. A derived enhancement of this theorem states that the Hochschild homology of a differential graded algebra is isomorphic to the derived de-Rham complex. The de Rham–Witt complex is, in very rough terms, an enhancement of the de Rham complex for the ring of Witt vectors. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
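As a worked illustration of the formula for a single polynomial in one variable and of the separability criterion mentioned above, the following routine computation (an example chosen here, not taken from a particular source) shows that the Kähler differentials of a separable quadratic extension vanish:

\Omega_{(\mathbf{Q}[t]/(t^2-2))/\mathbf{Q}} \cong \mathbf{Q}[t]/(t^2-2,\ 2t)\,dt = 0,

since 2"t" is a unit in Q["t"]/("t"2 − 2) (indeed (2"t")·("t"/4) = "t"2/2 = 1), so "dt" = 0. This agrees with the fact that Q(√2)/Q is a finite separable extension.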
[ { "math_id": 0, "text": "\\Omega_{S/R}" }, { "math_id": 1, "text": "d : S \\to M" }, { "math_id": 2, "text": "d(fg) = f\\,dg + g\\,df" }, { "math_id": 3, "text": "d : S \\to \\Omega_{S/R}" }, { "math_id": 4, "text": "\\operatorname{Hom}_S(\\Omega_{S/R},M) \\xrightarrow{\\cong} \\operatorname{Der}_R(S,M)." }, { "math_id": 5, "text": "S \\otimes_R S" }, { "math_id": 6, "text": "\\begin{cases} S \\otimes_R S\\to S \\\\ \\sum s_i \\otimes t_i \\mapsto \\sum s_i\\cdot t_i \\end{cases}" }, { "math_id": 7, "text": " \\Omega_{S/R} = I/I^2," }, { "math_id": 8, "text": "ds = 1 \\otimes s - s \\otimes 1." }, { "math_id": 9, "text": "\\begin{cases} S \\otimes_R S\\to S \\otimes_R R \\\\ \\sum s_i \\otimes t_i \\mapsto \\sum s_i \\cdot t_i \\otimes 1 \\end{cases}" }, { "math_id": 10, "text": "S \\otimes_R S \\equiv I \\oplus S \\otimes_R R." }, { "math_id": 11, "text": "S \\otimes_R S / S \\otimes_R R" }, { "math_id": 12, "text": "\\sum s_i \\otimes t_i \\mapsto \\sum s_i \\otimes t_i - \\sum s_i\\cdot t_i \\otimes 1." }, { "math_id": 13, "text": "S=R[t_1, \\dots, t_n]" }, { "math_id": 14, "text": "\\Omega^1_{R[t_1, \\dots, t_n]/R} = \\bigoplus_{i=1}^n R[t_1, \\dots t_n] \\, dt_i." }, { "math_id": 15, "text": "S' = R' \\otimes_R S" }, { "math_id": 16, "text": "\\Omega_{S/R} \\otimes_S S' \\cong \\Omega_{S'/R'}." }, { "math_id": 17, "text": "W^{-1}\\Omega_{S/R} \\cong \\Omega_{W^{-1}S/R}." }, { "math_id": 18, "text": "R \\to S \\to T" }, { "math_id": 19, "text": "\\Omega_{S/R} \\otimes_S T \\to \\Omega_{T/R} \\to \\Omega_{T/S} \\to 0." }, { "math_id": 20, "text": "T=S/I" }, { "math_id": 21, "text": "\\Omega_{T/S}" }, { "math_id": 22, "text": "I/I^2 \\xrightarrow{[f] \\mapsto df \\otimes 1} \\Omega_{S/R} \\otimes_S T \\to \\Omega_{T/R} \\to 0." }, { "math_id": 23, "text": "T=R[t_1, \\ldots, t_n]/(f_1, \\ldots, f_m)" }, { "math_id": 24, "text": "\\Omega_{(R[t]/(f)) / R} \\cong (R[t]\\,dt \\otimes R[t]/(f)) / (df) \\cong R[t]/(f, df/dt)\\,dt." 
}, { "math_id": 25, "text": "f : X \\to Y" }, { "math_id": 26, "text": "\\mathcal{I}" }, { "math_id": 27, "text": "X \\times_Y X" }, { "math_id": 28, "text": "\\Omega_{X/Y} = \\mathcal{I} / \\mathcal{I}^2" }, { "math_id": 29, "text": "d: \\mathcal{O}_X \\to \\Omega_{X/Y}" }, { "math_id": 30, "text": "f^{-1}\\mathcal{O}_Y" }, { "math_id": 31, "text": "\\mathcal{O}_X" }, { "math_id": 32, "text": "f:X\\to Y" }, { "math_id": 33, "text": "g:Y\\to Z" }, { "math_id": 34, "text": "X" }, { "math_id": 35, "text": "f^*\\Omega_{Y/Z} \\to \\Omega_{X/Z} \\to \\Omega_{X/Y} \\to 0 " }, { "math_id": 36, "text": "X \\subset Y" }, { "math_id": 37, "text": " \\Omega_{X/Y}=0 " }, { "math_id": 38, "text": "\\mathcal{I}/\\mathcal{I}^2 \\to \\Omega_{Y/Z}|_X \\to \\Omega_{X/Z} \\to 0" }, { "math_id": 39, "text": "K/k" }, { "math_id": 40, "text": "\\Omega^1_{K/k}=0" }, { "math_id": 41, "text": "\\pi:Y \\to \\operatorname{Spec}(K)" }, { "math_id": 42, "text": "\\pi^*\\Omega^1_{K/k} \\to \\Omega^1_{Y/k} \\to \\Omega^1_{Y/K} \\to 0" }, { "math_id": 43, "text": "\\Omega^1_{Y/k} \\cong \\Omega^1_{Y/K}" }, { "math_id": 44, "text": "X\\in \\operatorname{Sch}/\\mathbb{k}" }, { "math_id": 45, "text": " \\operatorname{Proj}\\left(\\frac{\\Complex[x,y,z]}{(x^n + y^n - z^n)} \\right)=\\operatorname{Proj}(R) " }, { "math_id": 46, "text": "\\Omega_{R/\\Complex} = \\frac{R\\cdot dx \\oplus R \\cdot dy \\oplus R \\cdot dz}{nx^{n-1}dx + ny^{n-1}dy - nz^{n-1}dz}" }, { "math_id": 47, "text": "\\Omega_{X/\\Complex} = \\widetilde{\\Omega_{R/\\Complex}}" }, { "math_id": 48, "text": "X = \\operatorname{Spec}\\left( \\frac{\\Complex[t,x,y]}{(xy-t)} \\right)=\\operatorname{Spec}(R) \\to \\operatorname{Spec}(\\Complex[t]) = Y" }, { "math_id": 49, "text": "\\operatorname{Sch}/\\Complex" }, { "math_id": 50, "text": "\\widetilde{R\\cdot dt} \\to \\widetilde{\\frac{R\\cdot dt \\oplus R \\cdot dx \\oplus R \\cdot dy}{ydx + xdy - dt}} \\to \\Omega_{X/Y} \\to 0 " }, { "math_id": 51, "text": "\\Omega_{X/Y} = \\widetilde{\\frac{R \\cdot dx \\oplus R \\cdot dy}{ydx + xdy}}" }, { "math_id": 52, "text": "X \\to Y" }, { "math_id": 53, "text": "\\mathcal O_X" }, { "math_id": 54, "text": "\\Omega^n_{X/Y} := \\bigwedge^n \\Omega_{X/Y}." }, { "math_id": 55, "text": "\\mathcal O_X \\to \\Omega_{X/Y}" }, { "math_id": 56, "text": "0 \\to \\mathcal{O}_X \\xrightarrow{d} \\Omega^1_{X/Y} \\xrightarrow{d} \\Omega^2_{X/Y} \\xrightarrow{d} \\cdots" }, { "math_id": 57, "text": "d \\circ d=0." }, { "math_id": 58, "text": "\\Omega^n_{X/Y} \\otimes \\Omega^m_{X/Y} \\to \\Omega^{n+m}_{X/Y}." }, { "math_id": 59, "text": "H^n_\\text{dR}(X / Y)" }, { "math_id": 60, "text": "H^n_\\text{dR}(X)" }, { "math_id": 61, "text": "0 \\to S \\xrightarrow{d} \\Omega^1_{S/R} \\xrightarrow{d} \\Omega^2_{S/R} \\xrightarrow{d} \\cdots" }, { "math_id": 62, "text": "\\Omega^r_{X/Y}" }, { "math_id": 63, "text": "X=\\operatorname{Spec}\\Q \\left [x,x^{-1} \\right ]" }, { "math_id": 64, "text": "\\Q." }, { "math_id": 65, "text": "\\Q[x, x^{-1}] \\xrightarrow{d} \\Q[x, x^{-1}]\\,dx." }, { "math_id": 66, "text": "d(x^n) = nx^{n-1}\\,dx." 
}, { "math_id": 67, "text": "\\begin{align}\nH_\\text{dR}^0(X) &= \\Q \\\\\nH_\\text{dR}^1(X) &= \\Q \\cdot x^{-1} dx\n\\end{align}" }, { "math_id": 68, "text": "Y=\\operatorname{Spec}\\mathbb{F}_p \\left [x,x^{-1} \\right ]" }, { "math_id": 69, "text": "\\begin{align}\nH_\\text{dR}^0(Y) &= \\bigoplus_{k \\in \\Z} \\mathbb{F}_p \\cdot x^{kp} \\\\\nH_\\text{dR}^1(Y) &= \\bigoplus_{k \\in \\Z} \\mathbb{F}_p \\cdot x^{kp-1}\\,dx\n\\end{align}" }, { "math_id": 70, "text": "\\Omega^{\\bullet}_{X/\\Complex}(-) \\to \\Omega^{\\bullet}_{X^\\text{an}}((-)^\\text{an})" }, { "math_id": 71, "text": "X^\\text{an}" }, { "math_id": 72, "text": "(-)^{\\text{an}}" }, { "math_id": 73, "text": "H^\\ast_\\text{dR}(X/\\Complex) \\cong H^\\ast_\\text{dR}(X^\\text{an})" }, { "math_id": 74, "text": "H^*_{\\text{sing}}(X^{\\text{an}}; \\C)" }, { "math_id": 75, "text": "\\C^n" }, { "math_id": 76, "text": "X = \\{ (w,z) \\in \\C^2: w z =1 \\}" }, { "math_id": 77, "text": "\\{ 1, z^{-1} dz \\}" }, { "math_id": 78, "text": "H^0_{\\text{dR}}(X/\\C)" }, { "math_id": 79, "text": "H^1_{\\text{dR}}(X/ \\C)" }, { "math_id": 80, "text": "k[x,y]/(y^2-x^3)" }, { "math_id": 81, "text": "y" }, { "math_id": 82, "text": "\\deg(y)=3" }, { "math_id": 83, "text": "\\deg(x)=2" }, { "math_id": 84, "text": "\\Omega_{X/k}" }, { "math_id": 85, "text": "\\omega_{X/k} := \\bigwedge^{\\dim X} \\Omega_{X/k}" }, { "math_id": 86, "text": "g := \\dim H^0(X, \\Omega^d_{X/k})." }, { "math_id": 87, "text": "k=\\Complex" }, { "math_id": 88, "text": "f: X \\to Y" }, { "math_id": 89, "text": "\\Omega_{X/Y}" }, { "math_id": 90, "text": "K := k[t]/f" }, { "math_id": 91, "text": "\\Omega_{K/k} = 0" }, { "math_id": 92, "text": "\\Omega_{R[t_1, \\ldots, t_n]/R}" }, { "math_id": 93, "text": "\\mathbb A^n_R \\to \\operatorname{Spec}(R)" }, { "math_id": 94, "text": "2 \\pi i" }, { "math_id": 95, "text": "\\int_{S^1} \\frac {dz} z = 2 \\pi i." }, { "math_id": 96, "text": "\\Q," }, { "math_id": 97, "text": "H^n_\\text{dR}(X / \\Q) \\otimes_{\\Q} \\Complex = H^n_\\text{dR}(X \\otimes_{\\Q} \\Complex / \\Complex)." }, { "math_id": 98, "text": "H^n_\\text{dR}(X^\\text{an})." }, { "math_id": 99, "text": "H^n(X^\\text{an}, \\Complex)" }, { "math_id": 100, "text": "H^n(X^\\text{an}, \\Q) \\otimes_{\\Q} \\Complex." }, { "math_id": 101, "text": "\\Complex" }, { "math_id": 102, "text": "\\delta_{L/K} = \\{ x \\in R : x \\,dy = 0 \\text{ for all } y \\in R \\}." }, { "math_id": 103, "text": "HH_\\bullet(R)" }, { "math_id": 104, "text": "\\Omega^\\bullet_{R/k}" }, { "math_id": 105, "text": "k" }, { "math_id": 106, "text": "0" } ]
https://en.wikipedia.org/wiki?curid=603916
604052
Schinzel's hypothesis H
In mathematics, Schinzel's hypothesis H is one of the most famous open problems in number theory. It is a very broad generalization of long-standing open conjectures such as the twin prime conjecture. The hypothesis is named after Andrzej Schinzel. Statement. The hypothesis claims that for every finite collection formula_0 of nonconstant irreducible polynomials over the integers with positive leading coefficients, one of the following conditions holds: The second condition is satisfied by sets such as formula_8, since formula_9 is always divisible by 2. It is easy to see that this condition prevents the first condition from being true. Schinzel's hypothesis essentially claims that condition 2 is the only way condition 1 can fail to hold. No effective technique is known for determining whether the first condition holds for a given set of polynomials, but the second one is straightforward to check: let formula_10 and compute the greatest common divisor of formula_11 successive values of formula_12. One can see by extrapolating with finite differences that this divisor also divides all other values of formula_12. Schinzel's hypothesis builds on the earlier Bunyakovsky conjecture, for a single polynomial, and on the Hardy–Littlewood conjectures and Dickson's conjecture for multiple linear polynomials. It is in turn extended by the Bateman–Horn conjecture. Examples. As a simple example with formula_13, formula_14 has no fixed prime divisor. We therefore expect that there are infinitely many primes of the form formula_15. This has not been proved, though. It was one of Landau's conjectures and goes back to Euler, who observed in a letter to Goldbach in 1752 that formula_15 is often prime for formula_16 up to 1500. As another example, take formula_17 with formula_18 and formula_19. The hypothesis then implies the existence of infinitely many twin primes, a basic and notorious open problem. Variants. As proved by Schinzel and Sierpiński, it is equivalent to the following: if condition 2 does not hold, then there exists at least one positive integer formula_16 such that all formula_20 will be simultaneously prime, for any choice of irreducible integral polynomials formula_21 with positive leading coefficients. If the leading coefficients were negative, we could expect negative prime values; this is a harmless restriction. There is probably no real reason to restrict to polynomials with integer coefficients, rather than integer-valued polynomials (such as formula_22, which takes integer values for all integers formula_23 even though its coefficients are not integers). Previous results. The special case of a single linear polynomial is Dirichlet's theorem on arithmetic progressions, one of the most important results of number theory. In fact, this special case is the only known instance of Schinzel's hypothesis H. We do not know the hypothesis to hold for any given polynomial of degree greater than formula_24, nor for any system of more than one polynomial. Almost-prime approximations to Schinzel's hypothesis have been attempted by many mathematicians; among them, most notably, Chen's theorem states that there exist infinitely many prime numbers formula_1 such that formula_25 is either a prime or a semiprime, and Iwaniec proved that there exist infinitely many integers formula_1 for which formula_26 is either a prime or a semiprime. Skorobogatov and Sofos have proved that almost all polynomials of any fixed degree satisfy Schinzel's hypothesis H.
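The fixed-divisor test described in the statement above (take the gcd of deg(Q)+1 successive values of the product Q) is easy to automate. A minimal Python sketch on the two example systems discussed above; representing the polynomials as plain Python functions and passing the total degree explicitly are conveniences of the sketch, not part of any standard library.

```python
from math import gcd
from functools import reduce

def fixed_divisor(polys, total_degree):
    """gcd of total_degree+1 consecutive values of the product f1(n)*...*fk(n).

    By the finite-difference argument above, this gcd divides every value of
    the product, so a result > 1 certifies a fixed divisor (condition 2).
    """
    def Q(n):
        prod = 1
        for f in polys:
            prod *= f(n)
        return prod
    return reduce(gcd, [Q(n) for n in range(1, total_degree + 2)])

# Twin-prime system {x, x+2}: no fixed divisor, so condition 2 fails.
print(fixed_divisor([lambda n: n, lambda n: n + 2], total_degree=2))      # 1
# Obstructed system {x+4, x+7}: the product is always even.
print(fixed_divisor([lambda n: n + 4, lambda n: n + 7], total_degree=2))  # 2
```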
Let formula_27 be an integer-valued polynomial with common factor formula_28, and let formula_29. Then formula_30 is a primitive integer-valued polynomial. Ronald Joseph Miech proved, using the Brun sieve, that formula_31 infinitely often and therefore formula_32 infinitely often, where formula_1 runs over positive integers. The numbers formula_33 and formula_34 do not depend on formula_1, and formula_35, where formula_36 is the degree of the polynomial formula_30. This result is known as Miech's theorem. If a hypothetical probabilistic density sieve existed, Miech's theorem together with mathematical induction could be used to prove Schinzel's hypothesis H in all cases. Prospects and applications. The hypothesis is probably not accessible with current methods in analytic number theory, but is now quite often used to prove conditional results, for example in Diophantine geometry. This connection is due to Jean-Louis Colliot-Thélène and Jean-Jacques Sansuc. For further explanations and references on this connection see the notes of Swinnerton-Dyer. The conjectural result being so strong in nature, it is possible that it could be shown to be too much to expect. Extension to include the Goldbach conjecture. The hypothesis does not cover Goldbach's conjecture, but a closely related version (hypothesis HN) does. That requires an extra polynomial formula_37, which in the Goldbach problem would just be formula_38, for which "N" − "F"("n") is also required to be a prime number. This is cited in Halberstam and Richert, "Sieve Methods". The conjecture here takes the form of a statement "when N is sufficiently large", and subject to the condition that formula_39 has no fixed divisor > 1. Then we should be able to require the existence of "n" such that "N" − "F"("n") is both positive and a prime number, with all the "fi"("n") prime numbers as well. Not many cases of these conjectures are known, but there is a detailed quantitative theory (see Bateman–Horn conjecture). Local analysis. The condition of having no fixed prime divisor is purely local (depending just on primes, that is). In other words, a finite set of irreducible integer-valued polynomials with no local obstruction to taking infinitely many prime values is conjectured to take infinitely many prime values. An analogue that fails. The analogous conjecture with the integers replaced by the one-variable polynomial ring over a finite field is false. For example, Swan noted in 1962 (for reasons unrelated to Hypothesis H) that the polynomial formula_40 over the ring "F"2["u"] is irreducible and has no fixed prime polynomial divisor (after all, its values at "x" = 0 and "x" = 1 are relatively prime polynomials), but all of its values as "x" runs over "F"2["u"] are composite. Similar examples can be found with "F"2 replaced by any finite field; the obstructions in a proper formulation of Hypothesis H over "F"["u"], where "F" is a finite field, are no longer just local, but a new global obstruction occurs with no classical parallel, assuming hypothesis H is in fact correct.
[ { "math_id": 0, "text": "\\{f_1,f_2,\\ldots,f_k\\}" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "f_1(n),f_2(n),\\ldots,f_k(n)" }, { "math_id": 3, "text": "m>1" }, { "math_id": 4, "text": "f_1(n)f_2(n)\\cdots f_k(n)" }, { "math_id": 5, "text": "p" }, { "math_id": 6, "text": "i" }, { "math_id": 7, "text": "f_i(n)" }, { "math_id": 8, "text": "f_1(x)=x+4, f_2(x)=x+7" }, { "math_id": 9, "text": "(x+4)(x+7)" }, { "math_id": 10, "text": "Q(x)=f_1(x)f_2(x)\\cdots f_k(x)" }, { "math_id": 11, "text": "\\deg(Q)+1" }, { "math_id": 12, "text": "Q(n)" }, { "math_id": 13, "text": "k=1" }, { "math_id": 14, "text": " x^2 + 1 " }, { "math_id": 15, "text": " n^2 + 1 " }, { "math_id": 16, "text": " n " }, { "math_id": 17, "text": "k=2" }, { "math_id": 18, "text": "f_1(x)=x" }, { "math_id": 19, "text": "f_2(x)=x+2 " }, { "math_id": 20, "text": " f_i(n) " }, { "math_id": 21, "text": " f_i(x) " }, { "math_id": 22, "text": "\\tfrac{1}{2}x^2+\\tfrac{1}{2}x+1" }, { "math_id": 23, "text": "x" }, { "math_id": 24, "text": " 1 " }, { "math_id": 25, "text": "n+2" }, { "math_id": 26, "text": "n^2+1" }, { "math_id": 27, "text": "P(x)" }, { "math_id": 28, "text": "d" }, { "math_id": 29, "text": "Q(x)=\\frac{P(x)}{d}" }, { "math_id": 30, "text": "Q(x)" }, { "math_id": 31, "text": "\\Omega(Q(n))\\le k" }, { "math_id": 32, "text": "\\Omega(P(n))\\le m" }, { "math_id": 33, "text": "k" }, { "math_id": 34, "text": "m=k+\\Omega(d)" }, { "math_id": 35, "text": "k< D\\cdot (\\ln(D)+2.8)" }, { "math_id": 36, "text": "D" }, { "math_id": 37, "text": " F(x) " }, { "math_id": 38, "text": " x " }, { "math_id": 39, "text": "f_1(n)f_2(n)\\cdots f_k(n)(N - F(n))" }, { "math_id": 40, "text": "x^8 + u^3\\," } ]
https://en.wikipedia.org/wiki?curid=604052
60409393
Delta-matroid
In mathematics, a delta-matroid or Δ-matroid is a family of sets obeying an exchange axiom generalizing an axiom of matroids. A non-empty family of sets is a delta-matroid if, for every two sets formula_0 and formula_1 in the family, and for every element formula_2 in their symmetric difference formula_3, there exists an formula_4 such that formula_5 is in the family. For the basis sets of a matroid, the corresponding exchange axiom requires in addition that formula_6 and formula_7, ensuring that formula_0 and formula_1 have the same cardinality. For a delta-matroid, either of the two elements may belong to either of the two sets, and it is also allowed for the two elements to be equal. An alternative and equivalent definition is that a family of sets forms a delta-matroid when the convex hull of its indicator vectors (the analogue of a matroid polytope) has the property that every edge length is either one or the square root of two. Delta-matroids were defined by André Bouchet in 1987. Algorithms for matroid intersection and the matroid parity problem can be extended to some cases of delta-matroids. Delta-matroids have also been used to study constraint satisfaction problems. As a special case, an "even delta-matroid" is a delta-matroid in which either all sets have an even number of elements, or all sets have an odd number of elements. If a constraint satisfaction problem has a Boolean variable on each edge of a planar graph, and if the variables of the edges incident to each vertex of the graph are constrained to belong to an even delta-matroid (possibly a different even delta-matroid for each vertex), then the problem can be solved in polynomial time. This result plays a key role in a characterization of the planar Boolean constraint satisfaction problems that can be solved in polynomial time.
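For small families, the exchange axiom can be checked by brute force. A minimal Python sketch; the two example families are hypothetical and serve only to exercise the test.

```python
from itertools import product

def is_delta_matroid(family):
    """Brute-force check of the symmetric-difference exchange axiom."""
    family = {frozenset(s) for s in family}
    if not family:
        return False                      # the family must be non-empty
    for E, F in product(family, repeat=2):
        for e in E ^ F:                   # e in the symmetric difference
            # some f in E^F (f = e is allowed) must make E ^ {e, f} feasible
            if not any((E ^ {e, f}) in family for f in E ^ F):
                return False
    return True

# The power set of {1, 2}: every exchange stays inside, so the axiom holds.
print(is_delta_matroid([set(), {1}, {2}, {1, 2}]))   # True
# {}, {1, 2, 3}: no single exchange from the empty set lands in the family.
print(is_delta_matroid([set(), {1, 2, 3}]))          # False
```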
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "e" }, { "math_id": 3, "text": "E\\triangle F" }, { "math_id": 4, "text": "f\\in E\\triangle F" }, { "math_id": 5, "text": "E\\triangle\\{e,f\\}" }, { "math_id": 6, "text": "e\\in E" }, { "math_id": 7, "text": "f\\in F" } ]
https://en.wikipedia.org/wiki?curid=60409393
60409426
Blue Abyss
Proposed deep-water diving and research pool Blue Abyss is a research pool planned for construction in Cornwall, England, United Kingdom. It will be deep, with a volume of approximately , making it the world's second deepest pool after the Deep Dive Dubai. The Blue Abyss pool will be used for training and development for commercial diving, space exploration, human life science, and submersibles. This pool could aid in reducing risk in extreme environments, including space and the sub-aquatic. The pool. The pool itself will have several entrance points and includes a series of depths. The multi-level depths of the pool have many functions, including 'Astrolab' at . The total surface area of the pool is planned to be , and its deepest point at , giving a total volume of 42,000 mformula_0 of water. The pool was designed by architect Robin Partington. The facility itself will include: Commercial diving. The Blue Abyss will aid in testing, training, and pre-operational exercises for the commercial diving sector. The pool's indoor facility will allow for diving at any time, all year round, and will not be dependent on the weather. A 30-tonne crane and lifting platforms will allow for roof access to the pool, and the sliding roof will facilitate access to insert larger training objects into the pool. The pool will allow for deep-water training for offshore and inshore commercial divers and diving teams. This training will include scenarios and conditions that would otherwise be impossible to train for safely. This would help accelerate work in emergency services, oceanographic research, archaeology, and civil engineering. Submersibles. Submersible vehicles are a new alternative to classic diving methods. The indoor pool will provide a controlled environment allowing for submersible trials and training, including simulations. This would reduce the risk of subsea operations by moving trials and training into a safe and heavily monitored facility. Testing and training are expected to include hydrographic, pipeline and cable surveys, inspections of wind turbine foundations, vessel hulls and installations, oceanographic studies, and even film production. The pool will include roof access, as well as an R&D capability for new ROV and AUV sub-systems. Space exploration. The Blue Abyss will provide a facility for spaceflight simulation, such as a simulated mission to Mars. It will offer safety and experience for astronauts undergoing human spaceflight training. The facility will include a neutral buoyancy pool, parabolic flight and centrifuge training, hypo- and hyperbaric chambers, a micro-gravity simulation suite, and an environmental research centre. Human life science. The centre will provide human physiology and human-robotic interface R&D capabilities. This will allow for extreme-environment research, including human spaceflight. The Kuehnegger Human Performance Centre will have astronaut and athlete test and evaluation facilities, with full-body suspension and hypobaric chambers for altitude training.
[ { "math_id": 0, "text": "^3" } ]
https://en.wikipedia.org/wiki?curid=60409426
6041076
Airy points
Airy points (after George Biddell Airy) are used for precision measurement (metrology) to support a length standard in such a way as to minimise bending or drop of a horizontally supported beam. Choice of support points. A kinematic support for a one-dimensional beam requires exactly two support points. Three or more support points will not share the load evenly (unless they are hinged in a non-rigid whiffle tree or similar). The position of those points can be chosen to minimize various forms of gravity deflection. A beam supported at the ends will sag in the middle, resulting in the ends moving closer together and tilting upward. A beam supported only in the middle will sag at the ends, making a similar shape but upside down. Airy points. Supporting a uniform beam at the Airy points produces zero angular deflection of the ends. The Airy points are symmetrically arranged around the centre of the length standard and are separated by a distance equal to formula_0 of the length of the rod. "End standards", that is, standards whose length is defined as the distance between their flat ends, such as long gauge blocks or the , must be supported at the Airy points so that their length is well-defined; if the ends are not parallel, the measurement uncertainty is increased because the length depends on which part of the end is measured. For this reason, the Airy points are commonly identified by inscribed marks or lines. For example, a 1000 mm length gauge would have an Airy point separation of 577.4 mm. A line or pair of lines would be marked onto the gauge 211.3 mm in from each end. Supporting the artifact at these points ensures that the calibrated length is preserved. Airy's 1845 paper derives the equation for n equally spaced support points. In this case, the distance between adjacent supports is the fraction formula_1 of the length of the rod. He also derives the formula for a rod which extends beyond the reference marks. Bessel points. "Line standards" are measured between lines marked on their surfaces. They are much less convenient to use than end standards but, when the marks are placed on the neutral plane of the beam, allow greater accuracy. To support a line standard, one wishes to minimise the "linear", rather than angular, motion of the ends. The Bessel points (after Friedrich Wilhelm Bessel) are the points at which the length of the beam is maximized. Because this is a maximum, the effect of a small positioning error is proportional to the square of the error, an even smaller amount. The Bessel points are located 0.5594 of the length of the rod apart, slightly closer than the Airy points. Because line standards invariably extend beyond the lines marked on them, the optimal support points depend on both the overall length and the length to be measured. The latter is the quantity to be maximized, requiring a more complex calculation. For example, the 1927–1960 definition of the metre specified that the International Prototype Metre bar was to be measured while "supported on two cylinders of at least one centimetre diameter, symmetrically placed in the same horizontal plane at a distance of 571 mm from each other." Those would be the Bessel points of a beam 1020 mm long. Other support points of interest. Other sets of support points, even closer together than the Bessel points, may be wanted in some applications.
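The support positions quoted above follow directly from the stated fractions. A small Python sketch reproducing the 1000 mm gauge example; the helper names are arbitrary, and the Bessel fraction is simply the value quoted in the text.

```python
import math

def support_positions(length, fraction):
    """Separation between the two supports and their inset from each end."""
    separation = fraction * length
    inset = (length - separation) / 2
    return separation, inset

def airy_spacing_fraction(n):
    """Airy's 1845 result: spacing fraction for n equally spaced supports."""
    return 1 / math.sqrt(n * n - 1)

AIRY = airy_spacing_fraction(2)   # 1/sqrt(3) ~ 0.57735, the two-support case
BESSEL = 0.5594                   # fraction quoted above for the Bessel points

for name, frac in (("Airy", AIRY), ("Bessel", BESSEL)):
    sep, inset = support_positions(1000.0, frac)   # the 1000 mm gauge example
    print(f"{name}: separation {sep:.1f} mm, inset {inset:.1f} mm from each end")
# Airy:   separation 577.4 mm, inset 211.3 mm  (matching the example above)
# Bessel: separation 559.4 mm, inset 220.3 mm
```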
[ { "math_id": 0, "text": "\n1/\\sqrt{3}=0.57735...\n" }, { "math_id": 1, "text": "c = 1/\\sqrt{n^2-1}" } ]
https://en.wikipedia.org/wiki?curid=6041076
604111
Stone's representation theorem for Boolean algebras
Every Boolean algebra is isomorphic to a certain field of sets In mathematics, Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a certain field of sets. The theorem is fundamental to the deeper understanding of Boolean algebra that emerged in the first half of the 20th century. The theorem was first proved by Marshall H. Stone. Stone was led to it by his study of the spectral theory of operators on a Hilbert space. Stone spaces. Each Boolean algebra "B" has an associated topological space, denoted here "S"("B"), called its Stone space. The points in "S"("B") are the ultrafilters on "B", or equivalently the homomorphisms from "B" to the two-element Boolean algebra. The topology on "S"("B") is generated by a basis consisting of all sets of the form formula_0 where "b" is an element of "B". These sets are also closed and so are clopen (both closed and open). This is the topology of pointwise convergence of nets of homomorphisms into the two-element Boolean algebra. For every Boolean algebra "B", "S"("B") is a compact totally disconnected Hausdorff space; such spaces are called Stone spaces (also "profinite spaces"). Conversely, given any topological space "X", the collection of subsets of "X" that are clopen is a Boolean algebra. Representation theorem. A simple version of Stone's representation theorem states that every Boolean algebra "B" is isomorphic to the algebra of clopen subsets of its Stone space "S"("B"). The isomorphism sends an element formula_1 to the set of all ultrafilters that contain "b". This is a clopen set because of the choice of topology on "S"("B") and because "B" is a Boolean algebra. Restating the theorem in the language of category theory, it says that there is a duality between the category of Boolean algebras and the category of Stone spaces. This duality means that in addition to the correspondence between Boolean algebras and their Stone spaces, each homomorphism from a Boolean algebra "A" to a Boolean algebra "B" corresponds in a natural way to a continuous function from "S"("B") to "S"("A"). In other words, there is a contravariant functor that gives an equivalence between the categories. This was an early example of a nontrivial duality of categories. The theorem is a special case of Stone duality, a more general framework for dualities between topological spaces and partially ordered sets. The proof requires either the axiom of choice or a weakened form of it. Specifically, the theorem is equivalent to the Boolean prime ideal theorem, a weakened choice principle that states that every Boolean algebra has a prime ideal. An extension of the classical Stone duality to the category of Boolean spaces (that is, zero-dimensional locally compact Hausdorff spaces) and continuous maps (respectively, perfect maps) was obtained by G. D. Dimov (respectively, by H. P. Doctor).
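For a finite Boolean algebra the theorem can be verified directly: the ultrafilters are the principal filters generated by the atoms, every subset of the (discrete) Stone space is clopen, and the map sending "b" to the set of ultrafilters containing "b" is an isomorphism. A small Python sketch for the Boolean algebra of subsets of a three-element set; this is a toy example, and the brute-force enumeration is only feasible because everything is finite.

```python
from itertools import combinations

GROUND = frozenset({0, 1, 2})

def subsets(items):
    items = list(items)
    return [frozenset(c) for r in range(len(items) + 1) for c in combinations(items, r)]

B = subsets(GROUND)          # the Boolean algebra: all 8 subsets of {0, 1, 2}

def is_ultrafilter(U):
    """Proper filter containing exactly one of b and its complement for every b."""
    if not U or frozenset() in U:
        return False
    if any((a & b) not in U for a in U for b in U):        # closed under meets
        return False
    if any(a <= b and b not in U for a in U for b in B):    # upward closed
        return False
    return all((b in U) != ((GROUND - b) in U) for b in B)  # ultra

# Brute-force search over all 2**8 families of elements of B.
ultrafilters = [U for U in map(set, subsets(B)) if is_ultrafilter(U)]
print(len(ultrafilters))     # 3 -- one principal ultrafilter per atom {0}, {1}, {2}

# The Stone map b -> {ultrafilters containing b} is a Boolean-algebra isomorphism.
def stone(b):
    return frozenset(i for i, U in enumerate(ultrafilters) if b in U)

assert len({stone(b) for b in B}) == len(B)                          # injective
assert all(stone(a & b) == stone(a) & stone(b) for a in B for b in B)
assert all(stone(GROUND - b) == frozenset(range(len(ultrafilters))) - stone(b) for b in B)
```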
[ { "math_id": 0, "text": "\\{ x \\in S(B) \\mid b \\in x\\}," }, { "math_id": 1, "text": "b \\in B" } ]
https://en.wikipedia.org/wiki?curid=604111
60415297
Radial basis function interpolation
Radial basis function (RBF) interpolation is an advanced method in approximation theory for constructing high-order accurate interpolants of unstructured data, possibly in high-dimensional spaces. The interpolant takes the form of a weighted sum of radial basis functions. RBF interpolation is a mesh-free method, meaning the nodes (points in the domain) need not lie on a structured grid, and does not require the formation of a mesh. It is often spectrally accurate and stable for large numbers of nodes even in high dimensions. Many interpolation methods can be used as the theoretical foundation of algorithms for approximating linear operators, and RBF interpolation is no exception. RBF interpolation has been used to approximate differential operators, integral operators, and surface differential operators. Examples. Let formula_0 and let formula_1 be 15 equally spaced points on the interval formula_2. We will form formula_3 where formula_4 is a radial basis function, and choose formula_5 such that formula_6 (formula_7 interpolates formula_8 at the chosen points). In matrix notation this can be written as formula_9 Choosing formula_10, the Gaussian, with a shape parameter of formula_11, we can then solve the matrix equation for the weights and plot the interpolant. Doing so, we see that the interpolant is visually indistinguishable from formula_8 everywhere except near the left boundary (an example of Runge's phenomenon), where it is still a very close approximation. More precisely, the maximum error is roughly formula_12 at formula_13. Motivation. The Mairhuber–Curtis theorem says that for any open set formula_14 in formula_15 with formula_16, and formula_17 linearly independent functions on formula_14, there exists a set of formula_18 points in the domain such that the interpolation matrix formula_19 is singular. This means that if one wishes to have a general interpolation algorithm, one must choose the basis functions to depend on the interpolation points. In 1971, Rolland Hardy developed a method of interpolating scattered data using interpolants of the form formula_20. This is interpolation using a basis of shifted multiquadric functions, now more commonly written as formula_21, and is the first instance of radial basis function interpolation. It has been shown that the resulting interpolation matrix will always be non-singular. This does not violate the Mairhuber–Curtis theorem since the basis functions depend on the points of interpolation. Choosing a radial kernel such that the interpolation matrix is non-singular is exactly the definition of a strictly positive definite function. Such functions, including the Gaussian, inverse quadratic, and inverse multiquadric, are often used as radial basis functions for this reason.
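A minimal NumPy sketch of the Gaussian example above (15 equispaced nodes, shape parameter ε = 3); the fine evaluation grid is an arbitrary choice, and the reported maximum error should come out close to the value quoted above.

```python
import numpy as np

def f(x):
    return np.exp(x * np.cos(3 * np.pi * x))

def gaussian(r, eps=3.0):
    return np.exp(-(eps * r) ** 2)

nodes = np.linspace(0.0, 1.0, 15)                       # x_k = k/14

# Interpolation matrix A[i, j] = phi(|x_i - x_j|); solve A w = f(nodes) for the weights.
A = gaussian(np.abs(nodes[:, None] - nodes[None, :]))
w = np.linalg.solve(A, f(nodes))

def interpolant(x):
    x = np.atleast_1d(x)
    return gaussian(np.abs(x[:, None] - nodes[None, :])) @ w

xs = np.linspace(0.0, 1.0, 2001)                        # fine evaluation grid
err = np.abs(f(xs) - interpolant(xs))
print(err.max(), xs[err.argmax()])                      # roughly 0.0267 near x = 0.022
```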
Shape-parameter tuning. Many radial basis functions have a parameter that controls their relative flatness or peakedness. This parameter is usually represented by the symbol formula_22, with the function becoming increasingly flat as formula_23. For example, Rolland Hardy used the formula formula_24 for the multiquadric; nowadays, however, the formula formula_25 is used instead. These formulas are equivalent up to a scale factor. This factor is inconsequential, since the basis vectors have the same span and the interpolation weights will compensate. By convention, the basis function is scaled such that formula_26, as seen in plots of the Gaussian and the bump functions. A consequence of this choice is that the interpolation matrix approaches the identity matrix as formula_27, leading to stability when solving the matrix system. The resulting interpolant will in general be a poor approximation to the function, since it will be near zero everywhere except near the interpolation points, where it sharply peaks – the so-called "bed-of-nails interpolant". At the opposite end of the spectrum, the condition number of the interpolation matrix diverges to infinity as formula_23, leading to ill-conditioning of the system. In practice, one chooses a shape parameter so that the interpolation matrix is "on the edge of ill-conditioning" (e.g. with a condition number of roughly formula_28 for double-precision floating point). There are sometimes other factors to consider when choosing a shape parameter. For example, the bump function formula_29 has compact support (it is zero everywhere except when formula_30), leading to a sparse interpolation matrix. Some radial basis functions, such as the polyharmonic splines, have no shape parameter.
[ { "math_id": 0, "text": "f(x) = \\exp(x \\cos(3 \\pi x))" }, { "math_id": 1, "text": "x_k = \\frac{k}{14}, k=0, 1, \\dots, 14" }, { "math_id": 2, "text": "[0, 1]" }, { "math_id": 3, "text": "s(x) = \\sum\\limits_{k=0}^{14} w_k \\varphi(\\|x-x_k\\|)" }, { "math_id": 4, "text": "\\varphi" }, { "math_id": 5, "text": "w_k, k=0, 1, \\dots, 14" }, { "math_id": 6, "text": "s(x_k)=f(x_k), k=0, 1, \\dots, 14" }, { "math_id": 7, "text": "s" }, { "math_id": 8, "text": "f" }, { "math_id": 9, "text": "\n\\begin{bmatrix}\n\\varphi(\\|x_0 - x_0\\|) & \\varphi(\\|x_1 - x_0\\|) & \\dots & \\varphi(\\|x_{14} - x_0\\|) \\\\\n\\varphi(\\|x_0 - x_1\\|) & \\varphi(\\|x_1 - x_1\\|) & \\dots & \\varphi(\\|x_{14} - x_{1}\\|) \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\varphi(\\|x_0 - x_{14}\\|) & \\varphi(\\|x_1 - x_{14}\\|) & \\dots & \\varphi(\\|x_{14} - x_{14}\\|) \\\\\n\\end{bmatrix}\n\\begin{bmatrix}w_0 \\\\ w_1 \\\\ \\vdots \\\\ w_{14}\\end{bmatrix}\n= \\begin{bmatrix}f(x_0) \\\\ f(x_1) \\\\ \\vdots \\\\ f(x_{14})\\end{bmatrix}.\n" }, { "math_id": 10, "text": "\\varphi(r) = \\exp(-(\\varepsilon r)^2)" }, { "math_id": 11, "text": "\\varepsilon = 3" }, { "math_id": 12, "text": "\\|f - s\\|_\\infty \\approx 0.0267414" }, { "math_id": 13, "text": "x = 0.0220012" }, { "math_id": 14, "text": "V" }, { "math_id": 15, "text": "\\mathbb{R}^n" }, { "math_id": 16, "text": "n \\geq 2" }, { "math_id": 17, "text": "f_1, f_2, \\dots, f_n" }, { "math_id": 18, "text": "n" }, { "math_id": 19, "text": "\n\\begin{bmatrix}\nf_1(x_1) & f_2(x_1) & \\dots & f_n(x_1) \\\\\nf_1(x_2) & f_2(x_2) & \\dots & f_n(x_2) \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\nf_1(x_n) & f_2(x_n) & \\dots & f_n(x_n)\n\\end{bmatrix}\n" }, { "math_id": 20, "text": "s(\\mathbf{x}) = \\sum\\limits_{k=1}^N \\sqrt{\\|\\mathbf{x} - \\mathbf{x}_k\\|^2 + C}" }, { "math_id": 21, "text": "\\varphi(r) = \\sqrt{1+(\\varepsilon r)^2}" }, { "math_id": 22, "text": "\\varepsilon" }, { "math_id": 23, "text": "\\varepsilon \\to 0" }, { "math_id": 24, "text": "\\varphi(r) = \\sqrt{r^2 + C}" }, { "math_id": 25, "text": "\\varphi(r) = \\sqrt{1 + (\\varepsilon r)^2}" }, { "math_id": 26, "text": "\\varphi(0) = 1" }, { "math_id": 27, "text": "\\varepsilon \\to \\infty" }, { "math_id": 28, "text": "10^{12}" }, { "math_id": 29, "text": "\\varphi(r) = \n\\begin{cases}\n\\exp\\left( -\\frac{1}{1 - (\\varepsilon r)^2}\\right) & \\mbox{ for } r<\\frac{1}{\\varepsilon} \\\\\n0 & \\mbox{ otherwise} \n\\end{cases}\n" }, { "math_id": 30, "text": "r< \\tfrac{1}{\\varepsilon}" } ]
https://en.wikipedia.org/wiki?curid=60415297
60415782
Generalized Cohen–Macaulay ring
Local ring in mathematics In algebra, a generalized Cohen–Macaulay ring is a commutative Noetherian local ring formula_0 of Krull dimension "d" > 0 that satisfies any of the following equivalent conditions: The last condition implies that the localization formula_14 is Cohen–Macaulay for each prime ideal formula_15. A standard example is the local ring at the vertex of an affine cone over a smooth projective variety. Historically, the notion grew out of the study of a Buchsbaum ring, a Noetherian local ring "A" in which formula_16 is constant for formula_6-primary ideals formula_4; see the introduction of.
[ { "math_id": 0, "text": "(A, \\mathfrak{m})" }, { "math_id": 1, "text": "i = 0, \\dots, d - 1" }, { "math_id": 2, "text": "\\operatorname{length}_A(\\operatorname{H}^i_{\\mathfrak{m}}(A)) < \\infty" }, { "math_id": 3, "text": "\\sup_Q (\\operatorname{length}_A(A/Q) - e(Q)) < \\infty" }, { "math_id": 4, "text": "Q" }, { "math_id": 5, "text": "e(Q)" }, { "math_id": 6, "text": "\\mathfrak{m}" }, { "math_id": 7, "text": "x_1, \\dots, x_d" }, { "math_id": 8, "text": "(x_1, \\dots, x_{d-1}) : x_d = (x_1, \\dots, x_{d-1}) : Q." }, { "math_id": 9, "text": "\\mathfrak{p}" }, { "math_id": 10, "text": "\\widehat{A}" }, { "math_id": 11, "text": "\\mathfrak{m} \\widehat{A}" }, { "math_id": 12, "text": "\\dim \\widehat{A}_{\\mathfrak{p}} + \\dim \\widehat{A}/\\mathfrak{p} = d" }, { "math_id": 13, "text": "\\widehat{A}_{\\mathfrak{p}}" }, { "math_id": 14, "text": "A_\\mathfrak{p}" }, { "math_id": 15, "text": "\\mathfrak{p} \\ne \\mathfrak{m}" }, { "math_id": 16, "text": "\\operatorname{length}_A(A/Q) - e(Q)" } ]
https://en.wikipedia.org/wiki?curid=60415782
6041918
State prices
In financial economics, a state-price security, also called an Arrow–Debreu security (from its origins in the Arrow–Debreu model), a pure security, or a primitive security, is a contract that agrees to pay one unit of a numeraire (a currency or a commodity) if a particular state occurs at a particular time in the future and pays zero numeraire in all the other states. The price of this security is the state price of this particular state of the world. The state price vector is the vector of state prices for all states. See . An Arrow security is an instrument with a fixed payout of one unit in a specified state and no payout in other states. It is a type of hypothetical asset used in the Arrow market structure model. In contrast to the Arrow-Debreu market structure model, an Arrow market is a market in which the individual agents engage in trading assets at every time period t. In an Arrow-Debreu model, trading occurs only once at the beginning of time. An Arrow security is an asset traded in an Arrow market structure model, which encompasses a complete market. The Arrow–Debreu model (also referred to as the Arrow–Debreu–McKenzie model or ADM model) is the central model in general equilibrium theory and uses state prices in the process of proving the existence of a unique general equilibrium. State prices may relatedly be applied in derivatives pricing and hedging: a contract whose settlement value is a function of an underlying asset whose value is uncertain at contract date can be decomposed as a linear combination of its Arrow–Debreu securities, and thus as a weighted sum of its state prices; see Contingent claim analysis. Breeden and Litzenberger's work in 1978 established the latter, more general use of state prices in finance. Example. Imagine a world where two states are possible tomorrow: peace (P) and war (W). Denote the random variable which represents the state as ω; denote tomorrow's random variable as ω1. Thus, ω1 can take two values: ω1=P and ω1=W. Let's imagine that the price today of a security that pays £1 tomorrow if ω1=P (and nothing otherwise) is qP, and that the price of a security that pays £1 tomorrow if ω1=W is qW. The prices qP and qW are the state prices. The factors that affect these state prices include the probabilities of the two states, the time preference for money, and attitudes toward risk. Application to financial assets. If an agent buys both the peace security and the war security, he has secured £1 for tomorrow; he has purchased a riskless bond. The price of the bond is b0 = qP + qW. Now consider a security with state-dependent payouts (e.g. an equity security, an option, a risky bond, etc.). It pays ck if ω1=k, for k=P or W; i.e. it pays cP in peacetime and cW in wartime. The price of this security is c0 = qPcP + qWcW. Generally, the usefulness of state prices arises from their linearity: any security can be valued as the sum over all possible states of state price times payoff in that state: formula_0. Analogously, for a continuous random variable indicating a continuum of possible states, the value is found by integrating over the state price density.
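The valuation formula is linear in the payoffs, so it is nearly a one-liner to implement. A small Python sketch; the numerical state prices and payoffs below are hypothetical and chosen only for illustration.

```python
def value(state_prices, payoffs):
    """Price of a security as the state-price-weighted sum of its payoffs."""
    return sum(q * c for q, c in zip(state_prices, payoffs))

# Hypothetical state prices for tomorrow's two states (peace, war).
q = [0.55, 0.35]

bond = value(q, [1, 1])          # riskless bond paying 1 in every state: 0.90
peace_heavy = value(q, [3, 1])   # pays 3 in peace and 1 in war: 2.00
print(bond, peace_heavy)
```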
[ { "math_id": 0, "text": "c_0 = \\sum_k q_k\\times c_k" } ]
https://en.wikipedia.org/wiki?curid=6041918
60419824
Combinatorial participatory budgeting
Problem in social choice Combinatorial participatory budgeting, also called indivisible participatory budgeting or budgeted social choice, is a problem in social choice. There are several candidate "projects", each of which has a fixed cost. There is a fixed "budget" that cannot cover all these projects. Each voter has different "preferences" regarding these projects. The goal is to find a "budget-allocation": a subset of the projects, with total cost at most the budget, that will be funded. Combinatorial participatory budgeting is the most common form of participatory budgeting. Combinatorial PB can be seen as a generalization of committee voting: committee voting is a special case of PB in which the "cost" of each candidate is 1, and the "budget" is the committee size. This assumption is often called the "unit-cost assumption". The setting in which the projects are divisible (can receive any amount of money) is called portioning, fractional social choice, or budget-proposal aggregation. PB rules have other applications besides proper budgeting. For example: Welfare-maximization rules. One class of rules aims to maximize a given social welfare function. In particular, the utilitarian rule aims to find a budget-allocation that maximizes the sum of agents' utilities. With cardinal voting, finding a utilitarian budget-allocation requires solving a knapsack problem, which is NP-hard in theory but can be solved easily in practice. There are also greedy algorithms that attain a constant-factor approximation of the maximum welfare. There are many possible utility functions for a given rated ballot: Sreedurga, Bhardwaj and Narahari study the egalitarian rule, which aims to maximize the "smallest" utility of an agent. They prove that finding an egalitarian budget-allocation is NP-hard, but give pseudo-polynomial time and polynomial-time algorithms when some natural parameters are fixed. They propose an algorithm that achieves an additive approximation for restricted spaces of instances, and show that it gives exact optimal solutions on real-world datasets. They also prove that the egalitarian rule satisfies a new fairness axiom, which they call "maximal coverage". Annick Laruelle studies welfare maximization under weak ordinal voting, where a scoring rule is used to translate rankings into utilities. She studies greedy approximations to the utilitarian welfare and to the Chamberlin-Courant welfare. She tests three algorithms on real data from the PB in Portugalete in 2018; the results show that the algorithm including project costs in the ballot performs better than the other two. Knapsack budgeting. Fluschnik, Skowron, Triphaus and Wilker study maximization of utilitarian welfare, Chamberlin-Courant welfare, and Nash welfare, assuming cardinal utilities. The budgeting method most common in practice is a greedy solution to a variant of the knapsack problem: the projects are ordered in decreasing order of the number of votes they received, and are selected one-by-one until the budget is exhausted. Alternatively, if the number of projects is sufficiently small, the knapsack problem may be solved exactly, by selecting a subset of projects that maximizes the total happiness of the citizens. Since this method (called "individually-best knapsack") might be unfair to minority groups, they suggest two fairer alternatives: Unfortunately, for general utility domains, both these rules are NP-hard to compute.
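The utilitarian rule with integer costs, and the greedy-by-votes method described above, are both short to implement. A minimal Python sketch on a hypothetical instance (the project costs, approval counts and budget are made up for illustration); the exact dynamic program is pseudo-polynomial in the budget, while the fairness-oriented knapsack variants mentioned above remain hard in general.

```python
def utilitarian_budget(costs, utilities, budget):
    """Exact utilitarian PB via 0/1-knapsack dynamic programming (integer costs)."""
    n = len(costs)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        for b in range(budget + 1):
            best[j][b] = best[j - 1][b]
            if costs[j - 1] <= b:
                cand = best[j - 1][b - costs[j - 1]] + utilities[j - 1]
                best[j][b] = max(best[j][b], cand)
    chosen, b = set(), budget              # trace back the chosen projects
    for j in range(n, 0, -1):
        if best[j][b] != best[j - 1][b]:
            chosen.add(j - 1)
            b -= costs[j - 1]
    return chosen

def greedy_by_votes(costs, votes, budget):
    """The common practical method: take projects by decreasing vote count while they fit."""
    chosen, spent = set(), 0
    for j in sorted(range(len(costs)), key=lambda j: -votes[j]):
        if spent + costs[j] <= budget:
            chosen.add(j)
            spent += costs[j]
    return chosen

# Hypothetical instance: 4 projects, approval counts as utilities, budget 10.
costs, votes = [8, 5, 5, 3], [60, 40, 35, 20]
print(greedy_by_votes(costs, votes, 10))        # {0} -- greedy takes the big project first
print(utilitarian_budget(costs, votes, 10))     # {1, 2} -- higher total approval score
```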
However, diverse-knapsack is polynomially-solvable in specific preference domains, or when the number of voters is small. Proportional approval voting. Proportional approval voting is a rule for multiwinner voting, which was adapted to PB by Pierczynski, Peters and Skowron. It chooses a budget-allocation that maximizes a total "score", defined by a harmonic function of the cardinality-based satisfaction. It is not very popular, since in the context of PB it does not satisfy the strong proportionality guarantees that it satisfies in the context of multiwinner voting. Sequential purchase rules. In sequential purchase rules, each candidate project should be "bought" by the voters, using some virtual currency. Several such rules are known. Phragmen's rule. This method adapts Phragmen's rules for committee elections. Los, Christoff and Grossi describe it for approval ballots as a continuous process: Maximin support rule. This rule is an adaptation of the sequential Phragmen rule, which allows a redistribution of the loads in each round. It was first introduced as a multiwinner voting rule by Sanchez-Fernandez, Fernandez-Garcia, Fisteus and Brill. It was adapted to PB by Aziz, Lee and Talmon (though they call it 'Phragmen's rule'). They also present an efficient algorithm to compute it. Method of equal shares. This method generalizes the method of equal shares for committee elections. The generalization to PB with cardinal ballots was done by Pierczynski, Peters and Skowron. Other rules. Shapiro and Talmon present a polynomial-time algorithm for finding a budget-allocation satisfying the Condorcet criterion: the selected budget-allocation should be at least as good as any other proposed budget, according to a majority of the voters (no proposed change to it has majority support among the votes). Their algorithm uses Schwartz sets. Skowron, Slinko, Szufa and Talmon present a rule called "minimal transfers over costs" that extends the single transferable vote to participatory budgeting. Aziz and Lee present a rule called the expanding approvals rule that uses the . Pierczynski, Peters, and Skowron present a variant of the method of equal shares for weak-ordinal ballots, and show that it is an expanding approvals rule. Fairness considerations. An important consideration in budgeting is to be fair to both majority and minority groups. To illustrate the challenge, suppose that 51% of the population live in the north and 49% live in the south. Suppose there are 10 projects in the north and 10 projects in the south, each costing 1 unit, and the available budget is 10. Voting on budgets by simple majority rule will result in the 10 projects in the north being selected, with no projects in the south, which is unfair to the southerners. To partially address this issue, many municipalities perform a separate PB process in each district, to guarantee that each district receives proportional representation. But this introduces other problems. For example, projects on the boundary of two districts can be voted on only by one district, and thus may not be funded even if they are supported by many people from the other district. Additionally, projects without a specific location, which benefit the entire city, cannot be handled. Moreover, there are groups that are not geographic, such as parents or bike-riders. The notion of fairness to groups is formally captured by extending the justified representation criteria from multiwinner voting.
The idea of these criteria is that, if there is a sufficiently large group of voters who all agree on a sufficiently large group of projects, then these projects should receive a sufficiently large part of the budget. Formally, given a group "N" of voters and a set "P" of projects, we define: Based on these definitions, many fairness notions have been defined; see Rey and Maly for a taxonomy of the various fairness notions. Below, the chosen budget-allocation (the set of projects chosen to be funded) is denoted by "X". Strong extended justified representation. Strong extended justified representation (SEJR) means that, for every group "N" of voters that can afford a set "P" of projects, the utility of "every" member of "N" from "X" is at least as high as the potential utility of "N" from "P". In particular, with approval ballots and cardinal satisfaction, if "N" can afford "P" and all members in "N" approve "P", then for each member "i" in "N", at least |"P"| projects approved by "i" should be funded. This property is too strong, even in the special case of approval ballots and unit-cost projects (committee elections). For example, suppose "n"=4 and "B"=2. There are three unit-cost projects {x, y, z}. The approval ballots are: {1:x, 2:y, 3:z, 4:xyz}. The group N={1,4} can afford P={x}, and their potential utility from {x} is 1; similarly, {2,4} can afford {y}, and {3,4} can afford {z}. Therefore, SEJR requires that the utility of each of the 4 agents be at least 1. This can only be done by funding all 3 projects, but the budget is sufficient for only 2 projects. Note that this holds for any satisfaction function.Sec.5, fn.5 Fully justified representation. Fully justified representation (FJR) means that, for every group "N" of voters who can afford a set "P" of projects, the utility of "at least one" "member" of "N" from "X" is at least as high as the potential utility of "N" from "P". In particular, with approval ballots and cardinal satisfaction, if "N" can afford "P", and every member in "N" approves at least "k" elements of "P", then for at least one member "i" in "N", at least "k" projects approved by "i" should be funded. The "at least one member" clause may make the FJR property seem weak. But note that it should hold for "every" group "N" of voters who can afford some set "P" of projects, so it implies fairness guarantees for many individual voters. An FJR budget-allocation always exists.Sec.4 For example, in the example above, {a,b,c} satisfies FJR: in {1,2,3} and {3,4,5} and {5,6,7} all agents have utility at least 1, and in {7,8,9} voter #7 has utility at least 1. The existence proof is based on a rule called the "Greedy Cohesive Rule" (GCR), which repeatedly selects, among the voters not yet removed, a group that can afford a set of projects of maximum potential utility, funds that set, and removes the group. It is easy to see that GCR always selects a feasible budget-allocation: whenever it funds a set "P" of projects, it removes a set "N" of voters satisfying formula_0. The total number of voters removed is at most "n"; hence, the total cost of projects added is at most formula_2. To see that GCR satisfies FJR, consider any group "N" that can afford a set "P" with potential utility u(N,P). Let "i" be the member of "N" who was removed first. Voter "i" was removed as a member of some other voter group "M", which could afford a set "Q" with potential utility u(M,Q). When "M" was removed, "N" was available; so the algorithm order implies that u(M,Q) ≥ u(N,P). Since the entire "Q" is funded, each agent in "M" - including agent "i" - receives utility at least u(M,Q), which is at least u(N,P). So the FJR condition is satisfied for "i". Note that the proof holds even for non-additive monotone utilities. GCR runs in time exponential in "n".
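The enumerated steps of GCR are not reproduced above; the following Python sketch reconstructs the rule from the description and the proof. The affordability condition (a group "N" can afford "P" when cost("P") ≤ "B"·|"N"|/"n") and the potential utility (the smallest number of projects of "P" approved by a member of "N") are the standard choices in this literature and are assumptions of the sketch, as is the restriction to approval ballots; the brute-force search over groups and project sets reflects the exponential running time just mentioned.

```python
from itertools import combinations

def subsets(items):
    items = list(items)
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            yield set(combo)

def greedy_cohesive_rule(costs, approvals, budget):
    """Brute-force sketch of GCR for approval ballots.

    costs     : dict project -> cost
    approvals : dict voter -> set of approved projects
    Repeatedly fund the affordable project set of maximum potential utility
    among the voters not yet removed, then remove the supporting group.
    """
    n = len(approvals)
    remaining = set(approvals)
    funded = set()
    while True:
        best = None                        # (potential utility, group, project set)
        for group in subsets(remaining):
            allowance = budget * len(group) / n
            for P in subsets(costs):
                if sum(costs[p] for p in P) > allowance:
                    continue               # this group cannot afford P
                utility = min(len(approvals[i] & P) for i in group)
                if utility > 0 and (best is None or utility > best[0]):
                    best = (utility, group, P)
        if best is None:
            return funded                  # no remaining group can afford anything useful
        _, group, P = best
        funded |= P
        remaining -= group

# The SEJR example above: budget 2, unit costs, ballots {1:x, 2:y, 3:z, 4:xyz}.
costs = {'x': 1, 'y': 1, 'z': 1}
approvals = {1: {'x'}, 2: {'y'}, 3: {'z'}, 4: {'x', 'y', 'z'}}
print(greedy_cohesive_rule(costs, approvals, budget=2))
# {'x'} (up to tie-breaking): each cohesive pair containing voter 4 is then
# satisfied through voter 4, so the outcome is FJR even though SEJR is unattainable.
```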
Indeed, finding an FJR budget-allocation is NP-hard even when there is a single voter. The proof is by reduction from the knapsack problem. Given a knapsack problem, define a PB instance with a single voter in which the budget is the knapsack capacity, and for each item with weight "w" and value "v", there is a project with cost "w" and utility "v". Let "P" be the optimal solution to the knapsack instance. Since cost("P")=weight("P") is at most the budget, it is affordable by the single voter. Therefore, his utility in an FJR budget-allocation should be at least value("P"). Therefore, finding an FJR budget-allocation yields a solution to the knapsack instance. The same hardness exists even with approval ballots and cost-based satisfaction, by reduction from the subset sum problem. Extended justified representation. Extended justified representation (EJR) is a property slightly weaker than FJR. It means that the FJR condition should apply only to groups that are sufficiently "cohesive". In particular, with approval ballots, if "N" can afford "P", and every member in "N" approves "all elements of P", then for at least one member "i" in "N", the satisfaction from "i"'s approved projects in "X" should be at least as high as the satisfaction from "P". In particular: Since FJR implies EJR, an EJR budget-allocation always exists. However, similar to FJR, it is NP-hard to find an EJR allocation. The NP-hardness holds even with approval ballots, for any satisfaction function that is strictly increasing with the cost. But with cardinality-based satisfaction and approval ballots, the method of equal shares finds an EJR budget allocation. Moreover, checking whether a given budget-allocation satisfies EJR is coNP-hard even with unit costs. It is an open question whether an EJR or an FJR budget-allocation can be found in time polynomial in "n" and "B" (that is, pseudopolynomial time).5.1.1.2 Extended justified representation up to one project. EJR up-to one project (EJR-1) means that, for every group "N" of voters who can afford a set "P" of projects, there exists at least one member "i" in "N" such that one of the following holds: With cardinal ballots, EJR-1 is weaker than EJR; with approval ballots and cardinality-based satisfaction, EJR-1 is equivalent to EJR. This is because all projects' utilities are 0 or 1. Therefore, if adding a single project makes "i"'s utility strictly larger than u(N,P), then without this single project, "i"'s utility is at least u(N,P). Pierczynski, Skowron and PetersThm.2 prove that the method of equal shares, which runs in polynomial time, always finds an EJR-1 budget allocation; hence, with approval ballots and cardinality-based satisfaction, it always finds an EJR budget allocation (even for non-unit costs). EJR up-to any project (EJR-x) means that, for every group "N" of voters who can afford a set "P" of projects, there is at least one member "i" in "N" such that, for every unfunded project "y" in "P", the utility of "i" from "X"+"y" is "strictly larger" than the potential utility of "N" from "P". Clearly, EJR implies EJR-x which implies EJR-1. Brill, Forster, Lackner, Maly and Peters prove that, for approval ballots and for any satisfaction function with decreasing normalized satisfaction (DNS), if the method of equal shares is applied with that satisfaction function, the outcome is EJR-x.
However, it may not be possible to satisfy EJR-x or even EJR-1 simultaneously for different satisfaction functions: there are instances in which no budget-allocation satisfies EJR-1 simultaneously for both cost-satisfaction and cardinality-satisfaction. Proportional justified representation. Proportional justified representation (PJR) means that, for every group "N" of voters who can afford a set "P" of projects, the "group-utility of N" from the budget-allocation - defined as formula_3 - is at least the potential utility of "N" from "P". In particular, with approval ballots, if "N" can afford "P", and every member in "N" approves all elements of "P", then the satisfaction from "the set of all funded projects that are approved by at least one member of N" should be at least as high as the satisfaction from "P". In particular: Since EJR implies PJR, a PJR budget-allocation always exists. However, similar to EJR, it is NP-hard to find a PJR allocation even for a single voter (using the same reduction from knapsack). Moreover, checking whether a given budget-allocation satisfies PJR is coNP-hard even with unit costs and approval ballots. Analogously to EJR-x, one can define PJR-x, which means PJR up to any project. Brill, Forster, Lackner, Maly and Peters prove that, for approval ballots, the sequential Phragmen rule, the maximin-support rule, and the method of equal shares with cardinality-satisfaction, all guarantee PJR-x "simultaneously for every DNS satisfaction function". Local JR conditions. Aziz, Lee and Talmon present "local" variants of the above JR criteria that can be satisfied in polynomial time. For each of these criteria, they also present a weaker variant where, instead of the external budget-limit "B", the denominator is "W", which is the actual amount used for funding. Since usually "W" < "B", the "W"-variants are easier to satisfy than their "B"-variants. Ordinal JR conditions. Aziz and Lee extend the justified-representation notions to weak-ordinal ballots, which contain approval ballots as a special case. They extend the notion of a cohesive group to a "solid coalition", and define two incomparable proportionality notions: Comparative Proportionality for Solid Coalitions (CPSC) and Inclusion Proportionality for Solid Coalitions (IPSC). CPSC may not always exist, but IPSC always exists and can be found in polynomial time. Equal shares satisfies PSC – a weaker notion than both IPSC and CPSC. Core fairness. One way to assess both fairness and stability of budget-allocations is to check whether any given group of voters could attain a higher utility by taking their share of the budget and dividing it in a different way. This is captured by the notion of "core" from cooperative game theory. Formally, a budget-allocation "X" is "in the weak core" if there is no group "N" of voters, and an alternative budget-allocation "Z" of formula_4, such that "all members" of N strictly prefer "Z" to "X". Core fairness is stronger than FJR, which is stronger than EJR. To see the relation between these conditions, note that, for the weak core, it is sufficient that, for each group "N" of voters, the utility of "at least one" "member" of "N" from "X" is at least as high as the potential utility of "N" from "P"; it is not required that "N" should be cohesive. For the setting of "divisible PB" and "cardinal ballots", there are efficient algorithms for calculating a core budget-allocation for some natural classes of utility functions.
However, for "indivisible PB", the weak core might be empty even with unit costs. For example: suppose there are 6 voters and 6 unit-cost projects, and the budget is 3. The utilities satisfy the following inequalities: All other utilities are 0. Any feasible budget-allocation contains either at most one project {a,b,c} or at most one project {d,e,f}. W.l.o.g. suppose the former, and suppose that "b" and "c" are not funded. Then voters 2 and 3 can take their proportional share of the budget (which is 1) and fund project c, which will give both of them a higher utility. Note that the above example requires only 3 utility values (e.g. 2, 1, 0). With only 2 utility values (i.e., approval ballots), it is an open question whether a weak-core allocation always exists, with or without unit costs; both with cardinality-satisfaction and cost-satisfaction. Some approximations to the core can be attained: equal shares attains a multiplicative approximation of formula_5. Munagala, Shen, Wang and Wang prove that, for arbitrary monotone utilities, a 67-approximate core allocation exists can be computed in polynomial time. For additive utilities, a 9.27-approximate core allocation exists, but it is not known if it can be computed in polynomial time. Jiang, Munagala and Wang consider a different notion of approximation called "entitlement-approximation"; they prove that a 32-approximate core by this notion always exists. Priceability. Priceability means that it is possible to assign a fixed budget to each voter, and split each voter's budget among candidates he approves, such that each elected candidate is 'bought' by the candidates who approve him, and no unelected candidate can be bought by the remaining money of the voters who approve him. MES can be viewed as an implementation of Lindahl equilibrium in the discrete model, with the assumption that the customers sharing an item must pay the same price for the item. The definition is the same for cardinal ballots as for approval ballots. A priceable allocation is computed by the rules of equal shares (for cardinal ballots), Sequential Phragmen (for approval ballots), and maximin support (for approval ballots). With approval ballots, priceability implies PJR-x for cost-based satisfaction. Moreover, a slightly stronger priceability notion implies PJR-x simultaneously for all DNS satisfaction functions. This stronger notion is satisfied by equal shares with cardinality satisfaction, sequential Phragmen, and maximin support. Laminar fairness. Laminar fairness is a condition on instances of a specific structure, called "laminar instances". A special case of a laminar instance is an instance in which the population is partitioned into two or more disjoint groups, such that each group supports a disjoint set of projects. Equal shares and sequential Phragmen are laminar-proportional with unit costs, but not with general costs. Fair share. Maly, Rey, Endriss and Lackner defined a new fairness notion for PB with approval ballots, that depends only on equality of resources, and not on a particular satisfaction function. The idea was first presented by Ronald Dworkin. They explain the rationale behind this new notion as follows: "we do not aim for a fair distribution of "satisfaction", but instead we strive to invest the same "effort" into satisfying each voter. The advantage is that the amount of resources spent is a quantity we can measure objectively." They define the share of an agent "i" from the set "P" of funded projects as: formula_6. 
Intuitively, this quantity represents the amount of resources that society put into satisfying "i". For each funded project "x", the cost of "x" contributes equally to all agents who approve "x". As an example, suppose the budget is 8, there are three projects x,y,z with costs 6,2,2, four agents with approval ballots xy, xy, y, z. If, for instance, projects x and y are funded (using the whole budget), then the share of each of agents 1 and 2 is 6/2 + 2/3 ≈ 3.67, the share of agent 3 is 2/3, and the share of agent 4 is 0. A budget-allocation satisfies fair share (FS) if the share of each agent is at least min("B"/"n", share("Ai","i")). Obviously, a fair-share allocation may not exist, for example, when there are two agents each of whom wants a different project, but the budget suffices for only one project. Moreover, even a fair-share up-to one project (FS-1) allocation might not exist. For example, suppose "B"=5, there are 3 projects x, y, z of cost 3 each, and the approval ballots are xy, yz, zx. The fair share is 5/3. But in any feasible allocation, at most one project is funded, so there is an agent with no approved funded project. For this agent, even adding one project would increase his share to 3/2=1.5, which is less than 5/3. Checking whether a FS or an FS-1 allocation exists is NP-hard. On practical instances from pabulib, it is possible to give each agent between 45% and 75% of their fair share; MES rules give a larger fraction than sequential Phragmen. A weaker relaxation, called local fair-share (Local-FS), requires that, for every unfunded project "y", there exists at least one agent "i" who approves "y" and has share("X"+"y", i) &gt; "B"/"n". Local-FS can be satisfied by a variant of the method of equal shares in which the contribution of each agent to funding a project "x" is proportional to share({"x"},"i"), rather than to "ui"("x"). Another relaxation is the Extended Justified Share (EJS): it means that, for any group of agents "N" who can afford a set of projects "P", such that every member in "N" approves "all elements of P", there is at least one member "i" in "N" for whom share("X","i") ≥ share("P","i"). It looks similar to EJR, but they are independent: there are instances in which some allocations are EJS and not EJR, while other allocations are EJR and not EJS. An EJS allocation always exists and can be found by the exponential-time Greedy Cohesive Rule, in time formula_7; finding an EJS allocation is NP-hard. But the above variant of MES satisfies EJS up-to one project (EJS-1). It is open whether EJS up-to any project (EJS-x) can be satisfied in polynomial time. District fairness. District fairness is a fairness notion that focuses on the pre-specified districts in a city. Each district "i" deserves a budget "Bi" (a part of the entire city budget), which is usually proportional to the population size in the district. In many cities, there is a separate PB process in each district. It may be more efficient to do a single city-wide PB process, but it is important to do so in a way that does not harm the districts. Thus, a city-wide budget-allocation is district fair if it gives each district "i" at least the welfare it could get by an optimal allocation of "Bi". Hershkowitz, Kahng, Peters and Procaccia study the problem of welfare maximization subject to district fairness. They show that finding an optimal deterministic allocation is NP-hard, but finding an optimal randomized allocation that is district-fair in expectation can be done efficiently. Moreover, if it is allowed to overspend (by up to 65%), it is possible to find an allocation that maximizes social welfare and guarantees district-fairness up-to one project. Monotonicity properties. 
It is natural to expect that, when some parameters of the PB instance change, the outcome of a PB rule would change in a predictable way. In particular: Monotonicity properties have been studied for welfare-maximization rules and for their greedy variants. Strategic properties. A PB rule is called strategyproof if no voter can increase his utility by reporting false preferences. With unit costs, the rule that maximizes the utilitarian welfare (choosing the "B" projects with the largest number of approvals) is strategyproof. This is not necessarily true with general costs. With approval ballots and cost-satisfaction, the greedy algorithm, that selects projects by the number of approvals, is strategyproof up to one project. The result does not hold for cardinality-satisfaction. The utilitarian rule is not proportional even with unit costs and approval ballots. Indeed, even in committee voting, there is a fundamental tradeoff between strategyproofness and proportionality; see multiwinner approval voting#strategy. Constraints on the allocation. Often, there are constraints that forbid some subsets of projects from being the outcome of PB. For example: Rey, Endriss and de Haan develop a general framework to handle any constraints that can be described by propositional logic, by encoding PB instances as judgement aggregation. Their framework allows dependency constraints as well as category constraints, with possibly overlapping categories. Fain, Munagala and Shah study a generalization of PB: allocating indivisible public goods, with possible constraints on the allocation. They consider matroid constraints, matching constraints, and packing constraints (which correspond to budget constraints). Jain, Sornat, Talmon and Zehavi assume that projects are partitioned into disjoint categories, and there is a budget limit on each category, in addition to the general budget limit. They study the computational complexity of maximizing the social welfare subject to these constraints. In general the problem is hard, but efficient algorithms are given for settings with few categories. Patel, Khan and Louis also assume that projects are partitioned into disjoint categories, with both upper and lower quotas on each category. They present approximation algorithms using dynamic programming. Chen, Lackner and Maly assume that projects belong to possibly-overlapping categories, with upper and lower quotas on each category. Motamed, Soeteman, Rey and Endriss show how to handle categorical constraints by reduction to PB with multiple resources. Extensions. Recently, several extensions of the basic PB model have been studied. The shortlisting stage. Rey, Endriss and de Haan consider an important stage that occurs, in real-life PB implementations, before the voting stage: choosing a short list of projects that will be presented to the voters. They model this "shortlisting stage" as a multiwinner voting process in which there is no limit on the total size or cost of the outcome. They analyze several rules that can be used in this stage, to guarantee diversity of the selected projects. They also analyze possible strategic manipulations in the shortlisting stage. Repeated PB. Lackner, Maly and Rey note that, in reality, PB is not a one-time process, but rather a repeating process, that occurs annually. They extend some fairness notions from perpetual voting to PB. In particular, they assume that voters are partitioned into types, and try to achieve fairness to types over time. Non-additive utilities. 
Jain, Sornat and Talmon assume that the projects may be substitute goods or complementary goods, and therefore the utility an agent receives from a set of projects is not necessarily the sum of utilities of each project. They analyze the computational complexity of welfare maximization in this extended setting. In this work, the interaction structure between the projects is fixed and identical for all voters; Jain, Talmon and Bulteau extend the model further by allowing voters to specify individual interaction structures. Non-fixed costs. Lu and Boutilier consider a model of budgeted social choice, which is very similar to PB. In their setting, the cost of each project is the sum of a fixed cost, and a variable cost that increases with the number of agents "assigned" to the project. Motamed, Soeteman, Rey and Endriss consider multi-dimensional costs, e.g. costs in terms of money, time, and other resources. They extend some fairness properties and strategic properties to this setting, and consider the computational complexity of welfare maximization. Uncertain costs. Baumeister, Boes and Laussmann assume that costs are uncertain: the cost of each project has a probability distribution, and its actual cost is revealed only when the project is completed. To reduce risk, it is possible to implement projects one after the other, so that if the first project costs too much, some later projects can be removed. But this might cause some projects to be implemented very late. They show that it is impossible to both maintain a low risk of over-spending, and guarantee that all projects complete in their due time. They adapt the fairness criteria, as well as the method of equal shares, to this setting. Different degrees of funding. Some projects can be funded in different degrees. For example, a new building for youth activities could have 1, 2 or 3 floors; it can be small or large; it can be built from wood or stone; etc. This can be seen as a middle-ground between indivisible PB (which allows only two levels) and divisible PB (which allows continuously many levels). Formally, each project "j" can be implemented in any degree between 0 and "tj", where 0 means "not implemented at all" and tj is the maximum possible implementation. Each degree of implementation has a cost. The ballots are "ranged-approval ballots", that is: each voter gives, for each project, a minimum and a maximum amount of money that should be put into this project. Sreedurga considers utilitarian welfare maximization in this setting, under four satisfaction functions. For cardinality-satisfaction, maximizing the utilitarian welfare can be done in polynomial time by dynamic programming. For the other satisfaction functions, welfare maximization is NP-hard, but can be computed in pseudo-polynomial time or approximated by an FPTAS, and is also fixed-parameter tractable for some natural parameters. Additionally, Sreedurga defines several monotonicity and consistency axioms for this setting. He shows that each welfare-maximization rule satisfies some of these axioms, but no rule satisfies all of them. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "|N|\\cdot B / n \\geq \\text{cost}(P)" }, { "math_id": 1, "text": "\\sum_{x\\in P} \\min_{i\\in N} u_i(x)" }, { "math_id": 2, "text": "n\\cdot B / n = B" }, { "math_id": 3, "text": "\\sum_{x \\text{ is funded }} \\max_{i\\in N} u_i(x)" }, { "math_id": 4, "text": "|N|\\cdot B / n " }, { "math_id": 5, "text": "4 \\log (2 \\frac{u_{\\max}}{u_{\\min}})" }, { "math_id": 6, "text": "\\text{share}(P,i) := \\sum_{x \\in P\\cap A_i} \\frac{\\text{cost}(x)}{|\\{ j: x\\in A_j \\}|}" }, { "math_id": 7, "text": "O(n\\cdot 2^m)" } ]
https://en.wikipedia.org/wiki?curid=60419824
6042
Compact space
Type of mathematical space In mathematics, specifically general topology, compactness is a property that seeks to generalize the notion of a closed and bounded subset of Euclidean space. The idea is that a compact space has no "punctures" or "missing endpoints", i.e., it includes all "limiting values" of points. For example, the open interval (0,1) would not be compact because it excludes the limiting values of 0 and 1, whereas the closed interval [0,1] would be compact. Similarly, the space of rational numbers formula_0 is not compact, because it has infinitely many "punctures" corresponding to the irrational numbers, and the space of real numbers formula_1 is not compact either, because it excludes the two limiting values formula_2 and formula_3. However, the "extended" real number line "would" be compact, since it contains both infinities. There are many ways to make this heuristic notion precise. These ways usually agree in a metric space, but may not be equivalent in other topological spaces. One such generalization is that a topological space is "sequentially" compact if every infinite sequence of points sampled from the space has an infinite subsequence that converges to some point of the space. The Bolzano–Weierstrass theorem states that a subset of Euclidean space is compact in this sequential sense if and only if it is closed and bounded. Thus, if one chooses an infinite number of points in the closed unit interval [0, 1], some of those points will get arbitrarily close to some real number in that space. For instance, some of the numbers in the sequence 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, ... accumulate to 0 (while others accumulate to 1). Since neither 0 nor 1 are members of the open unit interval (0, 1), those same sets of points would not accumulate to any point of it, so the open unit interval is not compact. Although subsets (subspaces) of Euclidean space can be compact, the entire space itself is not compact, since it is not bounded. For example, considering formula_4 (the real number line), the sequence of points 0, 1, 2, 3, ... has no subsequence that converges to any real number. Compactness was formally introduced by Maurice Fréchet in 1906 to generalize the Bolzano–Weierstrass theorem from spaces of geometrical points to spaces of functions. The Arzelà–Ascoli theorem and the Peano existence theorem exemplify applications of this notion of compactness to classical analysis. Following its initial introduction, various equivalent notions of compactness, including sequential compactness and limit point compactness, were developed in general metric spaces. In general topological spaces, however, these notions of compactness are not necessarily equivalent. The most useful notion — and the standard definition of the unqualified term "compactness" — is phrased in terms of the existence of finite families of open sets that "cover" the space in the sense that each point of the space lies in some set contained in the family. This more subtle notion, introduced by Pavel Alexandrov and Pavel Urysohn in 1929, exhibits compact spaces as generalizations of finite sets. In spaces that are compact in this sense, it is often possible to patch together information that holds locally – that is, in a neighborhood of each point – into corresponding statements that hold throughout the space, and many theorems are of this character. The term compact set is sometimes used as a synonym for compact space, but also often refers to a compact subspace of a topological space. Historical development. 
In the 19th century, several disparate mathematical properties were understood that would later be seen as consequences of compactness. On the one hand, Bernard Bolzano (1817) had been aware that any bounded sequence of points (in the line or plane, for instance) has a subsequence that must eventually get arbitrarily close to some other point, called a limit point. Bolzano's proof relied on the method of bisection: the sequence was placed into an interval that was then divided into two equal parts, and a part containing infinitely many terms of the sequence was selected. The process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts – until it closes down on the desired limit point. The full significance of Bolzano's theorem, and its method of proof, would not emerge until almost 50 years later when it was rediscovered by Karl Weierstrass. In the 1880s, it became clear that results similar to the Bolzano–Weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points. The idea of regarding functions as themselves points of a generalized space dates back to the investigations of Giulio Ascoli and Cesare Arzelà. The culmination of their investigations, the Arzelà–Ascoli theorem, was a generalization of the Bolzano–Weierstrass theorem to families of continuous functions, the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions. The uniform limit of this sequence then played precisely the same role as Bolzano's "limit point". Towards the beginning of the twentieth century, results similar to that of Arzelà and Ascoli began to accumulate in the area of integral equations, as investigated by David Hilbert and Erhard Schmidt. For a certain class of Green's functions coming from solutions of integral equations, Schmidt had shown that a property analogous to the Arzelà–Ascoli theorem held in the sense of mean convergence – or convergence in what would later be dubbed a Hilbert space. This ultimately led to the notion of a compact operator as an offshoot of the general notion of a compact space. It was Maurice Fréchet who, in 1906, had distilled the essence of the Bolzano–Weierstrass property and coined the term "compactness" to refer to this general phenomenon (he used the term already in his 1904 paper which led to the famous 1906 thesis). However, a different notion of compactness altogether had also slowly emerged at the end of the 19th century from the study of the continuum, which was seen as fundamental for the rigorous formulation of analysis. In 1870, Eduard Heine showed that a continuous function defined on a closed and bounded interval was in fact uniformly continuous. In the course of the proof, he made use of a lemma that from any countable cover of the interval by smaller open intervals, it was possible to select a finite number of these that also covered it. The significance of this lemma was recognized by Émile Borel (1895), and it was generalized to arbitrary collections of intervals by Pierre Cousin (1895) and Henri Lebesgue (1904). The Heine–Borel theorem, as the result is now known, is another special property possessed by closed and bounded sets of real numbers. This property was significant because it allowed for the passage from local information about a set (such as the continuity of a function) to global information about the set (such as the uniform continuity of a function). 
This sentiment was expressed by Henri Lebesgue, who also exploited it in the development of the integral now bearing his name. Ultimately, the Russian school of point-set topology, under the direction of Pavel Alexandrov and Pavel Urysohn, formulated Heine–Borel compactness in a way that could be applied to the modern notion of a topological space. Alexandrov and Urysohn showed that the earlier version of compactness due to Fréchet, now called (relative) sequential compactness, under appropriate conditions followed from the version of compactness that was formulated in terms of the existence of finite subcovers. It was this notion of compactness that became the dominant one, because it was not only a stronger property, but it could be formulated in a more general setting with a minimum of additional technical machinery, as it relied only on the structure of the open sets in a space. Basic examples. Any finite space is compact; a finite subcover can be obtained by selecting, for each point, an open set containing it. A nontrivial example of a compact space is the (closed) unit interval [0,1] of real numbers. If one chooses an infinite number of distinct points in the unit interval, then there must be some accumulation point among these points in that interval. For instance, the odd-numbered terms of the sequence 1, 1/2, 1/3, 3/4, 1/5, 5/6, 1/7, 7/8, ... get arbitrarily close to 0, while the even-numbered ones get arbitrarily close to 1. The given example sequence shows the importance of including the boundary points of the interval, since the limit points must be in the space itself — an open (or half-open) interval of the real numbers is not compact. It is also crucial that the interval be bounded, since in the interval [0,∞), one could choose the sequence of points 0, 1, 2, 3, ..., of which no sub-sequence ultimately gets arbitrarily close to any given real number. In two dimensions, closed disks are compact since for any infinite number of points sampled from a disk, some subset of those points must get arbitrarily close either to a point within the disc, or to a point on the boundary. However, an open disk is not compact, because a sequence of points can tend to the boundary – without getting arbitrarily close to any point in the interior. Likewise, spheres are compact, but a sphere missing a point is not since a sequence of points can still tend to the missing point, thereby not getting arbitrarily close to any point "within" the space. Lines and planes are not compact, since one can take a set of equally-spaced points in any given direction without approaching any point. Definitions. Various definitions of compactness may apply, depending on the level of generality. A subset of Euclidean space in particular is called compact if it is closed and bounded. This implies, by the Bolzano–Weierstrass theorem, that any infinite sequence from the set has a subsequence that converges to a point in the set. Various equivalent notions of compactness, such as sequential compactness and limit point compactness, can be developed in general metric spaces. In contrast, the different notions of compactness are not equivalent in general topological spaces, and the most useful notion of compactness – originally called "bicompactness" – is defined using covers consisting of open sets (see "Open cover definition" below). That this form of compactness holds for closed and bounded subsets of Euclidean space is known as the Heine–Borel theorem. 
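The bisection argument sketched in the historical section above can be mimicked numerically. The following Python sketch is only an illustration (the sequence and the number of bisection steps are arbitrary choices): at each step it keeps the half-interval containing more of the sampled terms, and so closes down on a limit point of the sequence, here 0.

# A bounded sequence in [0, 1] with limit points at 0 and 1:
# terms 1, 1/3, 1/5, ... tend to 0; terms 1/2, 3/4, 5/6, ... tend to 1.
N = 100000
seq = [1 / (k + 1) if k % 2 == 0 else 1 - 1 / (k + 1) for k in range(N)]

lo, hi = 0.0, 1.0
for _ in range(12):                    # 12 bisections: interval width 1/4096
    mid = (lo + hi) / 2
    in_left = sum(lo <= x <= mid for x in seq)
    in_right = sum(mid < x <= hi for x in seq)
    if in_left >= in_right:            # keep a half that contains many terms
        hi = mid
    else:
        lo = mid
print(lo, hi)                          # a small interval near the limit point 0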
Compactness, when defined in this manner, often allows one to take information that is known locally – in a neighbourhood of each point of the space – and to extend it to information that holds globally throughout the space. An example of this phenomenon is Dirichlet's theorem, to which it was originally applied by Heine, that a continuous function on a compact interval is uniformly continuous; here, continuity is a local property of the function, and uniform continuity the corresponding global property. Open cover definition. Formally, a topological space X is called "compact" if every open cover of X has a finite subcover. That is, X is compact if for every collection C of open subsets of X such that formula_5 there is a finite subcollection F ⊆ C such that formula_6 Some branches of mathematics such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term "quasi-compact" for the general notion, and reserve the term "compact" for topological spaces that are both Hausdorff and "quasi-compact". A compact set is sometimes referred to as a "compactum", plural "compacta". Compactness of subsets. A subset K of a topological space X is said to be compact if it is compact as a subspace (in the subspace topology). That is, K is compact if for every arbitrary collection C of open subsets of X such that formula_7 there is a finite subcollection F ⊆ C such that formula_8 Compactness is a topological property. That is, if formula_9, with subset Z equipped with the subspace topology, then K is compact in Z if and only if K is compact in Y. Characterization. If X is a topological space then the following are equivalent: Bourbaki defines a compact space (quasi-compact space) as a topological space where each filter has a cluster point (i.e., 8. in the above). Euclidean space. For any subset A of Euclidean space, A is compact if and only if it is closed and bounded; this is the Heine–Borel theorem. As a Euclidean space is a metric space, the conditions in the next subsection also apply to all of its subsets. Of all of the equivalent conditions, it is in practice easiest to verify that a subset is closed and bounded, for example, for a closed interval or closed n-ball. Metric spaces. For any metric space ("X", "d"), the following are equivalent (assuming countable choice): A compact metric space ("X", "d") also satisfies the following properties: Ordered spaces. For an ordered space ("X", &lt;) (i.e. a totally ordered set equipped with the order topology), the following are equivalent: An ordered space satisfying (any one of) these conditions is called a complete lattice. In addition, the following are equivalent for all ordered spaces ("X", &lt;), and (assuming countable choice) are true whenever ("X", &lt;) is compact. (The converse in general fails if ("X", &lt;) is not also metrizable.): Characterization by continuous functions. Let X be a topological space and C("X") the ring of real continuous functions on X. For each "p" ∈ "X", the evaluation map formula_11 given by ev"p"("f") = "f"("p") is a ring homomorphism. The kernel of ev"p" is a maximal ideal, since the residue field is the field of real numbers, by the first isomorphism theorem. A topological space X is pseudocompact if and only if every maximal ideal in C("X") has residue field the real numbers. For completely regular spaces, this is equivalent to every maximal ideal being the kernel of an evaluation homomorphism. There are pseudocompact spaces that are not compact, though. 
In general, for non-pseudocompact spaces there are always maximal ideals m in C("X") such that the residue field C("X")/"m" is a (non-Archimedean) hyperreal field. The framework of non-standard analysis allows for the following alternative characterization of compactness: a topological space X is compact if and only if every point x of the natural extension "*X" is infinitely close to a point "x"0 of X (more precisely, x is contained in the monad of "x"0). Hyperreal definition. A space X is compact if its hyperreal extension "*X" (constructed, for example, by the ultrapower construction) has the property that every point of "*X" is infinitely close to some point of "X" ⊂ "*X". For example, an open real interval is not compact because its hyperreal extension *(0,1) contains infinitesimals, which are infinitely close to 0, which is not a point of X. Properties of compact spaces. Let "X" be the set of non-negative integers. We endow "X" with the particular point topology by defining a subset "U" ⊆ "X" to be open if and only if 0 ∈ "U". Then "S" := {0} is compact, the closure of "S" is all of "X", but "X" is not compact since the collection of open subsets { {0, "x"} : "x" ∈ "X" } does not have a finite subcover. Functions and compact spaces. Since a continuous image of a compact space is compact, the extreme value theorem holds for such spaces: a continuous real-valued function on a nonempty compact space is bounded above and attains its supremum. (Slightly more generally, this is true for an upper semicontinuous function.) As a sort of converse to the above statements, the pre-image of a compact space under a proper map is compact. Compactifications. Every topological space "X" is an open dense subspace of a compact space having at most one point more than "X", by the Alexandroff one-point compactification. By the same construction, every locally compact Hausdorff space "X" is an open dense subspace of a compact Hausdorff space having at most one point more than "X". Ordered compact spaces. A nonempty compact subset of the real numbers has a greatest element and a least element. Let "X" be a simply ordered set endowed with the order topology. Then "X" is compact if and only if "X" is a complete lattice (i.e. all subsets have suprema and infima).
[ { "math_id": 0, "text": "\\mathbb{Q}" }, { "math_id": 1, "text": "\\mathbb{R}" }, { "math_id": 2, "text": "+\\infty" }, { "math_id": 3, "text": "-\\infty" }, { "math_id": 4, "text": "\\mathbb{R}^1" }, { "math_id": 5, "text": "X = \\bigcup_{S \\in C}S\\ ," }, { "math_id": 6, "text": "X = \\bigcup_{S \\in F} S\\ ." }, { "math_id": 7, "text": "K \\subseteq \\bigcup_{S \\in C} S\\ ," }, { "math_id": 8, "text": "K \\subseteq \\bigcup_{S \\in F} S\\ ." }, { "math_id": 9, "text": "K \\subset Z \\subset Y" }, { "math_id": 10, "text": "X \\times Y \\to Y" }, { "math_id": 11, "text": "\\operatorname{ev}_p\\colon C(X)\\to \\mathbb{R}" }, { "math_id": 12, "text": "\\mathbb{R}^2" }, { "math_id": 13, "text": "\\left( \\frac{1}{n}, 1 - \\frac{1}{n} \\right)" }, { "math_id": 14, "text": "\\left[0, \\frac{1}{\\pi} - \\frac{1}{n}\\right]\\text{ and }\\left[\\frac{1}{\\pi} + \\frac{1}{n}, 1\\right]" }, { "math_id": 15, "text": "\\{f_n\\}" }, { "math_id": 16, "text": "\\{f_n(x)\\}" }, { "math_id": 17, "text": "d(f, g) = \\sup_{x \\in [0, 1]} |f(x) - g(x)|." }, { "math_id": 18, "text": "\\mathbb{C}" }, { "math_id": 19, "text": "\\ell^2" } ]
https://en.wikipedia.org/wiki?curid=6042
60423671
Normally flat ring
In algebraic geometry, a normally flat ring along a proper ideal "I" is a local ring "A" such that formula_0 is flat over formula_1 for each integer formula_2. The notion was introduced by Hironaka in his proof of the resolution of singularities as a refinement of equimultiplicity and was later generalized by Alexander Grothendieck and others. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "I^n/I^{n+1}" }, { "math_id": 1, "text": "A/I" }, { "math_id": 2, "text": "n \\ge 0" } ]
https://en.wikipedia.org/wiki?curid=60423671
604268
Multi-index notation
Mathematical notation Multi-index notation is a mathematical notation that simplifies formulas used in multivariable calculus, partial differential equations and the theory of distributions, by generalising the concept of an integer index to an ordered tuple of indices. Definition and basic properties. An "n"-dimensional multi-index is an formula_0-tuple formula_1 of non-negative integers (i.e. an element of the "formula_0"-dimensional set of natural numbers, denoted formula_2). For multi-indices formula_3 and formula_4, one defines: formula_5 formula_6 formula_7 formula_8 formula_9 formula_10 where formula_11. formula_12. formula_13 where formula_14 (see also 4-gradient). Sometimes the notation formula_15 is also used. Some applications. The multi-index notation allows the extension of many formulae from elementary calculus to the corresponding multi-variable case. Below are some examples. In all the following, formula_16 (or formula_17), formula_18, and formula_19 (or formula_20). formula_21 formula_22 Note that, since "x" + "y" is a vector and "α" is a multi-index, the expression on the left is short for ("x"1 + "y"1)"α"1⋯("x""n" + "y""n")"α""n". For smooth functions formula_23 and formula_24,formula_25 For an analytic function formula_23 in "formula_0" variables one has formula_26 In fact, for a smooth enough function, we have the similar Taylor expansion formula_27 where the last term (the remainder) depends on the exact version of Taylor's formula. For instance, for the Cauchy formula (with integral remainder), one gets formula_28 A formal linear formula_29-th order partial differential operator in formula_0 variables is written as formula_30 For smooth functions with compact support in a bounded domain formula_31 one has formula_32 This formula is used for the definition of distributions and weak derivatives. An example theorem. If formula_33 are multi-indices and formula_34, then formula_35 Proof. The proof follows from the power rule for the ordinary derivative; if "α" and "β" are in formula_36, then Suppose formula_37, formula_38, and formula_34. Then we have that formula_39 For each formula_40 in formula_41, the function formula_42 only depends on formula_43. In the above, each partial differentiation formula_44 therefore reduces to the corresponding ordinary differentiation formula_45. Hence, from equation (1), it follows that formula_46 vanishes if formula_47 for at least one formula_40 in formula_41. If this is not the case, i.e., if formula_48 as multi-indices, then formula_49 for each formula_50 and the theorem follows. Q.E.D. References. &lt;templatestyles src="Reflist/styles.css" /&gt; "This article incorporates material from multi-index derivative of a power on PlanetMath, which is licensed under the ."
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\\alpha = (\\alpha_1, \\alpha_2,\\ldots,\\alpha_n)" }, { "math_id": 2, "text": "\\mathbb{N}^n_0" }, { "math_id": 3, "text": "\\alpha, \\beta \\in \\mathbb{N}^n_0" }, { "math_id": 4, "text": "x = (x_1, x_2, \\ldots, x_n) \\in \\mathbb{R}^n" }, { "math_id": 5, "text": "\\alpha \\pm \\beta= (\\alpha_1 \\pm \\beta_1,\\,\\alpha_2 \\pm \\beta_2, \\ldots, \\,\\alpha_n \\pm \\beta_n)" }, { "math_id": 6, "text": "\\alpha \\le \\beta \\quad \\Leftrightarrow \\quad \\alpha_i \\le \\beta_i \\quad \\forall\\,i\\in\\{1,\\ldots,n\\}" }, { "math_id": 7, "text": "| \\alpha | = \\alpha_1 + \\alpha_2 + \\cdots + \\alpha_n" }, { "math_id": 8, "text": "\\alpha ! = \\alpha_1! \\cdot \\alpha_2! \\cdots \\alpha_n!" }, { "math_id": 9, "text": "\\binom{\\alpha}{\\beta} = \\binom{\\alpha_1}{\\beta_1}\\binom{\\alpha_2}{\\beta_2}\\cdots\\binom{\\alpha_n}{\\beta_n} = \\frac{\\alpha!}{\\beta!(\\alpha-\\beta)!}" }, { "math_id": 10, "text": "\\binom{k}{\\alpha} = \\frac{k!}{\\alpha_1! \\alpha_2! \\cdots \\alpha_n! } = \\frac{k!}{\\alpha!} " }, { "math_id": 11, "text": "k:=|\\alpha|\\in\\mathbb{N}_0" }, { "math_id": 12, "text": "x^\\alpha = x_1^{\\alpha_1} x_2^{\\alpha_2} \\ldots x_n^{\\alpha_n}" }, { "math_id": 13, "text": "\\partial^\\alpha = \\partial_1^{\\alpha_1} \\partial_2^{\\alpha_2} \\ldots \\partial_n^{\\alpha_n}," }, { "math_id": 14, "text": "\\partial_i^{\\alpha_i}:=\\partial^{\\alpha_i} / \\partial x_i^{\\alpha_i}" }, { "math_id": 15, "text": "D^{\\alpha} = \\partial^{\\alpha}" }, { "math_id": 16, "text": "x,y,h\\in\\Complex^n" }, { "math_id": 17, "text": "\\R^n" }, { "math_id": 18, "text": "\\alpha,\\nu\\in\\N_0^n" }, { "math_id": 19, "text": "f,g,a_\\alpha\\colon\\Complex^n\\to\\Complex" }, { "math_id": 20, "text": "\\R^n\\to\\R" }, { "math_id": 21, "text": " \\left( \\sum_{i=1}^n x_i\\right)^k = \\sum_{|\\alpha|=k} \\binom{k}{\\alpha} \\, x^\\alpha" }, { "math_id": 22, "text": " (x+y)^\\alpha = \\sum_{\\nu \\le \\alpha} \\binom{\\alpha}{\\nu} \\, x^\\nu y^{\\alpha - \\nu}." }, { "math_id": 23, "text": "f" }, { "math_id": 24, "text": "g" }, { "math_id": 25, "text": "\\partial^\\alpha(fg) = \\sum_{\\nu \\le \\alpha} \\binom{\\alpha}{\\nu} \\, \\partial^{\\nu}f\\,\\partial^{\\alpha-\\nu}g." }, { "math_id": 26, "text": "f(x+h) = \\sum_{\\alpha\\in\\mathbb{N}^n_0} {\\frac{\\partial^{\\alpha}f(x)}{\\alpha !}h^\\alpha}." }, { "math_id": 27, "text": "f(x+h) = \\sum_{|\\alpha| \\le n}{\\frac{\\partial^{\\alpha}f(x)}{\\alpha !}h^\\alpha}+R_{n}(x,h)," }, { "math_id": 28, "text": "R_n(x,h)= (n+1) \\sum_{|\\alpha| =n+1}\\frac{h^\\alpha}{\\alpha !} \\int_0^1(1-t)^n\\partial^\\alpha f(x+th) \\, dt." }, { "math_id": 29, "text": "N" }, { "math_id": 30, "text": "P(\\partial) = \\sum_{|\\alpha| \\le N} {a_{\\alpha}(x)\\partial^{\\alpha}}." }, { "math_id": 31, "text": "\\Omega \\subset \\R^n" }, { "math_id": 32, "text": "\\int_{\\Omega} u(\\partial^{\\alpha}v) \\, dx = (-1)^{|\\alpha|} \\int_{\\Omega} {(\\partial^{\\alpha}u)v\\,dx}." 
}, { "math_id": 33, "text": "\\alpha,\\beta\\in\\mathbb{N}^n_0" }, { "math_id": 34, "text": "x=(x_1,\\ldots, x_n)" }, { "math_id": 35, "text": " \\partial^\\alpha x^\\beta = \\begin{cases} \n\\frac{\\beta!}{(\\beta-\\alpha)!} x^{\\beta-\\alpha} & \\text{if}~ \\alpha\\le\\beta,\\\\\n0 & \\text{otherwise.}\n\\end{cases}" }, { "math_id": 36, "text": "\\{0, 1, 2,\\ldots\\}" }, { "math_id": 37, "text": "\\alpha=(\\alpha_1,\\ldots, \\alpha_n)" }, { "math_id": 38, "text": "\\beta=(\\beta_1,\\ldots, \\beta_n)" }, { "math_id": 39, "text": "\\begin{align}\\partial^\\alpha x^\\beta&= \\frac{\\partial^{\\vert\\alpha\\vert}}{\\partial x_1^{\\alpha_1} \\cdots \\partial x_n^{\\alpha_n}} x_1^{\\beta_1} \\cdots x_n^{\\beta_n}\\\\\n&= \\frac{\\partial^{\\alpha_1}}{\\partial x_1^{\\alpha_1}} x_1^{\\beta_1} \\cdots\n\\frac{\\partial^{\\alpha_n}}{\\partial x_n^{\\alpha_n}} x_n^{\\beta_n}.\\end{align}" }, { "math_id": 40, "text": "i" }, { "math_id": 41, "text": "\\{ 1, \\ldots , n\\}" }, { "math_id": 42, "text": "x_i^{\\beta_i}" }, { "math_id": 43, "text": "x_i" }, { "math_id": 44, "text": "\\partial/\\partial x_i" }, { "math_id": 45, "text": "d/dx_i" }, { "math_id": 46, "text": "\\partial^\\alpha x^\\beta" }, { "math_id": 47, "text": "\\alpha_i > \\beta_i" }, { "math_id": 48, "text": "\\alpha \\leq \\beta" }, { "math_id": 49, "text": " \\frac{d^{\\alpha_i}}{dx_i^{\\alpha_i}} x_i^{\\beta_i} = \\frac{\\beta_i!}{(\\beta_i-\\alpha_i)!} x_i^{\\beta_i-\\alpha_i}" }, { "math_id": 50, "text": "i" } ]
https://en.wikipedia.org/wiki?curid=604268
60427579
Carnot's theorem (conics)
Relation between conic sections and triangles Carnot's theorem (named after Lazare Carnot) describes a relation between conic sections and triangles. In a triangle formula_0 with points formula_1 on the side formula_2, formula_3 on the side formula_4 and formula_5 on the side formula_6 those six points are located on a common conic section if and only if the following equation holds: formula_7.
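A numerical check of this relation is straightforward for a conic whose intersections with the side lines are easy to compute. The following Python sketch is only an illustration (the circle, the triangle and the labelling of the intersection points are arbitrary choices): it intersects a circle with the three side lines of a triangle and evaluates the product above.

import numpy as np

A, B, C = np.array([3.0, 0.0]), np.array([0.0, 3.0]), np.array([-3.0, -3.0])
r = 2.5                      # circle of radius r centred at the origin (the chosen conic)

def circle_line_points(P, Q):
    # Intersections of the circle with the line through P and Q, as points P + t*(Q - P).
    d = Q - P
    a, b, c = d @ d, 2 * (P @ d), P @ P - r**2
    t1, t2 = np.roots([a, b, c])
    return P + t1 * d, P + t2 * d

CA, CB = circle_line_points(A, B)     # the two points on line AB
AB_, AC_ = circle_line_points(B, C)   # the two points on line BC
BC_, BA_ = circle_line_points(C, A)   # the two points on line CA

dist = lambda U, V: np.linalg.norm(U - V)
product = (dist(A, CA) / dist(B, CA) * dist(A, CB) / dist(B, CB)
           * dist(B, AB_) / dist(C, AB_) * dist(B, AC_) / dist(C, AC_)
           * dist(C, BC_) / dist(A, BC_) * dist(C, BA_) / dist(A, BA_))
print(product)               # approximately 1, as the theorem predicts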
[ { "math_id": 0, "text": "ABC" }, { "math_id": 1, "text": "C_A, C_B" }, { "math_id": 2, "text": "AB" }, { "math_id": 3, "text": "A_B, A_C" }, { "math_id": 4, "text": "BC" }, { "math_id": 5, "text": "B_C, B_A" }, { "math_id": 6, "text": "AC" }, { "math_id": 7, "text": "\n\\frac{|AC_A|}{|BC_A|}\\cdot \\frac{|AC_B|}{|BC_B|}\\cdot \\frac{|BA_B|}{|CA_B|}\\cdot \\frac{|BA_C|}{|CA_C|} \\cdot \\frac{|CB_C|}{|AB_C|}\\cdot \\frac{|CB_A|}{|AB_A|}=1 \n" } ]
https://en.wikipedia.org/wiki?curid=60427579
60427832
Carnot's theorem (perpendiculars)
Condition for 3 lines with common point to be perpendicular to the sides of triangle Carnot's theorem (named after Lazare Carnot) describes a necessary and sufficient condition for three lines that are perpendicular to the (extended) sides of a triangle to have a common point of intersection. The theorem can also be thought of as a generalization of the Pythagorean theorem. Theorem. For a triangle formula_0 with sides formula_1 consider three lines that are perpendicular to the triangle sides and intersect in a common point formula_2. If formula_3 are the pedal points of those three perpendiculars on the sides formula_1, then the following equation holds: formula_4 The converse of the statement above is true as well, that is, if the equation holds for the pedal points of three perpendiculars on the three triangle sides, then they intersect in a common point. Therefore, the equation provides a necessary and sufficient condition. Special cases. If the triangle formula_0 has a right angle in formula_5, then we can construct three perpendiculars on the sides that intersect in formula_6: the side formula_7, the line perpendicular to formula_7 and passing through formula_8, and the line perpendicular to formula_9 and passing through formula_8. Then we have formula_10, formula_11 and formula_12 and thus formula_13, formula_14, formula_15, formula_16, formula_17 and formula_18. The equation of Carnot's Theorem then yields the Pythagorean theorem formula_19. Another corollary is the property of perpendicular bisectors of a triangle to intersect in a common point. In the case of perpendicular bisectors you have formula_20, formula_21 and formula_22 and therefore the equation above holds, which means all three perpendicular bisectors intersect in the same point.
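The condition can also be checked numerically. The following Python sketch is only an illustration (the triangle and the point of concurrence are random choices): it drops perpendiculars from a common point onto the three side lines and evaluates both sides of the equation.

import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.random(2), rng.random(2), rng.random(2)   # an arbitrary triangle
F = rng.random(2)                                        # common point of the three perpendiculars

def foot(P, Q, X):
    # Foot of the perpendicular from X onto the line through P and Q.
    d = Q - P
    return P + ((X - P) @ d) / (d @ d) * d

Pa = foot(B, C, F)    # pedal point on side a = BC
Pb = foot(C, A, F)    # pedal point on side b = CA
Pc = foot(A, B, F)    # pedal point on side c = AB

sq = lambda U, V: float((U - V) @ (U - V))
lhs = sq(A, Pc) + sq(B, Pa) + sq(C, Pb)
rhs = sq(B, Pc) + sq(C, Pa) + sq(A, Pb)
print(abs(lhs - rhs) < 1e-12)   # True: the perpendiculars at Pa, Pb, Pc all pass through F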
[ { "math_id": 0, "text": "\\triangle ABC" }, { "math_id": 1, "text": "a, b, c" }, { "math_id": 2, "text": "F" }, { "math_id": 3, "text": "P_a, P_b, P_c" }, { "math_id": 4, "text": " |AP_c|^2+|BP_a|^2+|CP_b|^2=|BP_c|^2+|CP_a|^2+|AP_b|^2" }, { "math_id": 5, "text": "C" }, { "math_id": 6, "text": "F=A" }, { "math_id": 7, "text": "b" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "c" }, { "math_id": 10, "text": "P_a=C" }, { "math_id": 11, "text": "P_b=A" }, { "math_id": 12, "text": "P_c=A" }, { "math_id": 13, "text": "|AP_b|=0" }, { "math_id": 14, "text": "|AP_c|=0" }, { "math_id": 15, "text": "|CP_a|=0" }, { "math_id": 16, "text": "|CP_b|=b" }, { "math_id": 17, "text": "|BP_a|=a" }, { "math_id": 18, "text": " |BP_c|=c" }, { "math_id": 19, "text": "a^2 + b^2 =c^2" }, { "math_id": 20, "text": " |AP_c| = |BP_c|" }, { "math_id": 21, "text": "|BP_a| = |CP_a|" }, { "math_id": 22, "text": "|CP_b| = |AP_b|" } ]
https://en.wikipedia.org/wiki?curid=60427832
60428529
RF chain
An RF chain is a cascade of electronic components and sub-units which may include amplifiers, filters, mixers, attenuators and detectors. It can take many forms, for example, as a wide-band receiver-detector for electronic warfare (EW) applications, as a tunable narrow-band receiver for communications purposes, as a repeater in signal distribution systems, or as an amplifier and up-converter for a transmitter-driver. In this article, the term RF (radio frequency) covers the frequency range "Medium Frequencies" up to "Microwave Frequencies", i.e. from 100 kHz to 20 GHz. The key electrical parameters for an RF chain are system gain, noise figure (or noise factor) and overload level. Other important parameters, related to these properties, are sensitivity (the minimum signal level which can be resolved at the output of the chain); dynamic range (the total range of signals that the chain can handle from a maximum level down to the smallest level that can be reliably processed) and spurious signal levels (unwanted signals produced by devices such as mixers and non-linear amplifiers). In addition, there may be concerns regarding the immunity to incoming interference or, conversely, the amount of undesirable radiation emanating from the chain. The tolerance of a system to mechanical vibration may be important too. Furthermore, the physical properties of the chain, such as size, weight and power consumption, may also be important considerations. In addition to considering the performance of the RF chain, the signal and signal-to-noise requirements of the various signal processing components which may follow it are discussed, because they often determine the target figures for a chain. Parameter sets. Each two-port network in an RF chain can be described by a parameter set, which relates the voltages and currents appearing at the terminals of that network. Examples are: impedance parameters, i.e. z-parameters; admittance parameters, i.e. y-parameters; or, for high frequency situations, scattering parameters, i.e. S-parameters. Scattering parameters avoid the need for ports to be open or short-circuited, which are difficult requirements to achieve at microwave frequencies. In theory, if the parameter set is known for each of the components in an RF chain, then the response of the chain can be calculated precisely, whatever the configuration. Unfortunately, acquiring the detailed information required to carry out this procedure is usually an onerous task, especially when more than two or three components are in cascade. A simpler approach is to assume the chain is a cascade of impedance matched components and then, subsequently, to apply a tolerance spread for mismatch effects (see later). A system spreadsheet. A system spreadsheet has been a popular way of displaying the important parameters of a chain, in a stage-by-stage manner, for the frequency range of interest. It has the advantage of highlighting key performance figures and also pin-pointing where possible problem areas may occur within the chain, which are not always apparent from a consideration of overall results. Such a chart can be compiled manually or, more conveniently, by means of a computer program. In addition, 'toolkits' are available which provide aids to the system designer. Some routines, useful for spreadsheet development, are given next. Key spreadsheet topics. For the parameters considered below, the chain is assumed to contain a cascade of devices, which are (nominally) impedance matched. 
The procedures given here allow all calculations to be displayed in the spreadsheet in sequence and no macros are used. Although this makes for a longer spreadsheet, no calculations are hidden from the user. For convenience, the spreadsheet columns show the frequency in sub-bands, with bandwidths sufficiently narrow to ensure that any gain ripple is adequately characterized. Consider the nth stage in a chain of RF devices. The cumulative gain, noise figure, 1 dB compression point and output thermal noise power for the preceding (n-1) devices are given by Gcumn-1, Fcumn-1, Pcumn-1 and Ncumn-1, respectively. We wish to determine the new cumulative figures, when the nth stage is included, i.e. the values of Gcumn, Fcumn, Pcumn and Ncumn, given that the nth stage has values of Gn, Fn, P1n for its gain, noise figure and 1 dB compression point, respectively. Cumulative gain. The cumulative gain, Gcumn after n stages, is given by formula_0 and Gcumn(dB) is given by formula_1 where Gcumn-1(dB) is the total gain of the first (n-1) stages and Gn(dB) is the gain of the nth stage. Conversion equations between dBs and linear terms are: formula_2 and formula_3 Cumulative noise factor (Noise Figure). The cumulative noise factor, after n stages of the overall cascade, Fcumn is given by formula_4 where Fcumn-1 is the noise factor of the first (n-1) stages, Fn is the noise factor of the nth stage, and Gcumn is the overall gain of n stages. The cumulative noise figure is then formula_5 Cumulative 1dB compression point. For spreadsheet purposes, it is convenient to refer the 1 dB compression point to the input of the RF chain, i.e. P1cumn(input), formula_6 where P1cumn-1 is the 1 dB compression point at the input of the first (n-1) stages, P1n is the 1 dB compression point for the nth stage, referred to its input and Gcumn is the overall gain including the nth stage. The units are [mW] or [Watt]. Related parameters, such as IP3 or IM3, are useful fictive figures for evaluating the system; the intercept levels are notional, and the device would be destroyed if an input at the IP3 level were actually applied. The accuracy of measurements made with a spectrum analyzer is limited (HP/Agilent specifications quote about ±1.0 dB, and about ±0.5 dB for a custom device), so there is little value in pursuing fractions of a dB. In linear systems, these considerations ultimately feed into the automatic gain control (AGC) design. Cumulative noise power. The thermal noise power present at the input of an RF chain is a maximum in a resistively matched system, and is equal to kTB, where k is Boltzmann's constant (= 1.38044 × 10−23 J/K), T is the absolute temperature, in kelvins and B is the bandwidth in Hz. At a temperature of 17 °C (≡ 290 K), kTB = 4.003 × 10−15 W/MHz ≡ -114 dBm for 1 MHz bandwidth. The thermal noise after n stages of an RF chain, with total gain GT and noise figure FT, is given by formula_7 where k = Boltzmann's constant, T is the temperature in kelvins and B is the bandwidth in hertz, or formula_8 where Ncumn(dBm) is the total noise power in dBm per 1 MHz of bandwidth. In receivers, the cumulative gain is set to ensure that the output noise power of the chain is at an appropriate level for the signal processing stages that follow. For example, the noise level at the input to an analog-to-digital converter (A/D) must not be at too low a level, otherwise the noise (and any signals within it) is not properly characterized (see the section on A/Ds, later). On the other hand, too high a level results in the loss of dynamic range. Other related system properties. With the basic parameters of the chain determined, other related properties can be derived. Second and third order intercept points. 
Sometimes performance at high signal levels is defined by means of the "second-order intercept point (I2)" and the "third-order intercept point (I3)", rather than by the 1 dB compression point. These are notional signal levels which occur in two-signal testing and correspond to the theoretical points where second and third order inter-modulation products achieve the same power level as the output signal. The figure illustrates the situation. In practice, the intercept levels are never achieved because an amplifier has gone into limiting before they are reached, but they are useful theoretical points from which to predict intercept levels at lower input powers. In dB terms, they decrease at twice the rate (IP2) and three times the rate (IP3) of the fundamental signals. When products, stage to stage, add incoherently, the cumulative results for these products are derived by similar equations to that for the 1 dB compression point. formula_9 where I2cumn-1 is the second order intercept point at the input of the first (n-1) stages, I2n is the second order intercept point for the nth stage, referred to its input, and Gcumn is the overall gain including the nth stage. Similarly, formula_10 where I3cumn-1 is the third order intercept point at the input of the first (n-1) stages, I3n is the third order intercept point for the nth stage, referred to its input. The cumulative intercept points are useful when determining the "spurious free dynamic range" of a system. There is an approximate relationship between the third order intercept level and the 1 dB compression level which is formula_11 Although only an approximation, the relationship is found to apply to a large number of amplifiers. Signal-to-noise ratio. In the spreadsheet, the total frequency band of interest B(Hz) is divided into M sub-bands (spreadsheet columns) of B/M (Hz) each, and for each sub-band (m = 1 to M) the thermal noise power is derived, as described above. In practice, these results will differ slightly, from column to column, if the system has gain ripple. The signal-to-noise ratio (S:N) is the peak signal power of the pulse (Psig) divided by the total noise power (Pnoise) from the M frequency bins, i.e. formula_12 This is the S:N ratio at RF frequencies. It can be related to the video S:N ratio as shown next. Relating RF and video S:N ratios. For spreadsheet purposes it can be useful to find the RF signal to noise ratio which corresponds to a desired video signal to noise figure after demodulation or detection. As an RF chain usually has sufficient gain for any noise contribution from the detector diode to be ignored, the video S:N can be shown to be formula_13. [If there is significant gain variation across the band, then it can be divided into M sub-bands and results summed for these sub-bands, as described earlier.] From the above equation, as the noise power in the RF band is PN = kTBRF, a relationship between RF and Video S:N ratios can be found. formula_14 (This result can be found elsewhere). Inverting the relationship gives the RF signal-to-noise ratio required to achieve a given video S:N ratio: formula_15 Signal sensitivity. 
This parameter is less important in the case of repeaters and transmitter drivers where signal levels tend to be higher and other concerns such as stage overload and spurious signal generation tend to be more relevant. Determining a value for system sensitivity can be difficult and depends on many things, including the method of detection, the signal coding method, the bandwidth of the RF channel, and whether or not digital processing is involved. Two important parameters used in assessing sensitivity performance of a system are the "Probability of Detection" and the "False Alarm Rate". Statistical methods are often used in the decision process (see Tsui and Skolnik). Tangential sensitivity. Tangential sensitivity (TSS) defines that input power which results in a video signal to noise ratio of approximately 8 dB from the detector. The thumbnail shows an example of a typical detected pulse at the TSS limit, with the pulse + noise sitting at a level just clear of the noise floor. The TSS level is too low a value for reliable pulse detection in a practical scenario, but it can be determined with sufficient accuracy in bench tests on a receiver to give a quick guide figure for system performance. In a wideband receiver, with a square-law detector, the TSS value at the chain input terminals is given by formula_16 From this, the S:N of the RF signal at the input to the detector can be obtained when the video output is at TSS. formula_17 This equation shows that the S:N at RF is typically less than unity, in wideband systems, when the video output is at TSS. For example, if BR/BV = 500 then the equation gives (S:N)R = 0.17 (≈ -7.7 dB). (Note: a similar result is obtained by using the equation relating RF and video S:N ratios, given in the previous section). The thumbnail shows the simulated video output (at TSS) corresponding to an RF pulse in wideband noise with S:N = 0.17 and a bandwidth ratio of 500. An S:N guideline figure for pulse detection. The sensitivity of a system may be taken as the "minimum detectable signal". This is that level of signal that exceeds a threshold value by a suitable margin. (If the threshold is set too low, noise spikes will exceed it too frequently, and if the signal+noise does not exceed it by a sufficient margin, then it may fall below the threshold, giving premature pulse termination.) So, in determining the minimum detectable signal, it is necessary to choose the "false alarm rate" and "probability of detection" values appropriate to the system requirement. To aid the designer, graphs are available to help determine the necessary S:N ratio at the detector. In the case of pulse detection of a signal in noise, following the detector in a wideband receiver, where the RF bandwidth greatly exceeds the video bandwidth, a guideline figure for reliable performance is a video S:N of 16 to 18 dB. This is a useful figure for use in spreadsheets, and it corresponds to a probability of detection of over 99% for a Swerling 1 target (although lower values of S:N can give acceptable "Probability of Detection" and "False Alarm Rate" figures, the measurement of pulse lengths becomes less reliable because noise spikes on pulses may extend below the chosen threshold level). As examples, thumbnails show simulated examples of a detected pulse, in noise, where S:N = 18 dB and 15 dB. As can be seen, if the S:N falls to 15 dB or lower, it becomes difficult to set a threshold level for pulse detection that is clear of the noise floor and yet does not result in early termination. 
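As a rough numerical illustration of these ideas, a commonly used first estimate of sensitivity simply adds the required S:N to the thermal noise floor and the chain noise figure. The following Python sketch is only that simple estimate (the bandwidth, noise figure and required S:N are example values; it takes the required S:N at RF, whereas, as discussed above, wideband square-law systems can tolerate a lower RF S:N for a given video S:N):

import math

def sensitivity_dbm(bandwidth_hz, noise_figure_db, required_snr_db, temp_k=290.0):
    # Thermal noise floor kTB (in dBm) plus noise figure plus the S:N needed for detection.
    k = 1.38e-23                                         # Boltzmann's constant, J/K
    noise_floor_dbm = 10 * math.log10(k * temp_k * bandwidth_hz / 1e-3)
    return noise_floor_dbm + noise_figure_db + required_snr_db

# Example: 1 MHz bandwidth, 10 dB noise figure, 17 dB required S:N
print(round(sensitivity_dbm(1e6, 10.0, 17.0), 1))        # about -87.0 dBm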
The video S:N ratio can be related to the RF S:N ratio, as shown earlier. In scenarios such as radar pulse detection, integration over several pulses may occur and a lower value of S:N then becomes acceptable. In general, system sensitivity and pulse detection theory are specialized topics and often involve statistical procedures not easily adapted for spreadsheets. Mismatches. In the past, devices in an RF chain have often been inter-connected by short transmission lines, such as coaxial cable (0.414" and 0.085" semi-rigid cables are popular), by stripline or by microstrip. Almost invariably, mismatches occur at the various interfaces. Standard equations are available for a transmission line terminated in a mismatch. The response of a mismatched transmission line. If a transmission line is mismatched at both ends, multiple-reflected signals can be present on the line, resulting in ripple on the frequency response, as seen at the load. Where only first time round echoes are considered (i.e. multiple reflections are ignored), the output response is given by formula_18. A typical plot is shown in the thumbnail. This response has a ripple component with a peak-to-peak value ΔA, given by formula_19 The frequency difference from peak-to-peak (or trough-to-trough) of the ripple is given by ΔΩ where formula_20 The response of multiple mismatches. An RF chain may contain many inter-stage links of various lengths. The overall result is obtained using formula_21 This can give an overall response which is far from flat. As an example, a random collection of 25 cascaded (but separated) links gives the result shown. Here, a random selection of path delays is assumed, with α taken as unity and ρ1 and ρ2 taking the typical value 0.15 (a return loss ≈ 16 dB), for the frequency range 10 to 20 GHz. For this example, calibration at 50 MHz intervals would be advisable, in order to characterize this response. The ripple amplitude would be reduced if the mismatches ρ1 and ρ2 were improved, but especially if the lengths of the interconnecting links were made shorter. An RF chain, made up of surface mounted components, interconnected by stripline, which can be made physically small, may achieve less than 0.5 dB ripple. The use of integrated circuits would give lower ripple still (see, for example, Monolithic microwave integrated circuits). Mixers. The presence of a mixer in an RF chain complicates the spreadsheet because the frequency range at the output differs from that at the input. In addition, because mixers are non-linear devices, they introduce many inter-modulation products, which are undesirable, especially in wide-band systems. For an input signal at frequency Fsig and a local oscillator frequency Flo, the output frequencies of a mixer are given by formula_22 where m and n are integers. Usually, for a mixer, the desired output is the frequency with n = m = 1. The other outputs are often referred to as "spurs" and are usually unwanted. Frequency plans are often drawn up, often as a separate spreadsheet, to minimize the consequences of these unwanted signals. Some general points regarding mixer performance are: In a typical mixer, the 1 dB compression point is between 5 and 10 dB below the local oscillator power. Note that the approximate relationship between IP3 and P1 differs from that for amplifiers. For mixers, a very approximate expression is: formula_23 As this is very approximate, it is advisable to refer to the specification of the mixer in question, for clarification. Dynamic range. 
Dynamic Range (DR) is that range of input powers from that of a just detectable signal up to a level at which the chain overloads. DR is given by formula_24 where Pmax is the Maximum Signal Power, discussed earlier, and Psens is the smallest input power for signal detection (see Sensitivity, discussed earlier).. Field strength, antenna gain and signal power for receiver antennas. When the power density of an incoming signal is Pinc then the power at the antenna terminals is PR is given by formula_25 Where Aeff is the effective area of the antenna (or the Antenna aperture). Power density, which is in watts per metre squared, can be related to Electric Field Strength ER, given in volts per metre, by formula_26 The gain of the antenna is related to the effective aperture by. :formula_27 In practice, the effective aperture of the antenna is smaller than the actual physical area. For a dish, the effective area is about 0.5 to 0.6 times the actual area, and for a rectangular horn antenna it is about 0.7 to 0.8 times the actual area. For a dipole there is no actual physical area, but as a half-wave dipole has a power gain of 1.62 and the effective area can be inferred from that. Front-end losses. Front end losses are those losses which occur prior to the first active device of a receiver chain. They often arise because of the operational requirements of a particular system, but should be minimized, where possible, to ensure the best possible system sensitivity. These losses add to the effective noise figure of the first amplifier stage, dB for dB. Some losses are a consequence of the system construction, such as antenna to receiver feeder loss and, may include waveguide-to-coax. transition loss. Other losses arise from the necessity to include devices to protect the chain from high incident powers. For example, a radar system requires a transmit-receive (TR) cell to protect the chain from the high-power signals of the radar's transmitter. Similarly, a front end limiter is needed, on a ship, to protect the chain from the emissions of high-power transmitters located close by. In addition, the system may include a band-pass filter at its input, to protect it from out-of-band signals, and this device will have some pass-band loss. Signal and S:N requirements of signal processing devices. Detectors (diodes). Detector diodes for RF and Microwaves may be point contact diodes, Schottky diodes, Gallium Arsenide or p-n junction devices. Of these, Schottky diodes and junction diodes require biassing for best results. Also, silicon junction diodes perform less well at high frequencies. A typical detector diode has a TSS of -45 to -50 dBm and peak pulse powers of 20dBm, although better figures are possible). At low powers, diodes have a square-law characteristic, i.e. the output voltage is proportional to the input power, but at higher powers (above about -15dBm) the device becomes linear, with the output voltage proportional to the input voltage. Square law detectors can give detectable signals at video, in wideband systems, even when the RF S:N is less than unity. For example, using the RF-to-Video relationships given earlier, for a system which has a bandwidth of 6 GHz, and an RF S:N value of 0.185 (-7 dB), the video S:N (i.e. TSS) will be 6.31 (8 dB). (Tsui's equations give an RF S:N value of 0.171 for this example). Detector-log-video-amplifiers (DLVAs). DLVAs have been commonly found in direction finding systems, using multiple channels, squinted antennas and amplitude comparison methods. 
They are also useful for compressing the dynamic range of incoming signals of receivers, prior to digitising. They cover frequency ranges such as 2 – 6 GHz and 6 – 18 GHz. There are also wideband devices available which cover the range 2 – 18 GHz. A simple DLVA contains a broadband diode detector followed by an amplifier with a logarithmic characteristic and has an input power range of, typically, -45dBm to 0dBm, which may be increased to -45 to +15dBm in an extended-range DLVA. Two devices, together with an amplifier, can be combined to give an effective range of -65dBm to +15dBm. In a successive-detection DLVA, which includes a low noise amplifier, the power range may to be, typically -65dBm to +10dBm Instantaneous frequency measurement systems (IFMs), digital discriminator units DDUs). IFMs can provide a frequency measurement of a single pulse. They incorporate a set of delay-line frequency discriminators, with delay lengths increasing in a binary or other sequence. They usually incorporate some gain of their own. The discriminator with the longest delay line establishes the frequency measurement accuracy and resolution, the shortest delay line correlator defines the unambiguous bandwidth of the DFD and the remaining correlators serve to resolve ambiguities. Usually, there is an input, limiting amplifier present in the IFM. This boosts the received signal to a constant level for processing by the correlators, making the frequency-data-decoding task of the frequency processor easier, and to emphasise the "capture effect" when simultaneous signals are present. Normally the RF amplifier will produce a minimum of 10 dB limiting at the lowest specified signal input level. If the RF S:N ratio is too low, the output of the longest delay line correlator (which sets the frequency resolution of the IFM) will become degraded and noisy. At high S:N ratios (+10dBm), the measured frequency accuracy approaches the correlator-limited rms error, but at approximately -3dBm SNR, ambiguity errors appear, causing large measurement inaccuracies. The lowest input power level of a typical DDU is about -75dBm, and with a receiver noise figure of 10 dB, it gives a frequency accuracy of approximately 1 MHz They have dynamic ranges of 65 to 75 dB and cover frequency bands such as 2 – 6 GHz, 6–18 GHz and some wideband devices cover 2 – 18 GHz. With the advent of digital techniques, analogous processes to those of an analog system have been realized. Analog to digital converters (A/Ds). An Analog-to-digital converter, located at the end of the RF chain, provides digital signals for further signal processing. As the A/D operates with sampled signals, it is necessary for the Nyquist–Shannon sampling theorem to be satisfied, if data is not to be lost. As shown earlier, a low-amplitude RF pulse immersed in wideband noise, can be detected by a square-law diode detector. Similarly spread spectrum signals can be recovered from below the noise floor by compression. Consequently, to ensure no loss of data, the chain gain should be high enough to ensure that thermal noise will activate the A/D adequately, so that any signals present within the noise, can be recovered correctly by the detection or compression process. Typically, the rms noise voltage present the input to the A/D should be one or two bits of the A/D range, but no lower. On the other hand, having excessive chain gain so that the noise floor is unnecessarily high, will result in the loss of dynamic range. 
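The rule of thumb above (an rms noise voltage of one to two LSBs at the converter input) can be turned into a quick chain-gain estimate. The sketch below is only an illustration: the converter range, resolution, bandwidth and noise figure are assumed values, not figures from the text.

```python
import math

kT_dbm_hz = -174.0       # thermal noise density at 290 K, dBm/Hz
bandwidth = 500e6        # RF bandwidth presented to the A/D, Hz      (assumed)
noise_figure = 10.0      # cumulative noise figure of the chain, dB   (assumed)
full_scale_vpp = 1.0     # A/D full-scale input, volts peak-to-peak   (assumed)
bits = 10                # A/D resolution                             (assumed)
R = 50.0                 # system impedance, ohms

lsb = full_scale_vpp / 2**bits                   # volts per LSB
target_rms = 1.5 * lsb                           # aim for one to two LSBs of noise
target_dbm = 10 * math.log10((target_rms**2 / R) / 1e-3)

input_noise_dbm = kT_dbm_hz + 10 * math.log10(bandwidth) + noise_figure
required_gain_db = target_dbm - input_noise_dbm
print(f"noise floor referred to the chain input: {input_noise_dbm:.1f} dBm")
print(f"chain gain needed for ~1.5 LSB rms at the A/D: {required_gain_db:.1f} dB")
```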
Consider, as an example, a chirp signal with a time-bandwidth product of 200 and an amplitude of LSB, embedded in noise with an rms voltage of 1 LSB, present at the input to an A/D. The digitized, quantised output, relative to the mean value, is similar to the example in the left-hand figure below. After compression in the signal processor, a high-amplitude pulse, whose magnitude is well above the noise, is obtained, as shown in the right-hand figure. This example happens to show, unintentionally, the benefits of dither, which is used to improve the linearity and dynamic range of an A/D. In the case of the signal considered here, if there were no noise present, but just the signal alone, its amplitude would be insufficient to operate the A/D. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
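A numerical experiment along the lines of the chirp example above can be set up in a few lines of NumPy. The sketch below keeps the time-bandwidth product of 200 mentioned in the text, but the signal amplitude, the idealised quantiser and the noise seed are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000                                # chirp length in samples
B = 0.1                                 # normalised sweep bandwidth -> TB product = N*B = 200
t = np.arange(N)
chirp = np.sin(np.pi * B * t**2 / N)    # linear FM, instantaneous frequency 0 .. B

lsb = 1.0
signal = 0.5 * lsb * chirp              # signal amplitude of half an LSB (arbitrary choice)
noise = rng.normal(0.0, 1.0 * lsb, N)   # noise with an rms of one LSB, as in the text
digitised = np.round((signal + noise) / lsb) * lsb    # idealised A/D (quantisation only)

compressed = np.correlate(digitised, chirp, mode="same")   # matched-filter compression
peak = np.max(np.abs(compressed))
floor = np.median(np.abs(compressed))
print(f"peak-to-median ratio after compression: {20 * np.log10(peak / floor):.1f} dB")
```

The compressed output shows a peak standing well clear of the residual floor, even though the signal itself is below both the noise and the quantisation step before compression.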
[ { "math_id": 0, "text": "Gcum_n = Gcum_{n-1} \\times G_n" }, { "math_id": 1, "text": "Gcum_n(dB) = Gcum_{n - 1}(dB) + G_n(dB)" }, { "math_id": 2, "text": "G = 10^{G(dB)/10} = \\exp \\big (0.23026 \\times G(dB) \\big )" }, { "math_id": 3, "text": "G(dB) = 10 \\times \\log_{10}G = 4.6429 \\times \\ln(G) " }, { "math_id": 4, "text": "Fcum_n = Fcum_{n - 1} +\\frac{F_n -1}{Gcum_{n-1}}" }, { "math_id": 5, "text": "NFcum_n(dB) = 10 \\times log_{10}(Fcum_n) = 4.6429 \\times ln(Fcum_n) " }, { "math_id": 6, "text": " P1cum_n(input) = \\frac{1}{ \\frac{1}{P1cum_{n-1}(input)} + \\frac{Gcum_n}{P1_n(input)}}" }, { "math_id": 7, "text": "Ncum_n = kTB \\times Fcum_n \\times Gcum_n" }, { "math_id": 8, "text": "Ncum_n(dBm/Mhz) = -114 +Fcum_n(dB) + Gcum_n(dB)" }, { "math_id": 9, "text": " I2cum_n(input) = \\frac{1}{ \\frac{1}{I2cum_{n-1}(input)} + \\frac{Gcum_n}{I2_n(input)}}" }, { "math_id": 10, "text": " I3cum_n(input) = \\frac{1}{ \\frac{1}{I3cum_{n-1}(input)} + \\frac{Gcum_n}{I3_n(input)}}" }, { "math_id": 11, "text": "IP3(dB) \\approx P1(dB) + 11(dB) " }, { "math_id": 12, "text": " \\frac{S}{N} = \\frac{P_{sig}}{ \\sum_{m=1}^M P_{noise}(m)} " }, { "math_id": 13, "text": " \\Big ( \\frac {S}{N} \\Big )_{vid} = \\frac {G^2.P_S^2}{4.G^2.P_S.kTF'B_V + 2(kTF'G)^2.(B_V.B_R- B_v^2/2)} " }, { "math_id": 14, "text": " \\Big ( \\frac {S}{N} \\Big )_{vid} = \\frac {\\Big ( \\frac {S}{N} \\Big )_{rf}^2}{4. \\Big ( \\frac{S}{N} \\Big)_{rf} . \\frac{B_V}{B_R} +2. \\frac{B_V}{B_R} - \\Big ( \\frac{B_V}{B_R} \\Big ) ^2}" }, { "math_id": 15, "text": " \\Big ( \\frac {S}{N} \\Big )_{rf} = \\frac{B_V}{B_R}. \\Bigg [ 2.\\Big( \\frac{S}{N} \\Big )_{vid} \\pm \\sqrt { 4. \\Big (\\frac{S}{N} \\Big \n)_{vid}^2 + \\Big( \\frac{S}{N} \\Big )_{vid} . \\Big ( 2. \\frac{B_R}{B_V} -1 \\Big )} \\Bigg ] " }, { "math_id": 16, "text": "TSS(dBm) = 114 + 10.logF_T +10.log \\Big ( 6.31 B_V = 2.5 \\sqrt {2.B_V B_R - B_V^2} \\Big ) " }, { "math_id": 17, "text": "\\Big (\\frac{S}{N} \\Big )_R(dB) =10.log \\Bigg [ \\frac{B_V}{B_R} \\Bigg(6.31 + 2.5 \\sqrt{2. \\frac{B_R}{B_V} -1} \\Bigg) \\Bigg ]" }, { "math_id": 18, "text": " V_{out} = V_1( 1 + \\alpha ^2 . \\rho _1 . \\rho _2 . e^{j.2* \\pi . f . 2T_d} ) " }, { "math_id": 19, "text": " \\Delta A = 2. \\frac{1 + \\rho_1 . \\rho_2}{1 - \\rho_1 . \\rho_2} " }, { "math_id": 20, "text": " \\Delta \\Omega = \\frac{\\pi}{T_d} " }, { "math_id": 21, "text": " \\text{Overall Result} = \\prod_{n=1}^N Vout_n " }, { "math_id": 22, "text": " \\text{Mixer Output} = n\\times F_{lo} \\pm \\ m\\times F{sig} " }, { "math_id": 23, "text": "IP3(dB) \\approx P1(dB) + 15(dB) " }, { "math_id": 24, "text": "D_R = P_{max} - P_{sens} " }, { "math_id": 25, "text": "P_R = A_{eff} \\times P_{inc} " }, { "math_id": 26, "text": "E_R = \\sqrt{P_{inc}.120 \\pi} " }, { "math_id": 27, "text": "G = A_{eff}. \\frac{4 \\pi}{ \\lambda ^2} " } ]
https://en.wikipedia.org/wiki?curid=60428529
6043076
Herbrand quotient
In mathematics, the Herbrand quotient is a quotient of orders of cohomology groups of a cyclic group. It was invented by Jacques Herbrand. It has an important application in class field theory. Definition. If "G" is a finite cyclic group acting on a "G"-module "A", then the cohomology groups "H""n"("G","A") have period 2 for "n"≥1; in other words "H""n"("G","A") = "H""n"+2("G","A"), an isomorphism induced by cup product with a generator of "H""2"("G",Z). (If instead we use the Tate cohomology groups then the periodicity extends down to "n"=0.) A Herbrand module is an "A" for which the cohomology groups are finite. In this case, the Herbrand quotient "h"("G","A") is defined to be the quotient "h"("G","A") = |"H""2"("G","A")|/|"H""1"("G","A")| of the orders of the even and odd cohomology groups. Alternative definition. The quotient may be defined for a pair of endomorphisms of an Abelian group, "f" and "g", which satisfy the condition "fg" = "gf" = 0. Their Herbrand quotient "q"("f","g") is defined as formula_0 if the two indices are finite. If "G" is a cyclic group with generator γ acting on an Abelian group "A", then we recover the previous definition by taking "f" = 1 - γ and "g" = 1 + γ + γ2 + ... . Properties. If 0 → "A" → "B" → "C" → 0 is an exact sequence of "G"-modules, and any two of the quotients are defined, then so is the third, and "h"("G","B") = "h"("G","A")"h"("G","C"). These properties mean that the Herbrand quotient is usually relatively easy to calculate, and is often much easier to calculate than the orders of either of the individual cohomology groups. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
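The alternative definition above can be checked directly by brute force for small examples; for a finite module the quotient is always 1. The Python sketch below does this for the cyclic group of order 2 acting on Z/12Z by negation. The function name and the choice of module are illustrative only.

```python
from fractions import Fraction

def herbrand_quotient(n, f, g):
    """q(f, g) for endomorphisms f, g of Z/nZ satisfying fg = gf = 0."""
    elems = range(n)
    ker_f = {a for a in elems if f(a) % n == 0}
    ker_g = {a for a in elems if g(a) % n == 0}
    im_f = {f(a) % n for a in elems}
    im_g = {g(a) % n for a in elems}
    assert im_g <= ker_f and im_f <= ker_g          # consequences of fg = gf = 0
    return Fraction(len(ker_f) // len(im_g), len(ker_g) // len(im_f))

# G cyclic of order 2 acting on A = Z/12Z by negation:
# f = 1 - gamma sends a to 2a, and g = 1 + gamma (the norm map) sends a to 0.
print(herbrand_quotient(12, lambda a: 2 * a, lambda a: 0))   # 1, as expected for a finite module
```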
[ { "math_id": 0, "text": " q(f,g) = \\frac{|\\mathrm{ker} f:\\mathrm{im} g|}{|\\mathrm{ker} g:\\mathrm{im} f|} " } ]
https://en.wikipedia.org/wiki?curid=6043076
6043458
Cuspidal representation
In number theory, cuspidal representations are certain representations of algebraic groups that occur discretely in formula_0 spaces. The term "cuspidal" is derived, at a certain distance, from the cusp forms of classical modular form theory. In the contemporary formulation of automorphic representations, representations take the place of holomorphic functions; these representations may be of adelic algebraic groups. When the group is the general linear group formula_1, the cuspidal representations are directly related to cusp forms and Maass forms. For the case of cusp forms, each Hecke eigenform (newform) corresponds to a cuspidal representation. Formulation. Let "G" be a reductive algebraic group over a number field "K" and let A denote the adeles of "K". The group "G"("K") embeds diagonally in the group "G"(A) by sending "g" in "G"("K") to the tuple ("g""p")"p" in "G"(A) with "g" = "g""p" for all (finite and infinite) primes "p". Let "Z" denote the center of "G" and let ω be a continuous unitary character from "Z"("K") \ Z(A)× to C×. Fix a Haar measure on "G"(A) and let "L"20("G"("K") \ "G"(A), ω) denote the Hilbert space of complex-valued measurable functions "f" on "G"(A) satisfying "f"(γ"g") = "f"("g") for all γ in "G"("K"), "f"("gz") = ω("z")"f"("g") for all "z" in "Z"(A), the square-integrability condition formula_2, and the cuspidality condition formula_3 for the unipotent radical "U" of every proper parabolic subgroup of "G" defined over "K". The vector space "L"20("G"("K") \ "G"(A), ω) is called the space of cusp forms with central character ω on "G"(A). A function appearing in such a space is called a cuspidal function. A cuspidal function generates a unitary representation of the group "G"(A) on the complex Hilbert space formula_4 generated by the right translates of "f". Here the action of "g" ∈ "G"(A) on formula_5 is given by formula_6. The space of cusp forms with central character ω decomposes into a direct sum of Hilbert spaces formula_7 where the sum is over irreducible subrepresentations of "L"20("G"("K") \ "G"(A), ω) and the "m"π are positive integers (i.e. each irreducible subrepresentation occurs with "finite" multiplicity). A cuspidal representation of "G"("A") is such a subrepresentation (π, "Vπ") for some "ω". The groups for which the multiplicities "m"π all equal one are said to have the multiplicity-one property.
[ { "math_id": 0, "text": "L^2" }, { "math_id": 1, "text": "\\operatorname{GL}_2" }, { "math_id": 2, "text": "\\int_{Z(\\mathbf{A})G(K)\\,\\setminus\\,G(\\mathbf{A})}|f(g)|^2\\,dg < \\infty" }, { "math_id": 3, "text": "\\int_{U(K)\\,\\setminus\\,U(\\mathbf{A})}f(ug)\\,du=0" }, { "math_id": 4, "text": " V_f" }, { "math_id": 5, "text": "V_f" }, { "math_id": 6, "text": "(g \\cdot u)(x) = u(xg), \\qquad u(x) = \\sum_j c_j f(xg_j) \\in V_f" }, { "math_id": 7, "text": "L^2_0(G(K)\\setminus G(\\mathbf{A}),\\omega)=\\widehat{\\bigoplus}_{(\\pi,V_\\pi)}m_\\pi V_\\pi" } ]
https://en.wikipedia.org/wiki?curid=6043458
60436463
Sims conjecture
Conjecture in group theory In mathematics, the Sims conjecture is a result in group theory, originally proposed by Charles Sims. He conjectured that if formula_0 is a primitive permutation group on a finite set formula_1 and formula_2 denotes the stabilizer of the point formula_3 in formula_1, then there exists an integer-valued function formula_4 such that formula_5 for formula_6 the length of any orbit of formula_2 in the set formula_7. The conjecture was proven by Peter Cameron, Cheryl Praeger, Jan Saxl, and Gary Seitz using the classification of finite simple groups, in particular the fact that only finitely many isomorphism types of sporadic groups exist. The theorem reads precisely as follows. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — There exists a function formula_8 such that whenever formula_0 is a primitive permutation group and formula_9 is the length of a non-trivial orbit of a point stabilizer formula_10 in formula_0, then the order of formula_10 is at most formula_11. Thus, in a primitive permutation group with "large" stabilizers, these stabilizers cannot have any small orbit. A consequence of their proof is that there exist only finitely many connected distance-transitive graphs having degree greater than 2. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
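The two quantities related by the conjecture, the order of a point stabilizer and the lengths of its non-trivial orbits (the subdegrees), can be computed for small examples with a computer algebra system. The sketch below assumes SymPy's permutation-group API; for the natural primitive action of the symmetric group on five points, the point stabilizer has order 24 and its single non-trivial suborbit has length 4.

```python
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(5)          # acts primitively on {0, 1, 2, 3, 4}
H = G.stabilizer(0)            # the point stabilizer G_alpha
print(H.order())               # 24
print(len(H.orbit(1)))         # 4: the only non-trivial suborbit has length d = 4
```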
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "G_\\alpha" }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "f(d) \\geq |G_\\alpha|" }, { "math_id": 6, "text": "d" }, { "math_id": 7, "text": "S \\setminus \\{\\alpha\\}" }, { "math_id": 8, "text": "f: \\mathbb{N} \\to \\mathbb{N} " }, { "math_id": 9, "text": "h > 1" }, { "math_id": 10, "text": "H" }, { "math_id": 11, "text": "f(h)" } ]
https://en.wikipedia.org/wiki?curid=60436463
60437375
Mori-Zwanzig formalism
Method of statistical physics The Mori–Zwanzig formalism, named after the physicists Hajime Mori and Robert Zwanzig, is a method of statistical physics. It allows the splitting of the dynamics of a system into a relevant and an irrelevant part using projection operators, which helps to find closed equations of motion for the relevant part. It is used e.g. in fluid mechanics or condensed matter physics. Idea. Macroscopic systems with a large number of microscopic degrees of freedom are often well described by a small number of relevant variables, for example the magnetization in a system of spins. The Mori–Zwanzig formalism allows the finding of macroscopic equations that only depend on the relevant variables based on microscopic equations of motion of a system, which are usually determined by the Hamiltonian. The irrelevant part appears in the equations as noise. The formalism does not determine what the relevant variables are, these can typically be obtained from the properties of the system. The observables describing the system form a Hilbert space. The projection operator then projects the dynamics onto the subspace spanned by the relevant variables. The irrelevant part of the dynamics then depends on the observables that are orthogonal to the relevant variables. A correlation function is used as a scalar product, which is why the formalism can also be used for analyzing the dynamics of correlation functions. Derivation. A not explicitly time-dependent observable formula_0 obeys the Heisenberg equation of motion formula_1 where the Liouville operator formula_2 is defined using the commutator formula_3 in the quantum case and using the Poisson bracket formula_4 in the classical case. We assume here that the Hamiltonian does not have explicit time-dependence. The derivation can also be generalized towards time-dependent Hamiltonians. This equation is formally solved by formula_5 The projection operator acting on an observable formula_6 is defined as formula_7 where formula_0 is the relevant variable (which can also be a vector of various observables), and formula_8 is some scalar product of operators. The Mori product, a generalization of the usual correlation function, is typically used for this scalar product. For observables formula_9, it is defined as formula_10 where formula_11 is the inverse temperature, Tr is the trace (corresponding to an integral over phase space in the classical case) and formula_12 is the Hamiltonian. formula_13 is the relevant probability operator (or density operator for quantum systems). It is chosen in such a way that it can be written as a function of the relevant variables only, but is a good approximation for the actual density, in particular such that it gives the correct mean values. Now, we apply the operator identity formula_14 to formula_15 Using the projection operator introduced above and the definitions formula_16 (frequency matrix), formula_17 (random force) and formula_18 (memory function), the result can be written as formula_19 This is an equation of motion for the observable formula_20, which depends on its value at the current time formula_21, the value at previous times (memory term) and the random force (noise, depends on the part of the dynamics that is orthogonal to formula_22). Markovian approximation. The equation derived above is typically difficult to solve due to the convolution term. 
Since we are typically interested in slow macroscopic variables changing on timescales much longer than those of the microscopic noise, the convolution can be approximated by extending the time integral to infinity and disregarding the lag in the convolution. We see this by expanding the equation to second order in formula_23, to obtain formula_24, where formula_25. Generalizations. For larger deviations from thermodynamic equilibrium, the more general form of the Mori–Zwanzig formalism is used, from which the previous results can be obtained through a linearization. In this case, the Hamiltonian has explicit time-dependence, and the transport equation for a variable formula_26, where formula_27 is the mean value and formula_28 is the fluctuation, can be written as (using index notation with summation over repeated indices) formula_29, where formula_30, formula_31, formula_32 and formula_33. We have used the time-ordered exponential formula_34 and the time-dependent projection operator formula_35 These equations can also be re-written using a generalization of the Mori product. Further generalizations can be used to apply the formalism to time-dependent Hamiltonians, general relativity, and arbitrary dynamical systems.
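The structure of the equation of motion derived above (a frequency term, a memory or convolution term, and a fluctuating force) can be illustrated with a toy two-variable linear system in which the irrelevant variable is eliminated exactly. The sketch below is not the general formalism: the system, the parameter values and the variable names are arbitrary choices for illustration, and the fluctuating force vanishes because the eliminated variable starts at zero.

```python
# Toy system:  da/dt = w*b,  db/dt = -w*a,  b(0) = 0.
# Integrating out b gives  da/dt = -w**2 * \int_0^t a(s) ds,
# i.e. an equation for the relevant variable alone with a constant memory
# kernel K = -w**2 and no fluctuating force (since b(0) = 0).

w, dt, steps = 1.0, 1e-3, 5000

# full two-variable dynamics (explicit Euler)
a, b = 1.0, 0.0
a_full = []
for _ in range(steps):
    a_full.append(a)
    a, b = a + dt * w * b, b - dt * w * a

# reduced dynamics for the relevant variable alone, with the memory term
a_red = [1.0]
integral = 0.0                       # discretised \int_0^t a(s) ds
for _ in range(1, steps):
    a_red.append(a_red[-1] - dt * w**2 * integral)
    integral += a_red[-2] * dt

print(max(abs(x - y) for x, y in zip(a_full, a_red)))   # negligibly small: the two agree
```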
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": " \\frac{d}{dt} A = i L A, " }, { "math_id": 2, "text": "L" }, { "math_id": 3, "text": " L = \\frac{1}{\\hbar}[H, \\cdot]" }, { "math_id": 4, "text": " L = -i \\{H, \\cdot\\}" }, { "math_id": 5, "text": " A(t) = e^{iLt}A." }, { "math_id": 6, "text": "X" }, { "math_id": 7, "text": " P X = (A,A)^{-1}(X,A)A," }, { "math_id": 8, "text": "(\\;,\\;)" }, { "math_id": 9, "text": "X, Y " }, { "math_id": 10, "text": " (X,Y) = \\frac{1}{\\beta} \\int_{0}^{\\beta} d\\alpha \\text{Tr}(\\bar{\\rho} X e^{-\\alpha H} Y e^{\\alpha H}), " }, { "math_id": 11, "text": "\\beta = (k_B T)^{-1} " }, { "math_id": 12, "text": " H" }, { "math_id": 13, "text": "\\bar{\\rho}" }, { "math_id": 14, "text": " e^{iLt} = e^{i(1-P)Lt} + \\int_{0}^{t} ds e^{iL(t-s)}PiLe^{i(1-P)Ls}" }, { "math_id": 15, "text": "(1-P) iLA." }, { "math_id": 16, "text": " \\Omega = (iLA, A)(A,A)^{-1} " }, { "math_id": 17, "text": " F(t)= e^{t(1-P)L}(1-P)iLA " }, { "math_id": 18, "text": " K(t)=(iLF(t),A)(A,A)^{-1} " }, { "math_id": 19, "text": " \\dot{A}(t) = \\Omega A(t) + \\int_{0}^{t} ds K(s) A(t-s) + F(t). " }, { "math_id": 20, "text": " A(t) " }, { "math_id": 21, "text": " t " }, { "math_id": 22, "text": " A(t)" }, { "math_id": 23, "text": "iLA(t)" }, { "math_id": 24, "text": " \\dot{A}(t) \\approx \\Omega A(t) + \\int_{0}^{\\infty} ds K(s) A(s) + F(t) " }, { "math_id": 25, "text": " K(t)= - (e^{iLt}(1-P)iLA,(1-P)iLA)(A,A)^{-1} " }, { "math_id": 26, "text": " A(t) = a(t) - \\delta A(t) " }, { "math_id": 27, "text": "a(t)" }, { "math_id": 28, "text": "\\delta A(t)" }, { "math_id": 29, "text": " \\dot{A}_i(t) = v_i(t) + \\Omega_{ij}(t) \\delta A_j(t) + \\int_{0}^{t}ds K_i(t,s) + \\phi_{ij} (t,s) \\delta A_j(t) + F_i(t,0) " }, { "math_id": 30, "text": " v_i(t) = \\text{Tr}(\\bar{\\rho}(t)A_i) " }, { "math_id": 31, "text": " \\Omega_{ij}(t) = \\text{Tr}(\\frac{\\partial \\bar{\\rho}(t)}{\\partial a_j(t)} \\dot{A}_i) " }, { "math_id": 32, "text": " K_i(t,s) = \\text{Tr}(\\bar{\\rho}(s)iL(1-P(s))G(s,t)\\dot{A}_i)," }, { "math_id": 33, "text": " \\phi_{ij}(t,s) = \\text{Tr}(\\frac{\\partial \\bar{\\rho}(t)}{\\partial a_j(t)} iL(1-P(s))G(s,t)\\dot{A}_i) - \\dot{a}_k(t)\\text{Tr}(\\frac{\\partial^2 \\bar{\\rho}(t)}{\\partial a_k(t) \\partial a_j(t)}G(s,t)\\dot{A}_i) " }, { "math_id": 34, "text": " G(s,t) = T_- \\exp(\\int_{s}^{t} du iL(1-P(u))) " }, { "math_id": 35, "text": " P(t)X = \\text{Tr}(\\bar{\\rho}(t)X) + (A_i - a_i(t))\\text{Tr}(\\frac{\\partial \\bar{\\rho}(t)}{\\partial a_i(t)}X). " } ]
https://en.wikipedia.org/wiki?curid=60437375
6043849
Small clause
In linguistics, a small clause consists of a subject and its predicate, but lacks an overt expression of tense. Small clauses have the semantic subject-predicate characteristics of a clause, and have some, but not all, properties of a constituent. Structural analyses of small clauses vary according to whether a flat or layered analysis is pursued. The small clause is related to the phenomena of raising-to-object, exceptional case-marking, accusativus cum infinitivo, and object control. History. The two main analyses of small clauses originate with Edwin Williams (1975, 1980) and Tim Stowell (1981). Williams' analysis follows the Theory of Predication, where the "subject" is the "external argument of a maximal projection". In contrast, Stowell's theory follows the Theory of Small Clauses, supported by linguists such as Chomsky, Aarts, and Kitagawa. This theory uses X-bar theory to treat small clauses as constituents. Linguists debate which analysis to pursue, as there is evidence for both sides of the debate. Williams (1975, 1980). The term "small clause" was coined by Edwin Williams in 1975, who specifically looked at "reduced relatives, adverbial modifier phrases, and gerundive phrases". The following three examples are treated in Williams' 1975 paper as "small clauses", as cited in Balazs 2012. However, not all linguists consider these to be small clauses according to the term's modern definition. The modern definition of a small clause is an [NP XP] in a predicative relationship. This definition was proposed by Edwin Williams in 1980, who introduced the concept of Predication. He proposed that the subject NP and the predicate XP are related via co-indexation, which is made possible by c-command. In Williams' analysis, the [NP XP] of a small clause does not form a constituent. Stowell (1981). Timothy Stowell in 1981 analyzed the small clause as a constituent, and proposed a structure using X-bar theory. Stowell proposes that the subject is defined as an NP occurring in a specifier position, that case is assigned in the specifier position, and that not all categories have subjects. His analysis explains why case-marked subjects cannot occur in infinitival clauses, although NPs can be projected up to an infinitival clause's specifier position. Stowell considers the following examples to be small clauses and constituents. Contexts. What does and does not qualify as a small clause varies in the literature: the example sentences in (8) contain (what some theories of syntax judge to be) small clauses. In each example, the posited small clause is in boldface, and the underlined expression functions as a predicate over the nominal immediately to its left, which is the subject. The verbs that license small clauses are a heterogeneous set, and fall into five classes: A trait that the examples in (8a-b-c) have in common is that the small clause lacks a verb. Indeed, this is sometimes taken as a defining aspect of small clauses, i.e. to qualify as a small clause, a verb must be absent. If, however, one allows a small clause to contain a verb, then the sentences in (8d-e) can also be treated as containing small clauses: The similarity across the sentences (8a-b-c) and (8d-e) is obvious, since the same subject-predicate relationship is present in all these sentences. Hence if one treats sentences (8a-b-c) as containing small clauses, one can also treat sentences (50e-f) as containing small clauses. 
A defining characteristic of all five contexts for English small clauses in (8a-b-c-d-e) is that the tense associated with finite clauses, which contain a finite verb, is absent. Structural analyses. Broadly speaking, there are three competing analyses of the structure of small clauses. Flat structure. The flat structure organizes small clause material into two distinct sister constituents. The a-trees on the left are the phrase structure trees, and the b-trees on the right are the dependency trees. The key aspect of these structures is that the small clause material consists of two separate sister constituents. The flat analysis is preferred by those working in dependency grammars and representational phrase structure grammars (e.g. Generalized Phrase Structure Grammar and Head-Driven Phrase Structure Grammar). Layered structure. The layered structure organizes small clause material into one constituent. The phrase structure trees are again on the left, and the dependency trees on the right. To mark the small clause in the phrase structure trees, the node label SC is used. The layered analysis is preferred by those working in the Government and Binding framework and its tradition, for examples see Chomsky, Ouhalla, Culicover, Haegeman and Guéron. X-Bar Theory structures. X-bar theory predicts that a head (X) will project into an intermediate constituent (X') and a maximal projection (XP). There were three common analyses of the internal structure of a small clause under X-Bar theory. Here they are each presented as showing the NP AP small clause complement in the sentence (highlighted in bold), "I consider (NP)Mary (AP)smart": Analysis 1: symmetric constituent. In this analysis, neither of the constituents determine the category, meaning that it is an exocentric construction. Some linguists believe that the label of this structure can be symmetrically determined by the constituents, and others believe that this structure lacks a label altogether. In order to indicate a predicative relationship between the subject (in this case, the NP Mary), and the predicate (AP smart), some have suggested a system of co-indexation, where the subject must c-command any predicate associated with it. This analysis is not compatible with X-bar theory because X-bar theory does not allow for headless constituents, additionally this structure may not be an accurate representation of a small clause because it lacks an intermediate functional element that connects the subject with the predicate. Evidence of this element can be seen as an overt realization in a variety of languages such as Welsh, Norwegian, and English, as in the examples below (with the overt predicative functional category highlighted in bold): Some have taken this as evidence that this structure does not adequately portray the structure of a small clause, and that a better structure must include some intermediate projection that combines the subject and the predicate which would assign a head to the constituent. Analysis 2: projection of the predicate. In this analysis, the small clause can be identified as a projection of the predicate (in this example, the predicate would be the 'smart' in 'Mary smart'). In this view, the specifier of the structure (in this case, the NP 'Mary') is the subject of the head (in this case, the A 'smart'). This analysis builds on Chomsky's model of phrase structure and is proposed by Stowell and Contreras. Analysis 3: projection of a functional category. 
The PrP (predicate phrase) category (also analyzed as AgrP, PredP, and formula_0P), was proposed for a few reasons, some of which are outlined below: Additionally, some have theorized that a combination of the three structures can illustrate why the subjects of verbal small clauses and adjectival small clauses seem to behave differently, as noted by Basilico: Here, examples (13) and (14) show that the subject of an adjectival small clause — with our without copular "be" — can raise to the matrix subject position. However, with a verbal clause, omission of infinitival "to" leads to ungrammatically, as shown by the contrast between the well-formed (15) and the ill-formed (16), where the asterisk (*) marks ungrammaticality. From this evidence, some linguists have theorized that the subjects of adjectival and verbal small clauses must differ in syntactic position. This conclusion is bolstered by the knowledge that verbal and adjectival small clauses differ in their predication forms. While adjectival small clauses involve categorical predication where the predicate ascribes a property to the subject, verbal small clauses involve thetic predicationswhere an event that the subject is participating in is reported. Basilico uses this to argue that a small clause should be analyzed as a Topic Phrase, which is projected from the predicate head (the Topic), with the subject introduced as the specifier of the Topic Phrase. In this way, he argues that in an adjectival small clause, the predicate is formed for an individual topic, and in a verbal small clause the events form a predicate of events for a stage topic, which accounts for why verbal small clauses cannot be raised to the matrix subject position. Identification tests. A small clause divides into two constituents: the subject and its predicate. While small clauses occur cross-linguistically, different languages have different restrictions on what can and cannot be a well-formed (i.e., grammatical) small clause. Criteria for identifying a small clause include: Absence of tense-marking. A small clause is characterised as having two constituents NP and XP that enter into a predicative relation, but lacking finite tense and/or a verb. Possible predicates in small clauses typically include adjective phrases (AP), prepositional phrases (PPs), noun phrases (NPs), or determiner phrases (DPs) (see determiner phrase page on debate regarding the existence of DPs). There are two schools of thought regarding NP VP constructions. Some linguists believe that a small clause characteristically lacks a verb, while others believe that a small clause may have a verb but lacks inflected tense. The following examples, which all lack verbs, illustrate small clauses with [NP AP] (17), [NP DP] (18), and [NP PP] (19): The small clause examples in (17) to (19) contrast with the examples in (20) to (22), with the critical difference being the inclusion of the copular verb "be" preceded by infinitival "to": In some analyses the presence of the copular verb and tense (infinitival "to") makes the bolded portions a full clause rather than a small clause. However, other analyses treat infinitival clauses as a kind of small clause. The latter approach proposes that small clauses lack inflected tense but can have a bare infinitival verb. Under this theory, NP VP constructions are allowed. The following examples contrast small clauses with non-finite verbs with main clauses with finite verbs. 
The asterisk here represents that the sentence (24) is generally held to be ungrammatical by native English speakers. Selectional restrictions. Selected by matrix verb. Small clauses satisfy selectional requirements of the verb in the main clause in order to be grammatical. The argument structure of verbs is satisfied with small clause constructions. The following two examples show how the argument structure of the verb "consider" affects what predicate can be in the small clause. Example (18) is ungrammatical as the verb "consider" does take an NP complement, but not a PP complement. However, this theory of selectional requirement is also disputed, as substitution of different small clauses can create grammatical readings. Both examples (28) and (29) take PP complements, yet (28) is grammatical but (29) is not. The matrix verb's selection of case also supports the theory that the matrix verb's selectional requirements affect small clause licensing. The verb "consider" in (30) marks accusative case on the subject NP of the small clause. This conclusion is supported by pronoun-substitution, where the accusative caseform is grammatical (31), but the nominative case form is not (32). In Serbo-Croatian, the verb "smatrati" 'to consider' selects for accusative case for its subject argument and instrumental case as its complement argument. Semantically determined. Small clauses' grammaticality judgments are affected by their semantic value. The following examples show how semantic selection also affects predication of a small clause. Some small clauses that appear to be ungrammatical can be well-formed given the appropriate context. This suggests that the semantic relation of the main verb and the small clause affects sentences' grammaticality. Negation. Small clauses may not be negated by a negative modal or auxiliary verb, such as "don't, shan't", or "can't". Small clauses may only be negated by negative particles, such as "not". Constituency. There are a number of considerations that support or refute the one or the other analysis. The layered analysis, which, again, views the small clause as a constituent, is supported by the basic insight that the small clause functions as a single semantic unit, i.e. as a clause consisting of a subject and a predicate. Coordination. Only constituents of a like type can be joined via coordination. Small clauses can be coordinated, which suggests they are constituents of a like type, but see coordination (linguistics) on the controversy regarding the effectiveness and accuracy of coordination as a constituency test. The following examples illustrate small clause coordination for [NP AP] (32), and [NP NP/DP] (33) small clauses. Subjecthood. The layered analysis is also supported by the fact that in certain cases, a small clause can function as the subject of the greater clause, e.g. Most theories of syntax judge subjects to be single constituents, hence the small clauses "Bill behind the wheel" and "Sam drunk" here should each be construed as one constituent. Concerning small clauses in subject position, see Culicover, Haegeman and Guéron. Complement of "with". Further, small clauses can appear as the complement of "with", e.g.: These data are also easier to accommodate if the small clause is a constituent. Movement. One could argue, however, that small clauses in subject position and as the complement of "with" are fundamentally different from small clauses in object position. 
Some datapoints have the small clause following the matrix verb, whereby the subject of the small clause is also the object of the matrix clause. In such cases, the matrix verb appears to be subcategorizing for its object noun (phrase), which then functions as the subject of the small clause. In this regard, there are a number of observations suggesting that the object/subject noun phrase is a direct dependent of the matrix verb. If so, then this means the flat structure is the correct analysis. This captures that fact, with such object/subject noun phrases, as illustrated in (47), the small clause generally does not behave as a single constituent with respect to movement diagnostics. Thus, the "subject" of a small clause cannot participate in topicalization (47b), clefting (47c), pseudo-cleating (47d), nor can it served as an answer fragment (47e). Moreover, like ordinary object NPs, the "subject" of a small clause can becomes the subject of the corresponding passive sentence (47f), and can be realized as a reflexive pronoun that is coindexed with the matrix subject (47g). The datapoints in (47b-g) are consistent with the flat analysis of small clauses: in such an analysis the object of the matrix clause plays a dual role insofar as it is also the subject of the embedded predicate. Counter-Arguments. Small clauses' constituency status is not agreed upon by linguists. Some linguists argue that small clauses do not form a constituent, but rather form a noun phrase. One argument is that [NP AP small] clauses cannot occur in the subject position without modification, as shown by the ungrammatically of (48). However, these [NP AP] small clauses can occur after the verb if they are modified, such as in example (49). A second argument is coordination tests make incorrect predictions about constituency, particularly regarding small clauses. This casts doubt upon the status of small clauses as constituents. Another counterexample of constituency looks at depictive secondary predicates. One school of thought argues that this example has ["the water up"] behaving as a constituent small clause, while another school of thought argues that the verb "sponge" does not select for a small clause, and that "the water up" semantically, but not syntactically, shows the resultative state of the verb. Cross-linguistic variation. Raising-to-object. Complement small clauses are related to the phenomena of raising-to-object, therefore this theory will be discussed in more detail for English and Korean. English. Raising-to-object with a direct object is illustrated in (52) with the verb "proved." The bolded constituents represent the small clause of the sentence. By hypothesis, the raising-to-object analysis treats the subject of the small clause as having raised from the embedded small clause to the main clause "" Raising (linguistics) is obligatory in small clauses for the "make out" construction. This is evident by the grammaticality of (i) and ungrammaticality of (ii) without raising-to-object behaviour as demonstrated in the table below: The range of scope can also implicate the subject of Raising in small clauses. Semantically, wide scope entails a general situation, for example, "where everyone has some person that they love", whereas narrow scope entails a specific situation, for example, "where everyone love the same person". Considering only verbless small clauses, small clauses are only accessibly with the wide range of scope with respect to the main verb. Korean. 
In Korean, raising-to-object is optional from with complement clauses, but obligatory with complement small clauses. A fully inflected complement clause is given in (55), and the object "Mary" can be marked either with nominative case (55a) or with accusative case (55b). In contrast, with a complement small clause as in (56), the subject of the small clause can only be marked with accusative; thus while (56a) is ill-formed, (56b) is well-formed. Categorical restrictions. French (Romance). At first glance, French small clauses appear to be unrestricted relative to which category can realize a small clause. Illustrative examples are given below: there are [NP AP] small clauses (57); [NP PP] small clauses (58), as well as [NP VP] small clauses (59). However, there are some restrictions on NP VP constructions. The verb in example (59) is infinitival, without inflected tense, and takes a PP complement. However, the following example (d) is an NP VP small clause construction that is ungrammatical. Although the verb here is infinitival, it cannot grammatically take an AP complement. Coordination tests in French do not provide consistent evidence for small clauses' constituency. Below is an example (e) proving small clauses' constituency. The two small clauses in this example use an NP AP construction. However, the example (f) below makes an incorrect prediction about constituency. Sportiche provides two possible interpretations of this data: either coordination is not a reliable constituency test or the current theory of constituency should be revised to include strings such as the ones predicted above. Lithuanian (Balto-Slavic). Lithuanian small clauses may occur in a NP NP or NP AP construction. NP PP constructions are not small clauses in Lithuanian as the PP does not enter into a predicative relationship with the NP. The example (a) below is of an NP NP construction. The example (b) below is of an NP AP construction. While the English translation of the sentence includes the auxiliary verb "was", it is not present in Lithuanian. In Lithuanian, small clauses may be moved to the front of the sentence to become the topic. This suggests that the small clause operates as a single unit, or a constituent. Note that the sentence in example (c) in English is ungrammatical so it is marked with an asterisk, but the sentence is grammatical in Lithuanian. The phrase "her an immature brat" cannot be split up in example (d), which provides further evidence that the small clause behaves as a single unit. Mandarin (Sinitic). In Mandarin, a small clause does not only lack a verb and tense, but also the presence of functional projections. The reason for this is that the lexical entries for particular nouns in Mandarin not only contain the categorical feature for nouns, but also for verbs. Thus even with the lack of functional projections, nominals can be predicative in a small clause. (a) illustrates a complement small clause: it has no tense-marking, only a DP subject and an NP predicate. However, the semantic difference between Mandarin Chinese and English with regards to its small clauses are represented by example (b) and (c). Though (b) is the embedded small clause in the previous example, it cannot be a matrix clause. Despite having the same sentence structure, a small clause consisting of a DP and an NP, due to the ability of a nominal expression to also belong to a second category of verbs, example (c) is a grammatical sentence. 
This is evidence that there are more restrictive constraints on what is considered a small clause in Mandarin Chinese, which requires further research. Below is case of special usage of small clause used with the possessive verb "yǒu". The small clause is underlined. Here, the possessive verb "yǒu" takes a small clause complement in order to make a degree comparison between the subject and indirect object. Due to the following AP "gāo", here the possessive verb "yǒu" expresses a limit of the degree of tallness. It is only with a small clause complement that this uncommon degree use of the possessive verb can be communicated. Variable constituent order. Brazilian Portuguese. In Brazilian Portuguese, there are two types of small clauses: free small clauses and dependent small clauses. Dependent small clauses are similar to English in that they consist of an NP XP in a predicative relation. Like many other Romance languages, Brazilian Portuguese has free subject-predicate inversion, although it is restricted here to verbs with single arguments. Dependent small clauses may appear in either a standard, as in example (a), or an inverted form, as in example (b). In contrast, free small clauses cannot occur with subject-predicate order: in example (c), using an [NP AP] order renders the sentence. Free small clauses only occur in the inverted form: in example (d) the small clause has an [XP NP] order, specifically an [AP NP] order. The classification of free small clauses is under debate. Some linguists argue that these free small clauses are actually cleft sentences with finite tense, while other linguists believe that free small clauses are tense phrases without inflected tense on the surface. Spanish. In Spanish, like many Romance languages, there is some flexibility in small clause construction due to the flexibility in word order. This is posited to be due to the fact that Spanish is an example of a language that is discourse-prominent and agreement-oriented. This passing of features onto the v allows a separation of the object from the verb when the focus of the sentence changes. The final position in a sentence is reserved for the focus as seen by the differences in (a) and (b). The difference in preference for one construction over the other ([XP NP] versus [NP XP]) is determined by discourse features. Refer to the following two examples. In (c) the establish topic is the XP, AP in this case, meaning the information we are seeking is the NP. Answer In the following example (e) the reverse is true. We are given the NP in the question and are seeking the information of the XP. Answer Notice in (d) and (f) that the English answer remains the same regardless of the question, but in Spanish, one ordering is preferred over the other. When the new information being presented is the XP, the construction preferred is [NP XP]. This is because the sentence-final position is reserved for focus. It is worth noting that the non-preferred formations (d)(ii) and (f)(i) can be accepted as grammatical if the new information is given the prosodic stress or the established information is destressed, and there is a longer pause between the two constituents, making it right-dislocated. Greek. Greek is another example of a language that is discourse-prominent and agreement-oriented, allowing features to be passed onto the v. This allows for flexibility in word order depending on the changing focus of the small clause. This example can be shown in (a) and (b). 
The construction can either take [XP NP] or [NP XP] formations with the focused constituent appearing sentence-finally. The difference in preference for one construction over the other is determined by discourse features. Newly given information is considered the focus of the sentence and is therefore preferred in sentence-final position. Refer to examples (c) and (e). In (c) the information we are given is the XP (AP in this case) and the information we are seeking is the DP. This means that the preferred construction is [XP DP]. The reverse is true of example (e). Answer Answer It is worth noting that the non-preferred formations (d)(ii) and (f)(i) can be accepted as grammatical if the new information not in sentence-final position is given the emphatic stress. Expressive exclamatives. English. Expressive Small Clauses, like SCs are verbless and the noun does not carry descriptive content but instead carries expressive content. Expressive Small Clauses are evidence that small clauses learned in early development, last until adulthood for language speakers. ESCs are illustrated in (a). Expressive small clauses are never used in an argument position of the phrase as seen in (b-i) and do not generally occur within the embedded clause of a sentence as seen in (b-ii). Both of the examples below are ungrammatical. The bolded constituents are the ESCs. Unlike ESCs in English, Japanese ESCs differ in two ways: second person pronouns are not used, and ESCs sometimes appear in argument position. The example below shows a well-formed ESC in Japanese. Japanese. The phrase in (a) illustrates the pattern found in Japanese ESCs: [NP1"—no—"NP2]. (a) illustrates the use of a proximate demonstrative in NP1 position. Additionally, first person pronouns, kinship terms, proper names, and other nouns with a vocative use are able to appear in NP1 position"—"except for the intermediate demonstrative "so" (the/that) which is not permitted in ESCs. While (b) is not ungrammatical, it sounds odd and is uncommonly used. This is also true of other second person pronouns in Japanese: "omae", "kisama", and "temee" (in progressively impolite forms). (c) illustrates the use of an ESC in argument position. Notably, ESCs in argument positions lack contextual requirements found in regular ESCs. Japanese ESCs that are not found in argument position require the addressee to be the same as the noun in NP1 position. (c) shows that the addressee of the sentence (Yamada) does not need to be the same as the referent of the ESCs in argument position (Tanaka). Information structure. English: intonation. Because English is agreement-prominent, there is inflexible SC word order and a heavy importance on intonational focus. Though both answers in English use the same words, focus is given by prosodic stress. Spanish: word order and intonation. Spanish has a flexible SC word order, and word order determines focus but prosodic stress is able to be used to make non-preferred constructions felicitous. These examples show the non-felicitous construction but they would be accepted by speakers if the underlined constituents are given emphatic stress and precede a long pause. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Literature. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=6043849
60439463
Cartan–Ambrose–Hicks theorem
In mathematics, the Cartan–Ambrose–Hicks theorem is a theorem of Riemannian geometry, according to which the Riemannian metric is locally determined by the Riemann curvature tensor, or in other words, the behavior of the curvature tensor under parallel translation determines the metric. The theorem is named after Élie Cartan, Warren Ambrose, and his PhD student Noel Hicks. Cartan proved the local version. Ambrose proved a global version that allows for isometries between general Riemannian manifolds with varying curvature, in 1956. This was further generalized by Hicks to general manifolds with affine connections in their tangent bundles, in 1959. A statement and proof of the theorem can be found in Introduction. Let formula_0 be connected, complete Riemannian manifolds. We consider the problem of isometrically mapping a small patch on formula_1 to a small patch on formula_2. Let formula_3, and let formula_4 be a linear isometry. This can be interpreted as isometrically mapping an infinitesimal patch (the tangent space) at formula_5 to an infinitesimal patch at formula_6. Now we attempt to extend it to a finite (rather than infinitesimal) patch. For sufficiently small formula_7, the exponential maps formula_8 are local diffeomorphisms. Here, formula_9 is the ball centered on formula_5 of radius formula_10 One then defines a diffeomorphism formula_11 by formula_12 When is formula_13 an isometry? Intuitively, it should be an isometry if it maps geodesics to geodesics and preserves the curvature along them. If formula_13 is an isometry, it must preserve the geodesics. Thus, it is natural to consider the behavior of formula_13 as we transport it along an arbitrary geodesic radius formula_14 starting at formula_15. By the defining property of the exponential map, formula_13 maps it to a geodesic radius of formula_16 starting at formula_17. Let formula_18 be the parallel transport along formula_19 (defined by the Levi-Civita connection), and let formula_20 be the parallel transport along formula_21. Then we have the mapping between infinitesimal patches along the two geodesic radii: formula_22 Cartan's theorem. The original theorem proven by Cartan is the local version of the Cartan–Ambrose–Hicks theorem. formula_13 is an isometry if and only if, for all geodesic radii formula_14 with formula_15 and all formula_23, we have formula_24 where formula_25 are the Riemann curvature tensors of formula_0. In words, it states that formula_13 is an isometry if and only if the infinitesimal isometries obtained by parallel transport along geodesic radii also preserve the Riemann curvature. Note that formula_13 generally does not have to be a diffeomorphism, but only a locally isometric covering map. However, formula_13 must be a global isometry if formula_2 is simply connected.
A Riemannian manifold is called "locally symmetric" if its Riemann curvature tensor is invariant under parallel transport: formula_31 A simply connected Riemannian manifold is locally symmetric if it is a symmetric space. From the Cartan–Ambrose–Hicks theorem, we have: Theorem: Let formula_0 be connected, complete, locally symmetric Riemannian manifolds, and let formula_1 be simply connected. Let their Riemann curvature tensors be formula_25. Let formula_3 and formula_4 be a linear isometry with formula_32. Then there exists a locally isometric covering map formula_29 with formula_33 and formula_34. Corollary: Any complete locally symmetric space is of the form formula_35, where formula_1 is a symmetric space and formula_36 is a discrete subgroup of isometries of formula_1. Classification of space forms. As an application of the Cartan–Ambrose–Hicks theorem, any simply connected, complete Riemannian manifold with constant sectional curvature formula_37 is respectively isometric to the "n"-sphere formula_38, the "n"-Euclidean space formula_39, and the "n"-hyperbolic space formula_40. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M,N" }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "x\\in M,y\\in N" }, { "math_id": 4, "text": "I:T_xM\\rightarrow T_yN" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "y" }, { "math_id": 7, "text": "r>0" }, { "math_id": 8, "text": "\\exp_x:B_r(x)\\subset T_xM\\rightarrow M, \\exp_y:B_r(y)\\subset T_yN\\rightarrow N" }, { "math_id": 9, "text": "B_r(x)" }, { "math_id": 10, "text": "r." }, { "math_id": 11, "text": "f:B_r(x)\\rightarrow B_r(y)" }, { "math_id": 12, "text": "f=\\exp_y\\circ I\\circ \\exp_x^{-1}." }, { "math_id": 13, "text": "f" }, { "math_id": 14, "text": "\\gamma:\\left[0,T\\right]\\rightarrow B_r(x)\\subset M" }, { "math_id": 15, "text": "\\gamma(0)=x" }, { "math_id": 16, "text": "B_r(y)" }, { "math_id": 17, "text": "f(\\gamma)(0)=y" }, { "math_id": 18, "text": "P_\\gamma(t)" }, { "math_id": 19, "text": "\\gamma" }, { "math_id": 20, "text": "P_{f(\\gamma)(t)}" }, { "math_id": 21, "text": "f(\\gamma)" }, { "math_id": 22, "text": "I_\\gamma(t)=P_{f(\\gamma)(t)}\\circ I\\circ P_{\\gamma(t)}^{-1}:T_{\\gamma(t)}M\\rightarrow T_{f(\\gamma(t))}N \\quad \\text{ for all } t\\in [0, T]" }, { "math_id": 23, "text": "t\\in [0, T], X,Y,Z\\in T_{\\gamma(t)}M" }, { "math_id": 24, "text": "I_{\\gamma}(t)(R(X,Y,Z))=\\overline{R}(I_{\\gamma}(t)(X), I_\\gamma(t)(Y), I_{\\gamma}(t)(Z))" }, { "math_id": 25, "text": "R,\\overline{R}" }, { "math_id": 26, "text": "\\gamma:\\left[0,T\\right]\\rightarrow M" }, { "math_id": 27, "text": "I_\\gamma(t)(R(X,Y,Z))=\\overline{R}(I_\\gamma(t)(X), I_\\gamma(t)(Y), I_\\gamma(t)(Z))" }, { "math_id": 28, "text": "I_\\gamma" }, { "math_id": 29, "text": "F:M\\rightarrow N" }, { "math_id": 30, "text": "F" }, { "math_id": 31, "text": "\\nabla R=0." }, { "math_id": 32, "text": "I(R(X,Y,Z))=\\overline{R}(I(X),I(Y),I(Z))" }, { "math_id": 33, "text": "F(x)=y" }, { "math_id": 34, "text": "D_xF=I" }, { "math_id": 35, "text": "M/\\Gamma" }, { "math_id": 36, "text": "\\Gamma\\subset \\mathrm{Isom}(M)" }, { "math_id": 37, "text": "\\in\\{+1, 0, -1\\}" }, { "math_id": 38, "text": "S^n" }, { "math_id": 39, "text": "E^n" }, { "math_id": 40, "text": "\\mathbb H^n" } ]
https://en.wikipedia.org/wiki?curid=60439463
60442605
Region (model checking)
In model checking, a field of computer science, a region is a convex polytope in formula_0 for some dimension formula_1, and more precisely a zone, satisfying some minimality property. The regions partition formula_0. The set of zones depends on a set formula_2 of constraints of the form formula_3, formula_4, formula_5 and formula_6, with formula_7 and formula_8 some variables, and formula_9 a constant. The regions are defined such that if two vectors formula_10 and formula_11 belong to the same region, then they satisfy the same constraints of formula_2. Furthermore, when those vectors are considered as a tuple of clocks, both vectors have the same set of possible futures. Intuitively, it means that any timed propositional temporal logic formula, timed automaton or signal automaton using only the constraints of formula_2 cannot distinguish the two vectors. The set of regions allows one to construct the region automaton, which is a directed graph in which each node is a region, and each edge formula_12 ensures that formula_13 is a possible future of formula_14. Taking a product of this region automaton and of a timed automaton formula_15 which accepts a language formula_16 creates a finite automaton or a Büchi automaton which accepts untimed formula_16. In particular, it allows the emptiness problem for formula_15 to be reduced to the emptiness problem for a finite or Büchi automaton. This technique is used for example by the software UPPAAL. Definition. Let formula_17 be a set of clocks. For each formula_18, let formula_19. Intuitively, this number represents an upper bound on the values to which the clock formula_20 can be compared. The definition of a region over the clocks of formula_21 uses the numbers formula_22. Three equivalent definitions are now given. Given a clock assignment formula_23, formula_24 denotes the region to which formula_23 belongs. The set of regions is denoted by formula_25. Equivalence of clock assignments. The first definition allows one to easily test whether two assignments belong to the same region. A region may be defined as an equivalence class for some equivalence relation. Two clock assignments formula_26 and formula_27 are equivalent if they satisfy the following constraints: The first kind of constraints ensures that formula_26 and formula_27 satisfy the same constraints. Indeed, if formula_38 and formula_39, then only the second assignment satisfies formula_40. On the other hand, if formula_38 and formula_41, both assignments satisfy exactly the same set of constraints, since the constraints use only integral constants. The second kind of constraints ensures that the futures of the two assignments satisfy the same constraints. For example, let formula_42 and formula_43. Then, the constraint formula_44 is eventually satisfied by the future of formula_45 without clock reset, but not by the future of formula_27 without clock reset. Explicit definition of a region. While the previous definition allows one to test whether two assignments belong to the same region, it does not allow a region to be easily represented as a data structure. The second definition, given below, provides a canonical encoding of a region. A region can be explicitly defined as a zone, using a set formula_46 of equations and inequations satisfying the following constraints: Note that, when formula_9 and formula_55 are fixed, the last constraint is equivalent to formula_56. This definition allows a region to be encoded as a data structure. 
It suffices, for each clock, to state to which interval it belongs and to recall the order of the fractional parts of the clocks which belong to an open interval of length 1. It follows that the size of this structure is formula_57 with formula_58 the number of clocks. Timed bisimulation. Let us now give a third definition of regions. While this definition is more abstract, it is also the reason why regions are used in model checking. Intuitively, this definition states that two clock assignments belong to the same region if the differences between them are such that no timed automaton can notice them. Given any run formula_14 starting with a clock assignment formula_23, for any other assignment formula_59 in the same region, there is a run formula_13, going through the same locations, reading the same letters, where the only difference is that the time waited between two successive transitions may be different, and thus the successive clock variations are different. The formal definition is now given. Given a set of clocks formula_21, two clock assignments formula_26 and formula_27 belong to the same region if for each timed automaton formula_15 in which the guards never compare a clock formula_20 to a number greater than formula_22, given any location formula_60 of formula_15, there is a timed bisimulation between the extended states formula_61 and formula_62. More precisely, this bisimulation preserves letters and locations but not the exact clock assignments. Operations on regions. Some operations are now defined over regions: resetting some of its clocks, and letting time pass. Resetting clocks. Given a region formula_63 defined by a set of (in)equations formula_46, and a set of clocks formula_64, the region similar to formula_63 in which the clocks of formula_65 are reset is now defined. This region is denoted by formula_66; it is defined by the following constraints: The set of assignments defined by formula_66 is exactly the set of assignments formula_69 for formula_70. Time-successor. Given a region formula_63, the regions which can be attained without resetting a clock are called the time-successors of formula_63. Two equivalent definitions are now given. Definition. A clock region formula_71 is a time-successor of another clock region formula_63 if for each assignment formula_70, there exists some positive real formula_72 such that formula_73. Note that it does not mean that formula_74. For example, the region formula_63 defined by the set of constraints formula_75 has the time-successor formula_71 defined by the set of constraints formula_76. Indeed, for each formula_70, it suffices to take formula_77. However, there exists no real formula_78 such that formula_79 or even such that formula_80; indeed, formula_63 defines a triangle while formula_71 defines a segment. Computable definition. The second definition, now given, allows one to explicitly compute the set of time-successors of a region, given by its set of constraints. Given a region formula_63 defined as a set of constraints formula_46, let us define its set of time-successors. In order to do so, the following variables are required. Let formula_81 be the set of constraints of formula_46 of the form formula_82. Let formula_83 be the set of clocks formula_84 such that formula_46 contains the constraint formula_85. Let formula_86 be the set of clocks formula_87 such that there are no constraints of the form formula_88 in formula_46. If formula_89 is empty, formula_63 is its own time-successor. 
If formula_90, then formula_63 is the only time-successor of formula_63. Otherwise, there is a least time-successor of formula_63 not equal to formula_63. The least time-successor, if formula_89 is non-empty, contains: If formula_89 is empty, the least time-successor is defined by the following constraints: Properties. There are at most formula_99 regions, where formula_58 is the number of clocks. Region automaton. Given a timed automaton formula_15, its region automaton is a finite automaton or a Büchi automaton which accepts untimed formula_16. This automaton is similar to formula_15, where clocks are replaced by regions. Intuitively, the region automaton is constructed as a product of formula_15 and of the region graph. This region graph is defined first. Region graph. The region graph is a rooted directed graph which models the set of possible clock valuations during a run of a timed automaton. It is defined as follows: Region automaton. Let formula_103 be a timed automaton. For each clock formula_30, let formula_22 be the greatest number formula_9 such that there exists a guard of the form formula_104 in formula_15. The region automaton of formula_15, denoted by formula_105, is a finite or Büchi automaton which is essentially a product of formula_15 and of the region graph defined above. That is, each state of the region automaton is a pair containing a location of formula_15 and a region. Since two clock assignments belonging to the same region satisfy the same guards, each region contains enough information to decide which transitions can be taken. Formally, the region automaton is defined as follows: Given any run formula_114 of formula_15, the sequence formula_115 is denoted formula_116; it is a run of formula_105 and is accepting if and only if formula_14 is accepting. It follows that formula_117. In particular, formula_15 accepts a timed word if and only if formula_105 accepts a word. Furthermore, an accepting run of formula_15 can be computed from an accepting run of formula_105.
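To make the canonical encoding of a region described above concrete, here is a minimal Haskell sketch; the type and field names are illustrative assumptions, not part of any model-checking tool such as UPPAAL. Following the explicit definition, each clock is assigned either an exact integer value, an open unit interval between two consecutive integers below its maximal constant, or the unbounded interval above that constant, and the ordering of the fractional parts of the clocks lying in open intervals is recorded separately.

-- A sketch of the canonical region encoding; illustrative names, not a library API.
import qualified Data.Map as M

type Clock = String

-- Interval to which a clock value belongs, relative to its maximal constant c_x.
data ClockInterval
  = Exactly Int        -- the clock equals an integer c with 0 <= c <= c_x
  | Between Int Int    -- the clock lies in the open interval (c, c+1), with c < c_x
  | AboveMax           -- the clock is strictly greater than c_x
  deriving (Eq, Show)

data Region = Region
  { intervals       :: M.Map Clock ClockInterval
  , fractionalOrder :: [[Clock]]  -- groups of clocks with equal fractional part,
                                  -- listed in increasing order of fractional part
  } deriving (Eq, Show)

-- Example: x in (0,1), y in (0,1), with frac(x) < frac(y).
exampleRegion :: Region
exampleRegion = Region
  { intervals       = M.fromList [("x", Between 0 1), ("y", Between 0 1)]
  , fractionalOrder = [["x"], ["y"]]
  }

main :: IO ()
main = print exampleRegion

Two clock assignments then belong to the same region exactly when they give rise to equal values of this type, which is in line with the structure size formula_57 mentioned earlier.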
[ { "math_id": 0, "text": "\\mathbb R^d" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "K" }, { "math_id": 3, "text": "x\\le c" }, { "math_id": 4, "text": "x\\ge c" }, { "math_id": 5, "text": "x_1\\le x_2+c" }, { "math_id": 6, "text": "x_1\\ge x_2+c" }, { "math_id": 7, "text": "x_1" }, { "math_id": 8, "text": "x_2" }, { "math_id": 9, "text": "c" }, { "math_id": 10, "text": "\\vec x" }, { "math_id": 11, "text": "\\vec x'" }, { "math_id": 12, "text": "r\\to r'" }, { "math_id": 13, "text": "r'" }, { "math_id": 14, "text": "r" }, { "math_id": 15, "text": "\\mathcal A" }, { "math_id": 16, "text": "L" }, { "math_id": 17, "text": "C=\\{x_1,\\dots,x_d\\}" }, { "math_id": 18, "text": "x\\in\\mathbb N" }, { "math_id": 19, "text": "c_x\\in\\mathbb N" }, { "math_id": 20, "text": "x" }, { "math_id": 21, "text": "C" }, { "math_id": 22, "text": "c_x" }, { "math_id": 23, "text": "\\nu" }, { "math_id": 24, "text": "[\\nu]" }, { "math_id": 25, "text": "\\mathcal R" }, { "math_id": 26, "text": "\\nu_1" }, { "math_id": 27, "text": "\\nu_2" }, { "math_id": 28, "text": "\\nu_1(x)\\sim c" }, { "math_id": 29, "text": "\\nu_2(x)\\sim c" }, { "math_id": 30, "text": "x\\in C" }, { "math_id": 31, "text": "0\\le c\\le c_x" }, { "math_id": 32, "text": "\\{\\nu_1(x)\\}\\sim \\{\\nu_1(y)\\}" }, { "math_id": 33, "text": "\\{\\nu_2(x)\\}\\sim \\{\\nu_2(y)\\}" }, { "math_id": 34, "text": "x,y\\in C" }, { "math_id": 35, "text": "\\nu_1(x)\\le c_x" }, { "math_id": 36, "text": "\\nu_1(y)\\le c_y" }, { "math_id": 37, "text": "\\{r\\}" }, { "math_id": 38, "text": "\\nu_1(x)=0.5" }, { "math_id": 39, "text": "\\nu_2(x)=1" }, { "math_id": 40, "text": "x=1" }, { "math_id": 41, "text": "\\nu_2(x)=0.6" }, { "math_id": 42, "text": "\\nu_1=\\{x\\mapsto 0.5, y\\mapsto 0.6\\}" }, { "math_id": 43, "text": "\\nu_2=\\{x\\mapsto 0.5, y\\mapsto 0.4\\}" }, { "math_id": 44, "text": "y=1\\land x< 1" }, { "math_id": 45, "text": "\\nu_1 " }, { "math_id": 46, "text": "S" }, { "math_id": 47, "text": "x=c" }, { "math_id": 48, "text": "x\\in(c,c+1)" }, { "math_id": 49, "text": "0\\le c< c_x" }, { "math_id": 50, "text": "x> c_x" }, { "math_id": 51, "text": "x\\in (c,c+1)" }, { "math_id": 52, "text": "y\\in (c',c'+1)" }, { "math_id": 53, "text": "\\{x\\}\\sim \\{y\\}" }, { "math_id": 54, "text": "\\sim" }, { "math_id": 55, "text": "c'" }, { "math_id": 56, "text": "x\\sim y+c-c'" }, { "math_id": 57, "text": "O\\left(\\sum\\log(c_k)+|C|\\log(|C|)\\right)" }, { "math_id": 58, "text": "|C|" }, { "math_id": 59, "text": "\\nu'" }, { "math_id": 60, "text": "\\ell" }, { "math_id": 61, "text": "(\\ell,\\nu_1)" }, { "math_id": 62, "text": "(\\ell,\\nu_2)" }, { "math_id": 63, "text": "\\alpha" }, { "math_id": 64, "text": "C'\\subseteq C" }, { "math_id": 65, "text": "C'" }, { "math_id": 66, "text": "\\alpha[C'\\mapsto 0]" }, { "math_id": 67, "text": "x=0" }, { "math_id": 68, "text": "x\\in C'" }, { "math_id": 69, "text": "\\nu[C'\\mapsto0]" }, { "math_id": 70, "text": "\\nu\\in\\alpha" }, { "math_id": 71, "text": "\\alpha'" }, { "math_id": 72, "text": "t_{\\nu,\\alpha'}>0" }, { "math_id": 73, "text": "\\nu+t_{\\nu,\\alpha'}\\in\\alpha'" }, { "math_id": 74, "text": "\\alpha+t_{\\nu,\\alpha'}=\\alpha'" }, { "math_id": 75, "text": "\\{0< x< 1, 0< y<1, x< y\\}" }, { "math_id": 76, "text": "\\{0< x< 1, y=1\\}" }, { "math_id": 77, "text": "t_{\\nu,\\alpha'}=1-\\nu(y)" }, { "math_id": 78, "text": "t" }, { "math_id": 79, "text": "\\alpha+t=\\alpha'" }, { "math_id": 80, "text": "\\alpha+t\\subseteq\\alpha'" }, { "math_id": 81, "text": "T\\subseteq S" }, { 
"math_id": 82, "text": "x_i=c_i" }, { "math_id": 83, "text": "Y\\subseteq C" }, { "math_id": 84, "text": "y" }, { "math_id": 85, "text": "y>c_y" }, { "math_id": 86, "text": "Z\\subseteq C\\setminus Y" }, { "math_id": 87, "text": "\\{z\\}" }, { "math_id": 88, "text": "\\{x\\}<\\{z\\}" }, { "math_id": 89, "text": "T" }, { "math_id": 90, "text": "Y=C" }, { "math_id": 91, "text": "S\\setminus T" }, { "math_id": 92, "text": "x_i>c_i" }, { "math_id": 93, "text": "\\{x_i\\}=\\{x_j\\}" }, { "math_id": 94, "text": "x_i< y" }, { "math_id": 95, "text": "Z" }, { "math_id": 96, "text": "z=c+1" }, { "math_id": 97, "text": "c< z< c+1" }, { "math_id": 98, "text": "z\\in Z" }, { "math_id": 99, "text": "|C|!2^{|C|}\\prod_{x\\in C}(2c_x+2)" }, { "math_id": 100, "text": "\\alpha_0" }, { "math_id": 101, "text": "\\{x=0\\mid x\\in C\\}" }, { "math_id": 102, "text": "(\\alpha,\\alpha'[C'\\mapsto 0])" }, { "math_id": 103, "text": "\\mathcal A=\\langle \\Sigma,L,L_0,C,F,E\\rangle" }, { "math_id": 104, "text": "x\\sim c" }, { "math_id": 105, "text": "R(\\mathcal A)" }, { "math_id": 106, "text": "\\Sigma" }, { "math_id": 107, "text": "L\\times\\mathcal R" }, { "math_id": 108, "text": "L_0\\times\\{\\alpha_0\\}" }, { "math_id": 109, "text": "F\\times\\mathcal R" }, { "math_id": 110, "text": "\\delta" }, { "math_id": 111, "text": "((\\ell,\\alpha),a,(\\ell',\\alpha'[C'\\mapsto0]))" }, { "math_id": 112, "text": "(\\ell,a,g,C',\\ell')\\in E" }, { "math_id": 113, "text": "\\gamma\\models\\alpha'" }, { "math_id": 114, "text": "r=(\\ell_0,\\nu_0)\\xrightarrow[t_1]{\\sigma_1}(\\ell_1,\\nu_1)\\dots" }, { "math_id": 115, "text": "(\\ell_0,[\\nu_0])\\xrightarrow{\\sigma_1}(\\ell_1,[\\nu_1])\\dots" }, { "math_id": 116, "text": "[r]" }, { "math_id": 117, "text": "L(R(\\mathcal A))=\\operatorname{Untime}(L(\\mathcal A))" } ]
https://en.wikipedia.org/wiki?curid=60442605
60444111
Fracture of soft materials
The fracture of soft materials involves large deformations and crack blunting before propagation of the crack can occur. Consequently, the stress field close to the crack tip is significantly different from the traditional formulation encountered in linear elastic fracture mechanics. Therefore, fracture analysis for these applications requires special attention. The Linear Elastic Fracture Mechanics (LEFM) and K-field (see Fracture Mechanics) are based on the assumption of infinitesimal deformation, and as a result are not suitable to describe the fracture of soft materials. However, the general LEFM approach can be applied to understand the basics of fracture in soft materials. The solution for the deformation and crack stress field in soft materials considers large deformation and is derived from the finite strain elastostatics framework and hyperelastic material models. Soft materials (soft matter) include, for example, soft biological tissues as well as synthetic elastomers, and are very sensitive to thermal variations. Hence, soft materials can become highly deformed before crack propagation. Hyperelastic material models. Hyperelastic material models are utilized to obtain the stress–strain relationship through a strain energy density function. Relevant models for deriving stress-strain relations for soft materials are: Mooney-Rivlin solid, Neo-Hookean, Exponentially hardening material and Gent hyperelastic models. On this page, the results will be primarily derived from the Neo-Hookean model. Generalized neo-Hookean (GNH). The Neo-Hookean model is generalized to account for the hardening factor: formula_0 where b > 0 and n > 1/2 are material parameters, and formula_1 is the first invariant of the Cauchy-Green deformation tensor: formula_2 where formula_3 are the principal stretches. Specific Neo-Hookean model. Setting n=1, the specific stress-strain function for the neo-Hookean model is derived: formula_4. Finite strain crack tip solutions (under large deformation). Since LEFM is no longer applicable, alternative methods are adapted to capture large deformations in the calculation of stress and deformation fields. In this context the method of asymptotic analysis is of relevance. Method of asymptotic analysis. The method of asymptotic analysis consists of analyzing the crack tip asymptotically to find a series expansion of the deformed coordinates capable of characterizing the solution near the crack tip. The analysis is reducible to a nonlinear eigenvalue problem. The problem is formulated based on a crack in an infinite solid, loaded at infinity with uniform uni-axial tension under conditions of plane strain (see Fig. 1). As the crack deforms and progresses, the coordinates in the current configuration are represented by formula_6 and formula_7 in a Cartesian basis and formula_8 and formula_9 in a polar basis. The coordinates formula_6 and formula_7 are functions of the undeformed coordinates (formula_5) and near the crack tip, as r→0, can be specified as: formula_10 formula_11 formula_12 where formula_13, formula_14 are unknown exponents, and formula_15, formula_16 are unknown functions describing the angular variation. In order to obtain the eigenvalues, the equation above is substituted into the constitutive model, which yields the corresponding nominal stress components. Then, the stresses are substituted into the equilibrium equations (the same formulation as in LEFM theory) and the boundary conditions are applied. 
The most dominant terms are retained, resulting in an eigenvalue problem for formula_15 and formula_13. Deformation and stress field in a plane strain crack. For the case of a homogeneous neo-Hookean solid (n=1) under Mode I conditions, the deformed coordinates for a plane strain configuration are given by formula_17 where a and formula_18 are unknown positive amplitudes that depend on the applied loading and specimen geometry. The leading terms for the nominal stress (or first Piola–Kirchhoff stress, denoted by formula_19 on this page) are: formula_20 formula_21 formula_22 Thus, formula_23 and σ12 are bounded at the crack tip, while formula_24 and formula_25 have the same singularity. The leading terms for the true stress (or Cauchy stress, denoted by formula_26 on this page) are formula_27 formula_28 formula_29 The only true stress component completely defined by a is formula_30. It also presents the most severe singularity. With that, it is clear that the singularity differs if the stress is given in the current or reference configuration. Additionally, in LEFM, the true stress field under Mode I has a singularity of formula_31, which is weaker than the singularity in formula_30. While in LEFM the near tip displacement field depends only on the Mode I stress intensity factor, it is shown here that for large deformations, the displacement depends on two parameters (a and formula_18 for a plane strain condition). Deformation and stress field in a plane stress crack. The crack tip deformation field for a Mode I configuration in a homogeneous neo-Hookean solid (n=1) is given by formula_32 where a and c are positive independent amplitudes determined by far field boundary conditions. The dominant terms of the nominal stress are formula_33 formula_34 formula_35 and the true stress components are formula_36 formula_37 Analogously, the displacement depends on two parameters (a and c for a plane stress condition) and the singularity is stronger in the formula_30 term. The distribution of the true stress in the deformed coordinates (as shown in Fig. 1B) can be relevant when analyzing crack propagation and blunting phenomena. Additionally, it is useful when verifying experimental results of the deformation of the crack. J-integral. The J-integral represents the energy that flows to the crack; hence, it is used to calculate the energy release rate, G. Additionally, it can be used as a fracture criterion. This integral is found to be path independent as long as the material is elastic and damage to the microstructure does not occur. Evaluating J on a circular path in the reference configuration yields formula_38 for plane strain Mode I, where a is the amplitude of the leading order term of formula_39 and A and n are material parameters from the strain-energy function. For plane stress Mode I in a generalized neo-Hookean material, J is given by formula_40 where b and n are material parameters of GNH solids. For the specific case of a neo-Hookean model, where n=1, b=1 and formula_41, the J-integral for plane stress and plane strain in Mode I is the same: formula_42 J-integral in the pure-shear experiment. The J-integral can be determined by experiments. One common experiment is the pure-shear test of an infinitely long strip, as shown in Fig. 2. The upper and bottom edges are clamped by grips and the loading is applied by pulling the grips vertically apart by ± ∆. This setup generates a condition of plane stress. 
Under these conditions, the J-integral is therefore evaluated as formula_43 where formula_44 formula_45 and formula_46 is the height of the strip in the undeformed state. The function formula_47 is determined by measuring the nominal stress acting on the strip stretched by formula_48: formula_49 Therefore, from the imposed displacement of each grip, ± ∆, it is possible to determine the J-integral for the corresponding nominal stress. With the J-integral, the amplitude (parameter a) of some true stress components can be found. The amplitudes of some other stress components, however, depend on other parameters such as c (e.g., formula_23 under plane stress conditions) and cannot be determined by the pure-shear experiment. Nevertheless, the pure-shear experiment is very important because it allows the characterization of the fracture toughness of soft materials. Interface cracks. To approach the adhesion interaction between soft adhesives and rigid substrates, the asymptotic solution for an interface crack problem between a GNH material and a rigid substrate is specified. The interface crack configuration considered here is shown in Fig. 3, where lateral slip is disregarded. For the special neo-Hookean case with n=1 and formula_50, the solution for the deformed coordinates is formula_51 formula_52 which is equivalent to formula_53 According to the above equation, the crack on this type of interface is found to open with a parabolic shape. This is confirmed by plotting the normalized coordinates formula_54 vs formula_55 for different formula_56 ratios (see Fig. 4). To go through the analysis of the interface between two GNH sheets with the same hardening characteristics, refer to the model described by Geubelle and Knauss.
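As a small numerical illustration of the J-integral expressions above, here is a minimal Haskell sketch (not from the article); the function names and the sample numbers are illustrative assumptions, with the Mode I neo-Hookean value taken from the relation J = μπa²/4 and the pure-shear value from J = 2h₀Ψ(λ) with Ψ approximated by integrating measured nominal stress over the stretch.

-- Mode I J-integral for the specific neo-Hookean model (plane stress or plane
-- strain): J = mu * pi * a^2 / 4, with shear modulus mu and amplitude a.
jNeoHookean :: Double -> Double -> Double
jNeoHookean mu a = mu * pi * a ** 2 / 4

-- Pure-shear estimate J = 2 * h0 * Psi(lambda), where Psi is obtained by
-- numerically integrating the measured nominal stress from stretch 1 to lambda.
-- The (stretch, nominal stress) pairs stand in for experimental data points.
jPureShear :: Double -> [(Double, Double)] -> Double
jPureShear h0 samples = 2 * h0 * trapezoid samples
  where
    -- trapezoidal rule over successive (lambda, sigma) samples
    trapezoid pts = sum [ 0.5 * (s1 + s2) * (l2 - l1)
                        | ((l1, s1), (l2, s2)) <- zip pts (tail pts) ]

main :: IO ()
main = do
  -- hypothetical inputs: mu = 0.5 MPa, a = 2e-3, h0 = 10 mm; units are illustrative
  print (jNeoHookean 0.5e6 2.0e-3)
  print (jPureShear 0.010 [(1.0, 0), (1.1, 0.05e6), (1.2, 0.09e6), (1.3, 0.12e6)])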
[ { "math_id": 0, "text": "W = \\frac{\\mu}{2b} \\left \\{ \\left[ 1 + \\frac{b}{n}(I-3) \\right]^{n} - 1 \\right \\} ," }, { "math_id": 1, "text": "I = I_{1}" }, { "math_id": 2, "text": "I_{1} = \\lambda_{1}^{2}+\\lambda_{2}^{2}+\\lambda_{3}^{2} ," }, { "math_id": 3, "text": "\\lambda_{\\alpha} " }, { "math_id": 4, "text": "W = \\frac{\\mu}{2} (I-3)" }, { "math_id": 5, "text": "r, \\theta" }, { "math_id": 6, "text": "y_{1}" }, { "math_id": 7, "text": "y_{2}" }, { "math_id": 8, "text": "\\rho" }, { "math_id": 9, "text": "\\phi" }, { "math_id": 10, "text": "y_{\\alpha}(r,\\theta) = r^{m_{\\alpha}} \\upsilon_{\\alpha}(\\theta)+r^{p_{\\alpha}} q_{\\alpha}(\\theta)+... " }, { "math_id": 11, "text": "m_{\\alpha}<p_{\\alpha}, " }, { "math_id": 12, "text": "\\alpha = 1,2 ," }, { "math_id": 13, "text": "m_{\\alpha} " }, { "math_id": 14, "text": "p_{\\alpha} " }, { "math_id": 15, "text": "\\upsilon_{\\alpha}(\\theta) " }, { "math_id": 16, "text": "q_{\\alpha}(\\theta) " }, { "math_id": 17, "text": "y_{1}=-b_{0}r \\sin^{2}(\\theta/2), \\quad y_{2}=ar^{1/2}\\sin(\\theta/2) ," }, { "math_id": 18, "text": "b_{0}" }, { "math_id": 19, "text": "\\sigma" }, { "math_id": 20, "text": "\\sigma_{11}=\\mu b_{0}, \\quad \\sigma_{12}=o(1) ," }, { "math_id": 21, "text": "\\sigma_{21}=-\\frac{\\mu a}{2} r^{-1/2}\\sin(\\theta/2) ," }, { "math_id": 22, "text": "\\sigma_{22}=\\frac{\\mu a}{2} r^{-1/2}\\cos(\\theta/2) ." }, { "math_id": 23, "text": "\\sigma_{11}" }, { "math_id": 24, "text": "\\sigma_{21}" }, { "math_id": 25, "text": "\\sigma_{22}" }, { "math_id": 26, "text": "\\tau" }, { "math_id": 27, "text": "\\tau_{11}=\\mu b_{0}^{2}\\sin^{2}(\\theta/2) ," }, { "math_id": 28, "text": "\\tau_{12}=\\tau_{21} = -\\frac{\\mu}{2} a b_{0} r^{-1/2} \\sin^{2}(\\theta/2) ," }, { "math_id": 29, "text": "\\tau_{22}=\\frac{\\mu}{4} a^{2} r^{-1}." }, { "math_id": 30, "text": "\\tau_{22}" }, { "math_id": 31, "text": "{r}^{-1/2}" }, { "math_id": 32, "text": "y_{1}= cr \\, \\cos\\theta, \\quad y_{2}=a\\sqrt{r} \\sin(\\theta/2) ," }, { "math_id": 33, "text": "\\sigma_{11}=\\mu c, \\quad \\sigma_{12}=o(1) ," }, { "math_id": 34, "text": "\\sigma_{21}=-\\frac{\\mu a}{2} r^{-1/2} \\sin(\\theta/2) ," }, { "math_id": 35, "text": "\\sigma_{22}=\\frac{\\mu a}{2} r^{-1/2} \\cos(\\theta/2) ." }, { "math_id": 36, "text": "\\tau_{11}=\\mu c^{2}, \\quad \\tau_{12}=\\tau_{21} = -\\frac{\\mu }{2} ac r^{-1/2} \\sin(\\theta/2) ," }, { "math_id": 37, "text": "\\tau_{22}=\\frac{\\mu}{4} a^{2}r^{-1}." }, { "math_id": 38, "text": "J = \\pi A \\left( \\frac{2n-1}{2n} \\right)^{2n-1} n^{2-n} a^{2n} ," }, { "math_id": 39, "text": "y_2" }, { "math_id": 40, "text": "J = \\frac{\\mu \\pi}{2} \\left(\\frac{b}{n} \\right)^{n-1} \\left( \\frac{2n-1}{2n} \\right)^{2n-1} n^{1-n} a^{2n} ," }, { "math_id": 41, "text": "A = \\mu/2" }, { "math_id": 42, "text": "J = \\frac{\\mu \\pi a^{2}}{4} ." }, { "math_id": 43, "text": "J = 2h_{0}W(I_{1},I_{2})=2h_{0}\\Psi(\\lambda) ," }, { "math_id": 44, "text": "I_{1}=I_{2}=\\lambda^{2}+\\lambda^{-2}+1 ," }, { "math_id": 45, "text": "\\lambda = 1 + \\frac{\\Delta}{h_{0}} ," }, { "math_id": 46, "text": "h_{0}" }, { "math_id": 47, "text": "\\Psi(\\lambda)" }, { "math_id": 48, "text": "\\lambda" }, { "math_id": 49, "text": "\\Psi = \\int_{1}^{\\lambda} \\sigma(\\lambda) d\\lambda ." 
}, { "math_id": 50, "text": "\\overline{v_{1}}=\\overline{v_{2}}=\\cos \\theta " }, { "math_id": 51, "text": "y_{1}=a_{1}r^{\\frac{1}{2}} \\sin\\left( \\frac{\\theta}{2} \\right) + r \\cos \\theta ," }, { "math_id": 52, "text": "y_{2}=a_{2}r^{\\frac{1}{2}} \\sin\\left( \\frac{\\theta}{2} \\right), " }, { "math_id": 53, "text": "y_{1} = \\frac{a_{1}}{a_{2}}y_{2}-\\left( \\frac{y_{2}}{a_{2}} \\right)^{2} ." }, { "math_id": 54, "text": "y_{1}/a_{2}^2" }, { "math_id": 55, "text": "y_{2}/a_{2}^{2}" }, { "math_id": 56, "text": "a_{1}/a_{2}" } ]
https://en.wikipedia.org/wiki?curid=60444111
6044675
Faber–Jackson relation
The Faber–Jackson relation provided the first empirical power-law relation between the luminosity formula_0 and the central stellar velocity dispersion formula_1 of elliptical galaxies, and was presented by the astronomers Sandra M. Faber and Robert Earl Jackson in 1976. Their relation can be expressed mathematically as: formula_2 with the index formula_3 approximately equal to 4. In 1962, Rudolph Minkowski had discovered such a correlation and wrote that a "correlation between velocity dispersion and [luminosity] exists, but it is poor" and that "it seems important to extend the observations to more objects, especially at low and medium absolute magnitudes". This was important because the value of formula_3 depends on the range of galaxy luminosities that is fitted, with a value of 2 for low-luminosity elliptical galaxies discovered by a team led by Roger Davies, and a value of 5 reported by Paul L. Schechter for luminous elliptical galaxies. The Faber–Jackson relation is understood as a projection of the fundamental plane of elliptical galaxies. One of its main uses is as a tool for determining distances to external galaxies. Theory. The gravitational potential of a mass distribution of radius formula_4 and mass formula_5 is given by the expression: formula_6 where α is a constant depending e.g. on the density profile of the system and G is the gravitational constant. For a constant density, formula_7 The kinetic energy is: formula_8 (Recall formula_1 is the 1-dimensional velocity dispersion. Therefore, formula_9.) From the virial theorem (formula_10 ) it follows that formula_11 If we assume that the mass to light ratio, formula_12, is constant, e.g. formula_13, we can use this and the above expression to obtain a relation between formula_4 and formula_14: formula_15 Let us introduce the surface brightness, formula_16, and assume this is a constant (which, from a fundamental theoretical point of view, is a totally unjustified assumption) to get formula_17 Using this and combining it with the relation between formula_4 and formula_0, this results in formula_18 and by rewriting the above expression, we finally obtain the relation between luminosity and velocity dispersion: formula_19 that is formula_20 Given that massive galaxies originate from homologous merging, and the fainter ones from dissipation, the assumption of constant surface brightness can no longer be supported. Empirically, surface brightness exhibits a peak at about formula_21. The revised relation then becomes formula_22 for the less massive galaxies, and formula_23 for the more massive ones. With these revised formulae, the fundamental plane splits into two planes inclined by about 11 degrees to each other. Even first-ranked cluster galaxies do not have constant surface brightness. A claim supporting constant surface brightness was presented by astronomer Allan R. Sandage in 1972 based on three logical arguments and his own empirical data. In 1975, Donald Gudehus showed that each of the logical arguments was incorrect and that first-ranked cluster galaxies exhibited a standard deviation of about half a magnitude. Estimating distances to galaxies. Like the Tully–Fisher relation, the Faber–Jackson relation provides a means of estimating the distance to a galaxy, which is otherwise hard to obtain, by relating it to more easily observable properties of the galaxy. 
In the case of elliptical galaxies, if one can measure the central stellar velocity dispersion, which can be done relatively easily by using spectroscopy to measure the Doppler shift of light emitted by the stars, then one can obtain an estimate of the true luminosity of the galaxy via the Faber–Jackson relation. This can be compared to the apparent magnitude of the galaxy, which provides an estimate of the distance modulus and, hence, the distance to the galaxy. By combining a galaxy's central velocity dispersion with measurements of its central surface brightness and radius parameter, it is possible to improve the estimate of the galaxy's distance even more. This standard yardstick, or "reduced galaxian radius-parameter", formula_24, devised by Gudehus in 1991, can yield distances, free of systematic bias, accurate to about 31%.
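As an illustration of the procedure just described, the following is a minimal Haskell sketch (not from the article) of a distance estimate based on the Faber–Jackson relation. The calibration constants sigmaRef and absMagRef are hypothetical placeholders that would have to come from a real calibration sample, and the simple L ∝ σ⁴ scaling ignores the luminosity-dependent exponents discussed above.

-- Hypothetical calibration: a fiducial velocity dispersion (km/s) and the
-- absolute magnitude of a galaxy with that dispersion. Illustrative values only.
sigmaRef, absMagRef :: Double
sigmaRef  = 200
absMagRef = -21

-- L proportional to sigma^4 translates into absolute magnitudes as
-- M = M_ref - 2.5 * log10 (L / L_ref) = M_ref - 10 * log10 (sigma / sigma_ref).
absoluteMagnitude :: Double -> Double
absoluteMagnitude sigma = absMagRef - 10 * logBase 10 (sigma / sigmaRef)

-- The distance modulus m - M gives the distance in parsecs: d = 10^((m - M + 5)/5).
distanceParsecs :: Double -> Double -> Double
distanceParsecs apparentMag sigma =
  10 ** ((apparentMag - absoluteMagnitude sigma + 5) / 5)

main :: IO ()
main =
  -- e.g. a galaxy with sigma = 250 km/s and apparent magnitude 14
  print (distanceParsecs 14 250)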
[ { "math_id": 0, "text": "L" }, { "math_id": 1, "text": "\\sigma" }, { "math_id": 2, "text": "\nL \\propto \\sigma^ \\gamma\n" }, { "math_id": 3, "text": "\\gamma" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "M" }, { "math_id": 6, "text": "\nU=-\\alpha \\frac{GM^2}{R}\n" }, { "math_id": 7, "text": "\\alpha\\ = \\frac{3}{5}" }, { "math_id": 8, "text": "\nK = \\frac{1}{2}MV^2 = \\frac{3}{2}M \\sigma^2\n" }, { "math_id": 9, "text": "3\\sigma^2 = V^2" }, { "math_id": 10, "text": "2 K + U = 0" }, { "math_id": 11, "text": "\n\\sigma^2 =\\frac{1}{5}\\frac{GM}{R}.\n" }, { "math_id": 12, "text": "M/L" }, { "math_id": 13, "text": "M \\propto L" }, { "math_id": 14, "text": "\\sigma^2" }, { "math_id": 15, "text": "\nR \\propto\\frac{LG}{\\sigma^2}.\n" }, { "math_id": 16, "text": "B = L/(4 \\pi R^2)" }, { "math_id": 17, "text": "\nL=4\\pi R^2 B.\n" }, { "math_id": 18, "text": "\nL \\propto 4\\pi\\left(\\frac{LG}{\\sigma^2}\\right)^2B\n" }, { "math_id": 19, "text": "\nL \\propto\\frac{\\sigma^4}{4\\pi G^2 B},\n" }, { "math_id": 20, "text": "\nL \\propto \\sigma^4.\n" }, { "math_id": 21, "text": "M_V=-23" }, { "math_id": 22, "text": "\nL \\propto \\sigma^{3.1}\n" }, { "math_id": 23, "text": "\nL \\propto \\sigma^{15.0} \n" }, { "math_id": 24, "text": "r_g" } ]
https://en.wikipedia.org/wiki?curid=6044675
60448698
Fracture of biological materials
Fracture of biological materials may occur in biological tissues making up the musculoskeletal system, commonly called orthopedic tissues: bone, cartilage, ligaments, and tendons. Bone and cartilage, as load-bearing biological materials, are of interest in both medical and academic settings for their propensity to fracture. For example, a large health concern is preventing bone fractures in an aging population, especially since fracture risk increases tenfold with aging. Cartilage damage and fracture can contribute to osteoarthritis, a joint disease that results in joint stiffness and reduced range of motion. Biological materials, especially orthopedic materials, have specific material properties which allow them to resist damage and fracture for a prolonged period of time. Nevertheless, acute damage or continual wear through a lifetime of use can contribute to breakdown of biological materials. Studying bone and cartilage can motivate the design of resilient synthetic materials that could aid in joint replacements. Similarly, studying polymer fracture and soft material fracture could aid in understanding biological material fracture. The analysis of fracture in biological materials is complicated by multiple factors such as anisotropy, complex loading conditions, and the biological remodeling response and inflammatory response. Bone fracture. "For the medical perspective, see bone fracture." Fracture in bone could occur because of an acute injury (monotonic loading) or fatigue (cyclic loading). Generally, bone can withstand physiological loading conditions, but aging and diseases like osteoporosis that compromise the hierarchical structure of bone can contribute to bone breakage. Furthermore, the analysis of bone fracture is complicated by the bone remodeling response, where there is a competition between microcrack accumulation and the remodeling rate. If the remodeling rate is slower than the rate microcracks accumulate, bone fracture can occur. Furthermore, the orientation and location of the crack matter because bone is anisotropic. Bone characterization. The hierarchical structure of bone provides it with toughness, the ability to resist crack initiation, propagation, and fracture, as well as strength, the resistance to inelastic deformation. Early analysis of bone material properties, specifically resistance to crack growth, concentrated on yielding a single value for the critical stress-intensity factor, formula_0, and the critical strain-energy release rate, formula_1. While this method yielded important insights into bone behavior, it did not lend insight into crack propagation the way the resistance curve does. The resistance curve (R-curve) is utilized to study crack propagation and toughness development of a material by plotting the crack extension force versus crack extension. In bone literature, the R-curve is said to characterize "fracture toughness" behavior, but this term is not favored in engineering literature and the term "crack growth resistance" is used instead. This term is used to emphasize the material behavior over a change in crack length. The R-curve linear elastic fracture mechanics approach allowed researchers to gain insight into two competing mechanisms that contribute to bone toughness. Bone displays a rising R-curve which is indicative of material toughness and stable crack propagation. There are two types of mechanisms that can impede crack propagation and contribute to toughness, intrinsic and extrinsic mechanisms. 
Intrinsic mechanisms produce resistance ahead of the crack and extrinsic mechanisms create resistance behind the crack tip in the crack wake. Extrinsic mechanisms are said to contribute to crack-tip shielding which reduces the local stress intensity experienced by the crack. An important difference is that intrinsic mechanisms can impede crack initiation and propagation while extrinsic mechanisms can only inhibit crack propagation. Intrinsic mechanisms. Intrinsic toughening mechanisms are not as well defined as extrinsic mechanisms, because they operate on a smaller length-scale than extrinsic mechanisms (usually ~1 μm). Plasticity is usually associated with "soft" materials such as polymers and cartilage, but bone also experiences plastic deformation. One example of an intrinsic mechanism is fibrils (length scale ~10's of nm) sliding against one another, stretching, deforming, and/or breaking. This movement of fibrils causes plastic deformation resulting in crack tip blunting. Extrinsic mechanisms. Extrinsic toughening mechanisms are more well elucidated than intrinsic mechanisms. While the length-scale of intrinsic mechanisms is in the nanometers, the length-scale of extrinsic mechanisms is on the micron/micrometer scale. Scanning electron microscopy (SEM) images of bone have allowed imaging of extrinsic mechanisms such as crack bridging (by collagen fibers, or by un-cracked "ligaments"), crack deflection, and micro-cracking. Crack bridging by un-cracked ligaments and crack deflection are the major contributors to crack-shielding while crack bridging by collagen fibers and micro-cracking are minor contributors to crack-shielding. Crack bridging. The extrinsic mechanism of crack bridging occurs when material spans the crack wake behind the crack tip, reducing the stress intensity factor. The stress intensity experienced at the crack tip, formula_2, is decreased by the stress intensity of bridging, formula_3. formula_4 where formula_5 is the applied stress intensity factor. Crack bridging can occur by two mechanisms of different length scales. Crack bridging by Type I collagen fibers, otherwise known as collagen-fibril bridging, is on a smaller length-scale than uncracked-ligament bridging. The structure of collagen is in itself hierarchical, consisting of three alpha-chains wrapped together to form pro-collagen which undergoes processing and assembles into fibrils and fibers. The diameter of the collagen molecule is approximately 1.5 nanometers, and the collagen fibril is approximately 10X the diameter of the collagen (~10's of nm). The process of crack bridging is analogous to the way polymers yield through crazing. Polymers plastically deform through crazing, where molecular chains bridge the crack reducing the stress intensity at the crack tip. Just as the Dugdale model is used to predict the stress intensity factor during crazing, the uniform-traction Dugdale-zone model can be used to estimate the decrease in the stress-intensity factor due to crack bridging, formula_6. formula_7 where the normal bridging stress on the fibers is denoted by formula_8, the effective area-fraction of the collagen fibers is denoted by formula_9, and the bridging zone length is denoted by formula_10. "Note: Ligament refers to the appearance of the extrinsic mechanism under imaging and not to the orthopedic ligament." Uncracked ligament bridging is one of the larger contributors to crack-shielding because the "ligaments" are on the length-scale of hundreds of micrometers in contrast to tens of nanometers. 
The formation of these ligaments is attributed to non-uniform advancement of the crack front or several microcracks semi-linked together forming bridges of uncracked material. Crack deflection. Crack deflection and twist occur due to osteons, the structural unit of cortical bone. Osteons have a cylindrical structure and are approximately 0.2 mm in diameter. As the crack tip reaches an osteon, crack propagation is deflected along the lateral surface of the osteon slowing crack growth. Because osteons are larger in scale than both collagen fibers and uncracked "ligaments", crack deflection through osteons is one of the major toughening mechanisms of bone. Micro-cracking. As the name suggests, microcracking is the formation of cracks on the micron scale of various orientations and sizes. The formation of microcracks before and in the wake of the crack tip can delay crack propagation. Since bone often remodels both its trabecular and cortical structure to optimize strength in the longitudinal direction, microcracks in human bone also form predominantly in the longitudinal direction. This directionality in human bone contrasts with the more random orientation in bovine bone and contributes to longitudinal bone toughness in humans. As with the other mechanisms of crack-shielding, the resistance curve (R-curve) can be used to study the resistance of cortical bone (trabecular bone is removed before experiments) to fracture. A generally accepted model for crack propagation under microcrack formation was proposed by Vashishth and colleagues. They studied the crack propagation velocity as the crack grew and identified two stages of crack growth that alternate as the crack progresses. Cartilage fracture. Studying cartilage damage and fracture from a mechanical perspective can give medical professionals insight into the treatment of diseases affecting cartilage. Cartilage is a highly complex material with depth-variation of biological properties leading to differences in mechanical properties. Furthermore, cartilage has high water and collagen content, contributing to poroelastic and viscoelastic effects, respectively. Experimentally, impact tests of cartilage samples can be done to simulate physiological high-intensity impact. Common types of experiments include drop tower tests, pendulum tests, and spring-loaded systems. These impact tests serve to simplify the way the material is analyzed from poroelastic to elastic, because under high-velocity short-duration impacts, fluid does not have time to flow out of the cartilage sample.
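As a small numerical illustration of the crack-bridging relations quoted earlier (the shielding relation and the uniform-traction Dugdale-zone estimate), here is a minimal Haskell sketch; the input values are hypothetical placeholders, not measured bone data.

-- Dugdale-zone estimate of the shielding contribution from collagen-fibril
-- bridging: K_b = 2 * sigma_b * f_f * sqrt (2 * l_f / pi),
-- with bridging stress sigma_b, fiber area fraction f_f, bridging zone length l_f.
bridgingStressIntensity :: Double -> Double -> Double -> Double
bridgingStressIntensity sigmaB fF lF = 2 * sigmaB * fF * sqrt (2 * lF / pi)

-- Local stress intensity at the crack tip after shielding: K_tip = K_app - K_br.
crackTipStressIntensity :: Double -> Double -> Double
crackTipStressIntensity kApp kBr = kApp - kBr

main :: IO ()
main = do
  -- hypothetical SI inputs: sigma_b = 100 MPa, f_f = 0.1, l_f = 5 micrometres
  let kBr = bridgingStressIntensity 100e6 0.1 5e-6
  print kBr                                -- shielding contribution (Pa * sqrt m)
  print (crackTipStressIntensity 2e6 kBr)  -- K_app = 2 MPa*sqrt(m) minus shielding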
[ { "math_id": 0, "text": "K_c" }, { "math_id": 1, "text": "G_c" }, { "math_id": 2, "text": "K_{tip}" }, { "math_id": 3, "text": "K_{br}" }, { "math_id": 4, "text": "K_{tip} = K_{app} - K_{br}" }, { "math_id": 5, "text": "K_{app}" }, { "math_id": 6, "text": "K_b^f" }, { "math_id": 7, "text": "K_b^f = 2 \\sigma_b f_f \\bigg( \\sqrt{\\frac{2 l_f}{\\pi}} \\bigg)" }, { "math_id": 8, "text": "\\sigma_b" }, { "math_id": 9, "text": "f_f" }, { "math_id": 10, "text": "l_f" } ]
https://en.wikipedia.org/wiki?curid=60448698
60455
Line-of-sight propagation
Characteristic of electromagnetic radiation
Line-of-sight propagation is a characteristic of electromagnetic radiation or acoustic wave propagation which means waves can only travel in a direct visual path from the source to the receiver without obstacles. Electromagnetic transmission includes light emissions traveling in a straight line. The rays or waves may be diffracted, refracted, reflected, or absorbed by the atmosphere and by material obstructions, and generally cannot travel over the horizon or behind obstacles. In contrast to line-of-sight propagation, at low frequencies (below approximately 3 MHz), due to diffraction, radio waves can travel as ground waves, which follow the contour of the Earth. This enables AM radio stations to transmit beyond the horizon. Additionally, frequencies in the shortwave bands between approximately 1 and 30 MHz can be refracted back to Earth by the ionosphere, called skywave or "skip" propagation, thus giving radio transmissions in this range a potentially global reach. However, at frequencies above 30 MHz (VHF and higher) and in lower levels of the atmosphere, neither of these effects is significant. Thus, any obstruction between the transmitting antenna (transmitter) and the receiving antenna (receiver) will block the signal, just like the light that the eye may sense. Therefore, since the ability to visually see a transmitting antenna (disregarding the limitations of the eye's resolution) roughly corresponds to the ability to receive a radio signal from it, the propagation characteristic at these frequencies is called "line-of-sight". The farthest possible point of propagation is referred to as the "radio horizon". In practice, the propagation characteristics of these radio waves vary substantially depending on the exact frequency and the strength of the transmitted signal (a function of both the transmitter and the antenna characteristics). Broadcast FM radio, at comparatively low frequencies of around 100 MHz, is less affected by the presence of buildings and forests. Impairments to line-of-sight propagation. Low-powered microwave transmitters can be foiled by tree branches, or even heavy rain or snow. The presence of objects not in the direct line-of-sight can cause diffraction effects that disrupt radio transmissions. For the best propagation, a volume known as the first Fresnel zone should be free of obstructions. Reflected radiation from the surface of the surrounding ground or salt water can also either cancel out or enhance the direct signal. This effect can be reduced by raising either or both antennas further from the ground: the reduction in loss achieved is known as "height gain". See also Non-line-of-sight propagation for more on impairments in propagation. It is important to take into account the curvature of the Earth for calculation of line-of-sight paths from maps, when a direct visual fix cannot be made. Designs for microwave links formerly used 4⁄3 Earth radius to compute clearances along the path. Mobile telephones. Although the frequencies used by mobile phones (cell phones) are in the line-of-sight range, they still function in cities. This is made possible by a combination of several propagation effects. The combination of all these effects makes the mobile phone propagation environment highly complex, with multipath effects and extensive Rayleigh fading. 
For mobile phone services, these problems are tackled using a variety of techniques. A Faraday cage is composed of a conductor that completely surrounds an area on all sides, top, and bottom. Electromagnetic radiation is blocked where the wavelength is longer than any gaps. For example, mobile telephone signals are blocked in windowless metal enclosures that approximate a Faraday cage, such as elevator cabins, and parts of trains, cars, and ships. The same problem can affect signals in buildings with extensive steel reinforcement. Radio horizon. The "radio horizon" is the locus of points at which direct rays from an antenna are tangential to the surface of the Earth. If the Earth were a perfect sphere without an atmosphere, the radio horizon would be a circle. The radio horizons of the transmitting and receiving antennas can be added together to increase the effective communication range. Radio wave propagation is affected by atmospheric conditions, ionospheric absorption, and the presence of obstructions, for example mountains or trees. Simple formulas that include the effect of the atmosphere give the range as: formula_0 formula_1 The simple formulas give a best-case approximation of the maximum propagation distance, but are not sufficient to estimate the quality of service at any location. Earth bulge. In telecommunications, Earth bulge refers to the effect of Earth's curvature on radio propagation. It is a consequence of a circular segment of the Earth profile that blocks off long-distance communications. Since the vacuum line of sight passes at varying heights over the Earth, the propagating radio wave encounters slightly different propagation conditions over the path. Vacuum distance to horizon. Assuming a perfect sphere with no terrain irregularity, the distance to the horizon from a high altitude transmitter (i.e., line of sight) can readily be calculated. Let "R" be the radius of the Earth and "h" be the altitude of a telecommunication station. The line of sight distance "d" of this station is given by the Pythagorean theorem: formula_2 Since the altitude of the station is much less than the radius of the Earth, formula_3 If the height is given in metres, and distance in kilometres, formula_4 If the height is given in feet, and the distance in statute miles, formula_5 Atmospheric refraction. The usual effect of the declining pressure of the atmosphere with height (vertical pressure variation) is to bend (refract) radio waves down towards the surface of the Earth. This results in an effective Earth radius, increased by a factor around 4⁄3. This "k"-factor can change from its average value depending on weather. Refracted distance to horizon. The previous vacuum distance analysis does not consider the effect of the atmosphere on the propagation path of RF signals. In fact, RF signals do not propagate in straight lines: because of the refractive effects of atmospheric layers, the propagation paths are somewhat curved. Thus, the maximum service range of the station is not equal to the line of sight vacuum distance. Usually, a factor "k" is used in the equation above, modified to be formula_6 "k" > 1 means geometrically reduced bulge and a longer service range. On the other hand, "k" < 1 means a shorter service range. Under normal weather conditions, "k" is usually chosen to be 4⁄3. That means that the maximum service range increases by 15%. 
formula_7 for "h" in metres and "d" in kilometres; or formula_8 for "h" in feet and "d" in miles. But in stormy weather, "k" may decrease to cause fading in transmission. (In extreme cases "k" can be less than 1.) That is equivalent to a hypothetical decrease in Earth radius and an increase of Earth bulge. For example, in normal weather conditions, the service range of a station at an altitude of 1500 m with respect to receivers at sea level can be found as formula_9
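To make the horizon formulas above concrete, here is a minimal Haskell sketch (the function names are illustrative, not from any radio-planning library) that evaluates the vacuum and refracted horizon distances in kilometres for an antenna height in metres, and adds the horizons of the two ends of a link to get the maximum line-of-sight range. With k = 4/3 and h = 1500 m it reproduces the roughly 160 km service range quoted above.

-- Vacuum horizon distance (km) for an antenna height h in metres: d ~ 3.57 * sqrt h.
vacuumHorizonKm :: Double -> Double
vacuumHorizonKm h = 3.57 * sqrt h

-- Refracted horizon distance (km) with effective-Earth-radius factor k:
-- d ~ 3.57 * sqrt (k * h); for k = 4/3 this is the familiar 4.12 * sqrt h.
refractedHorizonKm :: Double -> Double -> Double
refractedHorizonKm k h = 3.57 * sqrt (k * h)

-- Maximum line-of-sight range of a link: the two radio horizons added together.
maxLinkRangeKm :: Double -> Double -> Double -> Double
maxLinkRangeKm k hTx hRx = refractedHorizonKm k hTx + refractedHorizonKm k hRx

main :: IO ()
main = do
  print (refractedHorizonKm (4 / 3) 1500)  -- about 160 km, as in the example above
  print (maxLinkRangeKm (4 / 3) 30 2)      -- a hypothetical 30 m mast to a 2 m handset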
[ { "math_id": 0, "text": "\\mathrm{horizon}_\\mathrm{mi} \\approx 1.23 \\cdot \\sqrt{\\mathrm{height}_\\mathrm{feet}}" }, { "math_id": 1, "text": "\\mathrm{horizon}_\\mathrm{km} \\approx 3.57 \\cdot \\sqrt{\\mathrm{height}_\\mathrm{metres}}" }, { "math_id": 2, "text": "d^2=(R+h)^{2}-R^2= 2\\cdot R \\cdot h +h^2" }, { "math_id": 3, "text": "d \\approx \\sqrt{ 2\\cdot R \\cdot h}" }, { "math_id": 4, "text": "d \\approx 3.57 \\cdot \\sqrt{h}" }, { "math_id": 5, "text": "d \\approx 1.23 \\cdot \\sqrt{h}" }, { "math_id": 6, "text": "d \\approx \\sqrt{2 \\cdot k \\cdot R \\cdot h}" }, { "math_id": 7, "text": "d \\approx 4.12 \\cdot \\sqrt{h} " }, { "math_id": 8, "text": "d \\approx 1.41 \\cdot\\sqrt{h} " }, { "math_id": 9, "text": "d \\approx 4.12 \\cdot \\sqrt{1500} = 160 \\mbox { km.}" } ]
https://en.wikipedia.org/wiki?curid=60455
6045801
Map (higher-order function)
Computer programming function In many programming languages, map is a higher-order function that applies a given function to each element of a collection, e.g. a list or set, returning the results in a collection of the same type. It is often called "apply-to-all" when considered in functional form. The concept of a map is not limited to lists: it works for sequential containers, tree-like containers, or even abstract containers such as futures and promises. Examples: mapping a list. Suppose we have a list of integers codice_0 and would like to calculate the square of each integer. To do this, we first define a function to codice_1 a single number (shown here in Haskell): square x = x * x Afterwards we may call »&gt; map square [1, 2, 3, 4, 5] which yields codice_2, demonstrating that codice_3 has gone through the entire list and applied the function codice_1 to each element. Visual example. Below, you can see a view of each step of the mapping process for a list of integers codice_5 that we want to map into a new list codice_6 according to the function formula_0 : The codice_3 is provided as part of the Haskell's base prelude (i.e. "standard library") and is implemented as: map :: (a -&gt; b) -&gt; [a] -&gt; [b] map _ [] = [] map f (x : xs) = f x : map f xs Generalization. In Haskell, the polymorphic function codice_8 is generalized to a polytypic function codice_9, which applies to any type belonging the codice_10 type class. The type constructor of lists codice_11 can be defined as an instance of the codice_10 type class using the codice_3 function from the previous example: instance Functor [] where fmap = map Other examples of codice_10 instances include trees: -- a simple binary tree data Tree a = Leaf a | Fork (Tree a) (Tree a) instance Functor Tree where fmap f (Leaf x) = Leaf (f x) fmap f (Fork l r) = Fork (fmap f l) (fmap f r) Mapping over a tree yields: »&gt; fmap square (Fork (Fork (Leaf 1) (Leaf 2)) (Fork (Leaf 3) (Leaf 4))) Fork (Fork (Leaf 1) (Leaf 4)) (Fork (Leaf 9) (Leaf 16)) For every instance of the codice_10 type class, codice_16 is contractually obliged to obey the functor laws: fmap id ≡ id -- identity law fmap (f . g) ≡ fmap f . fmap g -- composition law where codice_17 denotes function composition in Haskell. Among other uses, this allows defining element-wise operations for various kinds of collections. Category-theoretic background. In category theory, a functor formula_1 consists of two maps: one that sends each object formula_2 of the category to another object formula_3, and one that sends each morphism formula_4 to another morphism formula_5, which acts as a homomorphism on categories (i.e. it respects the category axioms). Interpreting the universe of data types as a category formula_6, with morphisms being functions, then a type constructor codice_18 that is a member of the codice_10 type class is the object part of such a functor, and codice_20 is the morphism part. The functor laws described above are precisely the category-theoretic functor axioms for this functor. Functors can also be objects in categories, with "morphisms" called natural transformations. Given two functors formula_7, a natural transformation formula_8 consists of a collection of morphisms formula_9, one for each object formula_2 of the category formula_10, which are 'natural' in the sense that they act as a 'conversion' between the two functors, taking no account of the objects that the functors are applied to. 
Natural transformations correspond to functions of the form codice_21, where codice_22 is a universally quantified type variable – codice_23 knows nothing about the type which inhabits codice_22. The naturality axiom of such functions is automatically satisfied because it is a so-called free theorem, depending on the fact that it is parametrically polymorphic. For example, codice_25, which reverses a list, is a natural transformation, as is codice_26, which flattens a tree from left to right, and even codice_27, which sorts a list based on a provided comparison function. Optimizations. The mathematical basis of maps allows for a number of optimizations. The composition law ensures that mapping two functions in succession and mapping their composition once lead to the same result; that is, formula_11. However, the second form is more efficient to compute than the first form, because each codice_3 requires rebuilding an entire list from scratch. Therefore, compilers will attempt to transform the first form into the second; this type of optimization is known as "map fusion" and is the functional analog of loop fusion. Map functions can be and often are defined in terms of a fold such as codice_31, which means one can do a "map-fold fusion": codice_32 is equivalent to codice_33. The implementation of map above on singly linked lists is not tail-recursive, so it may build up a lot of frames on the stack when called with a large list. Many languages alternatively provide a "reverse map" function, which is equivalent to reversing a mapped list, but is tail-recursive. Here is an implementation which utilizes the fold-left function. reverseMap f = foldl (\ys x -&gt; f x : ys) [] Since reversing a singly linked list is also tail-recursive, reverse and reverse-map can be composed to perform normal map in a tail-recursive way, though it requires performing two passes over the list. Language comparison. The map function originated in functional programming languages. The language Lisp introduced a map function called codice_34 in 1959, with slightly different versions already appearing in 1958. The original definition of codice_34 mapped a function over successive rest lists. The function codice_34 is still available in newer Lisps like Common Lisp, though functions like codice_37 or the more generic codice_3 would be preferred. Squaring the elements of a list can be written in S-expression notation using codice_34, or more directly using the function codice_37. Today mapping functions are supported (or may be defined) in many procedural, object-oriented, and multi-paradigm languages as well: in C++'s Standard Library, it is called codice_41; in C# (3.0)'s LINQ library, it is provided as an extension method called codice_42. Map is also a frequently used operation in high-level languages such as ColdFusion Markup Language (CFML), Perl, Python, and Ruby; the operation is called codice_3 in all four of these languages. A codice_44 alias for codice_3 is also provided in Ruby (from Smalltalk). Common Lisp provides a family of map-like functions; the one corresponding to the behavior described here is called codice_37 (codice_47 indicating access using the CAR operation). There are also languages with syntactic constructs providing the same functionality as the map function. Map is sometimes generalized to accept dyadic (2-argument) functions that can apply a user-supplied function to corresponding elements from two lists. Some languages use special names for this, such as "map2" or "zipWith". 
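As a concrete illustration of this dyadic generalization, a two-list analogue of codice_3 can be sketched in Haskell in the same style as the definition of map given earlier; the name zipWith' below is illustrative (chosen only to avoid clashing with the standard zipWith in the Prelude), and this particular variant stops at the end of the shorter list.
zipWith' :: (a -&gt; b -&gt; c) -&gt; [a] -&gt; [b] -&gt; [c]
zipWith' f (x : xs) (y : ys) = f x y : zipWith' f xs ys
zipWith' _ _ _ = []  -- one input is exhausted, so stop
For example, &gt;&gt;&gt; zipWith' (+) [1, 2, 3] [10, 20, 30, 40] yields [11,22,33], silently ignoring the extra element of the longer list; languages differ in how they treat this situation, as discussed below.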
Languages using explicit variadic functions may have versions of map with variable arity to support variable-arity functions. Map with two or more lists must deal with the case in which the lists have different lengths. Languages differ in how they handle this. Some raise an exception. Some stop after the length of the shortest list and ignore extra items on the other lists. Some continue on to the length of the longest list, and for the lists that have already ended, pass some placeholder value to the function indicating that there is no value. In languages which support first-class functions and currying, codice_3 may be partially applied to "lift" a function that works on only one value to an element-wise equivalent that works on an entire container; for example, codice_49 is a Haskell function which squares each element of a list. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
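A minimal Haskell sketch of the "longest list" policy and of lifting by partial application, both mentioned above; the helper names padZipWith and squareAll are illustrative and are not standard Prelude functions.
padZipWith :: (Maybe a -&gt; Maybe b -&gt; c) -&gt; [a] -&gt; [b] -&gt; [c]
padZipWith f (x : xs) (y : ys) = f (Just x) (Just y) : padZipWith f xs ys
padZipWith f (x : xs) [] = f (Just x) Nothing : padZipWith f xs []
padZipWith f [] (y : ys) = f Nothing (Just y) : padZipWith f [] ys
padZipWith _ [] [] = []
-- Lifting by partial application: applying map to a one-argument function
-- yields a function from lists to lists.
squareAll :: [Integer] -&gt; [Integer]
squareAll = map (\x -&gt; x * x)  -- squareAll [1, 2, 3] == [1, 4, 9]
Here padZipWith continues to the length of the longer list and passes Nothing to the combining function for positions where one of the lists has already ended.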
[ { "math_id": 0, "text": "f(x) = x + 1" }, { "math_id": 1, "text": "F : C \\rarr D" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "F A" }, { "math_id": 4, "text": "f : A \\rarr B" }, { "math_id": 5, "text": "Ff : FA \\rarr FB" }, { "math_id": 6, "text": "Type" }, { "math_id": 7, "text": "F, G : C \\rarr D" }, { "math_id": 8, "text": "\\eta : F \\rarr G" }, { "math_id": 9, "text": "\\eta_A : FA \\rarr GA" }, { "math_id": 10, "text": "D" }, { "math_id": 11, "text": "\\operatorname{map}(f) \\circ \\operatorname{map}(g) = \\operatorname{map}(f \\circ g)" } ]
https://en.wikipedia.org/wiki?curid=6045801
60459
Abductive reasoning
Inference seeking the simplest and most likely explanation Abductive reasoning (also called abduction, abductive inference, or retroduction) is a form of logical inference that seeks the simplest and most likely conclusion from a set of observations. It was formulated and advanced by American philosopher and logician Charles Sanders Peirce beginning in the latter half of the 19th century. Abductive reasoning, unlike deductive reasoning, yields a plausible conclusion but does not definitively verify it. Abductive conclusions do not eliminate uncertainty or doubt, which is expressed in retreat terms such as "best available" or "most likely". While inductive reasoning draws general conclusions that apply to many situations, abductive conclusions are confined to the particular observations in question. In the 1990s, as computing power grew, the fields of law, computer science, and artificial intelligence research spurred renewed interest in the subject of abduction. Diagnostic expert systems frequently employ abduction. Deduction, induction, and abduction. Deduction. Deductive reasoning allows deriving formula_0 from formula_1 only where formula_0 is a formal logical consequence of formula_1. In other words, deduction derives the consequences of the assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. For example, given that "Wikis can be edited by anyone" (formula_2) and "Wikipedia is a wiki" (formula_3), it follows that "Wikipedia can be edited by anyone" (formula_0). Induction. Inductive reasoning is the process of inferring some "general" principle formula_0 from a body of knowledge formula_1, where formula_0 does not necessarily follow from formula_1. formula_1 might give us very good reason to accept formula_0 but does not ensure formula_0. For example, if it is given that 95 percent of the elephants are gray, and Louise is an elephant, one can "induce" that Louise is gray. Still, this is not necessarily the case: 5 percent of the time this conclusion will be wrong. However, an inference being derived from statistical data is not sufficient to classify it as inductive. For example, if all swans that a person has observed so far are white, they may instead "abduce" the possibility that all swans are white. They have good reason to believe the conclusion from the premise because it is the "best explanation" for their observations, and the truth of the conclusion is still not guaranteed. (Indeed, it turns out that some swans are black.) Abduction. Abductive reasoning allows inferring formula_1 as an explanation of formula_0. As a result of this inference, abduction allows the precondition formula_1 to be abducted from the consequence formula_0. Deductive reasoning and abductive reasoning thus differ in which end, left or right, of the proposition "formula_1 entails formula_0" serves as conclusion. For example, in a billiard game, after glancing and seeing the eight ball moving towards us, we may abduce that the cue ball struck the eight ball. The strike of the cue ball would account for the movement of the eight ball. It serves as a hypothesis that "best explains" our observation. Given the many possible explanations for the movement of the eight ball, our abduction does not leave us certain that the cue ball in fact struck the eight ball, but our abduction, still useful, can serve to orient us in our surroundings. 
Despite many possible explanations for any physical process that we observe, we tend to abduce a single explanation (or a few explanations) for this process in the expectation that we can better orient ourselves in our surroundings and disregard some possibilities. Properly used, abductive reasoning can be a useful source of priors in Bayesian statistics. One can understand abductive reasoning as inference to the best explanation, although not all usages of the terms "abduction" and "inference to the best explanation" are equivalent. Formalizations of abduction. Logic-based abduction. In logic, explanation is accomplished through the use of a logical theory formula_4 representing a domain and a set of observations formula_5. Abduction is the process of deriving a set of explanations of formula_5 according to formula_4 and picking out one of those explanations. For formula_6 to be an explanation of formula_5 according to formula_4, it should satisfy two conditions: formula_5 should follow from formula_6 together with formula_4, and formula_6 should be consistent with formula_4. In formal logic, formula_5 and formula_6 are assumed to be sets of literals. The two conditions for formula_6 being an explanation of formula_5 according to theory formula_4 are formalized as: formula_7 and formula_8 is consistent. Among the possible explanations formula_6 satisfying these two conditions, some other condition of minimality is usually imposed to avoid irrelevant facts (not contributing to the entailment of formula_5) being included in the explanations. Abduction is then the process that picks out some member of formula_6. Criteria for picking out a member representing "the best" explanation include the simplicity, the prior probability, or the explanatory power of the explanation. A proof-theoretical abduction method for first-order classical logic based on the sequent calculus, and a dual one based on semantic tableaux (analytic tableaux), have been proposed. The methods are sound and complete and work for full first-order logic, without requiring any preliminary reduction of formulae into normal forms. These methods have also been extended to modal logic. Abductive logic programming is a computational framework that extends normal logic programming with abduction. It separates the theory formula_4 into two components, one of which is a normal logic program, used to generate formula_6 by means of backward reasoning, the other of which is a set of integrity constraints, used to filter the set of candidate explanations. Set-cover abduction. A different formalization of abduction is based on inverting the function that calculates the visible effects of the hypotheses. Formally, we are given a set of hypotheses formula_9 and a set of manifestations formula_10; they are related by the domain knowledge, represented by a function formula_11 that takes as an argument a set of hypotheses and gives as a result the corresponding set of manifestations. In other words, for every subset of the hypotheses formula_12, their effects are known to be formula_13. Abduction is performed by finding a set formula_12 such that formula_14. In other words, abduction is performed by finding a set of hypotheses formula_15 such that their effects formula_13 include all observations formula_10. A common assumption is that the effects of the hypotheses are independent, that is, for every formula_12, it holds that formula_16. If this condition is met, abduction can be seen as a form of set covering. Abductive validation. Abductive validation is the process of validating a given hypothesis through abductive reasoning. 
This can also be called reasoning through successive approximation. Under this principle, an explanation is valid if it is the best possible explanation of a set of known data. The best possible explanation is often defined in terms of simplicity and elegance (see Occam's razor). Abductive validation is common practice in hypothesis formation in science; moreover, Peirce claims that it is a ubiquitous aspect of thought: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Looking out my window this lovely spring morning, I see an azalea in full bloom. No, no! I don't see that; though that is the only way I can describe what I see. That is a proposition, a sentence, a fact; but what I perceive is not proposition, sentence, fact, but only an image, which I make intelligible in part by means of a statement of fact. This statement is abstract; but what I see is concrete. I perform an abduction when I so much as express in a sentence anything I see. The truth is that the whole fabric of our knowledge is one matted felt of pure hypothesis confirmed and refined by induction. Not the smallest advance can be made in knowledge beyond the stage of vacant staring, without making an abduction at every step. It was Peirce's own maxim that "Facts cannot be explained by a hypothesis more extraordinary than these facts themselves; and of various hypotheses the least extraordinary must be adopted." After obtaining possible hypotheses that may explain the facts, abductive validation is a method for identifying the most likely hypothesis that should be adopted. Subjective logic abduction. Subjective logic generalises probabilistic logic by including degrees of epistemic uncertainty in the input arguments, i.e. instead of probabilities, the analyst can express arguments as subjective opinions. Abduction in subjective logic is thus a generalization of probabilistic abduction described above. The input arguments in subjective logic are subjective opinions which can be binomial when the opinion applies to a binary variable or multinomial when it applies to an "n"-ary variable. A subjective opinion thus applies to a state variable formula_17 which takes its values from a domain formula_18 (i.e. a state space of exhaustive and mutually disjoint state values formula_19), and is denoted by the tuple formula_20, where formula_21 is the belief mass distribution over formula_18, formula_22 is the epistemic uncertainty mass, and formula_23 is the base rate distribution over formula_18. These parameters satisfy formula_24 and formula_25 as well as formula_26. Assume the domains formula_18 and formula_27 with respective variables formula_17 and formula_28, the set of conditional opinions formula_29 (i.e. one conditional opinion for each value formula_30), and the base rate distribution formula_31. Based on these parameters, the subjective Bayes' theorem denoted with the operator formula_32 produces the set of inverted conditionals formula_33 (i.e. one inverted conditional for each value formula_19) expressed by: formula_34. Using these inverted conditionals together with the opinion formula_35 subjective deduction denoted by the operator formula_36 can be used to abduce the marginal opinion formula_37. The equality between the different expressions for subjective abduction is given below: formula_38 The symbolic notation for subjective abduction is "formula_39", and the operator itself is denoted as "formula_40". 
The operator for the subjective Bayes' theorem is denoted "formula_41", and subjective deduction is denoted "formula_36". The advantage of using subjective logic abduction compared to probabilistic abduction is that both aleatoric and epistemic uncertainty about the input argument probabilities can be explicitly expressed and taken into account during the analysis. It is thus possible to perform abductive analysis in the presence of uncertain arguments, which naturally results in degrees of uncertainty in the output conclusions. History. The idea that the simplest, most easily verifiable solution should be preferred over its more complicated counterparts is a very old one. To this point, George Pólya, in his treatise on problem-solving, makes reference to the following Latin truism: "simplex sigillum veri" (simplicity is the seal of truth). Introduction and development by Peirce. Overview. The American philosopher Charles Sanders Peirce introduced abduction into modern logic. Over the years he called such inference "hypothesis", "abduction", "presumption", and "retroduction". He considered it a topic in logic as a normative field in philosophy, not in purely formal or mathematical logic, and eventually as a topic also in economics of research. As two stages of the development, extension, etc., of a hypothesis in scientific inquiry, abduction and also induction are often collapsed into one overarching concept—the hypothesis. That is why, in the scientific method known from Galileo and Bacon, the abductive stage of hypothesis formation is conceptualized simply as induction. Thus, in the twentieth century this collapse was reinforced by Karl Popper's explication of the hypothetico-deductive model, where the hypothesis is considered to be just "a guess" (in the spirit of Peirce). However, when the formation of a hypothesis is considered the result of a process it becomes clear that this "guess" has already been tried and made more robust in thought as a necessary stage of its acquiring the status of hypothesis. Indeed, many abductions are rejected or heavily modified by subsequent abductions before they ever reach this stage. Before 1900, Peirce treated abduction as the use of a known rule to explain an observation. For instance: it is a known rule that, if it rains, grass gets wet; so, to explain the fact that the grass on this lawn is wet, one "abduces" that it has rained. Abduction can lead to false conclusions if other rules that might explain the observation are not taken into account—e.g. the grass could be wet from dew. This remains the common use of the term "abduction" in the social sciences and in artificial intelligence. Peirce consistently characterized it as the kind of inference that originates a hypothesis by concluding in an explanation, though an unassured one, for some very curious or surprising (anomalous) observation stated in a premise. As early as 1865 he wrote that all conceptions of cause and force are reached through hypothetical inference; in the 1900s he wrote that all explanatory content of theories is reached through abduction. In other respects Peirce revised his view of abduction over the years. 
In later years his view came to be: Writing in 1910, Peirce admits that "in almost everything I printed before the beginning of this century I more or less mixed up hypothesis and induction" and he traces the confusion of these two types of reasoning to logicians' too "narrow and formalistic a conception of inference, as necessarily having formulated judgments from its premises." He started out in the 1860s treating hypothetical inference in a number of ways which he eventually peeled away as inessential or, in some cases, mistaken: "The Natural Classification of Arguments" (1867). In 1867, Peirce's "On the Natural Classification of Arguments", hypothetical inference always deals with a cluster of characters (call them "P′, P′′, P′′′," etc.) known to occur at least whenever a certain character ("M") occurs. Note that categorical syllogisms have elements traditionally called middles, predicates, and subjects. For example: All "men" [middle] are "mortal" [predicate]; "Socrates" [subject] is a "man" [middle]; ergo "Socrates" [subject] is "mortal" [predicate]". Below, 'M' stands for a middle; 'P' for a predicate; 'S' for a subject. Peirce held that all deduction can be put into the form of the categorical syllogism Barbara (AAA-1). "Deduction, Induction, and Hypothesis" (1878). In 1878, in "Deduction, Induction, and Hypothesis", there is no longer a need for multiple characters or predicates in order for an inference to be hypothetical, although it is still helpful. Moreover, Peirce no longer poses hypothetical inference as concluding in a "probable" hypothesis. In the forms themselves, it is understood but not explicit that induction involves random selection and that hypothetical inference involves response to a "very curious circumstance". The forms instead emphasize the modes of inference as rearrangements of one another's propositions (without the bracketed hints shown below). "A Theory of Probable Inference" (1883). Peirce long treated abduction in terms of induction from characters or traits (weighed, not counted like objects), explicitly so in his influential 1883 "A theory of probable inference", in which he returns to involving probability in the hypothetical conclusion. Like "Deduction, Induction, and Hypothesis" in 1878, it was widely read (see the historical books on statistics by Stephen Stigler), unlike his later amendments of his conception of abduction. Today abduction remains most commonly understood as induction from characters and extension of a known rule to cover unexplained circumstances. Sherlock Holmes used this method of reasoning in the stories of Arthur Conan Doyle, although Holmes refers to it as "deductive reasoning". "Minute Logic" (1902) and after. In 1902 Peirce wrote that he now regarded the syllogistical forms and the doctrine of extension and comprehension (i.e., objects and characters as referenced by terms), as being less fundamental than he had earlier thought. In 1903 he offered the following form for abduction: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The surprising fact, C, is observed; But if A were true, C would be a matter of course, Hence, there is reason to suspect that A is true. The hypothesis is framed, but not asserted, in a premise, then asserted as rationally suspectable in the conclusion. Thus, as in the earlier categorical syllogistic form, the conclusion is formulated from some premise(s). But all the same the hypothesis consists more clearly than ever in a new or outside idea beyond what is known or observed. 
Induction in a sense goes beyond observations already reported in the premises, but it merely amplifies ideas already known to represent occurrences, or tests an idea supplied by hypothesis; either way it requires previous abductions in order to get such ideas in the first place. Induction seeks facts to test a hypothesis; abduction seeks a hypothesis to account for facts. Note that the hypothesis ("A") could be of a rule. It need not even be a rule strictly necessitating the surprising observation ("C"), which needs to follow only as a "matter of course"; or the "course" itself could amount to some known rule, merely alluded to, and also not necessarily a rule of strict necessity. In the same year, Peirce wrote that reaching a hypothesis may involve placing a surprising observation under either a newly hypothesized rule or a hypothesized combination of a known rule with a peculiar state of facts, so that the phenomenon would be not surprising but instead either necessarily implied or at least likely. Peirce did not remain quite convinced about any such form as the categorical syllogistic form or the 1903 form. In 1911, he wrote, "I do not, at present, feel quite convinced that any logical form can be assigned that will cover all 'Retroductions'. For what I mean by a Retroduction is simply a conjecture which arises in the mind." Pragmatism. In 1901 Peirce wrote, "There would be no logic in imposing rules, and saying that they ought to be followed, until it is made out that the purpose of hypothesis requires them." In 1903 Peirce called pragmatism "the logic of abduction" and said that the pragmatic maxim gives the necessary and sufficient logical rule to abduction in general. The pragmatic maxim is: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object. It is a method for fruitful clarification of conceptions by equating the meaning of a conception with the conceivable practical implications of its object's conceived effects. Peirce held that that is precisely tailored to abduction's purpose in inquiry, the forming of an idea that could conceivably shape informed conduct. In various writings in the 1900s he said that the conduct of abduction (or retroduction) is governed by considerations of economy, belonging in particular to the economics of research. He regarded economics as a normative science whose analytic portion might be part of logical methodeutic (that is, theory of inquiry). Three levels of logic about abduction. Peirce came over the years to divide (philosophical) logic into three departments: Peirce had, from the start, seen the modes of inference as being coordinated together in scientific inquiry and, by the 1900s, held that hypothetical inference in particular is inadequately treated at the level of critique of arguments. To increase the assurance of a hypothetical conclusion, one needs to deduce implications about evidence to be found, predictions which induction can test through observation so as to evaluate the hypothesis. That is Peirce's outline of the scientific method of inquiry, as covered in his inquiry methodology, which includes pragmatism or, as he later called it, pragmaticism, the clarification of ideas in terms of their conceivable implications regarding informed practice. Classification of signs. As early as 1866, Peirce held that: 1. 
Hypothesis (abductive inference) is inference through an "icon" (also called a "likeness"). 2. Induction is inference through an "index" (a sign by factual connection); a sample is an index of the totality from which it is drawn. 3. Deduction is inference through a "symbol" (a sign by interpretive habit irrespective of resemblance or connection to its object). In 1902, Peirce wrote that, in abduction: "It is recognized that the phenomena are "like", i.e. constitute an Icon of, a replica of a general conception, or Symbol." Critique of arguments. At the critical level Peirce examined the forms of abductive arguments (as discussed above), and came to hold that the hypothesis should economize explanation for plausibility in terms of the feasible and natural. In 1908 Peirce described this plausibility in some detail. It involves not likeliness based on observations (which is instead the inductive evaluation of a hypothesis), but instead optimal simplicity in the sense of the "facile and natural", as by Galileo's natural light of reason and as distinct from "logical simplicity" (Peirce does not dismiss logical simplicity entirely but sees it in a subordinate role; taken to its logical extreme it would favor adding no explanation to the observation at all). Even a well-prepared mind guesses oftener wrong than right, but our guesses succeed better than random luck at reaching the truth or at least advancing the inquiry, and that indicates to Peirce that they are based in instinctive attunement to nature, an affinity between the mind's processes and the processes of the real, which would account for why appealingly "natural" guesses are the ones that oftenest (or least seldom) succeed; to which Peirce added the argument that such guesses are to be preferred since, without "a natural bent like nature's", people would have no hope of understanding nature. In 1910 Peirce made a three-way distinction between probability, verisimilitude, and plausibility, and defined plausibility with a normative "ought": "By plausibility, I mean the degree to which a theory ought to recommend itself to our belief independently of any kind of evidence other than our instinct urging us to regard it favorably." For Peirce, plausibility does not depend on observed frequencies or probabilities, or on verisimilitude, or even on testability, which is not a question of the critique of the hypothetical inference "as" an inference, but rather a question of the hypothesis's relation to the inquiry process. The phrase "inference to the best explanation" (not used by Peirce but often applied to hypothetical inference) is not always understood as referring to the most simple and natural hypotheses (such as those with the fewest assumptions). However, in other senses of "best", such as "standing up best to tests", it is hard to know which is the best explanation to form, since one has not tested it yet. Still, for Peirce, any justification of an abductive inference as "good" is not completed upon its formation as an argument (unlike with induction and deduction) and instead depends also on its methodological role and promise (such as its testability) in advancing inquiry. Methodology of inquiry. At the methodeutical level Peirce held that a hypothesis is judged and selected for testing because it offers, via its trial, to expedite and economize the inquiry process itself toward new truths, first of all by being testable and also by further economies, in terms of cost, value, and relationships among guesses (hypotheses). 
Here, considerations such as probability, absent from the treatment of abduction at the critical level, come into play. For examples: Uberty. Peirce indicated that abductive reasoning is driven by the need for "economy in research"—the expected fact-based productivity of hypotheses, prior to deductive and inductive processes of verification. A key concept proposed by him in this regard is "uberty"—the expected fertility and pragmatic value of reasoning. This concept seems to be gaining support via association to the Free Energy Principle. Gilbert Harman (1965). Gilbert Harman was a professor of philosophy at Princeton University. Harman's 1965 account of the role of "inference to the best explanation" – inferring the existence of that which we need for the best explanation of observable phenomena – has been very influential. Stephen Jay Gould (1995). Stephen Jay Gould, in answering the Omphalos hypothesis, claimed that only hypotheses that can be proved incorrect lie within the domain of science and only these hypotheses are good explanations of facts worth inferring to. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"[W]hat is so desperately wrong with Omphalos? Only this really (and perhaps paradoxically): that we can devise no way to find out whether it is wrong—or for that matter, right. Omphalos is the classic example of an utterly untestable notion, for the world will look exactly the same in all its intricate detail whether fossils and strata are prochronic [signs of a fictitious past] or products of an extended history. . . . Science is a procedure for testing and rejecting hypotheses, not a compendium of certain knowledge. Claims that can be proved incorrect lie within its domain. . . . But theories that cannot be tested in principle are not part of science. . . . [W]e reject Omphalos as useless, not wrong." Applications. Artificial intelligence. Applications in artificial intelligence include fault diagnosis, belief revision, and automated planning. The most direct application of abduction is that of automatically detecting faults in systems: given a theory relating faults with their effects and a set of observed effects, abduction can be used to derive sets of faults that are likely to be the cause of the problem. Medicine. In medicine, abduction can be seen as a component of clinical evaluation and judgment. The Internist-I diagnostic system, the first AI system that covered the field of Internal Medicine, used abductive reasoning to converge on the most likely causes of a set of patient symptoms that it acquired through an interactive dialog with an expert user. Automated planning. Abduction can also be used to model automated planning. Given a logical theory relating action occurrences with their effects (for example, a formula of the event calculus), the problem of finding a plan for reaching a state can be modeled as the problem of abducting a set of literals implying that the final state is the goal state. Intelligence analysis. In intelligence analysis, analysis of competing hypotheses and Bayesian networks, probabilistic abductive reasoning is used extensively. Similarly in medical diagnosis and legal reasoning, the same methods are being used, although there have been many examples of errors, especially caused by the base rate fallacy and the prosecutor's fallacy. Belief revision. Belief revision, the process of adapting beliefs in view of new information, is another field in which abduction has been applied. 
The main problem of belief revision is that the new information may be inconsistent with the prior web of beliefs, while the result of the incorporation cannot be inconsistent. The process of updating the web of beliefs can be done by the use of abduction: once an explanation for the observation has been found, integrating it does not generate inconsistency. Gärdenfors’ paper contains a brief survey of the area of belief revision and its relation to updating of logical databases, and explores the relationship between belief revision and nonmonotonic logic. This use of abduction is not straightforward, as adding propositional formulae to other propositional formulae can only make inconsistencies worse. Instead, abduction is done at the level of the ordering of preference of the possible worlds. Preference models use fuzzy logic or utility models. Philosophy of science. In the philosophy of science, abduction has been the key inference method to support scientific realism, and much of the debate about scientific realism is focused on whether abduction is an acceptable method of inference. Historical linguistics. In historical linguistics, abduction during language acquisition is often taken to be an essential part of processes of language change such as reanalysis and analogy. Applied linguistics. In applied linguistics research, abductive reasoning is starting to be used as an alternative explanation to inductive reasoning, in recognition of anticipated outcomes of qualitative inquiry playing a role in shaping the direction of analysis. It is defined as "The use of an unclear premise based on observations, pursuing theories to try to explain it" (Rose et al., 2020, p. 258) Anthropology. In anthropology, Alfred Gell in his influential book "Art and Agency" defined abduction (after Eco) as "a case of synthetic inference 'where we find some very curious circumstances, which would be explained by the supposition that it was a case of some general rule, and thereupon adopt that supposition'". Gell criticizes existing "anthropological" studies of art for being too preoccupied with aesthetic value and not preoccupied enough with the central anthropological concern of uncovering "social relationships", specifically the social contexts in which artworks are produced, circulated, and received. Abduction is used as the mechanism for getting from art to agency. That is, abduction can explain how works of art inspire a "sensus communis:" the commonly held views shared by members that characterize a given society. The question Gell asks in the book is, "how does it initially 'speak' to people?" He answers by saying that "No reasonable person could suppose that art-like relations between people and things do not involve at least some form of semiosis." However, he rejects any intimation that semiosis can be thought of as a language because then he would have to admit to some pre-established existence of the "sensus communis" that he wants to claim only emerges afterwards out of art. Abduction is the answer to this conundrum because the tentative nature of the abduction concept (Peirce likened it to guessing) means that not only can it operate outside of any pre-existing framework, but moreover, it can actually intimate the existence of a framework. As Gell reasons in his analysis, the physical existence of the artwork prompts the viewer to perform an abduction that imbues the artwork with intentionality. 
A statue of a goddess, for example, in some senses actually becomes the goddess in the mind of the beholder; and represents not only the form of the deity but also her intentions (which are adduced from the feeling of her very presence). Therefore, through abduction, Gell claims that art can have the kind of agency that plants the seeds that grow into cultural myths. The power of agency is the power to motivate actions and inspire ultimately the shared understanding that characterizes any given society. Computer programming. In formal methods, logic is used to specify and prove properties of computer programs. Abduction has been used in mechanized reasoning tools to increase the level of automation of the proof activity. A technique known as bi-abduction, which mixes abduction and the frame problem, was used to scale reasoning techniques for memory properties to millions of lines of code; logic-based abduction was used to infer pre-conditions for individual functions in a program, relieving the human of the need to do so. It led to a program-proof startup company, which was acquired by Facebook, and the Infer program analysis tool, which led to thousands of bugs being prevented in industrial codebases. In addition to inference of function preconditions, abduction has been used to automate inference of invariants for program loops, inference of specifications of unknown code, and in synthesis of the programs themselves. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
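The set-cover formalization described earlier, which also underlies the fault-diagnosis application, can be made concrete with a minimal Haskell sketch. The names used here (explanations, effects, and the example hypotheses) are illustrative only; the code enumerates candidate hypothesis sets by brute force under the independence assumption, and simply lists smaller covering sets first rather than implementing any particular minimality or ranking criterion.
import Data.List (subsequences, sortOn)
import qualified Data.Set as Set

type Hypothesis = String
type Manifestation = String

-- Effects of a single hypothesis, taken from a lookup table.
effects :: [(Hypothesis, Set.Set Manifestation)] -&gt; Hypothesis -&gt; Set.Set Manifestation
effects table h = maybe Set.empty id (lookup h table)

-- All hypothesis sets whose combined effects cover the observed manifestations.
explanations :: [(Hypothesis, Set.Set Manifestation)] -&gt; Set.Set Manifestation -&gt; [[Hypothesis]]
explanations table m =
  sortOn length
    [ hs | hs &lt;- subsequences (map fst table)
         , m `Set.isSubsetOf` Set.unions (map (effects table) hs) ]

-- Example: two candidate faults and their observable symptoms.
demo :: [[Hypothesis]]
demo = explanations
         [ ("flat battery", Set.fromList ["no lights", "engine silent"])
         , ("empty tank",   Set.fromList ["engine silent"]) ]
         (Set.fromList ["engine silent"])
-- demo == [["flat battery"],["empty tank"],["flat battery","empty tank"]]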
[ { "math_id": 0, "text": "b" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "a_1" }, { "math_id": 3, "text": "a_2" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "O" }, { "math_id": 6, "text": "E" }, { "math_id": 7, "text": "T \\cup E \\models O;" }, { "math_id": 8, "text": "T \\cup E" }, { "math_id": 9, "text": "H" }, { "math_id": 10, "text": "M" }, { "math_id": 11, "text": "e" }, { "math_id": 12, "text": "H' \\subseteq H" }, { "math_id": 13, "text": "e(H')" }, { "math_id": 14, "text": "M \\subseteq e(H')" }, { "math_id": 15, "text": "H'" }, { "math_id": 16, "text": "e(H') = \\bigcup_{h \\in H'} e(\\{h\\})" }, { "math_id": 17, "text": "X" }, { "math_id": 18, "text": "\\mathbf{X}" }, { "math_id": 19, "text": "x" }, { "math_id": 20, "text": "\\omega_{X}=(b_{X}, u_{X}, a_{X})\\,\\!" }, { "math_id": 21, "text": "b_{X}\\,\\!" }, { "math_id": 22, "text": "u_{X}\\,\\!" }, { "math_id": 23, "text": "a_{X}\\,\\!" }, { "math_id": 24, "text": "u_{X}+\\sum b_{X}(x) = 1\\,\\!" }, { "math_id": 25, "text": "\\sum a_{X}(x) = 1\\,\\!" }, { "math_id": 26, "text": "b_{X}(x),u_{X},a_{X}(x) \\in [0,1]\\,\\!" }, { "math_id": 27, "text": "\\mathbf{Y}" }, { "math_id": 28, "text": "Y" }, { "math_id": 29, "text": "\\omega_{X\\mid Y}" }, { "math_id": 30, "text": "y" }, { "math_id": 31, "text": "a_{Y}" }, { "math_id": 32, "text": "\\;\\widetilde{\\phi}" }, { "math_id": 33, "text": "\\omega_{Y\\tilde{\\mid} X}" }, { "math_id": 34, "text": "\\omega_{Y\\tilde{|}X}=\\omega_{X|Y}\\;\\widetilde{\\phi\\,}\\;a_{Y}" }, { "math_id": 35, "text": "\\omega_{X}" }, { "math_id": 36, "text": "\\circledcirc" }, { "math_id": 37, "text": "\\omega_{Y\\,\\overline{\\|}\\,X}" }, { "math_id": 38, "text": "\\begin{align}\n\\omega_{Y\\,\\widetilde{\\|}\\,X} \n&= \\omega_{X\\mid Y} \\;\\widetilde{\\circledcirc}\\; \\omega_{X}\\\\\n&= (\\omega_{X\\mid Y} \\;\\widetilde{\\phi\\,}\\; a_{Y}) \\;\\circledcirc\\;\\omega_{X}\\\\\n&= \\omega_{Y\\widetilde{|}X} \\;\\circledcirc\\;\\omega_{X}\\;.\n\\end{align}" }, { "math_id": 39, "text": "\\widetilde{\\|}" }, { "math_id": 40, "text": "\\widetilde{\\circledcirc}" }, { "math_id": 41, "text": "\\widetilde{\\phi\\,}" } ]
https://en.wikipedia.org/wiki?curid=60459
6046005
Erdős–Kac theorem
Fundamental theorem of probabilistic number theory In number theory, the Erdős–Kac theorem, named after Paul Erdős and Mark Kac, and also known as the fundamental theorem of probabilistic number theory, states that if "ω"("n") is the number of distinct prime factors of "n", then, loosely speaking, the probability distribution of formula_0 is the standard normal distribution. (formula_1 is a sequence in the OEIS.) This is an extension of the Hardy–Ramanujan theorem, which states that the normal order of "ω"("n") is log log "n" with a typical error of size formula_2. Precise statement. For any fixed "a" &lt; "b", formula_3 where formula_4 is the normal (or "Gaussian") distribution, defined as formula_5 More generally, if "f"("n") is a strongly additive function (formula_6) with formula_7 for all prime "p", then formula_8 with formula_9 Kac's original heuristic. Intuitively, Kac's heuristic for the result says that if "n" is a randomly chosen large integer, then the number of distinct prime factors of "n" is approximately normally distributed with mean and variance log log "n". This comes from the fact that given a random natural number "n", the events "the number "n" is divisible by some prime "p"" for each "p" are mutually independent. Now, denoting the event "the number "n" is divisible by "p"" by formula_10, consider the following sum of indicator random variables: formula_11 This sum counts how many distinct prime factors our random natural number "n" has. It can be shown that this sum satisfies the Lindeberg condition, and therefore the Lindeberg central limit theorem guarantees that after appropriate rescaling, the above expression will be Gaussian. The actual proof of the theorem, due to Erdős, uses sieve theory to make rigorous the above intuition. Numerical examples. The Erdős–Kac theorem means that the construction of a number around one billion requires on average three primes. For example, 1,000,000,003 = 23 × 307 × 141623. The average number of distinct prime factors of a natural number formula_12 grows slowly with increasing formula_12. Around 12.6% of 10,000-digit numbers are constructed from 10 distinct prime numbers and around 68% are constructed from between 7 and 13 primes. A hollow sphere the size of the planet Earth filled with fine sand would have around 10^33 grains. A volume the size of the observable universe would have around 10^93 grains of sand. There might be room for 10^185 quantum strings in such a universe. Numbers of this magnitude—with 186 digits—would require on average only 6 primes for construction. It is very difficult if not impossible to discover the Erdős–Kac theorem empirically, as the Gaussian only shows up when formula_12 starts getting to be around formula_13. More precisely, Rényi and Turán showed that the best possible uniform asymptotic bound on the error in the approximation to a Gaussian is formula_14 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
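The quantities in the theorem can be explored with a minimal Haskell sketch (illustrative only, and far too slow to reach the ranges where the Gaussian shape becomes visible): omega counts the distinct prime factors of a number by trial division, and erdosKacStat forms the normalized value whose distribution the theorem describes; both names are chosen purely for illustration.
omega :: Integer -&gt; Int
omega = go 2
  where
    go p n
      | n == 1         = 0
      | p * p &gt; n      = 1                       -- the remaining factor is prime
      | n `mod` p == 0 = 1 + go (p + 1) (strip n)
      | otherwise      = go (p + 1) n
      where
        strip m = if m `mod` p == 0 then strip (m `div` p) else m

erdosKacStat :: Integer -&gt; Double
erdosKacStat n = (fromIntegral (omega n) - loglog) / sqrt loglog
  where loglog = log (log (fromIntegral n))  -- meaningful only for n large enough that log log n is positive
-- For example, omega 1000000003 == 3, matching the factorization above,
-- and map omega [60, 1024, 30030] == [3, 1, 6].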
[ { "math_id": 0, "text": " \\frac{\\omega(n) - \\log\\log n}{\\sqrt{\\log\\log n}} " }, { "math_id": 1, "text": "\\omega(n)" }, { "math_id": 2, "text": "\\sqrt{\\log\\log n}" }, { "math_id": 3, "text": "\\lim_{x \\rightarrow \\infty} \\left ( \\frac {1}{x} \\cdot \\#\\left\\{ n \\leq x : a \\le \\frac{\\omega(n) - \\log \\log n}{\\sqrt{\\log \\log n}} \\le b \\right\\} \\right ) = \\Phi(a,b) " }, { "math_id": 4, "text": "\\Phi(a,b)" }, { "math_id": 5, "text": "\\Phi(a,b)= \\frac{1}{\\sqrt{2\\pi}}\\int_a^b e^{-t^2/2} \\, dt. " }, { "math_id": 6, "text": "\\scriptstyle f(p_1^{a_1}\\cdots p_k^{a_k})=f(p_1)+\\cdots+f(p_k)" }, { "math_id": 7, "text": "\\scriptstyle |f(p)|\\le1" }, { "math_id": 8, "text": "\\lim_{x \\rightarrow \\infty} \\left ( \\frac {1}{x} \\cdot \\#\\left\\{ n \\leq x : a \\le \\frac{f(n) - A(n)}{B(n)} \\le b \\right\\} \\right ) = \\Phi(a,b) " }, { "math_id": 9, "text": "A(n)=\\sum_{p\\le n}\\frac{f(p)}{p},\\qquad B(n)=\\sqrt{\\sum_{p\\le n}\\frac{f(p)^2}{p}}." }, { "math_id": 10, "text": "n_{p}" }, { "math_id": 11, "text": "I_{n_{2}} + I_{n_{3}} + I_{n_{5}} + I_{n_{7}} + \\ldots " }, { "math_id": 12, "text": "n" }, { "math_id": 13, "text": "10^{100}" }, { "math_id": 14, "text": " O\\left(\\frac{1}{\\sqrt{\\log \\log n}}\\right). " } ]
https://en.wikipedia.org/wiki?curid=6046005
60466332
Sinusoidal plane wave
In physics, a sinusoidal plane wave is a special case of plane wave: a field whose value varies as a sinusoidal function of time and of the distance from some fixed plane. It is also called a monochromatic plane wave, with constant frequency (as in monochromatic radiation). Basic representation. For any position formula_0 in space and any time formula_1, the value of such a field can be written as formula_2 where formula_3 is a unit-length vector, the "direction of propagation" of the wave, and "formula_4" denotes the dot product of two vectors. The parameter formula_5, which may be a scalar or a vector, is called the "amplitude" of the wave; the coefficient formula_6, a positive scalar, its "spatial frequency"; and the dimensionless scalar formula_7, an angle in radians, is its "initial phase" or "phase shift". The scalar quantity formula_8 gives the (signed) displacement of the point formula_0 from the plane that is perpendicular to formula_3 and goes through the origin of the coordinate system. This quantity is constant over each plane perpendicular to formula_3. At time formula_9, the field formula_10 varies with the displacement formula_11 as a sinusoidal function formula_12 The spatial frequency formula_6 is the number of full cycles per unit of length along the direction formula_3. For any other value of formula_1, the field values are displaced by the distance formula_13 in the direction formula_3. That is, the whole field seems to travel in that direction with velocity formula_14. For each displacement formula_11, the moving plane perpendicular to formula_3 at distance formula_15 from the origin is called a "wavefront". This plane lies at distance formula_11 from the origin when formula_9, and travels in the direction formula_3 also with speed formula_14; and the value of the field is then the same, and constant in time, at every one of its points. A sinusoidal plane wave could be a suitable model for a sound wave within a volume of air that is small compared to the distance from the source (provided that there are no echoes from nearby objects). In that case, formula_16 would be a scalar field, the deviation of air pressure at point formula_0 and time formula_1, away from its normal level. At any fixed point formula_0, the field will also vary sinusoidally with time; it will be a scalar multiple of the amplitude formula_5, between formula_17 and formula_18 When the amplitude formula_5 is a vector orthogonal to formula_3, the wave is said to be "transverse". Such waves may exhibit polarization, if formula_5 can be oriented along two non-collinear directions. When formula_5 is a vector collinear with formula_3, the wave is said to be "longitudinal". These two possibilities are exemplified by the S (shear) waves and P (pressure) waves studied in seismology. The formula above gives a purely "kinematic" description of the wave, without reference to whatever physical process may be causing its motion. In a mechanical or electromagnetic wave that is propagating through an isotropic medium, the vector formula_3 of the apparent propagation of the wave is also the direction in which energy or momentum is actually flowing. However, the two directions may be different in an anisotropic medium. (See also: Wave vector#Direction of the wave vector.) Alternative representations. The same sinusoidal plane wave formula_10 above can also be expressed in terms of sine instead of cosine using the elementary identity formula_19 formula_20 where formula_21. 
Thus the value and meaning of the phase shift depend on whether the wave is defined in terms of sine or cosine. Adding any integer multiple of formula_22 to the initial phase formula_7 has no effect on the field. Adding an odd multiple of formula_23 has the same effect as negating the amplitude formula_5. Assigning a negative value for the spatial frequency formula_6 has the effect of reversing the direction of propagation, with a suitable adjustment of the initial phase. The formula of a sinusoidal plane wave can be written in several other ways. Complex exponential form. A plane sinusoidal wave may also be expressed in terms of the complex exponential function formula_24 where formula_25 is the base of the natural exponential function, and formula_26 is the imaginary unit, defined by the equation formula_27. With those tools, one defines the complex exponential plane wave as formula_28 where formula_29 are as defined for the (real) sinusoidal plane wave. This equation gives a field formula_30 whose value is a complex number, or a vector with complex coordinates. The original wave expression is now simply the real part, formula_31 To appreciate this equation's relationship to the earlier ones, below is this same equation expressed using sines and cosines. Observe that the first term equals the real form of the plane wave just discussed. formula_32 The complex form of the plane wave just introduced can be simplified by using a complex-valued amplitude formula_33 in place of the real-valued amplitude formula_34. Specifically, since the complex form formula_35 one can absorb the phase factor formula_36 into a complex amplitude by letting formula_37, resulting in the more compact equation formula_38 While the complex form has an imaginary component, after the necessary calculations are performed in the complex plane, its real value (which corresponds to the wave one would actually physically observe or measure) can be extracted, giving a real-valued equation representing an actual plane wave. formula_39 The main reason one would choose to work with the complex exponential form of plane waves is that complex exponentials are often algebraically easier to handle than the trigonometric sines and cosines. Specifically, the angle-addition rules are extremely simple for exponentials. Additionally, when using Fourier analysis techniques for waves in a lossy medium, the resulting attenuation is easier to deal with using complex Fourier coefficients. If a wave is traveling through a lossy medium, the amplitude of the wave is no longer constant, and therefore the wave is strictly speaking no longer a true plane wave. In quantum mechanics the solutions of the Schrödinger wave equation are by their very nature complex-valued and in the simplest instance take a form identical to the complex plane wave representation above. The imaginary component in that instance however has not been introduced for the purpose of mathematical expediency but is in fact an inherent part of the "wave". In special relativity, one can utilize an even more compact expression by using four-vectors. Thus, formula_40 becomes formula_41 Applications. The equations describing electromagnetic radiation in a homogeneous dielectric medium admit sinusoidal plane waves as special solutions. In electromagnetism, the field formula_10 is typically the electric field, magnetic field, or vector potential, which in an isotropic medium is perpendicular to the direction of propagation formula_3. 
The amplitude formula_5 is then a vector of the same nature, equal to the maximum-strength field. The propagation speed formula_14 will be the speed of light in the medium. The equations that describe vibrations in a homogeneous elastic solid also admit solutions that are sinusoidal plane waves, both transverse and longitudinal. These two types have different propagation speeds, that depend on the density and the Lamé parameters of the medium. The fact that the medium imposes a propagation speed means that the parameters formula_42 and formula_43 must satisfy a dispersion relation characteristic of the medium. The dispersion relation is often expressed as a function, formula_44. The ratio formula_45 gives the magnitude of the phase velocity, and the derivative formula_46 gives the group velocity. For electromagnetism in an isotropic medium with index of refraction formula_47, the phase velocity is formula_48, which equals the group velocity if the index is not frequency-dependent. In linear uniform media, a general solution to the wave equation can be expressed as a superposition of sinusoidal plane waves. This approach is known as the angular spectrum method. The form of the planewave solution is actually a general consequence of translational symmetry. More generally, for periodic structures having discrete translational symmetry, the solutions take the form of Bloch waves, most famously in crystalline atomic materials but also in photonic crystals and other periodic wave equations. As another generalization, for structures that are only uniform along one direction formula_49 (such as a waveguide along the formula_49 direction), the solutions (waveguide modes) are of the form formula_50 multiplied by some amplitude function formula_51. This is a special case of a separable partial differential equation. Polarized electromagnetic plane waves. Represented in the first illustration toward the right is a linearly polarized, electromagnetic wave. Because this is a plane wave, each blue vector, indicating the perpendicular displacement from a point on the axis out to the sine wave, represents the magnitude and direction of the electric field for an entire plane that is perpendicular to the axis. Represented in the second illustration is a circularly polarized, electromagnetic plane wave. Each blue vector indicating the perpendicular displacement from a point on the axis out to the helix, also represents the magnitude and direction of the electric field for an entire plane perpendicular to the axis. In both illustrations, along the axes is a series of shorter blue vectors which are scaled down versions of the longer blue vectors. These shorter blue vectors are extrapolated out into the block of black vectors which fill a volume of space. Notice that for a given plane, the black vectors are identical, indicating that the magnitude and direction of the electric field is constant along that plane. In the case of the linearly polarized light, the field strength from plane to plane varies from a maximum in one direction, down to zero, and then back up to a maximum in the opposite direction. In the case of the circularly polarized light, the field strength remains constant from plane to plane but its direction steadily changes in a rotary type manner. Not indicated in either illustration is the electric field’s corresponding magnetic field which is proportional in strength to the electric field at each point in space but is at a right angle to it. 
Illustrations of the magnetic field vectors would be virtually identical to these except all the vectors would be rotated 90 degrees about the axis of propagation so that they were perpendicular to both the direction of propagation and the electric field vector. The ratio of the amplitudes of the electric and magnetic field components of a plane wave in free space is known as the free-space wave-impedance, equal to 376.730313 ohms. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
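As a small numerical illustration of the basic representation above, the following Haskell sketch evaluates a scalar sinusoidal plane wave and checks the travelling property (shifting the observation point, along the direction of propagation, by the distance the wave covers in a given time interval leaves the field value unchanged); the function names and numerical values are illustrative only.
type Vec3 = (Double, Double, Double)

dot :: Vec3 -&gt; Vec3 -&gt; Double
dot (a1, a2, a3) (b1, b2, b3) = a1 * b1 + a2 * b2 + a3 * b3

-- F(x, t) = A cos(2 pi nu (x . n - c t) + phi) for a scalar wave.
planeWave :: Double -&gt; Double -&gt; Vec3 -&gt; Double -&gt; Double -&gt; Vec3 -&gt; Double -&gt; Double
planeWave amp nu nHat c phi x t = amp * cos (2 * pi * nu * (x `dot` nHat - c * t) + phi)

-- Unit amplitude, 2 cycles per metre, propagation along +z at 343 m/s, zero phase.
f :: Vec3 -&gt; Double -&gt; Double
f = planeWave 1.0 2.0 (0, 0, 1) 343.0 0.0

-- Shifting the point by c*dt along +z while advancing time by dt gives the same value:
--   abs (f (1, 2, 3) 0.0 - f (1, 2, 3 + 343.0 * 0.01) 0.01) is of the order of 1e-15 or smaller.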
[ { "math_id": 0, "text": "\\vec x" }, { "math_id": 1, "text": "t" }, { "math_id": 2, "text": "F(\\vec x, t) = A \\cos\\left(2\\pi \\nu (\\vec x \\cdot \\hat n - c t) + \\varphi\\right)" }, { "math_id": 3, "text": "\\hat n" }, { "math_id": 4, "text": "\\cdot" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "\\nu" }, { "math_id": 7, "text": "\\varphi" }, { "math_id": 8, "text": "d = \\vec x \\cdot \\hat n" }, { "math_id": 9, "text": "t = 0" }, { "math_id": 10, "text": "F" }, { "math_id": 11, "text": "d" }, { "math_id": 12, "text": "F(\\vec x, 0)=A \\cos\\left(2\\pi \\nu (\\vec x \\cdot \\hat n) + \\varphi\\right)" }, { "math_id": 13, "text": "c t" }, { "math_id": 14, "text": "c" }, { "math_id": 15, "text": "d + c t" }, { "math_id": 16, "text": "F(\\vec x,t)\\," }, { "math_id": 17, "text": "+A" }, { "math_id": 18, "text": "-A" }, { "math_id": 19, "text": "\\cos a = \\sin(a + \\pi/2)" }, { "math_id": 20, "text": "F(\\vec x, t)=A \\sin\\left(2\\pi \\nu (\\vec x \\cdot \\hat n - c t) + \\varphi'\\right)" }, { "math_id": 21, "text": "\\varphi' = \\varphi + \\pi/2" }, { "math_id": 22, "text": "2\\pi" }, { "math_id": 23, "text": "\\pi" }, { "math_id": 24, "text": "e^{\\mathrm{i} z} = \\exp(\\mathrm{i}z) = \\cos z + \\mathrm{i}\\sin z" }, { "math_id": 25, "text": "e" }, { "math_id": 26, "text": "\\mathrm{i}" }, { "math_id": 27, "text": "\\mathrm{i}^2 = -1" }, { "math_id": 28, "text": "U(\\vec x,t)\\;=\\; A \\exp[\\mathrm{i}(2\\pi\\nu(\\vec x\\cdot\\hat n - c t) +\\varphi)]\\;=\\; A \\exp[\\mathrm{i}(2\\pi\\vec x \\cdot \\vec v - \\omega t + \\varphi)]" }, { "math_id": 29, "text": "A,\\nu,\\hat n,c,\\vec v,\\omega, \\varphi" }, { "math_id": 30, "text": "U(\\vec x,t)" }, { "math_id": 31, "text": "F(\\vec x,t) = \\operatorname{Re}{\\left[U(\\vec x,t)\\right]}" }, { "math_id": 32, "text": "\\begin{array}{rcccc}\nU (\\vec x, t ) &=& A \\cos (2\\pi\\nu \\hat n \\cdot \\vec x - \\omega t + \\varphi ) & + & \\mathrm{i} A \\sin (2\\pi\\nu \\hat n \\cdot \\vec x - \\omega t + \\varphi ) \\\\[1ex]\nU (\\vec x, t ) &=& F (\\vec x, t ) & + & \\mathrm{i} A \\sin (2\\pi\\nu \\hat n \\cdot \\vec x - \\omega t + \\varphi )\n\\end{array}" }, { "math_id": 33, "text": "C\\," }, { "math_id": 34, "text": "A\\," }, { "math_id": 35, "text": "\\exp[\\mathrm{i}(2\\pi\\vec x \\cdot \\vec v - \\omega t +\\varphi)] \\;=\\; \\exp[\\mathrm{i}(2\\pi\\nu \\hat n\\cdot\\vec x - \\omega t )]\\,e^{\\mathrm{i} \\varphi}" }, { "math_id": 36, "text": "e^{\\mathrm{i} \\varphi}" }, { "math_id": 37, "text": "C = A e^{\\mathrm{i} \\varphi}" }, { "math_id": 38, "text": "U(\\vec x,t)= C \\exp[\\mathrm{i}(2\\pi\\vec x\\cdot\\vec v - \\omega t)]" }, { "math_id": 39, "text": "\\operatorname{Re}[U(\\vec x,t)]= F (\\vec x, t ) = A \\cos (2\\pi\\nu \\hat n \\cdot \\vec x - \\omega t + \\varphi )" }, { "math_id": 40, "text": "U(\\vec x,t)= C \\exp[\\mathrm{i}(2\\pi\\nu \\hat n\\cdot\\vec x - \\omega t )]" }, { "math_id": 41, "text": "U(\\vec x)= C \\exp[-\\mathrm{i}(2\\pi\\nu \\hat n\\cdot\\vec x)]" }, { "math_id": 42, "text": "\\omega" }, { "math_id": 43, "text": "k" }, { "math_id": 44, "text": "\\omega(k)" }, { "math_id": 45, "text": "\\omega/|k|" }, { "math_id": 46, "text": "\\partial\\omega / \\partial k" }, { "math_id": 47, "text": "r" }, { "math_id": 48, "text": "c/r" }, { "math_id": 49, "text": "x" }, { "math_id": 50, "text": "exp[i(kx-\\omega t)]" }, { "math_id": 51, "text": "a(y,z)" } ]
https://en.wikipedia.org/wiki?curid=60466332
60466580
Traveling plane wave
Type of plane wave In mathematics and physics, a traveling plane wave is a special case of plane wave, namely a field whose evolution in time can be described as simple translation of its values at a constant "wave speed" formula_0, along a fixed "direction of propagation" formula_1. Such a field can be written as formula_2 where formula_3 is a function of a single real parameter formula_4. The function formula_5 describes the profile of the wave, namely the value of the field at time formula_6, for each displacement formula_7. For each displacement formula_8, the moving plane perpendicular to formula_1 at distance formula_9 from the origin is called a wavefront. This plane too travels along the direction of propagation formula_1 with velocity formula_0; and the value of the field is then the same, and constant in time, at every one of its points. The wave formula_10 may be a scalar or vector field; its values are the values of formula_5. A sinusoidal plane wave is a special case, when formula_3 is a sinusoidal function of formula_11. Properties. A traveling plane wave can be studied by ignoring the dimensions of space perpendicular to the vector formula_1; that is, by considering the wave formula_12 on a one-dimensional medium, with a single position coordinate formula_13. For a scalar traveling plane wave in two or three dimensions, the gradient of the field is always collinear with the direction formula_1; specifically, formula_14, where formula_15 is the derivative of formula_5. Moreover, a traveling plane wave formula_10 of any shape satisfies the partial differential equation formula_16 Plane traveling waves are also special solutions of the wave equation in an homogeneous medium. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
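As a quick numerical illustration of the gradient identity above, the sketch below builds a traveling plane wave from an arbitrary Gaussian profile and checks that the gradient equals -(n/c) times the time derivative by finite differences; the profile, speed, and direction are illustrative choices, not taken from the text.

import numpy as np

# Arbitrary illustrative choices (not from the article): a Gaussian profile G,
# a unit propagation direction n, and a wave speed c.
c = 2.0
n = np.array([0.6, 0.8, 0.0])                  # |n| = 1
G = lambda u: np.exp(-u**2)                    # wave profile
F = lambda x, t: G(np.dot(x, n) - c * t)       # traveling plane wave

# Check grad F = -(n/c) dF/dt at one sample point by central finite differences.
x0, t0, h = np.array([0.3, -0.1, 0.7]), 0.25, 1e-6
grad = np.array([(F(x0 + h*e, t0) - F(x0 - h*e, t0)) / (2*h) for e in np.eye(3)])
dFdt = (F(x0, t0 + h) - F(x0, t0 - h)) / (2*h)
print(np.allclose(grad, -n / c * dFdt, atol=1e-6))   # True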
[ { "math_id": 0, "text": "c" }, { "math_id": 1, "text": "\\vec n" }, { "math_id": 2, "text": "F(\\vec x, t)=G\\left(\\vec x \\cdot \\vec n - c t\\right)\\," }, { "math_id": 3, "text": "G(u)" }, { "math_id": 4, "text": "u = d - c t" }, { "math_id": 5, "text": "G" }, { "math_id": 6, "text": "t = 0" }, { "math_id": 7, "text": "d = \\vec x \\cdot \\vec n" }, { "math_id": 8, "text": "d" }, { "math_id": 9, "text": "d + c t" }, { "math_id": 10, "text": "F" }, { "math_id": 11, "text": "u" }, { "math_id": 12, "text": "F(z\\vec n,t) = G(z - ct)" }, { "math_id": 13, "text": "z" }, { "math_id": 14, "text": "\\nabla F(\\vec x,t) = \\vec n G'(\\vec x \\cdot \\vec n - ct)" }, { "math_id": 15, "text": "G'" }, { "math_id": 16, "text": "\\nabla F = -\\frac{\\vec n}{c}\\frac{\\partial F}{\\partial t}" } ]
https://en.wikipedia.org/wiki?curid=60466580
60469161
Zero-weight cycle problem
In computer science and graph theory, the zero-weight cycle problem is the problem of deciding whether a directed graph with weights on the edges (which may be positive or negative or zero) has a cycle in which the sum of weights is 0. A related problem is to decide whether the graph has a negative cycle, a cycle in which the sum of weights is "less than" 0. This related problem can be solved in polynomial time using the Bellman–Ford algorithm. If there is no negative cycle, then the distances found by the Bellman–Ford algorithm can be used, as in Johnson's algorithm, to reweight the edges of the graph in such a way that all edge weights become non-negative and all cycle lengths remain unchanged. With this reweighting, a zero-weight cycle becomes trivial to detect: it exists if and only if the zero-weight edges do not form a directed acyclic graph. Therefore, the special case of the zero-weight cycle problem, on graphs with no negative cycle, has a polynomial-time algorithm. In contrast, for graphs that contain negative cycles, detecting a simple cycle of weight "exactly" 0 is an NP-complete problem. This is true even when the weights are integers of polynomial magnitude. In particular, there is a reduction from the Hamiltonian path problem, on an formula_0-vertex unweighted graph formula_1 with specified starting and ending vertices formula_2 and formula_3, to the zero-weight cycle problem on a weighted graph obtained by giving all edges of formula_1 weight equal to one, and adding an additional edge from formula_3 to formula_2 with weight formula_4. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
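A minimal sketch of the polynomial-time special case described above, assuming the input graph has no negative cycle: Bellman–Ford potentials from an implicit super-source give a Johnson-style reweighting with non-negative edge weights and unchanged cycle weights, after which a zero-weight cycle exists exactly when the zero-weight edges contain a directed cycle. Function and variable names are illustrative.

def has_zero_weight_cycle(n, edges):
    """edges: list of (u, v, w) for a digraph on vertices 0..n-1,
    assumed to contain no negative cycle (otherwise the problem is NP-complete)."""
    # Bellman-Ford from a virtual source connected to every vertex with weight 0.
    dist = [0] * n
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Johnson-style reweighting: w'(u,v) = w + dist[u] - dist[v] >= 0,
    # and every cycle keeps its original total weight.
    zero_edges = [(u, v) for u, v, w in edges if w + dist[u] - dist[v] == 0]
    # A zero-weight cycle exists iff the zero-weight edges are not acyclic.
    adj = [[] for _ in range(n)]
    for u, v in zero_edges:
        adj[u].append(v)
    state = [0] * n          # 0 = unvisited, 1 = on the DFS stack, 2 = done
    def dfs(u):
        state[u] = 1
        for v in adj[u]:
            if state[v] == 1 or (state[v] == 0 and dfs(v)):
                return True
        state[u] = 2
        return False
    return any(state[u] == 0 and dfs(u) for u in range(n))

# Triangle whose weights sum to zero -> True
print(has_zero_weight_cycle(3, [(0, 1, 2), (1, 2, -3), (2, 0, 1)]))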
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "s" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "1-n" } ]
https://en.wikipedia.org/wiki?curid=60469161
604707
Truth function
A function in Classical logic In logic, a truth function is a function that accepts truth values as input and produces a unique truth value as output. In other words: the input and output of a truth function are all truth values; a truth function will always output exactly one truth value, and inputting the same truth value(s) will always output the same truth value. The typical example is in propositional logic, wherein a compound statement is constructed using individual statements connected by logical connectives; if the truth value of the compound statement is entirely determined by the truth value(s) of the constituent statement(s), the compound statement is called a truth function, and any logical connectives used are said to be truth functional. Classical propositional logic is a truth-functional logic, in that every statement has exactly one truth value which is either true or false, and every logical connective is truth functional (with a correspondent truth table), thus every compound statement is a truth function. On the other hand, modal logic is non-truth-functional. Overview. A logical connective is truth-functional if the truth-value of a compound sentence is a function of the truth-value of its sub-sentences. A class of connectives is truth-functional if each of its members is. For example, the connective ""and" is truth-functional since a sentence like "Apples are fruits and carrots are vegetables" is true "if, and only if," each of its sub-sentences "apples are fruits" and "carrots are vegetables"" is true, and it is false otherwise. Some connectives of a natural language, such as English, are not truth-functional. Connectives of the form "x "believes that" ..." are typical examples of connectives that are not truth-functional. If e.g. Mary mistakenly believes that Al Gore was President of the USA on April 20, 2000, but she does not believe that the moon is made of green cheese, then the sentence ""Mary believes that Al Gore was President of the USA on April 20, 2000" is true while "Mary believes that the moon is made of green cheese" is false. In both cases, each component sentence (i.e. "Al Gore was president of the USA on April 20, 2000" and "the moon is made of green cheese") is false, but each compound sentence formed by prefixing the phrase "Mary believes that" differs in truth-value. That is, the truth-value of a sentence of the form "Mary believes that..."" is not determined solely by the truth-value of its component sentence, and hence the (unary) connective (or simply "operator" since it is unary) is non-truth-functional. The class of classical logic connectives (e.g. &amp;, →) used in the construction of formulas is truth-functional. Their values for various truth-values as argument are usually given by truth tables. Truth-functional propositional calculus is a formal system whose formulae may be interpreted as either true or false. Table of binary truth functions. In two-valued logic, there are sixteen possible truth functions, also called Boolean functions, of two inputs "P" and "Q". Any of these functions corresponds to a truth table of a certain logical connective in classical logic, including several degenerate cases such as a function not depending on one or both of its arguments. Truth and falsehood are denoted as 1 and 0, respectively, in the following truth tables for sake of brevity. Functional completeness. 
Because a function may be expressed as a composition, a truth-functional logical calculus does not need to have dedicated symbols for all of the above-mentioned functions to be functionally complete. This is expressed in a propositional calculus as logical equivalence of certain compound statements. For example, classical logic has ¬"P" ∨ "Q" equivalent to "P" → "Q". The conditional operator "→" is therefore not necessary for a classical-based logical system if "¬" (not) and "∨" (or) are already in use. A minimal set of operators that can express every statement expressible in the propositional calculus is called a "minimal functionally complete set". A minimally complete set of operators is achieved by NAND alone {↑} and NOR alone {↓}. The following are the minimal functionally complete sets of operators whose arities do not exceed 2: Algebraic properties. Some truth functions possess properties which may be expressed in the theorems containing the corresponding connective. Some of those properties that a binary truth function (or a corresponding logical connective) may have are: A set of truth functions is functionally complete if and only if for each of the following five properties it contains at least one member lacking it: Arity. A concrete function may be also referred to as an "operator". In two-valued logic there are 2 nullary operators (constants), 4 unary operators, 16 binary operators, 256 ternary operators, and formula_32 "n"-ary operators. In three-valued logic there are 3 nullary operators (constants), 27 unary operators, 19683 binary operators, 7625597484987 ternary operators, and formula_33 "n"-ary operators. In "k"-valued logic, there are "k" nullary operators, formula_34 unary operators, formula_35 binary operators, formula_36 ternary operators, and formula_37 "n"-ary operators. An "n"-ary operator in "k"-valued logic is a function from formula_38. Therefore, the number of such operators is formula_39, which is how the above numbers were derived. However, some of the operators of a particular arity are actually degenerate forms that perform a lower-arity operation on some of the inputs and ignore the rest of the inputs. Out of the 256 ternary boolean operators cited above, formula_40 of them are such degenerate forms of binary or lower-arity operators, using the inclusion–exclusion principle. The ternary operator formula_41 is one such operator which is actually a unary operator applied to one input, and ignoring the other two inputs. "Not" is a unary operator, it takes a single term (¬"P"). The rest are binary operators, taking two terms to make a compound statement ("P" ∧ "Q", "P" ∨ "Q", "P" → "Q", "P" ↔ "Q"). The set of logical operators Ω may be partitioned into disjoint subsets as follows: formula_42 In this partition, formula_43 is the set of operator symbols of "arity" j. In the more familiar propositional calculi, formula_44 is typically partitioned as follows: nullary operators: formula_45 unary operators: formula_46 binary operators: formula_47 Principle of compositionality. Instead of using truth tables, logical connective symbols can be interpreted by means of an interpretation function and a functionally complete set of truth-functions (Gamut 1991), as detailed by the principle of compositionality of meaning. 
Let "I" be an interpretation function, let "Φ", "Ψ" be any two sentences and let the truth function "f"nand be defined so that "f"nand("x", "y") is false when "x" and "y" are both true, and is true otherwise. Then, for convenience, "f"not, "f"or, "f"and and so on are defined by means of "f"nand: "f"not("x") = "f"nand("x", "x"), "f"or("x", "y") = "f"nand("f"not("x"), "f"not("y")), "f"and("x", "y") = "f"not("f"nand("x", "y")), and so on; or, alternatively, "f"not, "f"or, "f"and and so on are defined directly by their own truth tables. Then "I"(¬"Φ") = "f"not("I"("Φ")), "I"("Φ" ∨ "Ψ") = "f"or("I"("Φ"), "I"("Ψ")), etc. Thus if "S" is a sentence that is a string of symbols consisting of logical symbols "v"1..."v""n" representing logical connectives, and non-logical symbols "c"1..."c""n", then provided that "I"("v"1)..."I"("v""n") have been given, interpreting "v"1 to "v""n" by means of "f"nand (or any other set of functionally complete truth-functions), the truth-value "I"("S") is determined entirely by the truth-values of "c"1..."c""n", i.e. of "I"("c"1)..."I"("c""n"). In other words, as expected and required, "S" is true or false only under an interpretation of all its non-logical symbols. Computer science. Logical operators are implemented as logic gates in digital circuits. Practically all digital circuits (the major exception is DRAM) are built up from NAND, NOR, NOT, and transmission gates. NAND and NOR gates with 3 or more inputs rather than the usual 2 inputs are fairly common, although they are logically equivalent to a cascade of 2-input gates. All other operators are implemented by breaking them down into a logically equivalent combination of 2 or more of the above logic gates. The "logical equivalence" of "NAND alone", "NOR alone", and "NOT and AND" is similar to Turing equivalence. The fact that all truth functions can be expressed with NOR alone is demonstrated by the Apollo guidance computer. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
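The claims above lend themselves to a brute-force check. The sketch below (illustrative code, not from the article) represents each binary truth function by its four-row truth table, closes the two projections under pointwise NAND to confirm that all sixteen binary truth functions are compositions of NAND, and then evaluates the "apples are fruits and carrots are vegetables" example compositionally from an interpretation of its atomic sentences.

from itertools import product

NAND = lambda a, b: not (a and b)

# Represent a binary truth function by its value tuple on inputs (P,Q) drawn from
# {(F,F),(F,T),(T,F),(T,T)}; there are 2^4 = 16 such functions.
inputs = list(product([False, True], repeat=2))
P = tuple(p for p, q in inputs)          # projection onto the first argument
Q = tuple(q for p, q in inputs)          # projection onto the second argument

# Close {P, Q} under pointwise NAND; functional completeness of {NAND} means
# the closure is the full set of 16 binary truth functions.
known = {P, Q}
while True:
    new = {tuple(NAND(f[i], g[i]) for i in range(4)) for f in known for g in known}
    if new <= known:
        break
    known |= new
print(len(known))        # 16: every binary truth function is a composition of NAND

# Compositional evaluation: with I assigning truth values to atomic sentences,
# the value of a compound sentence is computed from f_nand alone.
f_not = lambda x: NAND(x, x)
f_and = lambda x, y: f_not(NAND(x, y))
f_or  = lambda x, y: NAND(f_not(x), f_not(y))
I = {"apples are fruits": True, "carrots are vegetables": True}
print(f_and(I["apples are fruits"], I["carrots are vegetables"]))   # True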
[ { "math_id": 0, "text": "\\{\\vee, \\neg\\}" }, { "math_id": 1, "text": "\\{\\wedge, \\neg\\}" }, { "math_id": 2, "text": "\\{\\to, \\neg\\}" }, { "math_id": 3, "text": "\\{\\gets, \\neg\\}" }, { "math_id": 4, "text": "\\{\\to, \\bot\\}" }, { "math_id": 5, "text": "\\{\\gets, \\bot\\}" }, { "math_id": 6, "text": "\\{\\to, \\nleftrightarrow\\}" }, { "math_id": 7, "text": "\\{\\gets, \\nleftrightarrow\\}" }, { "math_id": 8, "text": "\\{\\to, \\nrightarrow\\}" }, { "math_id": 9, "text": "\\{\\to, \\nleftarrow\\}" }, { "math_id": 10, "text": "\\{\\gets, \\nrightarrow\\}" }, { "math_id": 11, "text": "\\{\\gets, \\nleftarrow\\}" }, { "math_id": 12, "text": "\\{\\nrightarrow, \\neg\\}" }, { "math_id": 13, "text": "\\{\\nleftarrow, \\neg\\}" }, { "math_id": 14, "text": "\\{\\nrightarrow, \\top\\}" }, { "math_id": 15, "text": "\\{\\nleftarrow, \\top\\}" }, { "math_id": 16, "text": "\\{\\nrightarrow, \\leftrightarrow\\}" }, { "math_id": 17, "text": "\\{\\nleftarrow, \\leftrightarrow\\}" }, { "math_id": 18, "text": "\\{\\lor, \\leftrightarrow, \\bot\\}" }, { "math_id": 19, "text": "\\{\\lor, \\leftrightarrow, \\nleftrightarrow\\}" }, { "math_id": 20, "text": "\\{\\lor, \\nleftrightarrow, \\top\\}" }, { "math_id": 21, "text": "\\{\\land, \\leftrightarrow, \\bot\\}" }, { "math_id": 22, "text": "\\{\\land, \\leftrightarrow, \\nleftrightarrow\\}" }, { "math_id": 23, "text": "\\{\\land, \\nleftrightarrow, \\top\\}" }, { "math_id": 24, "text": "\\land, \\lor" }, { "math_id": 25, "text": "a\\land(a\\lor b)=a\\lor(a\\land b)=a" }, { "math_id": 26, "text": "\\vee, \\wedge, \\top, \\bot" }, { "math_id": 27, "text": "\\neg, \\leftrightarrow" }, { "math_id": 28, "text": "\\not\\leftrightarrow, \\top, \\bot" }, { "math_id": 29, "text": "\\neg" }, { "math_id": 30, "text": "\\vee, \\wedge, \\top, \\rightarrow, \\leftrightarrow, \\subset" }, { "math_id": 31, "text": "\\vee, \\wedge, \\nleftrightarrow, \\bot, \\not\\subset, \\not\\supset" }, { "math_id": 32, "text": "2^{2^n}" }, { "math_id": 33, "text": "3^{3^n}" }, { "math_id": 34, "text": "k^k" }, { "math_id": 35, "text": "k^{k^2}" }, { "math_id": 36, "text": "k^{k^3}" }, { "math_id": 37, "text": "k^{k^n}" }, { "math_id": 38, "text": "\\mathbb{Z}_k^n \\to \\mathbb{Z}_k" }, { "math_id": 39, "text": "|\\mathbb{Z}_k|^{|\\mathbb{Z}_k^n|} = k^{k^n}" }, { "math_id": 40, "text": "\\binom{3}{2}\\cdot 16 - \\binom{3}{1}\\cdot 4 + \\binom{3}{0}\\cdot 2" }, { "math_id": 41, "text": "f(x,y,z)=\\lnot x" }, { "math_id": 42, "text": "\\Omega = \\Omega_0 \\cup \\Omega_1 \\cup \\ldots \\cup \\Omega_j \\cup \\ldots \\cup \\Omega_m \\,." }, { "math_id": 43, "text": "\\Omega_j" }, { "math_id": 44, "text": "\\Omega" }, { "math_id": 45, "text": "\\Omega_0 = \\{\\bot, \\top \\} " }, { "math_id": 46, "text": "\\Omega_1 = \\{ \\lnot \\} " }, { "math_id": 47, "text": "\\Omega_2 \\supset \\{ \\land, \\lor, \\rightarrow, \\leftrightarrow \\} " } ]
https://en.wikipedia.org/wiki?curid=604707
60470875
Loop-gap resonator
Type of electromagnetic resonator A loop-gap resonator (LGR) is an electromagnetic resonator that operates in the radio and microwave frequency ranges. The simplest LGRs are made from a conducting tube with a narrow slit cut along its length. The LGR dimensions are typically much smaller than the free-space wavelength of the electromagnetic fields at the resonant frequency. Therefore, relatively compact LGRs can be designed to operate at frequencies that are too low to be accessed using, for example, cavity resonators. These structures can have very sharp resonances (high quality factors) making them useful for electron spin resonance (ESR) experiments, and precision measurements of electromagnetic material properties (permittivity and permeability). Background. Loop-gap resonators (LGRs) can be modelled as lumped-element circuits. The slit along the length of the resonator has an effective capacitance formula_1 and the bore of the resonator has effective inductance formula_2. At, or near, the resonance frequency, a circumferential current is established along the inner wall of the resonator. The effective resistance formula_3 that limits this current is, in part, determined by the resistivity formula_4 and electromagnetic skin depth formula_5 of the conductor used to make the LGR. It is, therefore, possible to model the LGR as an formula_6 circuit. Since the LGR current is a maximum at the resonant frequency, the equivalent circuit model is a series formula_6 circuit. This circuit model works well provided the dimensions of the resonator remain small compared to the free-space wavelength of the electromagnetic fields. One advantage of the LGR is that it produces regions of uniform electric and magnetic fields that are isolated from one another. A uniform electric field exists within the slit of the LGR and a uniform magnetic field exists within the bore of the resonator. The uniform magnetic field makes the LGR a good source of microwave magnetic fields in ESR experiments. Furthermore, because the electric and magnetic fields are isolated from one another, one can use the LGR to independently probe the electric and magnetic properties of materials. For example, if the gap of the LGR is filled with a dielectric material, the effective capacitance of the LGR will be modified which will change the frequency formula_7 and quality factor formula_8 of the resonance. Measurements of the changes in formula_7 and formula_8 can be used to fully determine the complex permittivity of the dielectric material. Likewise, if the bore of the LGR is filled with a magnetic material, the effective inductance of the LGR will be modified and the resulting changes in formula_7 and formula_8 can be used to extract the complex permeability of the magnetic material. Resonant Frequency and Quality Factor. Resonance frequency. The capacitance of the gap of the LGR is given by formula_9 where formula_10 is the permittivity of free space, formula_11 is the thickness of the bore wall, formula_12 is the gap width, and formula_0 is the length of the resonator. The resonator bore acts as a single-turn solenoid with inductance given by formula_13 where formula_14 is the permeability of free space and formula_15 is the inner radius of the LGR bore. For a high-formula_8 resonator, the resonant frequency is, to an approximation, given by formula_16 where formula_17 is the vacuum speed of light. 
Therefore, the resonant frequency of the LGR is determined from its geometry and is, to first approximation, independent of its length. Quality factor. For a highly underdamped series formula_6 circuit, the quality factor, which determines the sharpness of the resonance, is given by formula_18 The effective resistance of a LGR can be estimated by considering the length of conductor through which the current travels and the cross-sectional area available to it. The relevant conductor length is the circumference formula_19 of the conductor's inner surface. The depth that the current penetrates into the inner surface of the LGR bore is determined by the electromagnetic skin depth formula_5. Therefore, the cross-sectional area through which charge flows is formula_20. Combining these results gives an effective resistance formula_21 where formula_4 is the resistivity of the conductor. The effective capacitance, inductance, and resistance then lead to a simple expression for the expected quality factor of the LGR formula_22 where, for a good conductor, the electromagnetic skin depth at the resonance frequency is given by formula_23 and formula_24. For an aluminum resonator with formula_25 and formula_26 the above analysis predicts formula_27. Radiative losses. In practice, the measured quality factor a cylindrical LGR, without additional electromagnetic shielding, will be much less than the predicted value of formula_28. The suppression of the quality factor is due to radiative power loss from magnetic field lines that extend out of LGR bore and into free space. An order-of-magnitude estimate of the effective radiation resistance can be made by treating the LGR as a conducting loop. In the limit that the wavelength of the radiation is much larger than the loop radius formula_15, the radiation resistance is formula_29 and can be much larger than the resistance formula_30 due to the resistivity of the LGR conductor. The radiative losses can be suppressed by placing the LGR inside a circular waveguide. Provided that the cutoff frequency of the lowest TE11 waveguide mode is well above the resonant frequency of the LGR, the magnetic field lines will be prevented from propagating into free space. The presence of the electromagnetic shield will alter the resonant frequency and quality factor of the LGR, but typically by only a few percent. Toroidal LGR. In some applications requiring high quality factors, the electromagnetic shielding provided by a concentric circular waveguide surrounding a cylindrical LGR can be bulky and awkward to work. A toroidal LGR can be used for high-formula_8 measurements without requiring additional electromagnetic shielding. In the toroidal geometry the two ends of a cylindrical LGR are joined to form a completely closed structure. In this case, the magnetic field is completely confined within the bore of the resonator and there is no radiative power loss. The toroidal LGR consists of two halves that are bolted together along the outer diameter of the structure. Like the cylindrical LGR, the toroidal LGR can be modelled as a series formula_6 circuit. In general, the effective capacitance, inductance, and resistance of the toroidal LGR will differ from the expressions given above for the cylindrical LGR. However, in limit that the radius of the torus is large compared to the bore radius formula_15, the capacitance, inductance, and resistance of the toroidal LGR are approximated by the expressions above if one takes formula_0 to be equal to the circumference of the torus. 
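A small numerical sketch of the cylindrical-LGR expressions above. The gap dimensions and the room-temperature resistivity of aluminum used here are assumed values, chosen so that the geometry lands near the r0 = 1 cm, f0 ≈ 1 GHz example quoted in the text.

import math

mu_0, eps_0 = 4e-7 * math.pi, 8.8541878128e-12

# Assumed geometry and material (illustrative, picked to sit near the worked example):
r0   = 1.0e-2    # bore radius (m)
ell  = 2.0e-2    # resonator length (m)
w    = 5.0e-3    # bore-wall thickness (m)
t    = 0.69e-3   # gap width (m)
rho  = 2.65e-8   # assumed room-temperature resistivity of aluminum (ohm m)

C  = eps_0 * w * ell / t              # gap capacitance
L  = mu_0 * math.pi * r0**2 / ell     # bore (single-turn) inductance
f0 = 1 / (2 * math.pi * math.sqrt(L * C))
omega0 = 2 * math.pi * f0

delta = math.sqrt(2 * rho / (mu_0 * omega0))   # skin depth at resonance
Q = r0 / delta                                  # conductor-loss-limited quality factor

print(f"f0 = {f0/1e9:.2f} GHz, skin depth = {delta*1e6:.2f} um, Q ~ {Q:.0f}")
# ~1.0 GHz and Q ~ 3.9e3, consistent with the aluminum estimate quoted above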
The toroidal LGR is particularly convenient when characterizing the electromagnetic properties of liquid samples or particles suspended in a liquid. In these cases, the bore of the toroidal LGR can be partially filled with the liquid sample without requiring a special sample holder. This setup allows one to characterize the magnetic properties of, for example, a ferrofluid. Alternatively, if the liquid sample is nonmagnetic, the entire toroidal LGR can be submerged in the liquid (or gas). In this case, the dielectric properties of the sample only modify the effective capacitance of the resonator and the changes in formula_7 and formula_8 can be used to determine the complex permittivity of the sample. Coupling to a LGR. Inductive coupling loops are typically used to couple magnetic flux into and out of the LGR. The coupling loops are made by first removing a length of outer conductor and dielectric from a semi-rigid coaxial cable. The exposed centre conductor is then bent into a loop and short-circuited to the outer conductor. The opposite end of the coaxial cable is connected to either a signal generator or a receiver. In the case of a signal generator, an oscillating current is established in the coupling loop. By Faraday's law of induction, this current creates and oscillating magnetic flux which can be coupled into the bore of the LGR. This magnetic flux, in turn, induces circumferential currents along the inner wall of the LGR. The induced current, once again by Faraday's law, creates an approximately uniform oscillating magnetic field in the bore of the LGR. A second coupling loop, connected to a receiver, can be used to detect the magnetic flux produced by the LGR. Alternatively, using a vector network analyzer (VNA), a single coupling loop can be used to both inject a signal into the LGR and measure its response. The VNA can measure the ratio of the forward and reflected voltages (formula_31, or reflection coefficient) as a function of microwave frequency. Far away from resonance, the magnitude of the reflection coefficient will be close to one since very little power is coupled into the LGR at these frequencies. However, near the resonance frequency formula_7, the magnitude of the reflection coefficient will fall below one as power is transferred into the LGR. The coupling between the external circuits and the LGR can be tuned by adjusting the relative positions and orientations of the coupling loop and LGR. At critical coupling, impedance matching is achieved and the reflection coefficient approaches zero. It is also possible to capacitively couple electric fields into and out of the gap of the LGR using suitably-fashioned electrodes at the end of a coaxial cable. Multi-Loop, Multi-Gap LGRs. Multi-loop, multi-gap LGRs have also been developed. The simplest of these is the two-loop, one-gap LGR. In this case, magnetic field lines form closed loops by passing through each of the bores of the LGR and the currents on the inner walls propagate in opposite directions - clockwise in one bore and counterclockwise in the other. The equivalent circuit, neglecting losses, is a parallel combination of inductors formula_2 and formula_32 in series with capacitance formula_1. If formula_33, then the resonant frequency of the two-loop, one-gap LGR is formula_34 times greater than that of the conventional one-loop, one-gap LGR having the same bore and gap dimensions. 
It is also worth noting that, since magnetic field lines pass from one bore to the other, radiative power losses are strongly suppressed and the resonator maintains a high quality factor without requiring additional electromagnetic shielding. The multi-loop, multi-gap LGRs with more than two loops have more than one resonant mode. If the central bore is singled out as having inductance formula_32, then one of the resonant modes is one in which all of the magnetic flux from each of the external loops of inductance formula_2 is shared with the central loop. For this mode, the resonant frequency of an formula_35-loop, formula_36-gap LGR is given by formula_37 where it has been assumed that all loops have the same inductance formula_2. LGRs and superconductivity. Loop-gap resonators have been used to make precise measurements of the electrodynamic properties of unconventional superconductors. Most notably, a LGR was used to reveal the linear temperature dependence of the magnetic penetration depth, characteristic of a d-wave superconductor, in a single crystal of YBa2Cu3O6.95. In these experiments, a superconducting sample is placed inside the bore of a LGR. The diamagnetic response of the superconductor alters in the inductance of the LGR and, therefore, its resonant frequency. As described below, tracking the change in the resonant frequency as the temperature of the sample is changed allows one to deduce the temperature dependence of the magnetic penetration depth. Theory. The inductance of the LGR can be expressed as formula_39, where formula_40 is the volume of the LGR bore. Since the resonant frequency formula_7 of the LGR is proportional to formula_41, a small change in the effective volume of the resonator bore will result in a change in the resonant frequency given by formula_42 Due to the Meissner effect, when a superconducting sample is place in the bore of a LGR, the magnetic flux is expelled from the interior of the sample to within a penetration depth formula_38 of its surface. Therefore, the effective volume of the resonator bore is reduced by an amount equal to the volume from which the magnetic flux has been excluded. This excluded volume is given by formula_43 where formula_44, formula_45, and formula_46 are the sample dimensions along the three crystallographic directions and formula_47 is the sample volume formula_48. In the above expression, it has been assumed that the microwave magnetic field is applied parallel to the formula_45-axis of the sample. Since the presence of the superconductor reduces the LGR volume, formula_49 and formula_50 Solving this expression for the formula_44-axis penetration depth yields formula_51 Generally, it is not possible to use LGR frequency-shift measurements to determine the absolute value of the penetration depth because it would require knowing the sample thickness formula_46 very precisely. For example, in fully doped YBa2Cu3O7, formula_52 at low temperature. Therefore, to use the LGR measurement to determine formula_53 to within 10%, one would have to know the value of formula_46 with an accuracy of formula_54 which is typically not possible. Instead, the strategy is to track the changes in frequency as the sample temperature varies (while keeping the LGR at a fixed temperature). The absolute penetration depth can be expressed as formula_55 where formula_56 is temperature, formula_57 is the experimental base temperature, and formula_58 is the change in penetration depth as the sample temperature is increased above the base temperature. 
One can, therefore, express the change in penetration depth as formula_59 Finally, defining formula_60, one has formula_61 This final expression shows how the LGR shifts in resonant frequency can be used to determine the temperature dependence of the magnetic penetration depth in a superconducting sample. Experimental details. In a d-wave superconductor, the penetration depth typically changes by a few ångströms per degree kelvin, which corresponds to formula_62 for a formula_63 platelet sample in a LGR with a bore volume of formula_64. Measuring such small changes in relative frequency requires an extremely high-formula_8 resonator. The ultrahigh quality factors are obtained by coating the LGR surfaces with a superconducting material, such as a lead-tin alloy. The resonator is then cooled below the superconducting transition temperature of the coating using a bath of superfluid liquid helium. Quality factors of formula_65 have been achieved using copper LGRs coated with lead-tin and cooled to formula_66. Measuring permittivity and permeability. This section describes how LGRs can be used to determine the electromagnetic properties of materials. When there are no materials filling either the gap or bore of the resonator, the impedance formula_67 of the LGR can be expressed as formula_68 where formula_69. Re-expressed in terms of the resonant frequency formula_70 and quality factor formula_8, the impedance is given by formula_71 A measurement of the frequency dependence of the impedance of an empty LGR can be used to determine formula_70 and formula_8. The impedance measurement is most easily done using the vector network analyzer (VNA) to measure the reflection coefficient formula_31 from an inductively-coupled LGR. The impedance and reflection coefficient are related by formula_72 where formula_73 is the output impedance of the VNA (usually, formula_74). Complex permittivity. Now suppose that the gap of resonator has been completely filled with a dielectric material that has complex relative permittivity formula_75. In this case, the effective capacitance becomes formula_76 and the impedance of the LGR is given by formula_77 Separating the real and imaginary terms leads to formula_78 This expression shows that a nonzero formula_79 enhances the effective resistance of the LGR and, therefore, lowers its quality factor. A nonzero formula_80, on the other hand, alters the imaginary part of the impedance and modifies the resonant frequency. Written in terms of the empty-resonator resonant frequency and quality factor, the above impedance can be expressed as formula_81 Provided that formula_70 and formula_8 are known before hand, a measurement of the frequency dependence of formula_82 can be used to determine formula_83 and formula_79 of the material filling the gap of the LGR. This analysis gives the values of formula_83 and formula_79 at the resonant frequency of the filled LGR. Complex permeability. Next, suppose that the bore of a LGR is filled with a magnetic material have complex relative permeability formula_84. In this case, the effective inductance becomes formula_85 and the impedance of the LGR is given by formula_86 Separating formula_87 into its real and imaginary components and writing the impedance in terms of formula_70 and formula_8 of the empty LGR yields formula_88 Once again, formula_89 contributes additional dissipation which lowers the quality factor of the filled resonator and formula_90 shifts the resonant frequency. 
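As a sketch of how the real and imaginary parts of the permeability show up in such a measurement, the snippet below evaluates the normalized filled-resonator impedance expression above for an illustrative filling and locates the shifted resonance; the empty-resonator parameters and the permeability values are made-up numbers, not data from the article.

import numpy as np

# Normalized impedance of a LGR whose bore is filled with a magnetic material:
# Z_mu/(Q R) = [1/Q + mu2*(w/w0)] + 1j*[mu1*(w/w0) - (w0/w)]  (expression above).
w0, Q = 2 * np.pi * 1.0e9, 3000.0    # assumed empty-resonator parameters
mu1, mu2 = 1.05, 2.0e-3              # mu' and mu'' of the hypothetical filling

def z_norm(w):
    return (1/Q + mu2 * w / w0) + 1j * (mu1 * w / w0 - w0 / w)

w = np.linspace(0.9 * w0, 1.1 * w0, 200001)
z = z_norm(w)
w_res = w[np.argmin(np.abs(z.imag))]          # shifted resonance: ~ w0 / sqrt(mu')
print(w_res / w0, 1 / np.sqrt(mu1))           # both ~ 0.976: mu' lowers the resonance
print(z_norm(w_res).real, 1 / Q)              # mu'' raises the loss term above 1/Q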
A measurement of the frequency dependence of formula_87 can be used to extract the values of formula_90 and formula_89 at the resonant frequency of the filled LGR. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ell" }, { "math_id": 1, "text": "C" }, { "math_id": 2, "text": "L" }, { "math_id": 3, "text": "R" }, { "math_id": 4, "text": "\\rho" }, { "math_id": 5, "text": "\\delta" }, { "math_id": 6, "text": "LRC" }, { "math_id": 7, "text": "f_0" }, { "math_id": 8, "text": "Q" }, { "math_id": 9, "text": " C = \\varepsilon_0 \\frac{w\\, \\ell}{t} \\,," }, { "math_id": 10, "text": "\\varepsilon_0" }, { "math_id": 11, "text": "w" }, { "math_id": 12, "text": "t" }, { "math_id": 13, "text": " L = \\mu_0 \\frac{\\pi \\, r_0^2}{\\ell} \\,," }, { "math_id": 14, "text": "\\mu_0" }, { "math_id": 15, "text": "r_0" }, { "math_id": 16, "text": " f_0 \\approx \\frac{1}{2\\pi}\\frac{1}{\\sqrt{LC}}=\\frac{c}{2\\pi r_0}\\sqrt{\\frac{t}{\\pi w}}\\,," }, { "math_id": 17, "text": "c=1/\\sqrt{\\varepsilon_0\\mu_0}" }, { "math_id": 18, "text": " Q \\approx \\frac{1}{R}\\sqrt{\\frac{L}{C}}\\,." }, { "math_id": 19, "text": "2\\pi r_0" }, { "math_id": 20, "text": "\\delta\\,\\ell" }, { "math_id": 21, "text": " R_\\rho \\approx \\rho\\frac{2\\pi r_0}{\\delta\\,\\ell}\\,," }, { "math_id": 22, "text": " Q \\approx \\frac{r_0}{\\delta}\\,," }, { "math_id": 23, "text": " \\delta \\approx \\sqrt{\\frac{2\\rho}{\\mu_0\\omega_0}}\\,," }, { "math_id": 24, "text": "\\omega_0=2\\pi f_0" }, { "math_id": 25, "text": "r_0 =1\\,\\mathrm{cm}" }, { "math_id": 26, "text": "f_0=1\\,\\mathrm{GHz}" }, { "math_id": 27, "text": "Q\\approx 3900" }, { "math_id": 28, "text": "r_0/\\delta" }, { "math_id": 29, "text": " R_\\mathrm{r} \\approx \\frac{\\pi}{6}\\sqrt{\\frac{\\mu_0}{\\varepsilon_0}}\\left(\\frac{\\omega_0 r_0}{c}\\right)^4\\,," }, { "math_id": 30, "text": "R_\\rho" }, { "math_id": 31, "text": "S_{11}" }, { "math_id": 32, "text": "L_0" }, { "math_id": 33, "text": "L=L_0" }, { "math_id": 34, "text": "\\sqrt{2}" }, { "math_id": 35, "text": "n" }, { "math_id": 36, "text": "(n-1)" }, { "math_id": 37, "text": " f_0 \\approx \\frac{1}{2\\pi}\\sqrt{\\frac{n}{LC}}\\,," }, { "math_id": 38, "text": "\\lambda" }, { "math_id": 39, "text": "L=\\mu_0 V_\\mathrm{r}/\\ell^2" }, { "math_id": 40, "text": "V_\\mathrm{r}" }, { "math_id": 41, "text": "L^{-1/2}" }, { "math_id": 42, "text": " \\delta f_0 = -\\frac{f_0}{2} \\frac{\\delta V_\\mathrm{r}}{V_\\mathrm{r}}\\,." }, { "math_id": 43, "text": " V_\\mathrm{ex} \\approx ab\\left(c-2\\lambda_a\\right)\\,," }, { "math_id": 44, "text": "a" }, { "math_id": 45, "text": "b" }, { "math_id": 46, "text": "c" }, { "math_id": 47, "text": "abc" }, { "math_id": 48, "text": "V_\\mathrm{s}" }, { "math_id": 49, "text": "\\delta V_\\mathrm{r}=-V_\\mathrm{ex}" }, { "math_id": 50, "text": " \\delta f_0 = \\frac{f_0}{2 V_\\mathrm{r}} \\left(V_\\mathrm{s}-2ab\\lambda_a\\right)\\,." }, { "math_id": 51, "text": " \\lambda_a = \\frac{c}{2}-\\frac{V_\\mathrm{r}}{ab}\\frac{\\delta f_0}{f_0}\\,." }, { "math_id": 52, "text": "\\lambda_a\\approx 100~\\mathrm{nm}" }, { "math_id": 53, "text": "\\lambda_a" }, { "math_id": 54, "text": "10~\\mathrm{nm}" }, { "math_id": 55, "text": " \\lambda_a(T) = \\lambda_a(T_0)+\\Delta\\lambda_a(T)\\,," }, { "math_id": 56, "text": "T" }, { "math_id": 57, "text": "T_0" }, { "math_id": 58, "text": "\\Delta\\lambda_a(T)" }, { "math_id": 59, "text": " \\Delta\\lambda_a(T) = \\lambda_a(T)-\\lambda_a(T_0)=-\\frac{V_\\mathrm{r}}{ab}\\frac{1}{f_0}\\left[\\delta f_0(T)-\\delta f_0(T_0)\\right]\\,." }, { "math_id": 60, "text": "\\Delta f_0(T)=\\delta f_0(T)-\\delta f_0(T_0)" }, { "math_id": 61, "text": " \\Delta\\lambda_a(T) = -\\frac{V_\\mathrm{r}}{ab}\\frac{\\Delta f_0(T)}{f_0}\\,." 
}, { "math_id": 62, "text": "\\Delta f_0/f_0\\sim 10^{-10}" }, { "math_id": 63, "text": "1~\\mathrm{mm}^2" }, { "math_id": 64, "text": "1~\\mathrm{cm}^3" }, { "math_id": 65, "text": "10^6" }, { "math_id": 66, "text": "1~\\mathrm{K}" }, { "math_id": 67, "text": "Z" }, { "math_id": 68, "text": " Z = R+j\\omega L +\\frac{1}{j\\omega C}\\,," }, { "math_id": 69, "text": "j=\\sqrt{-1}" }, { "math_id": 70, "text": "\\omega_0" }, { "math_id": 71, "text": " \\frac{Z}{QR} = \\frac{1}{Q}+j\\left(\\frac{\\omega}{\\omega_0} - \\frac{\\omega_0}{\\omega}\\right)\\,." }, { "math_id": 72, "text": " S_{11} = \\frac{Z_0-Z}{Z_0+Z}\\,," }, { "math_id": 73, "text": "Z_0" }, { "math_id": 74, "text": "Z_0=50~\\Omega)" }, { "math_id": 75, "text": "\\varepsilon_\\mathrm{r}=\\varepsilon^\\prime-j\\varepsilon^{\\prime\\prime}" }, { "math_id": 76, "text": "\\varepsilon_\\mathrm{r}C" }, { "math_id": 77, "text": " Z_\\varepsilon = R+j\\omega L +\\frac{1}{j\\omega \\left(\\varepsilon^\\prime-j\\varepsilon^{\\prime\\prime}\\right)C}\\,." }, { "math_id": 78, "text": " Z_\\varepsilon = \\left[R+\\left(\\frac{\\varepsilon^{\\prime\\prime}}{\\left(\\varepsilon^\\prime\\right)^2+\\left(\\varepsilon^{\\prime\\prime}\\right)^2}\\right)\\frac{1}{\\omega C}\\right] +j\\left[\\omega L-\\left(\\frac{\\varepsilon^{\\prime}}{\\left(\\varepsilon^\\prime\\right)^2+\\left(\\varepsilon^{\\prime\\prime}\\right)^2}\\right)\\frac{1}{\\omega C}\\right]\\,." }, { "math_id": 79, "text": "\\varepsilon^{\\prime\\prime}" }, { "math_id": 80, "text": "\\varepsilon^{\\prime}" }, { "math_id": 81, "text": " \\frac{Z_\\varepsilon}{QR} = \\left[\\frac{1}{Q}+\\left(\\frac{\\varepsilon^{\\prime\\prime}}{\\left(\\varepsilon^\\prime\\right)^2+\\left(\\varepsilon^{\\prime\\prime}\\right)^2}\\right)\\frac{\\omega_0}{\\omega}\\right] +j\\left[\\frac{\\omega}{\\omega_0}-\\left(\\frac{\\varepsilon^{\\prime}}{\\left(\\varepsilon^\\prime\\right)^2+\\left(\\varepsilon^{\\prime\\prime}\\right)^2}\\right)\\frac{\\omega_0}{\\omega}\\right]\\,." }, { "math_id": 82, "text": "Z_\\varepsilon" }, { "math_id": 83, "text": "\\varepsilon^\\prime" }, { "math_id": 84, "text": "\\mu_\\mathrm{r}=\\mu^\\prime-j\\mu^{\\prime\\prime}" }, { "math_id": 85, "text": "\\mu_\\mathrm{r}L" }, { "math_id": 86, "text": " Z_\\mu = R+j\\omega \\left(\\mu^\\prime-j\\mu^{\\prime\\prime}\\right) L +\\frac{1}{j\\omega C}\\,." }, { "math_id": 87, "text": "Z_\\mu" }, { "math_id": 88, "text": " \\frac{Z_\\mu}{QR} = \\left[\\frac{1}{Q}+\\mu^{\\prime\\prime}\\frac{\\omega}{\\omega_0}\\right]+j \\left[\\mu^\\prime\\frac{\\omega}{\\omega_0}-\\frac{\\omega_0}{\\omega}\\right]\\,." }, { "math_id": 89, "text": "\\mu^{\\prime\\prime}" }, { "math_id": 90, "text": "\\mu^\\prime" } ]
https://en.wikipedia.org/wiki?curid=60470875
604798
Joule heating
Heat from a current in an electric conductor Joule heating (also known as resistive, resistance, or Ohmic heating) is the process by which the passage of an electric current through a conductor produces heat. Joule's first law (also just "Joule's law"), also known in countries of the former USSR as the Joule–Lenz law, states that the power of heating generated by an electrical conductor equals the product of its resistance and the square of the current. Joule heating affects the whole electric conductor, unlike the Peltier effect which transfers heat from one electrical junction to another. Joule heating or resistive heating is used in multiple devices and industrial processes. The part that converts electricity into heat is called a heating element. Among the many practical uses are electric stoves and heaters, incandescent light bulbs, electric fuses, and soldering irons. History. James Prescott Joule first published, in December 1840, an abstract in the "Proceedings of the Royal Society", suggesting that heat could be generated by an electrical current. Joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current flowing through the wire for a 30-minute period. By varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the immersed wire. In 1841 and 1842, subsequent experiments showed that the amount of heat generated was proportional to the chemical energy used in the voltaic pile that generated the current. This led Joule to reject the caloric theory (at that time the dominant theory) in favor of the mechanical theory of heat (according to which heat is another form of energy). Resistive heating was independently studied by Heinrich Lenz in 1842. The SI unit of energy was subsequently named the joule and given the symbol "J". The commonly known unit of power, the watt, is equivalent to one joule per second. Microscopic description. Joule heating is caused by interactions between charge carriers (usually electrons) and the body of the conductor. A potential difference (voltage) between two points of a conductor creates an electric field that accelerates charge carriers in the direction of the electric field, giving them kinetic energy. When the charged particles collide with the quasi-particles in the conductor (i.e. the canonically quantized, ionic lattice oscillations in the harmonic approximation of a crystal), energy is transferred from the electrons to the lattice (by the creation of further lattice oscillations). The oscillations of the ions are the origin of the radiation ("thermal energy") that one measures in a typical experiment. Power loss and noise. Joule heating is referred to as "ohmic heating" or "resistive heating" because of its relationship to Ohm's Law. It forms the basis for a large number of practical applications involving electric heating. However, in applications where heating is an unwanted by-product of current use (e.g., load losses in electrical transformers) the diversion of energy is often referred to as "resistive loss". The use of high voltages in electric power transmission systems is specifically designed to reduce such losses in cabling by operating with commensurately lower currents. The ring circuits, or ring mains, used in UK homes are another example, where power is delivered to outlets at lower currents (per wire, by using two paths in parallel), thus reducing Joule heating in the wires.
Joule heating does not occur in superconducting materials, as these materials have zero electrical resistance in the superconducting state. Resistors create electrical noise, called Johnson–Nyquist noise. There is an intimate relationship between Johnson–Nyquist noise and Joule heating, explained by the fluctuation-dissipation theorem. Formulas. Direct current. The most fundamental formula for Joule heating is the generalized power equation: formula_0 where formula_1 is the power (energy per unit time) converted from electrical energy into heat, formula_2 is the current, and formula_3 is the voltage drop across the element. The explanation of this formula (formula_4) is: ("Energy dissipated per unit time") = ("Charge passing through resistor per unit time") × ("Energy dissipated per charge passing through resistor") Assuming the element behaves as a perfect resistor and that the power is completely converted into heat, the formula can be re-written by substituting Ohm's law, formula_5, into the generalized power equation: formula_6 where "R" is the resistance. Voltage can be increased in DC circuits by connecting batteries or solar panels in series. Alternating current. When current varies, as it does in AC circuits, formula_7 where "t" is time and "P" is the instantaneous active power being converted from electrical energy to heat. Far more often, the "average" power is of more interest than the instantaneous power: formula_8 where "avg" denotes average (mean) over one or more cycles, and "rms" denotes root mean square. These formulas are valid for an ideal resistor, with zero reactance. If the reactance is nonzero, the formulas are modified: formula_9 where formula_10 is the phase difference between current and voltage, formula_11 means real part, "Z" is the complex impedance, and "Y*" is the complex conjugate of the admittance (equal to 1/"Z*"). For more details in the reactive case, see AC power. Differential form. Joule heating can also be calculated at a particular location in space. The differential form of the Joule heating equation gives the power per unit volume. formula_12 Here, formula_13 is the current density, and formula_14 is the electric field. For a material with a conductivity formula_15, formula_16 and therefore formula_17 where formula_18 is the resistivity. This directly resembles the "formula_19" term of the macroscopic form. In the harmonic case, where all field quantities vary with the angular frequency formula_20 as formula_21, complex-valued phasors formula_22 and formula_23 are usually introduced for the current density and the electric field intensity, respectively. The Joule heating then reads formula_24 where formula_25 denotes the complex conjugate. Electricity transmission. Overhead power lines transfer electrical energy from electricity producers to consumers. Those power lines have a nonzero resistance and therefore are subject to Joule heating, which causes transmission losses. The split of power between transmission losses (Joule heating in transmission lines) and load (useful energy delivered to the consumer) can be approximated by a voltage divider. In order to minimize transmission losses, the resistance of the lines has to be as small as possible compared to the load (resistance of consumer appliances). Line resistance is minimized by the use of copper conductors, but the resistance and power supply specifications of consumer appliances are fixed. Usually, a transformer is placed between the lines and consumption.
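A short numerical illustration of the transmission-loss argument above, using the DC formulas P = IV and P = I²R; the line resistance, delivered power, and voltages are illustrative values only.

# Joule (I^2 R) loss in a transmission line delivering the same power at two voltages.
# All numbers are illustrative; the line is modeled as a simple series resistance.
P_load = 100e3      # power delivered to consumers (W)
R_line = 2.0        # total line resistance (ohm)

for V in (2.4e3, 24e3):            # distribution vs. higher transmission voltage
    I = P_load / V                 # current drawn for the same delivered power
    P_loss = I**2 * R_line         # Joule heating in the line
    print(f"V = {V/1e3:>5.1f} kV  I = {I:6.1f} A  line loss = {P_loss/1e3:7.3f} kW")
# Raising the voltage tenfold cuts the current tenfold and the I^2 R loss a hundredfold.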
When a high-voltage, low-intensity current in the primary circuit (before the transformer) is converted into a low-voltage, high-intensity current in the secondary circuit (after the transformer), the equivalent resistance of the secondary circuit becomes higher and transmission losses are reduced in proportion. During the war of currents, AC installations could use transformers to reduce line losses by Joule heating, at the cost of higher voltage in the transmission lines, compared to DC installations. Applications. Food processing. Joule heating is a flash pasteurization (also called "high-temperature short-time" (HTST)) aseptic process that runs an alternating current of 50–60 Hz through food. Heat is generated through the food's electrical resistance. As the product heats, electrical conductivity increases linearly. A higher electrical current frequency is best as it reduces oxidation and metallic contamination. This heating method is best for foods that contain particulates suspended in a weak salt-containing medium due to their high resistance properties. Heat is generated rapidly and uniformly in the liquid matrix as well as in particulates, producing a higher quality sterile product that is suitable for aseptic processing. Electrical energy is linearly translated to thermal energy as electrical conductivity increases, and this is the key process parameter that affects heating uniformity and heating rate. Ohmic heating is beneficial due to its ability to inactivate microorganisms through thermal and non-thermal cellular damage. This method can also inactivate antinutritional factors, thereby maintaining nutritional and sensory properties. However, ohmic heating is limited by viscosity, electrical conductivity, and fouling deposits. Although ohmic heating has not yet been approved by the Food and Drug Administration (FDA) for commercial use, this method has many potential applications, ranging from cooking to fermentation. There are different configurations for continuous ohmic heating systems, but in the most basic process, a power supply or generator is needed to produce electrical current. Electrodes, in direct contact with food, pass electric current through the matrix. The distance between the electrodes can be adjusted to achieve the optimum electrical field strength. The generator creates the electrical current which flows to the first electrode and passes through the food product placed in the electrode gap. The food product resists the flow of current, causing internal heating. The current continues to flow to the second electrode and back to the power source to close the circuit. The insulator caps around the electrodes control the environment within the system. The electrical field strength and the residence time are the key process parameters which affect heat generation. The ideal foods for ohmic heating are viscous with particulates. The efficiency by which electricity is converted to heat depends upon salt, water, and fat content due to their thermal conductivity and resistance factors. In particulate foods, the particles heat up faster than the liquid matrix due to higher resistance to electricity, and matching conductivity can contribute to uniform heating. This prevents overheating of the liquid matrix while particles receive sufficient heat processing.
Table 1 shows the electrical conductivity values of certain foods to display the effect of composition and salt concentration. The high electrical conductivity values represent a larger number of ionic compounds suspended in the product, which is directly proportional to the rate of heating. This value is increased in the presence of polar compounds, like acids and salts, but decreased with nonpolar compounds, like fats. Electrical conductivity of food materials generally increases with temperature, and can change if there are structural changes caused during heating, such as gelatinization of starch. Density, pH, and specific heat of various components in a food matrix can also influence heating rate. Benefits of ohmic heating include: uniform and rapid heating (>1 °C s−1), less cooking time, better energy efficiency, lower capital cost, and heating simultaneously throughout the food's volume as compared to aseptic processing, canning, and PEF. Volumetric heating allows internal heating instead of transferring heat from a secondary medium. This results in the production of safe, high quality food with minimal changes to structural, nutritional, and organoleptic properties of food. Heat transfer is uniform to reach areas of food that are harder to heat. Less fouling accumulates on the electrodes as compared to other heating methods. Ohmic heating also requires less cleaning and maintenance, resulting in an environmentally cautious heating method. Microbial inactivation in ohmic heating is achieved by both thermal and non-thermal cellular damage from the electrical field. This method destroys microorganisms due to electroporation of cell membranes, physical membrane rupture, and cell lysis. In electroporation, excessive leakage of ions and intramolecular components results in cell death. In membrane rupture, cells swell due to an increase in moisture diffusion across the cell membrane. Pronounced disruption and decomposition of cell walls and cytoplasmic membranes cause cells to lyse. Decreased processing times in ohmic heating maintain nutritional and sensory properties of foods. Ohmic heating inactivates antinutritional factors like lipoxygenase (LOX), polyphenoloxidase (PPO), and pectinase due to the removal of active metallic groups in enzymes by the electrical field. Similar to other heating methods, ohmic heating causes gelatinization of starches, melting of fats, and protein agglutination. Water-soluble nutrients are maintained in the suspension liquid, allowing for no loss of nutritional value if the liquid is consumed. Ohmic heating is limited by viscosity, electrical conductivity, and fouling deposits. The density of particles within the suspension liquid can limit the degree of processing. A higher-viscosity fluid will provide more resistance to heating, allowing the mixture to heat up more quickly than low-viscosity products. A food product's electrical conductivity is a function of temperature, frequency, and product composition. This may be increased by adding ionic compounds, or decreased by adding non-polar constituents. Changes in electrical conductivity limit ohmic heating as it is difficult to model the thermal process when temperature increases in multi-component foods. The potential applications of ohmic heating include cooking, thawing, blanching, peeling, evaporation, extraction, dehydration, and fermentation.
These allow for ohmic heating to pasteurize particulate foods for hot filling, pre-heat products prior to canning, and aseptically process ready-to-eat meals and refrigerated foods. Prospective examples are outlined in Table 2 as this food processing method has not been commercially approved by the FDA. Since there is currently insufficient data on electrical conductivities for solid foods, it is difficult to prove the high quality and safe process design for ohmic heating. Additionally, a successful 12D reduction for "C. botulinum" prevention has yet to be validated. Materials synthesis, recovery and processing. Flash joule heating (transient high-temperature electrothermal heating) has been used to synthesize allotropes of carbon, including graphene and diamond. Heating various solid carbon feedstocks (carbon black, coal, coffee grounds, etc.) to temperatures of ~3000 K for 10-150 milliseconds produces turbostratic graphene flakes. FJH has also been used to recover rare-earth elements used in modern electronics from industrial wastes. Beginning from a fluorinated carbon source, fluorinated activated carbon, fluorinated nanodiamond, concentric carbon (carbon shell around a nanodiamond core), and fluorinated flash graphene can be synthesized. Heating efficiency. Heat is not to be confused with internal energy or synonymously thermal energy. While intimately connected to heat, they are distinct physical quantities. As a heating technology, Joule heating has a coefficient of performance of 1.0, meaning that every joule of electrical energy supplied produces one joule of heat. In contrast, a heat pump can have a coefficient of more than 1.0 since it moves additional thermal energy from the environment to the heated item. The definition of the efficiency of a heating process requires defining the boundaries of the system to be considered. When heating a building, the overall efficiency is different when considering heating effect per unit of electric energy delivered on the customer's side of the meter, compared to the overall efficiency when also considering the losses in the power plant and transmission of power. Hydraulic equivalent. In the energy balance of groundwater flow a hydraulic equivalent of Joule's law is used: formula_26 where: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P = I (V_{A} - V_{B})" }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "I" }, { "math_id": 3, "text": "V_{A}-V_{B}" }, { "math_id": 4, "text": "P = IV" }, { "math_id": 5, "text": "V = I R " }, { "math_id": 6, "text": "P = IV = I^2R = V^2/R" }, { "math_id": 7, "text": "P(t) = U(t) I(t)" }, { "math_id": 8, "text": "P_{\\rm avg} = U_\\text{rms} I_\\text{rms} = (I_\\text{rms})^2 R = (U_\\text{rms})^2 / R" }, { "math_id": 9, "text": "P_{\\rm avg} = U_\\text{rms}I_\\text{rms}\\cos\\phi = (I_\\text{rms})^2 \\operatorname{Re}(Z) = (U_\\text{rms})^2 \\operatorname{Re}(Y^*)" }, { "math_id": 10, "text": "\\phi" }, { "math_id": 11, "text": "\\operatorname{Re}" }, { "math_id": 12, "text": "\\frac{\\mathrm{d}P}{\\mathrm{d}V} = \\mathbf{J} \\cdot \\mathbf{E}" }, { "math_id": 13, "text": "\\mathbf{J}" }, { "math_id": 14, "text": "\\mathbf{E}" }, { "math_id": 15, "text": "\\sigma" }, { "math_id": 16, "text": "\\mathbf{J}=\\sigma \\mathbf{E}" }, { "math_id": 17, "text": "\\frac{\\mathrm{d}P}{\\mathrm{d}V} = \\mathbf{J} \\cdot \\mathbf{E} = \\mathbf{J} \\cdot \\mathbf{J}\\frac{1}{\\sigma} = J^2\\rho" }, { "math_id": 18, "text": "\\rho = 1/\\sigma" }, { "math_id": 19, "text": "I^2R" }, { "math_id": 20, "text": "\\omega" }, { "math_id": 21, "text": "e^{-\\mathrm{i} \\omega t}" }, { "math_id": 22, "text": "\\hat\\mathbf{J}" }, { "math_id": 23, "text": "\\hat\\mathbf{E}" }, { "math_id": 24, "text": "\\frac{\\mathrm{d}P}{\\mathrm{d}V} = \\frac{1}{2}\\hat\\mathbf{J} \\cdot \\hat\\mathbf{E}^* = \\frac{1}{2}\\hat\\mathbf{J} \\cdot \\hat\\mathbf{J}^*/\\sigma = \\frac{1}{2}J^2\\rho," }, { "math_id": 25, "text": "\\bullet^*" }, { "math_id": 26, "text": " \\frac{dE}{dx} = \\frac{(v_x)^2}{K} " }, { "math_id": 27, "text": "dE/dt" }, { "math_id": 28, "text": "E" }, { "math_id": 29, "text": "x" }, { "math_id": 30, "text": "v_x" }, { "math_id": 31, "text": "K" }, { "math_id": 32, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=604798
6048617
Mössbauer spectroscopy
Spectroscopic technique Mössbauer spectroscopy is a spectroscopic technique based on the Mössbauer effect. This effect, discovered by Rudolf Mössbauer (sometimes written "Moessbauer", German: "Mößbauer") in 1958, consists of the nearly recoil-free emission and absorption of nuclear gamma rays in solids. The consequent nuclear spectroscopy method is exquisitely sensitive to small changes in the chemical environment of certain nuclei. Typically, three types of nuclear interactions may be observed: the isomer shift due to differences in nearby electron densities (also called the chemical shift in older literature), quadrupole splitting due to atomic-scale electric field gradients; and magnetic splitting due to non-nuclear magnetic fields. Due to the high energy and extremely narrow line widths of nuclear gamma rays, Mössbauer spectroscopy is a highly sensitive technique in terms of energy (and hence frequency) resolution, capable of detecting changes of just a few parts in 1011. It is a method completely unrelated to nuclear magnetic resonance spectroscopy. Basic principle. Just as a gun recoils when a bullet is fired, conservation of momentum requires a nucleus (such as in a gas) to recoil during emission or absorption of a gamma ray. If a nucleus at rest "emits" a gamma ray, the energy of the gamma ray is slightly "less" than the natural energy of the transition, but in order for a nucleus at rest to "absorb" a gamma ray, the gamma ray's energy must be slightly "greater" than the natural energy, because in both cases energy is lost to recoil. This means that nuclear resonance (emission and absorption of the same gamma ray by identical nuclei) is unobservable with free nuclei, because the shift in energy is too great and the emission and absorption spectra have no significant overlap. Nuclei in a solid crystal, however, are not free to recoil because they are bound in place in the crystal lattice. When a nucleus in a solid emits or absorbs a gamma ray, some energy can still be lost as recoil energy, but in this case it always occurs in discrete packets called phonons (quantized vibrations of the crystal lattice). Any whole number of phonons can be emitted, including zero, which is known as a "recoil-free" event. In this case conservation of momentum is satisfied by the momentum of the crystal as a whole, so practically no energy is lost. Mössbauer found that a significant fraction of emission and absorption events will be recoil-free, which is quantified using the Lamb–Mössbauer factor. This fact is what makes Mössbauer spectroscopy possible, because it means that gamma rays emitted by one nucleus can be resonantly absorbed by a sample containing nuclei of the same isotope, and this absorption can be measured. The recoil fraction of the Mössbauer absorption is analyzed by nuclear resonance vibrational spectroscopy. Typical method. In its most common form, Mössbauer absorption spectroscopy, a solid sample is exposed to a beam of gamma radiation, and a detector measures the intensity of the beam transmitted through the sample. The atoms in the source emitting the gamma rays must be of the same isotope as the atoms in the sample absorbing them. If the emitting and absorbing nuclei were in identical chemical environments, the nuclear transition energies would be exactly equal and resonant absorption would be observed with both materials at rest. The difference in chemical environments, however, causes the nuclear energy levels to shift in a few different ways, as described below. 
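To make the orders of magnitude concrete, the following back-of-envelope sketch in Python estimates the free-nucleus recoil energy, the natural linewidth, and the Doppler shift produced by a source velocity of 1 mm/s. Standard textbook values for 57Fe are assumed (a 14.4 keV transition, a nuclear mass of about 57 u, an excited-state mean lifetime of about 141 ns); none of these figures appear at this point in the text. The recoil energy exceeds the linewidth by several orders of magnitude, which is why resonance requires recoil-free events, while velocities of millimetres per second are already enough to scan many linewidths.

# Energy scales for the 14.4 keV Moessbauer transition of 57Fe (assumed values).
E_GAMMA_EV = 14.4125e3        # gamma-ray energy (eV)
M_C2_EV = 57 * 931.494e6      # nuclear rest energy, ~57 u (eV)
TAU_S = 141e-9                # mean lifetime of the excited state (s)
HBAR_EV_S = 6.582e-16         # reduced Planck constant (eV s)
C_M_S = 2.998e8               # speed of light (m/s)

recoil_ev = E_GAMMA_EV**2 / (2 * M_C2_EV)        # free-nucleus recoil energy
linewidth_ev = HBAR_EV_S / TAU_S                 # natural linewidth of the transition
doppler_per_mm_s = (1e-3 / C_M_S) * E_GAMMA_EV   # energy shift for 1 mm/s source velocity

print(f"free-nucleus recoil energy : {recoil_ev*1e3:.2f} meV")
print(f"natural linewidth          : {linewidth_ev*1e9:.1f} neV")
print(f"recoil / linewidth         : {recoil_ev/linewidth_ev:.1e}")
print(f"Doppler shift at 1 mm/s    : {doppler_per_mm_s*1e9:.0f} neV")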
Although these energy shifts are tiny (often less than a micro-electronvolt), the extremely narrow spectral linewidths of gamma rays for some radionuclides make the small energy shifts correspond to large changes in absorbance. To bring the two nuclei back into resonance it is necessary to change the energy of the gamma ray slightly, and in practice this is always done using the Doppler shift. During Mössbauer absorption spectroscopy, the source is accelerated through a range of velocities using a linear motor to produce a Doppler effect and scan the gamma ray energy through a given range. A typical range of velocities for 57Fe, for example, may be ± ( In the resulting spectra, gamma ray intensity is plotted as a function of the source velocity. At velocities corresponding to the resonant energy levels of the sample, a fraction of the gamma rays are absorbed, resulting in a drop in the measured intensity and a corresponding dip in the spectrum. The number, positions, and intensities of the dips (also called peaks; dips in transmittance are peaks in absorbance) provide information about the chemical environment of the absorbing nuclei and can be used to characterize the sample. Selecting a suitable source. Suitable gamma-ray sources consist of a radioactive parent that decays to the desired isotope. For example, the source for 57Fe consists of 57Co, which decays by electron capture to an excited state of 57Fe, which in turn decays to a ground state via a series of gamma-ray emissions that include the one exhibiting the Mössbauer effect. The radioactive cobalt is prepared on a foil, often of rhodium. Ideally the parent isotope will have a convenient half-life. Also, the gamma-ray energy should be relatively low, otherwise the system will have a low recoil-free fraction resulting in a poor signal-to-noise ratio and requiring long collection times. The periodic table below indicates those elements having an isotope suitable for Mössbauer spectroscopy. Of these, 57Fe is by far the most common element studied using the technique, although 129I, 119Sn, and 121Sb are also frequently studied. Analysis of Mössbauer spectra. As described above, Mössbauer spectroscopy has an extremely fine energy resolution and can detect even subtle changes in the nuclear environment of the relevant atoms. Typically, there are three types of nuclear interactions that are observed: isomeric shift, quadrupole splitting, and hyperfine magnetic splitting. Isomer shift. Isomer shift (δ) (also sometimes called chemical shift, especially in the older literature) is a relative measure describing a shift in the resonance energy of a nucleus (see Fig. 2) due to the transition of electrons within its "s" orbitals. The whole spectrum is shifted in either a positive or negative direction depending upon the "s" electron charge density in the nucleus. This change arises due to alterations in the electrostatic response between the non-zero probability "s" orbital electrons and the non-zero volume nucleus they orbit. Only electrons in "s" orbitals have a non-zero probability of being found in the nucleus (see atomic orbitals). However, "p", "d", and "f" electrons may influence the "s" electron density through a screening effect. 
Isomer shift can be expressed using the formula below, where "K" is a nuclear constant, the difference between "R"e² and "R"g² is the difference in the effective nuclear charge radius between the excited state and the ground state, and the difference between [Ψs²(0)]a and [Ψs²(0)]b is the electron density difference at the nucleus (a = source, b = sample). The chemical isomer shift as described here does not change with temperature; however, Mössbauer spectra do have a temperature sensitivity due to a relativistic effect known as the second-order Doppler effect. Generally, the impact of this effect is small, and the IUPAC standard allows the isomer shift to be reported without correcting for it. formula_0 The physical meaning of this equation can be clarified using examples. The isomer shift is useful for determining oxidation state, valency states, electron shielding and the electron-drawing power of electronegative groups. Quadrupole splitting. Quadrupole splitting reflects the interaction between the nuclear energy levels and the surrounding electric field gradient (EFG). Nuclei in states with non-spherical charge distributions, i.e. all those with spin quantum number ("I") greater than 1/2, may have a nuclear quadrupole moment. In this case an asymmetrical electric field (produced by an asymmetric electronic charge distribution or ligand arrangement) splits the nuclear energy levels. In the case of an isotope with an "I" = 3/2 excited state, such as 57Fe or 119Sn, the excited state is split into two substates "m"I = ±1/2 and "m"I = ±3/2. The ground to excited state transitions appear as two specific peaks in a spectrum, sometimes referred to as a "doublet". Quadrupole splitting is measured as the separation between these two peaks and reflects the character of the electric field at the nucleus. The quadrupole splitting can be used for determining oxidation state, spin state, site symmetry, and the arrangement of ligands. Magnetic hyperfine splitting. Magnetic hyperfine splitting is a result of the interaction between the nucleus and a surrounding magnetic field (similar to the Zeeman effect in atomic spectra). A nucleus with spin "I" splits into 2"I" + 1 sub-energy levels in the presence of a magnetic field. For example, the first excited state of the 57Fe nucleus, with spin state "I" = 3/2, will split into 4 non-degenerate sub-states with "m"I values of +3/2, +1/2, −1/2 and −3/2. These equally spaced splittings are hyperfine, being on the order of 10−7 eV. The selection rule for magnetic dipole transitions means that transitions between the excited state and ground state can only occur where "m"I changes by 0, +1 or −1. This gives 6 possible transitions for a 3/2 to 1/2 transition. The extent of splitting is proportional to the magnetic field strength at the nucleus, which in turn depends on the electron distribution ("chemical environment") of the nucleus. The splitting can be measured, for instance, with a sample foil placed between an oscillating source and a photon detector (see Fig. 5), resulting in an absorption spectrum, as illustrated in Fig. 4. The magnetic field can be determined from the spacing between the peaks if the quantum "g"-factors of the nuclear states are known. In ferromagnetic materials, including many iron compounds, the natural internal magnetic fields are quite strong and their effects dominate the spectra.
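The number of lines and their spacing can be checked with a short calculation. The Python sketch below assumes standard literature values that are not stated at this point in the text: an internal hyperfine field of roughly 33 T for metallic iron at room temperature, 57Fe g-factors of about +0.181 (ground state) and −0.104 (excited state), a nuclear magneton of 3.152 × 10−8 eV/T and the 14.4 keV transition energy. It enumerates the sublevel combinations allowed by the selection rule and converts the energy shifts into source velocities; it finds six allowed lines at roughly ±0.8, ±3.1 and ±5.3 mm/s.

# Allowed magnetic-dipole transitions and line positions for the 57Fe sextet.
# All numerical constants below are assumed literature values, not taken from the text.
MU_N = 3.152e-8          # nuclear magneton (eV/T)
B_INT = 33.0             # internal hyperfine field of metallic iron (T)
E_GAMMA = 14.4125e3      # transition energy (eV)
C_MM_S = 2.998e11        # speed of light (mm/s)
G_GROUND, G_EXCITED = 0.1809, -0.1036

lines = []
for m_g in (-0.5, +0.5):                      # ground state, I = 1/2
    for m_e in (-1.5, -0.5, +0.5, +1.5):      # excited state, I = 3/2
        if abs(m_e - m_g) > 1:                # selection rule: delta m = 0, +1 or -1
            continue
        shift_ev = -MU_N * B_INT * (G_EXCITED * m_e - G_GROUND * m_g)
        lines.append(shift_ev / E_GAMMA * C_MM_S)   # convert the shift to a Doppler velocity

lines.sort()
print(f"{len(lines)} allowed lines (mm/s):", [round(v, 2) for v in lines])
print(f"outer line separation: {lines[-1] - lines[0]:.2f} mm/s")   # about 10.6 mm/s

Combination of all.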
The three Mössbauer parameters: isomer shift, quadrupole splitting, and hyperfine splitting can often be used to identify a particular compound by comparison to spectra for standards. In some cases, a compound may have more than one possible position for the Mössbauer active atom. For example, the crystal structure of magnetite (Fe3O4) supports two different sites for the iron atoms. Its spectrum has 12 peaks, a sextet for each potential atomic site, corresponding to two sets of Mössbauer parameters. Many times all effects are observed: isomer shift, quadrupole splitting, and magnetic splitting. In such cases the isomer shift is given by the average of all lines. The quadrupole splitting when all the four excited substates are equally shifted (two substates are lifted and other two are lowered) is given by the shift of the outer two lines relative to the inner four lines (all inner four lines shift in opposition to the outermost two lines). Usually fitting software is used for accurate values. In addition, the relative intensities of the various peaks reflect the relative concentrations of compounds in a sample and can be used for semi-quantitative analysis. Also, since ferromagnetic phenomena are size-dependent, in some cases spectra can provide insight into the crystallite size and grain structure of a material. Mössbauer emission spectroscopy. Mössbauer emission spectroscopy is a specialized variant of Mössbauer spectroscopy where the emitting element is in the probed sample, and the absorbing element is in the reference. Most commonly, the technique is applied to the 57Co/57Fe pair. A typical application is the characterization of the cobalt sites in amorphous Co-Mo catalysts used in hydrodesulfurization. In such a case, the sample is doped with 57Co. Applications. Among the drawbacks of the technique are the limited number of gamma ray sources and the requirement that samples be solid in order to eliminate the recoil of the nucleus. Mössbauer spectroscopy is unique in its sensitivity to subtle changes in the chemical environment of the nucleus including oxidation state changes, the effect of different ligands on a particular atom, and the magnetic environment of the sample. As an analytical tool Mössbauer spectroscopy has been especially useful in the field of geology for identifying the composition of iron-containing specimens including meteorites and Moon rocks. "In situ" data collection of Mössbauer spectra has also been carried out on iron rich rocks on Mars. In another application, Mössbauer spectroscopy is used to characterize phase transformations in iron catalysts, e.g., those used for Fischer–Tropsch synthesis. While initially consisting of hematite (Fe2O3), these catalysts transform into a mixture of magnetite (Fe3O4) and several iron carbides. The formation of carbides appears to improve catalytic activity, but it can also lead to the mechanical break-up and attrition of the catalyst particles, which can cause difficulties in the final separation of catalyst from reaction products. Mössbauer spectroscopy has also been used to determine the relative concentration change in the oxidation state of antimony (Sb) during the selective oxidation of olefins. During calcination, all the Sb ions in an antimony-containing tin dioxide catalyst transform into the +5 oxidation state. Following the catalytic reaction, almost all Sb ions revert from the +5 to the +3 oxidation state. 
A significant change in the chemical environment surrounding the antimony nucleus occurs during the oxidation state change, which can easily be monitored as an isomer shift in the Mössbauer spectrum. Because of its very high energy resolution, this technique has also been used to observe the second-order transverse Doppler effect predicted by the theory of relativity. Bioinorganic chemistry. Mössbauer spectroscopy has been widely applied to bioinorganic chemistry, especially for the study of iron-containing proteins and enzymes. Often the technique is used to determine the oxidation state of iron. Examples of prominent iron-containing biomolecules are iron-sulfur proteins, ferritin, and hemes including the cytochromes. These studies are often supplemented by analysis of related model complexes. An area of particular interest is the characterization of intermediates involved in oxygen activation by iron proteins. Vibrational spectra of 57Fe-enriched biomolecules can be acquired using nuclear resonance vibrational spectroscopy (NRVS), in which the sample is scanned through a range of synchrotron-generated X-rays, centered at the Mössbauer absorbance frequency. Stokes and anti-Stokes peaks in the spectrum correspond to low frequency vibrations, many below 600 cm−1 with some below 100 cm−1. Mössbauer spectrometers. A Mössbauer spectrometer is a device that performs Mössbauer spectroscopy, or a device that uses the Mössbauer effect to determine the chemical environment of Mössbauer nuclei present in the sample. It is formed by three main parts: a source that moves back and forth to generate a Doppler effect, a collimator that filters out non-parallel gamma rays, and a detector. A miniature Mössbauer spectrometer, named MIMOS II, was used by the two rovers in NASA's Mars Exploration Rover missions. 57Fe Mössbauer spectroscopy. The chemical isomer shift and quadrupole splitting are generally evaluated with respect to a reference material. For example, in iron compounds, the Mössbauer parameters are evaluated using iron foil (of a thickness less than 40 micrometers). The centroid of the six-line spectrum from metallic iron foil is −0.10 mm/s (for a Co/Rh source). All shifts in other iron compounds are computed relative to this −0.10 mm/s (at room temperature), i.e., in this case isomer shifts are relative to the Co/Rh source. In other words, the centre point of the Mössbauer spectrum is zero. The shift values may also be reported relative to 0.0 mm/s; here, shifts are relative to the iron foil. To calculate the outer line distance from the six-line iron spectrum: formula_1 where "c" is the speed of light, "B"int is the internal magnetic field of the metallic iron, "μ"N is the nuclear magneton, "E"γ is the excitation energy (14.412497(3) keV), "g"n is the ground state nuclear splitting factor (the ground-state moment divided by "I", where the nuclear spin "I" = 1⁄2) and "g" is the excited state splitting factor of 57Fe (−0.15532/"I", where "I" = 3⁄2). Substituting the above values gives a "V" of about 10.6 mm/s. Other values are sometimes used to reflect different qualities of iron foils. In all cases any change in "V" only affects the isomer shift and not the quadrupole splitting. As the IBAME, the authority for Mössbauer spectroscopy, does not specify a particular value, anything between 10.60 mm/s and 10.67 mm/s can be used.
For this reason it is highly recommended to provide the isomer shift values relative to the source used, not to the iron foil, mentioning the details of the source (centre of gravity of the folded spectrum). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{CS} = K\\left(\\langle R_e^2\\rangle - \\langle R_g^2\\rangle\\right)\\left([\\Psi_s^2(0)]_b - [\\Psi_s^2(0)]_a\\right)." }, { "math_id": 1, "text": "V=\\frac{c\\,B_\\text{int}\\,\\mu_{\\rm N}}{E_\\gamma}(3g_n^e+g_n)" } ]
https://en.wikipedia.org/wiki?curid=6048617
60487
Abscissa and ordinate
Horizontal and vertical axes/coordinate numbers of a 2D coordinate system or graph In common usage, the abscissa refers to the "x" coordinate and the ordinate refers to the "y" coordinate of a standard two-dimensional graph. The distance of a point from the "y" axis, scaled with the "x" axis, is called the abscissa or "x" coordinate of the point. The distance of a point from the "x" axis scaled with the "y" axis is called the ordinate or "y" coordinate of the point. For example, if ("x", "y") is an ordered pair in the Cartesian plane, then the first coordinate in the plane ("x") is called the abscissa, and the second coordinate ("y") is the ordinate. In mathematics, the abscissa (plural "abscissae" or "abscissas") and the ordinate are respectively the first and second coordinate of a point in a Cartesian coordinate system: abscissa formula_0-axis (horizontal) coordinate, ordinate formula_1-axis (vertical) coordinate. Usually these are the horizontal and vertical coordinates of a point in the plane, in the rectangular coordinate system. An ordered pair consists of two terms—the abscissa (horizontal, usually "x") and the ordinate (vertical, usually "y")—which define the location of a point in two-dimensional rectangular space: formula_2 The abscissa of a point is the signed measure of its projection on the primary axis, whose absolute value is the distance between the projection and the origin of the axis, and whose sign is given by the location of the projection relative to the origin (before: negative; after: positive). The ordinate of a point is the signed measure of its projection on the secondary axis, whose absolute value is the distance between the projection and the origin of the axis, and whose sign is given by the location of the projection relative to the origin (before: negative; after: positive). In three dimensions the third direction is sometimes referred to as the applicate. Etymology. Though the word "abscissa" (from Latin "linea abscissa", 'a line cut off') has been used at least since "De Practica Geometrie", published in 1220 by Fibonacci (Leonardo of Pisa), its use in its modern sense may be due to the Venetian mathematician Stefano degli Angeli in his work "Miscellaneum Hyperbolicum, et Parabolicum" of 1659. In his 1892 work "Vorlesungen über Geschichte der Mathematik" ("Lectures on the history of mathematics"), volume 2, the German historian of mathematics Moritz Cantor writes: At the same time it was presumably by [Stefano degli Angeli] that a word was introduced into the mathematical vocabulary for which especially in analytic geometry the future proved to have much in store. […] We know of no earlier use of the word "abscissa" in Latin original texts. Maybe the word appears in translations of the Apollonian conics, where [in] Book I, Chapter 20 there is mention of "ἀποτεμνομέναις", for which there would hardly be a more appropriate Latin word than "abscissa". The use of the word "ordinate" is related to the Latin phrase "linea ordinata applicata", 'line applied parallel'. In parametric equations. In a somewhat obsolete variant usage, the abscissa of a point may also refer to any number that describes the point's location along some path, e.g. the parameter of a parametric equation. Used in this way, the abscissa can be thought of as a coordinate-geometry analog to the independent variable in a mathematical model or experiment (with any ordinates filling a role analogous to dependent variables). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\equiv x" }, { "math_id": 1, "text": "\\equiv y" }, { "math_id": 2, "text": "(\\overbrace{x}^{\\displaystyle\\text{abscissa}}, \\overbrace{y}^{\\displaystyle\\text{ordinate}})." } ]
https://en.wikipedia.org/wiki?curid=60487
60490
Abstract interpretation
Approach to static program analysis In computer science, abstract interpretation is a theory of sound approximation of the semantics of computer programs, based on monotonic functions over ordered sets, especially lattices. It can be viewed as a partial execution of a computer program which gains information about its semantics (e.g., control-flow, data-flow) without performing all the calculations. Its main concrete application is formal static analysis, the automatic extraction of information about the possible executions of computer programs; such analyses have two main usages: Abstract interpretation was formalized by the French computer scientist working couple Patrick Cousot and Radhia Cousot in the late 1970s. Intuition. This section illustrates abstract interpretation by means of real-world, non-computing examples. Consider the people in a conference room. Assume a unique identifier for each person in the room, like a social security number in the United States. To prove that someone is not present, all one needs to do is see if their social security number is not on the list. Since two different people cannot have the same number, it is possible to prove or disprove the presence of a participant simply by looking up their number. However it is possible that only the names of attendees were registered. If the name of a person is not found in the list, we may safely conclude that that person was not present; but if it is, we cannot conclude definitely without further inquiries, due to the possibility of homonyms (for example, two people named John Smith). Note that this imprecise information will still be adequate for most purposes, because homonyms are rare in practice. However, in all rigor, we cannot say for sure that somebody was present in the room; all we can say is that they were "possibly" here. If the person we are looking up is a criminal, we will issue an "alarm"; but there is of course the possibility of issuing a "false alarm". Similar phenomena will occur in the analysis of programs. If we are only interested in some specific information, say, "was there a person of age formula_0 in the room?", keeping a list of all names and dates of births is unnecessary. We may safely and without loss of precision restrict ourselves to keeping a list of the participants' ages. If this is already too much to handle, we might keep only the age of the youngest, formula_1 and oldest person, formula_2. If the question is about an age strictly lower than formula_1 or strictly higher than formula_2, then we may safely respond that no such participant was present. Otherwise, we may only be able to say that we do not know. In the case of computing, concrete, precise information is in general not computable within finite time and memory (see Rice's theorem and the halting problem). Abstraction is used to allow for generalized answers to questions (for example, answering "maybe" to a yes/no question, meaning "yes or no", when we (an algorithm of abstract interpretation) cannot compute the precise answer with certainty); this simplifies the problems, making them amenable to automatic solutions. One crucial requirement is to add enough vagueness so as to make problems manageable while still retaining enough precision for answering the important questions (such as "might the program crash?"). Abstract interpretation of computer programs. Given a programming or specification language, abstract interpretation consists of giving several semantics linked by relations of abstraction. 
A semantics is a mathematical characterization of a possible behavior of the program. The most precise semantics, describing very closely the actual execution of the program, are called the "concrete semantics". For instance, the concrete semantics of an imperative programming language may associate to each program the set of execution traces it may produce – an execution trace being a sequence of possible consecutive states of the execution of the program; a state typically consists of the value of the program counter and the memory locations (globals, stack and heap). More abstract semantics are then derived; for instance, one may consider only the set of reachable states in the executions (which amounts to considering the last states in finite traces). The goal of static analysis is to derive a computable semantic interpretation at some point. For instance, one may choose to represent the state of a program manipulating integer variables by forgetting the actual values of the variables and only keeping their signs (+, − or 0). For some elementary operations, such as multiplication, such an abstraction does not lose any precision: to get the sign of a product, it is sufficient to know the sign of the operands. For some other operations, the abstraction may lose precision: for instance, it is impossible to know the sign of a sum whose operands are respectively positive and negative. Sometimes a loss of precision is necessary to make the semantics decidable (see Rice's theorem and the halting problem). In general, there is a compromise to be made between the precision of the analysis and its decidability (computability), or tractability (computational cost). In practice the abstractions that are defined are tailored to both the program properties one desires to analyze, and to the set of target programs. The first large scale automated analysis of computer programs with abstract interpretation was motivated by the accident that resulted in the destruction of the first flight of the Ariane 5 rocket in 1996. Formalization. Let formula_3 be an ordered set, called "concrete set", and let formula_4 be another ordered set, called "abstract set". These two sets are related to each other by defining total functions that map elements from one to the other. A function formula_5 is called an "abstraction function" if it maps an element formula_6 in the concrete set formula_3 to an element formula_7 in the abstract set formula_4. That is, element formula_7 in formula_4 is the "abstraction" of formula_6 in formula_3. A function formula_8 is called a "concretization function" if it maps an element formula_9 in the abstract set formula_4 to an element formula_10 in the concrete set formula_3. That is, element formula_10 in formula_3 is a "concretization" of formula_9 in formula_4. Let formula_11, formula_12, formula_13, and formula_14 be ordered sets. The concrete semantics formula_15 is a monotonic function from formula_11 to formula_12. A function formula_16 from formula_13 to formula_14 is said to be a "valid abstraction" of formula_15 if, for all formula_9 in formula_13, we have formula_17. Program semantics are generally described using fixed points in the presence of loops or recursive procedures. Suppose that formula_3 is a complete lattice and let formula_15 be a monotonic function from formula_3 into formula_3. Then, any formula_9 such that formula_18 is an abstraction of the least fixed-point of formula_15, which exists, according to the Knaster–Tarski theorem. 
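As an illustration of these definitions, here is a minimal Python sketch (my own encoding, not from the source) of the sign abstraction mentioned earlier: an abstraction function maps sets of integers to one of the signs, abstract multiplication is exact, and abstract addition of "+" and "−" must return "sign unknown". The spot-check at the end mirrors the soundness requirement stated above: every concrete result must be described by the abstract result.

# Sketch of the sign domain: abstract values "-", "0", "+" and "?" (sign unknown).
NEG, ZERO, POS, TOP = "-", "0", "+", "?"

def sign(v):
    return NEG if v < 0 else ZERO if v == 0 else POS

def alpha(values):
    """Abstraction: the common sign of a set of integers, or '?' if mixed."""
    signs = {sign(v) for v in values}
    return signs.pop() if len(signs) == 1 else TOP

def abs_mul(a, b):
    """Abstract multiplication is exact: the sign of a product is determined."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_add(a, b):
    """Abstract addition loses precision: '+' plus '-' has unknown sign."""
    if a == b or b == ZERO:
        return a
    if a == ZERO:
        return b
    return TOP

def covers(abstract, concrete_sign):
    return abstract == TOP or abstract == concrete_sign

# Soundness spot-check of abs_mul on a few concrete sample values.
samples = {NEG: [-3, -1], ZERO: [0], POS: [1, 4], TOP: [-3, 0, 4]}
for a, xs in samples.items():
    for b, ys in samples.items():
        for x in xs:
            for y in ys:
                assert covers(abs_mul(a, b), sign(x * y))

print(alpha({-5, -2}))     # '-' : a set of negatives abstracts to "negative"
print(abs_mul(NEG, NEG))   # '+' : exact
print(abs_add(POS, NEG))   # '?' : precision lost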
The difficulty is now to obtain such an formula_9. If formula_4 is of finite height, or at least verifies the ascending chain condition (all ascending sequences are ultimately stationary), then such an formula_9 may be obtained as the stationary limit of the ascending sequence formula_19 defined by induction as follows: formula_20 (the least element of formula_4) and formula_21. In other cases, it is still possible to obtain such an formula_9 through a (pair-)widening operator, defined as a binary operator formula_22 which satisfies the following conditions: formula_24 and formula_25 for all formula_6 and formula_23, and, for every ascending sequence formula_26, the sequence defined by formula_27 and formula_28 is ultimately stationary. Taking formula_29 in this iteration, the stationary limit is an formula_9 with the required property. In some cases, it is possible to define abstractions using Galois connections formula_30 where formula_5 is from formula_3 to formula_4 and formula_8 is from formula_4 to formula_3. This supposes the existence of best abstractions, which is not necessarily the case. For instance, if we abstract sets of couples formula_31 of real numbers by enclosing convex polyhedra, there is no optimal abstraction of the disc defined by formula_32. Examples of abstract domains. Numerical abstract domains. One can assign to each variable formula_6 available at a given program point an interval formula_33. A state assigning the value formula_34 to variable formula_6 will be a concretization of these intervals if, for all formula_6, we have formula_35. From the intervals formula_33 and formula_36 for variables formula_6 and formula_23, respectively, one can easily obtain intervals for formula_37 (namely, formula_38) and for formula_39 (namely, formula_40); note that these are "exact" abstractions, since the set of possible outcomes for, say, formula_41, is precisely the interval formula_38. More complex formulas can be derived for multiplication, division, etc., yielding so-called interval arithmetic. Let us now consider the following very simple program: y = x; z = x − y. With reasonable arithmetic types, the result for z should be zero. But if we do interval arithmetic starting from x in [0, 1], one gets z in [−1, +1]. While each of the operations taken individually was exactly abstracted, their composition isn't. The problem is evident: we did not keep track of the equality relationship between x and y; actually, this domain of intervals does not take into account any relationships between variables, and is thus a "non-relational domain". Non-relational domains tend to be fast and simple to implement, but imprecise. Some examples of "relational" numerical abstract domains are linear equalities, convex polyhedra, octagons and difference-bound matrices, as well as combinations thereof (such as the reduced product). When one chooses an abstract domain, one typically has to strike a balance between keeping fine-grained relationships and high computational costs.
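The loss of precision just described is easy to reproduce. The following Python sketch (the Interval class is an ad-hoc illustration, not part of any analysis tool) applies the interval rules for addition and subtraction given above to the program y = x; z = x − y:

# Interval-domain analysis of the tiny program above.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0, 1)   # input: x in [0, 1]
y = x                # y = x  (the abstract value is simply copied)
z = x - y            # z = x - y

print(z)             # [-1, 1]: sound but imprecise, since the true result is always 0

A relational domain that records the constraint x − y = 0 would instead conclude that z is exactly 0.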
Machine word abstract domains. While high-level languages such as Python or Haskell use unbounded integers by default, lower-level programming languages such as C or assembly language typically operate on finitely-sized machine words, which are more suitably modeled using the integers modulo formula_42 (where "n" is the bit width of a machine word). There are several abstract domains suitable for various analyses of such variables. The "bitfield domain" treats each bit in a machine word separately, i.e., a word of width "n" is treated as an array of "n" abstract values. The abstract values are taken from the set formula_43, and the abstraction and concretization functions are given by: formula_44, formula_45, formula_46, formula_47, formula_48, formula_49, formula_50. Bitwise operations on these abstract values are identical with the corresponding logical operations in three-valued logic (such as Kleene's). Further domains include the "signed interval domain" and the "unsigned interval domain". All three of these domains support forwards and backwards abstract operators for common operations such as addition, shifts, xor, and multiplication. These domains can be combined using the reduced product.
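For a single bit, these abstract operations behave exactly like a three-valued truth table. The sketch below is a minimal one-bit illustration of the bitfield domain (the encoding, with "?" standing for an unknown bit, is my own); it checks the abstract AND and OR against all concrete possibilities.

# One bit of the bitfield domain: abstract values 0, 1 and "?" (bit unknown).
UNK = "?"

def abs_and(a, b):
    if a == 0 or b == 0:
        return 0                   # anything AND 0 is 0, even if the other bit is unknown
    if a == 1 and b == 1:
        return 1
    return UNK

def abs_or(a, b):
    if a == 1 or b == 1:
        return 1                   # anything OR 1 is 1
    if a == 0 and b == 0:
        return 0
    return UNK

def gamma(a):
    """Concretization: the set of concrete bit values an abstract bit may take."""
    return {0, 1} if a == UNK else {a}

# Soundness check against the concrete bitwise operators, on all combinations.
for a in (0, 1, UNK):
    for b in (0, 1, UNK):
        assert {x & y for x in gamma(a) for y in gamma(b)} <= gamma(abs_and(a, b))
        assert {x | y for x in gamma(a) for y in gamma(b)} <= gamma(abs_or(a, b))

print(abs_and(0, UNK), abs_or(UNK, 1), abs_and(UNK, UNK))   # 0 1 ?

References. &lt;templatestyles src="Reflist/styles.css" /&gt;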
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "L'" }, { "math_id": 5, "text": "\\alpha" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "\\alpha(x)" }, { "math_id": 8, "text": "\\gamma" }, { "math_id": 9, "text": "x'" }, { "math_id": 10, "text": "\\gamma(x')" }, { "math_id": 11, "text": "L_1" }, { "math_id": 12, "text": "L_2" }, { "math_id": 13, "text": "L'_{1}" }, { "math_id": 14, "text": "L'_2" }, { "math_id": 15, "text": "f" }, { "math_id": 16, "text": "f'" }, { "math_id": 17, "text": "(f \\circ \\gamma)(x') \\leq (\\gamma \\circ f')(x')" }, { "math_id": 18, "text": "f(x') \\leq x'" }, { "math_id": 19, "text": "x'_{n}" }, { "math_id": 20, "text": "x'_{0} = \\bot" }, { "math_id": 21, "text": "x'_{n+1} = f'(x'_{n})" }, { "math_id": 22, "text": "\\nabla\\colon L\\times L\\to L" }, { "math_id": 23, "text": "y" }, { "math_id": 24, "text": "x \\leq x \\mathbin{\\nabla} y" }, { "math_id": 25, "text": "y \\leq x \\mathbin{\\nabla} y" }, { "math_id": 26, "text": "(y'_{n})_{n\\geq 0}" }, { "math_id": 27, "text": "x'_{0} := \\bot" }, { "math_id": 28, "text": "x'_{n+1} := x'_{n} \\mathbin{\\nabla} y'_{n}" }, { "math_id": 29, "text": "y'_{n}=f'(x'_{n})" }, { "math_id": 30, "text": "(\\alpha, \\gamma)" }, { "math_id": 31, "text": "(x, y)" }, { "math_id": 32, "text": "x^2 + y^2 \\leq 1" }, { "math_id": 33, "text": "[L_{x}, H_{x}]" }, { "math_id": 34, "text": "v(x)" }, { "math_id": 35, "text": "v(x) \\in [L_{x}, H_{x}]" }, { "math_id": 36, "text": "[L_{y}, H_{y}]" }, { "math_id": 37, "text": "x + y" }, { "math_id": 38, "text": "[L_{x}+L_{y}, H_{x}+H_{y}]" }, { "math_id": 39, "text": "x - y" }, { "math_id": 40, "text": "[L_{x} - H_{y}, H_{x} - L_{y}]" }, { "math_id": 41, "text": "x+y" }, { "math_id": 42, "text": "2^n" }, { "math_id": 43, "text": "\\{0,1,\\bot\\}" }, { "math_id": 44, "text": "\\gamma(0) = \\{0\\}" }, { "math_id": 45, "text": "\\gamma(1) = \\{1\\}" }, { "math_id": 46, "text": "\\gamma(\\bot) = \\{0,1\\}" }, { "math_id": 47, "text": "\\alpha(\\{0\\}) = 0" }, { "math_id": 48, "text": "\\alpha(\\{1\\}) = 1" }, { "math_id": 49, "text": "\\alpha(\\{0, 1\\}) = \\bot" }, { "math_id": 50, "text": "\\alpha(\\{\\}) = \\bot" } ]
https://en.wikipedia.org/wiki?curid=60490
60491326
Magnetic space group
Concept in physics In solid state physics, the magnetic space groups, or Shubnikov groups, are the symmetry groups which classify the symmetries of a crystal both in space, and in a two-valued property such as electron spin. To represent such a property, each lattice point is colored black or white, and in addition to the usual three-dimensional symmetry operations, there is a so-called "antisymmetry" operation which turns all black lattice points white and all white lattice points black. Thus, the magnetic space groups serve as an extension to the crystallographic space groups which describe spatial symmetry alone. The application of magnetic space groups to crystal structures is motivated by Curie's Principle. Compatibility with a material's symmetries, as described by the magnetic space group, is a necessary condition for a variety of material properties, including ferromagnetism, ferroelectricity, topological insulation. History. A major step was the work of Heinrich Heesch, who first rigorously established the concept of antisymmetry as part of a series of papers in 1929 and 1930. Applying this antisymmetry operation to the 32 crystallographic point groups gives a total of 122 magnetic point groups. However, although Heesch correctly laid out each of the magnetic point groups, his work remained obscure, and the point groups were later re-derived by Tavger and Zaitsev. The concept was more fully explored by Shubnikov in terms of color symmetry. When applied to space groups, the number increases from the usual 230 three dimensional space groups to 1651 magnetic space groups, as found in the 1953 thesis of Alexandr Zamorzaev. While the magnetic space groups were originally found using geometry, it was later shown the same magnetic space groups can be found using generating sets. Description. Magnetic space groups. The magnetic space groups can be placed into three categories. First, the 230 colorless groups contain only spatial symmetry, and correspond to the crystallographic space groups. Then there are 230 grey groups, which are invariant under antisymmetry. Finally are the 1191 black-white groups, which contain the more complex symmetries. There are two common conventions for giving names to the magnetic space groups. They are Opechowski-Guiccione (named after Wladyslaw Opechowski and Rosalia Guiccione) and Belov-Neronova-Smirnova. For colorless and grey groups, the conventions use the same names, but they treat the black-white groups differently. A full list of the magnetic space groups (in both conventions) can be found both in the original papers, and in several places online. The types can be distinguished by their different construction. Type I magnetic space groups, formula_0 are identical to the ordinary space groups,formula_1. formula_2 Type II magnetic space groups, formula_3, are made up of all the symmetry operations of the crystallographic space group, formula_1, plus the product of those operations with time reversal operation, formula_4. Equivalently, this can be seen as the direct product of an ordinary space group with the point group formula_5. formula_6 formula_7 Type III magnetic space groups, formula_8, are constructed using a group formula_9, which is a subgroup of formula_1 with index 2. formula_10 Type IV magnetic space groups, formula_11, are constructed with the use of a pure translation, formula_12, which is Seitz notation for null rotation and a translation, formula_13. 
Here the formula_13 is a vector (usually given in fractional coordinates) pointing from a black colored point to a white colored point, or vice versa. formula_14 Magnetic point groups. The following table lists all of the 122 possible three-dimensional magnetic point groups. This is given in the short version of Hermann–Mauguin notation in the following table. Here, the addition of an apostrophe to a symmetry operation indicates that the combination of the symmetry element and the antisymmetry operation is a symmetry of the structure. There are 32 Crystallographic point groups, 32 grey groups, and 58 magnetic point groups. The magnetic point groups which are compatible with ferromagnetism are colored cyan, the magnetic point groups which are compatible with ferroelectricity are colored red, and the magnetic point groups which are compatible with both ferromagnetism and ferroelectricity are purple. There are 31 magnetic point groups which are compatible with ferromagnetism. These groups, sometimes called "admissible", leave at least one component of the spin invariant under operations of the point group. There are 31 point groups compatible with ferroelectricity; these are generalizations of the crystallographic polar point groups. There are also 31 point groups compatible with the theoretically proposed ferrotorodicity. Similar symmetry arguments have been extended to other electromagnetic material properties such as magnetoelectricity or piezoelectricity. The following diagrams show the stereographic projection of most of the magnetic point groups onto a flat surface. Not shown are the grey point groups, which look identical to the ordinary crystallographic point groups, except they are also invariant under the antisymmetry operation. Black-white Bravais lattices. The black-white Bravais lattices characterize the translational symmetry of the structure like the typical Bravais lattices, but also contain additional symmetry elements. For black-white Bravais lattices, the number of black and white sites is always equal. There are 14 traditional Bravais lattices, 14 grey lattices, and 22 black-white Bravais lattices, for a total of 50 two-color lattices in three dimensions. The table shows the 36 black-white Bravais lattices, including the 14 traditional Bravais lattices, but excluding the 14 gray lattices which look identical to the traditional lattices. The lattice symbols are those used for the traditional Bravais lattices. The suffix in the symbol indicates the mode of centering by the black (antisymmetry) points in the lattice, where "s" denotes edge centering. Magnetic superspace groups. When the periodicity of the magnetic order coincides with the periodicity of crystallographic order, the magnetic phase is said to be "commensurate", and can be well-described by a magnetic space group. However, when this is not the case, the order does not correspond to any magnetic space group. These phases can instead be described by "magnetic superspace groups", which describe "incommensurate" order. This is the same formalism often used to describe the ordering of some quasicrystals. Phase transitions. The Landau theory of second-order phase transitions has been applied to magnetic phase transitions. The magnetic space group of disordered structure, formula_15, transitions to the magnetic space group of the ordered phase, formula_16. formula_16 is a subgroup of formula_15, and keeps only the symmetries which have not been broken during the phase transition. 
This can be tracked numerically by evolution of the order parameter, which belongs to a single irreducible representation of formula_15. Important magnetic phase transitions include the paramagnetic to ferromagnetic transition at the Curie temperature and the paramagnetic to antiferromagnetic transition at the Néel temperature. Differences in the magnetic phase transitions explain why Fe2O3, MnCO3, and CoCO3 are weakly ferromagnetic, whereas the structurally similar Cr2O3 and FeCO3 are purely antiferromagnetic. This theory developed into what is now known as antisymmetric exchange. A related scheme is the classification of "Aizu species" which consist of a prototypical non-ferroic magnetic point group, the letter "F" for ferroic, and a ferromagnetic or ferroelectric point group which is a subgroup of the prototypical group which can be reached by continuous motion of the atoms in the crystal structure. Applications and extensions. The main application of these space groups is to magnetic structure, where the black/white lattice points correspond to spin up/spin down configuration of electron spin. More abstractly, the magnetic space groups are often thought of as representing time reversal symmetry. This is in contrast to time crystals, which instead have time translation symmetry. In the most general form, magnetic space groups can represent symmetries of any two valued lattice point property, such as positive/negative electrical charge or the alignment of electric dipole moments. The magnetic space groups place restrictions on the electronic band structure of materials. Specifically, they place restrictions on the connectivity of the different electron bands, which in turn defines whether material has symmetry-protected topological order. Thus, the magnetic space groups can be used to identify topological materials, such as topological insulators. Experimentally, the main source of information about magnetic space groups is neutron diffraction experiments. The resulting experimental profile can be matched to theoretical structures by Rietveld refinement or simulated annealing. Adding the two-valued symmetry is also a useful concept for frieze groups which are often used to classify artistic patterns. In that case, the 7 frieze groups with the addition of color reversal become 24 color-reversing frieze groups. Beyond the simple two-valued property, the idea has been extended further to three colors in three dimensions, and to even higher dimensions and more colors. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{M}_I" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "\\mathcal{M}_I=G" }, { "math_id": 3, "text": "\\mathcal{M}_{II}" }, { "math_id": 4, "text": "\\mathcal{T}" }, { "math_id": 5, "text": "1'" }, { "math_id": 6, "text": "\\mathcal{M}_{II}=G + \\mathcal{T}G" }, { "math_id": 7, "text": "\\mathcal{M}_{II}=G \\times 1'" }, { "math_id": 8, "text": "\\mathcal{M}_{III}" }, { "math_id": 9, "text": "H" }, { "math_id": 10, "text": "\\mathcal{M}_{III}=H + \\mathcal{T} (G-H)" }, { "math_id": 11, "text": "\\mathcal{M}_{IV}" }, { "math_id": 12, "text": "\\{E|t_0\\}" }, { "math_id": 13, "text": "t_0" }, { "math_id": 14, "text": "\\mathcal{M}_{IV}=G + \\mathcal{T}\\{E|t_0\\}G" }, { "math_id": 15, "text": "G_0" }, { "math_id": 16, "text": "G_1" } ]
https://en.wikipedia.org/wiki?curid=60491326
60495606
Grünbaum–Rigby configuration
In geometry, the Grünbaum–Rigby configuration is a symmetric configuration consisting of 21 points and 21 lines, with four points on each line and four lines through each point. Originally studied by Felix Klein in the complex projective plane in connection with the Klein quartic, it was first realized in the Euclidean plane by Branko Grünbaum and John F. Rigby. History and notation. The Grünbaum–Rigby configuration was known to Felix Klein, William Burnside, and H. S. M. Coxeter. Its original description by Klein in 1879 marked the first appearance in the mathematical literature of a 4-configuration, a system of points and lines with four points per line and four lines per point. In Klein's description, these points and lines belong to the complex projective plane, a space whose coordinates are complex numbers rather than the real-number coordinates of the Euclidean plane. The geometric realisation of this configuration as points and lines in the Euclidean plane, based on overlaying three regular heptagrams, was only established much later, by Branko Grünbaum and J. F. Rigby (1990). Their paper on it became the first of a series of works on configurations by Grünbaum, and contained the first published graphical depiction of a 4-configuration. In the notation of configurations, configurations with 21 points, 21 lines, 4 points per line and 4 lines per point are denoted (214). However, the notation does not specify the configuration itself, only its type (the numbers of points, lines, and incidences). It also does not specify whether the configuration is purely combinatorial (an abstract incidence pattern of lines and points) or whether the points and lines of the configuration are realizable in the Euclidean plane or another standard geometry. The type (214) is highly ambiguous: there is an unknown but large number of (combinatorial) configurations of this type, 200 of which were listed by . Construction. The Grünbaum–Rigby configuration can be constructed from the seven points of a regular heptagon and its 14 interior diagonals. To complete the 21 points and lines of the configuration, these must be augmented by 14 more points and seven more lines. The remaining 14 points of the configuration are the points where pairs of equal-length diagonals of the heptagon cross each other. These form two smaller heptagons, one for each of the two lengths of diagonal; the sides of these smaller heptagons are the diagonals of the outer heptagon. Each of the two smaller heptagons has 14 diagonals, seven of which are shared with the other smaller heptagon. The seven shared diagonals are the remaining seven lines of the configuration. The original construction of the Grünbaum–Rigby configuration by Klein viewed its points and lines as belonging to the complex projective plane, rather than the Euclidean plane. In this space, the points and lines form the perspective centers and axes of the perspective transformations of the Klein quartic. They have the same pattern of point-line intersections as the Euclidean version of the configuration. The finite projective plane formula_0 has 57 points and 57 lines, and can be given coordinates based on the integers modulo 7. In this space, every conic formula_1 (the set of solutions to a two-variable quadratic equation modulo 7) has 28 secant lines through pairs of its points, 8 tangent lines through a single point, and 21 nonsecant lines that are disjoint from formula_1. 
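A short computation can verify these line counts, together with the dual counts for points described next, and confirm that the resulting points and lines meet four by four. The following Python sketch is illustrative only: the particular conic y² = xz is an arbitrary choice of nondegenerate conic in PG(2,7), and the coding conventions are mine.

# Verify the (21 points, 21 lines, 4-by-4) incidence pattern inside PG(2,7).
Q = 7

def normalize(v):
    # Scale a nonzero triple over GF(7) so that its first nonzero entry is 1.
    for c in v:
        if c % Q:
            inv = pow(c, Q - 2, Q)
            return tuple((x * inv) % Q for x in v)
    return None

points = sorted({normalize((x, y, z)) for x in range(Q) for y in range(Q) for z in range(Q)} - {None})
lines = points[:]   # lines are represented by the same homogeneous coordinates
incident = lambda p, l: (p[0] * l[0] + p[1] * l[1] + p[2] * l[2]) % Q == 0

conic = [p for p in points if (p[1] * p[1] - p[0] * p[2]) % Q == 0]
tangents = [l for l in lines if sum(incident(p, l) for p in conic) == 1]
external = [l for l in lines if sum(incident(p, l) for p in conic) == 0]
interior = [p for p in points if p not in conic and not any(incident(p, l) for l in tangents)]

print(len(points), len(conic), len(tangents), len(external), len(interior))
# 57 points in all; 8 conic points, 8 tangents, 21 nonsecant lines, 21 interior points
print({sum(incident(p, l) for p in interior) for l in external})   # {4}: 4 interior points on each nonsecant line
print({sum(incident(p, l) for l in external) for p in interior})   # {4}: 4 nonsecant lines through each interior point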
Dually, there are 28 points where pairs of tangent lines meet, 8 points on formula_1, and 21 interior points that do not belong to any tangent line. The 21 nonsecant lines and 21 interior points form an instance of the Grünbaum–Rigby configuration, meaning that again these points and lines have the same pattern of intersections. Properties. The projective dual of this configuration, a system of points and lines with a point for every line of the configuration and a line for every point, and with the same point-line incidences, is the same configuration. The symmetry group of the configuration includes symmetries that take any incident pair of points and lines to any other incident pair. The Grünbaum–Rigby configuration is an example of a polycyclic configuration, that is, a configuration with cyclic symmetry, such that each orbit of points or lines has the same number of elements. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "PG(2,7)" }, { "math_id": 1, "text": "C" } ]
https://en.wikipedia.org/wiki?curid=60495606
6050
List of equations in classical mechanics
Classical mechanics is the branch of physics used to describe the motion of macroscopic objects. It is the most familiar of the theories of physics. The concepts it covers, such as mass, acceleration, and force, are commonly used and known. The subject is based upon a three-dimensional Euclidean space with fixed axes, called a frame of reference. The point of concurrency of the three axes is known as the origin of the particular space. Classical mechanics utilises many equations—as well as other mathematical concepts—which relate various physical quantities to one another. These include differential equations, manifolds, Lie groups, and ergodic theory. This article gives a summary of the most important of these. This article lists equations from Newtonian mechanics, see analytical mechanics for the more general formulation of classical mechanics (which includes Lagrangian and Hamiltonian mechanics). Classical mechanics. General energy definitions. Every conservative force has a potential energy. By following two principles one can consistently assign a non-relative value to "U": Kinematics. In the following rotational definitions, the angle can be any angle about the specified axis of rotation. It is customary to use "θ", but this does not have to be the polar angle used in polar coordinate systems. The unit axial vector formula_0 defines the axis of rotation, formula_1 = unit vector in direction of r, formula_2 = unit vector tangential to the angle. Dynamics. Precession. The precession angular speed of a spinning top is given by: formula_3 where "w" is the weight of the spinning flywheel. Energy. The mechanical work done by an external agent on a system is equal to the change in kinetic energy of the system: General work-energy theorem (translation and rotation). The work done "W" by an external agent which exerts a force F (at r) and torque τ on an object along a curved path "C" is: formula_4 where θ is the angle of rotation about an axis defined by a unit vector n. Kinetic energy. The change in kinetic energy for an object initially traveling at speed formula_5 and later at speed formula_6 is: formula_7 Elastic potential energy. For a stretched spring fixed at one end obeying Hooke's law, the elastic potential energy is formula_8 where "r"2 and "r"1 are collinear coordinates of the free end of the spring, in the direction of the extension/compression, and k is the spring constant. Euler's equations for rigid body dynamics. Euler also worked out analogous laws of motion to those of Newton, see Euler's laws of motion. These extend the scope of Newton's laws to rigid bodies, but are essentially the same as above. A new equation Euler formulated is: formula_9 where I is the moment of inertia tensor. General planar motion. The previous equations for planar motion can be used here: corollaries of momentum, angular momentum etc. can immediately follow by applying the above definitions. For any object moving in any path in a plane, formula_10 the following general results apply to the particle. Central force motion. For a massive body moving in a central potential due to another object, which depends only on the radial separation between the centers of masses of the two objects, the equation of motion is: formula_11 Equations of motion (constant acceleration). These equations can be used only when acceleration is constant. 
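As a small worked example of the standard constant-acceleration relations (the familiar v = v0 + at, Δx = v0t + ½at² and v² = v0² + 2aΔx), the following Python sketch uses invented numerical values and checks their mutual consistency:

# Worked example of the constant-acceleration equations of motion.
# The numbers (initial speed, acceleration, elapsed time) are invented for illustration.
v0 = 5.0      # initial velocity (m/s)
a = 2.0       # constant acceleration (m/s^2)
t = 3.0       # elapsed time (s)

v = v0 + a * t                 # final velocity
dx = v0 * t + 0.5 * a * t**2   # displacement
# consistency check: v^2 = v0^2 + 2*a*dx, which holds only because a is constant
assert abs(v**2 - (v0**2 + 2 * a * dx)) < 1e-9

print(f"v = {v} m/s, displacement = {dx} m")   # v = 11.0 m/s, displacement = 24.0 m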
If acceleration is not constant then the general calculus equations above must be used, found by integrating the definitions of position, velocity and acceleration (see above). Galilean frame transforms. For classical (Galileo-Newtonian) mechanics, the transformation law from one inertial or accelerating (including rotating) frame to another is the Galilean transform; an inertial frame is a reference frame traveling at constant velocity (including zero). Unprimed quantities refer to position, velocity and acceleration in one frame F; primed quantities refer to position, velocity and acceleration in another frame F' moving at translational velocity V or angular velocity Ω relative to F. Conversely, F moves at velocity −V (or angular velocity −Ω) relative to F'. The situation is similar for relative accelerations. Mechanical oscillators. SHM, DHM, SHO, and DHO refer to simple harmonic motion, damped harmonic motion, simple harmonic oscillator and damped harmonic oscillator respectively.
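The damped harmonic oscillator mentioned here can be made concrete with a small numerical check. In the sketch below the parameter values are invented, and the closed-form expression used for comparison is the standard underdamped DHO solution, stated as an assumption rather than taken from the text; a crude fixed-step integration of the equation of motion reproduces it closely.

import math

# Damped harmonic oscillator: x'' + 2*gamma*x' + omega0^2 * x = 0 (underdamped case).
omega0, gamma = 2.0, 0.3          # natural angular frequency and damping rate (assumed values)
x0, v0 = 1.0, 0.0                 # initial position and velocity
omega_d = math.sqrt(omega0**2 - gamma**2)

def analytic(t):
    """Standard underdamped solution for the initial conditions above."""
    return math.exp(-gamma * t) * (x0 * math.cos(omega_d * t)
                                   + (v0 + gamma * x0) / omega_d * math.sin(omega_d * t))

# Semi-implicit Euler integration of the same equation, for comparison.
x, v, t, dt = x0, v0, 0.0, 1e-4
while t < 5.0:
    acc = -2 * gamma * v - omega0**2 * x
    v += acc * dt
    x += v * dt
    t += dt

print(f"numerical x(5) = {x:.4f}, analytic x(5) = {analytic(5.0):.4f}")   # the two agree closely

See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;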
[ { "math_id": 0, "text": "\\mathbf{\\hat{n}} = \\mathbf{\\hat{e}}_r\\times\\mathbf{\\hat{e}}_\\theta " }, { "math_id": 1, "text": " \\scriptstyle \\mathbf{\\hat{e}}_r " }, { "math_id": 2, "text": " \\scriptstyle \\mathbf{\\hat{e}}_\\theta " }, { "math_id": 3, "text": " \\boldsymbol{\\Omega} = \\frac{wr}{I\\boldsymbol{\\omega}} " }, { "math_id": 4, "text": " W = \\Delta T = \\int_C \\left ( \\mathbf{F} \\cdot \\mathrm{d} \\mathbf{r} + \\boldsymbol{\\tau} \\cdot \\mathbf{n} \\, {\\mathrm{d} \\theta} \\right ) " }, { "math_id": 5, "text": "v_0" }, { "math_id": 6, "text": "v" }, { "math_id": 7, "text": " \\Delta E_k = W = \\frac{1}{2} m(v^2 - {v_0}^2) " }, { "math_id": 8, "text": " \\Delta E_p = \\frac{1}{2} k(r_2-r_1)^2 " }, { "math_id": 9, "text": " \\mathbf{I} \\cdot \\boldsymbol{\\alpha} + \\boldsymbol{\\omega} \\times \\left ( \\mathbf{I} \\cdot \\boldsymbol{\\omega} \\right ) = \\boldsymbol{\\tau} " }, { "math_id": 10, "text": " \\mathbf{r} = \\mathbf{r}(t) = r\\hat\\mathbf r " }, { "math_id": 11, "text": "\\frac{d^2}{d\\theta^2}\\left(\\frac{1}{\\mathbf{r}}\\right) + \\frac{1}{\\mathbf{r}} = -\\frac{\\mu\\mathbf{r}^2}{\\mathbf{l}^2}\\mathbf{F}(\\mathbf{r})" } ]
https://en.wikipedia.org/wiki?curid=6050
605011
AM–GM inequality
Arithmetic mean is greater than or equal to geometric mean In mathematics, the inequality of arithmetic and geometric means, or more briefly the AM–GM inequality, states that the arithmetic mean of a list of non-negative real numbers is greater than or equal to the geometric mean of the same list; and further, that the two means are equal if and only if every number in the list is the same (in which case they are both that number). The simplest non-trivial case – i.e., with more than one variable – for two non-negative numbers x and y, is the statement that formula_0 with equality if and only if "x" = "y". This case can be seen from the fact that the square of a real number is always non-negative (greater than or equal to zero) and from the elementary case ("a" ± "b")² = "a"² ± 2"ab" + "b"² of the binomial formula: formula_1 Hence ("x" + "y")² ≥ 4"xy", with equality precisely when ("x" − "y")² = 0, i.e. "x" = "y". The AM–GM inequality then follows from taking the positive square root of both sides and then dividing both sides by 2. For a geometrical interpretation, consider a rectangle with sides of length x and y; hence it has perimeter 2"x" + 2"y" and area "xy". Similarly, a square with all sides of length √("xy") has the perimeter 4√("xy") and the same area "xy" as the rectangle. The simplest non-trivial case of the AM–GM inequality implies for the perimeters that 2"x" + 2"y" ≥ 4√("xy") and that only the square has the smallest perimeter amongst all rectangles of equal area. The simplest case is implicit in Euclid's Elements, Book 5, Proposition 25. Extensions of the AM–GM inequality are available to include weights or generalized means. Background. The "arithmetic mean", or less precisely the "average", of a list of n numbers "x"1, "x"2, . . . , "xn" is the sum of the numbers divided by n: formula_2 The "geometric mean" is similar, except that it is only defined for a list of "nonnegative" real numbers, and uses multiplication and a root in place of addition and division: formula_3 If "x"1, "x"2, . . . , "xn" > 0, this is equal to the exponential of the arithmetic mean of the natural logarithms of the numbers: formula_4 Note: This does not apply exclusively to the exp() function and natural logarithms. The base b of the exponentiation could be any positive real number except 1 if the logarithm is of base b. The inequality. Restating the inequality using mathematical notation, we have that for any list of n nonnegative real numbers "x"1, "x"2, . . . , "xn", formula_5 and that equality holds if and only if "x"1 = "x"2 = · · · = "xn". Geometric interpretation. In two dimensions, 2"x"1 + 2"x"2 is the perimeter of a rectangle with sides of length "x"1 and "x"2. Similarly, 4√("x"1"x"2) is the perimeter of a square with the same area, "x"1"x"2, as that rectangle. Thus for "n" = 2 the AM–GM inequality states that a rectangle of a given area has the smallest perimeter if that rectangle is also a square. The full inequality is an extension of this idea to n dimensions. Consider an n-dimensional box with edge lengths "x"1, "x"2, . . . , "xn". Every vertex of the box is connected to n edges of different directions, so the average length of edges incident to the vertex is ("x"1 + "x"2 + · · · + "xn")/"n". On the other hand, formula_6 is the edge length of an n-dimensional cube of equal volume, which therefore is also the average length of edges incident to a vertex of the cube.
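For a concrete instance of this box/cube comparison (the edge lengths below are arbitrary): a box with edges 1, 2 and 4 has volume 8, so the cube of equal volume has edge 2, while the average edge of the box is 7/3 ≈ 2.33. A short Python check:

# Numerical check of the box/cube comparison described above (illustrative values).
edges = (1.0, 2.0, 4.0)
n = len(edges)

arithmetic_mean = sum(edges) / n          # average edge length of the box
volume = 1.0
for e in edges:
    volume *= e
cube_edge = volume ** (1.0 / n)           # geometric mean = edge of the cube of equal volume

print(arithmetic_mean, cube_edge)         # 2.333... and 2.0, so AM >= GM
assert arithmetic_mean >= cube_edge

# Equality holds exactly when the box is already a cube:
print(sum((3.0, 3.0, 3.0)) / 3, (3.0 ** 3) ** (1.0 / 3))   # 3.0 and 3.0 (up to rounding)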
Thus the AM–GM inequality states that only the n-cube has the smallest average length of edges connected to each vertex amongst all n-dimensional boxes with the same volume. Examples. Example 1. If formula_7, then the A.M.-G.M. tells us that formula_8 Example 2. A simple upper bound for formula_9 can be found. AM-GM tells us formula_10 formula_11 and so formula_12 with equality at formula_13. Equivalently, formula_14 Example 3. Consider the function formula_15 for all positive real numbers x, y and z. Suppose we wish to find the minimal value of this function. It can be rewritten as: formula_16 with formula_17 Applying the AM–GM inequality for "n" 6, we get formula_18 Further, we know that the two sides are equal exactly when all the terms of the mean are equal: formula_19 All the points ("x", "y", "z") satisfying these conditions lie on a half-line starting at the origin and are given by formula_20 Applications. An important practical application in financial mathematics is to computing the rate of return: the annualized return, computed via the geometric mean, is less than the average annual return, computed by the arithmetic mean (or equal if all returns are equal). This is important in analyzing investments, as the average return overstates the cumulative effect. It can also be used to prove the Cauchy–Schwarz inequality. Proofs of the AM–GM inequality. The AM–GM inequality is also known for the variety of methods that can be used to prove it. Proof using Jensen's inequality. Jensen's inequality states that the value of a concave function of an arithmetic mean is greater than or equal to the arithmetic mean of the function's values. Since the logarithm function is concave, we have formula_21 Taking antilogs of the far left and far right sides, we have the AM–GM inequality. Proof by successive replacement of elements. We have to show that formula_22 with equality only when all numbers are equal. If not all numbers are equal, then there exist formula_23 such that formula_24. Replacing xi by formula_25 and xj by formula_26 will leave the arithmetic mean of the numbers unchanged, but will increase the geometric mean because formula_27 If the numbers are still not equal, we continue replacing numbers as above. After at most formula_28 such replacement steps all the numbers will have been replaced with formula_25 while the geometric mean strictly increases at each step. After the last step, the geometric mean will be formula_29, proving the inequality. It may be noted that the replacement strategy works just as well from the right hand side. If any of the numbers is 0 then so will the geometric mean thus proving the inequality trivially. Therefore we may suppose that all the numbers are positive. If they are not all equal, then there exist formula_23 such that formula_30. Replacing formula_31 by formula_32 and formula_33 by formula_34leaves the geometric mean unchanged but strictly decreases the arithmetic mean since formula_35. The proof then follows along similar lines as in the earlier replacement. Induction proofs. Proof by induction #1. Of the non-negative real numbers "x"1, . . . , "xn", the AM–GM statement is equivalent to formula_36 with equality if and only if "α" "xi" for all "i" ∈ {1, . . . , "n"}. For the following proof we apply mathematical induction and only well-known rules of arithmetic. Induction basis: For "n" 1 the statement is true with equality. Induction hypothesis: Suppose that the AM–GM statement holds for all choices of n non-negative real numbers. 
Induction step: Consider "n" + 1 non-negative real numbers "x"1, . . . , "x""n"+1, . Their arithmetic mean α satisfies formula_37 If all the xi are equal to α, then we have equality in the AM–GM statement and we are done. In the case where some are not equal to α, there must exist one number that is greater than the arithmetic mean α, and one that is smaller than α. Without loss of generality, we can reorder our xi in order to place these two particular elements at the end: "xn" &gt; "α" and "x""n"+1 &lt; "α". Then formula_38 formula_39 Now define "y" with formula_40 and consider the n numbers "x"1, . . . , "x""n"–1, "y" which are all non-negative. Since formula_41 formula_42 Thus, α is also the arithmetic mean of n numbers "x"1, . . . , "x""n"–1, "y" and the induction hypothesis implies formula_43 Due to (*) we know that formula_44 hence formula_45 in particular "α" &gt; 0. Therefore, if at least one of the numbers "x"1, . . . , "x""n"–1 is zero, then we already have strict inequality in (**). Otherwise the right-hand side of (**) is positive and strict inequality is obtained by using the estimate (***) to get a lower bound of the right-hand side of (**). Thus, in both cases we can substitute (***) into (**) to get formula_46 which completes the proof. Proof by induction #2. First of all we shall prove that for real numbers "x"1 &lt; 1 and "x"2 &gt; 1 there follows formula_47 Indeed, multiplying both sides of the inequality "x"2 &gt; 1 by 1 – "x"1, gives formula_48 whence the required inequality is obtained immediately. Now, we are going to prove that for positive real numbers "x"1, . . . , "x""n" satisfying "x"1 . . . "x""n" 1, there holds formula_49 The equality holds only if "x"1 "x""n" 1. Induction basis: For "n" 2 the statement is true because of the above property. Induction hypothesis: Suppose that the statement is true for all natural numbers up to "n" – 1. Induction step: Consider natural number "n", i.e. for positive real numbers "x"1, . . . , "x""n", there holds "x"1 . . . "x""n" 1. There exists at least one "xk" &lt; 1, so there must be at least one "xj" &gt; 1. Without loss of generality, we let "k" "n" – 1 and "j" "n". Further, the equality "x"1 . . . "x""n" 1 we shall write in the form of ("x"1 . . . "x""n"–2) ("x""n"–1 "x""n") 1. Then, the induction hypothesis implies formula_50 However, taking into account the induction basis, we have formula_51 which completes the proof. For positive real numbers "a"1, . . . , "a""n", let's denote formula_52 The numbers "x"1, . . . , "x""n" satisfy the condition "x"1 . . . "x""n" 1. So we have formula_53 whence we obtain formula_54 with the equality holding only for "a"1 "a""n". Proof by Cauchy using forward–backward induction. The following proof by cases relies directly on well-known rules of arithmetic but employs the rarely used technique of forward-backward-induction. It is essentially from Augustin Louis Cauchy and can be found in his "Cours d'analyse". The case where all the terms are equal. If all the terms are equal: formula_55 then their sum is "nx"1, so their arithmetic mean is "x"1; and their product is "x"1"n", so their geometric mean is "x"1; therefore, the arithmetic mean and geometric mean are equal, as desired. The case where not all the terms are equal. It remains to show that if "not" all the terms are equal, then the arithmetic mean is greater than the geometric mean. Clearly, this is only possible when "n" &gt; 1. This case is significantly more complex, and we divide it into subcases. The subcase where "n" = 2. 
If "n" 2, then we have two terms, "x"1 and "x"2, and since (by our assumption) not all terms are equal, we have: formula_56 hence formula_57 as desired. The subcase where "n" = 2"k". Consider the case where "n" 2"k", where k is a positive integer. We proceed by mathematical induction. In the base case, "k" 1, so "n" 2. We have already shown that the inequality holds when "n" 2, so we are done. Now, suppose that for a given "k" &gt; 1, we have already shown that the inequality holds for "n" 2"k"−1, and we wish to show that it holds for "n" 2"k". To do so, we apply the inequality twice for 2"k"-1 numbers and once for 2 numbers to obtain: formula_58 where in the first inequality, the two sides are equal only if formula_59 and formula_60 (in which case the first arithmetic mean and first geometric mean are both equal to "x"1, and similarly with the second arithmetic mean and second geometric mean); and in the second inequality, the two sides are only equal if the two geometric means are equal. Since not all 2"k" numbers are equal, it is not possible for both inequalities to be equalities, so we know that: formula_61 as desired. The subcase where "n" &lt; 2"k". If n is not a natural power of 2, then it is certainly "less" than some natural power of 2, since the sequence 2, 4, 8, . . . , 2"k", . . . is unbounded above. Therefore, without loss of generality, let m be some natural power of 2 that is greater than n. So, if we have n terms, then let us denote their arithmetic mean by α, and expand our list of terms thus: formula_62 We then have: formula_63 so formula_64 and formula_65 as desired. Proof by induction using basic calculus. The following proof uses mathematical induction and some basic differential calculus. Induction basis: For "n" 1 the statement is true with equality. Induction hypothesis: Suppose that the AM–GM statement holds for all choices of n non-negative real numbers. Induction step: In order to prove the statement for "n" + 1 non-negative real numbers "x"1, . . . , "xn", "x""n"+1, we need to prove that formula_66 with equality only if all the "n" + 1 numbers are equal. If all numbers are zero, the inequality holds with equality. If some but not all numbers are zero, we have strict inequality. Therefore, we may assume in the following, that all "n" + 1 numbers are positive. We consider the last number "x""n"+1 as a variable and define the function formula_67 Proving the induction step is equivalent to showing that "f"("t") ≥ 0 for all "t" &gt; 0, with "f"("t") 0 only if "x"1, . . . , "xn" and t are all equal. This can be done by analyzing the critical points of f using some basic calculus. The first derivative of f is given by formula_68 A critical point "t"0 has to satisfy "f′"("t"0) 0, which means formula_69 After a small rearrangement we get formula_70 and finally formula_71 which is the geometric mean of "x"1, . . . , "xn". This is the only critical point of f. Since "f′′"("t") &gt; 0 for all "t" &gt; 0, the function f is strictly convex and has a strict global minimum at "t"0. Next we compute the value of the function at this global minimum: formula_72 where the final inequality holds due to the induction hypothesis. The hypothesis also says that we can have equality only when "x"1, . . . , "xn" are all equal. In this case, their geometric mean  "t"0 has the same value, Hence, unless "x"1, . . . , "xn", "x""n"+1 are all equal, we have "f"("x""n"+1) &gt; 0. This completes the proof. 
This technique can be used in the same manner to prove the generalized AM–GM inequality and Cauchy–Schwarz inequality in Euclidean space R"n". Proof by Pólya using the exponential function. George Pólya provided a proof similar to what follows. Let "f"("x") e"x"–1 – "x" for all real x, with first derivative "f′"("x") e"x"–1 – 1 and second derivative "f′′"("x") e"x"–1. Observe that "f"(1) 0, "f′"(1) 0 and "f′′"("x") &gt; 0 for all real x, hence f is strictly convex with the absolute minimum at "x" 1. Hence "x" ≤ e"x"–1 for all real x with equality only for "x" 1. Consider a list of non-negative real numbers "x"1, "x"2, . . . , "xn". If they are all zero, then the AM–GM inequality holds with equality. Hence we may assume in the following for their arithmetic mean "α" &gt; 0. By n-fold application of the above inequality, we obtain that formula_73 with equality if and only if "xi" "α" for every "i" ∈ {1, . . . , "n"}. The argument of the exponential function can be simplified: formula_74 Returning to (*), formula_75 which produces "x"1 "x"2 · · · "xn" ≤ "αn", hence the result formula_76 Proof by Lagrangian multipliers. If any of the formula_31 are formula_77, then there is nothing to prove. So we may assume all the formula_31 are strictly positive. Because the arithmetic and geometric means are homogeneous of degree 1, without loss of generality assume that formula_78. Set formula_79, and formula_80. The inequality will be proved (together with the equality case) if we can show that the minimum of formula_81 subject to the constraint formula_82 is equal to formula_83, and the minimum is only achieved when formula_84. Let us first show that the constrained minimization problem has a global minimum. Set formula_85. Since the intersection formula_86 is compact, the extreme value theorem guarantees that the minimum of formula_87 subject to the constraints formula_88 and formula_89 is attained at some point inside formula_90. On the other hand, observe that if any of the formula_91, then formula_92, while formula_93, and formula_94. This means that the minimum inside formula_86 is in fact a global minimum, since the value of formula_95 at any point inside formula_86 is certainly no smaller than the minimum, and the value of formula_95 at any point formula_96 not inside formula_90 is strictly bigger than the value at formula_97, which is no smaller than the minimum. The method of Lagrange multipliers says that the global minimum is attained at a point formula_98 where the gradient of formula_99 is formula_100 times the gradient of formula_101, for some formula_100. We will show that the only point at which this happens is when formula_84 and formula_102 Compute formula_103 and formula_104 along the constraint. Setting the gradients proportional to one another therefore gives for each formula_105 that formula_106 and so formula_107 Since the left-hand side does not depend on formula_105, it follows that formula_108, and since formula_109, it follows that formula_110 and formula_111, as desired. Generalizations. Weighted AM–GM inequality. There is a similar inequality for the weighted arithmetic mean and weighted geometric mean. Specifically, let the nonnegative numbers "x"1, "x"2, . . . , "xn" and the nonnegative weights "w"1, "w"2, . . . , "wn" be given. Set "w" "w"1 + "w"2 + · · · + "wn". If "w" &gt; 0, then the inequality formula_112 holds with equality if and only if all the xk with "wk" &gt; 0 are equal. Here the convention 00 1 is used. 
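As a quick numerical illustration of the weighted inequality just stated (a sketch, not from the article; the helper name and random inputs are assumptions), the following Python snippet compares the weighted arithmetic and geometric means for random nonnegative numbers and positive weights.

```python
import math
import random

def weighted_means(xs, ws):
    """Weighted arithmetic and geometric means, with total weight w = w_1 + ... + w_n."""
    w = sum(ws)
    arithmetic = sum(wi * xi for wi, xi in zip(ws, xs)) / w
    geometric = math.prod(xi ** (wi / w) for wi, xi in zip(ws, xs))
    return arithmetic, geometric

xs = [random.uniform(0.0, 5.0) for _ in range(5)]   # x_i may be zero
ws = [random.uniform(0.1, 2.0) for _ in range(5)]   # strictly positive weights
am, gm = weighted_means(xs, ws)
print(am >= gm - 1e-12)   # the weighted AM is never below the weighted GM
```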
If all "wk" 1, this reduces to the above inequality of arithmetic and geometric means. One stronger version of this, which also gives strengthened version of the unweighted version, is due to Aldaz. In particular, There is a similar inequality for the weighted arithmetic mean and weighted geometric mean. Specifically, let the nonnegative numbers "x"1, "x"2, . . . , "xn" and the nonnegative weights "w"1, "w"2, . . . , "wn" be given. Assume further that the sum of the weights is 1. Then formula_113. Proof using Jensen's inequality. Using the finite form of Jensen's inequality for the natural logarithm, we can prove the inequality between the weighted arithmetic mean and the weighted geometric mean stated above. Since an xk with weight "wk" 0 has no influence on the inequality, we may assume in the following that all weights are positive. If all xk are equal, then equality holds. Therefore, it remains to prove strict inequality if they are not all equal, which we will assume in the following, too. If at least one xk is zero (but not all), then the weighted geometric mean is zero, while the weighted arithmetic mean is positive, hence strict inequality holds. Therefore, we may assume also that all xk are positive. Since the natural logarithm is strictly concave, the finite form of Jensen's inequality and the functional equations of the natural logarithm imply formula_114 Since the natural logarithm is strictly increasing, formula_115 Matrix arithmetic–geometric mean inequality. Most matrix generalizations of the arithmetic geometric mean inequality apply on the level of unitarily invariant norms, since, even if the matrices formula_116 and formula_117 are positive semi-definite, the matrix formula_118 may not be positive semi-definite and hence may not have a canonical square root. In Bhatia and Kittaneh proved that for any unitarily invariant norm formula_119 and positive semi-definite matrices formula_116 and formula_117 it is the case that formula_120 Later, in the same authors proved the stronger inequality that formula_121 Finally, it is known for dimension formula_122 that the following strongest possible matrix generalization of the arithmetic-geometric mean inequality holds, and it is conjectured to hold for all formula_123 formula_124 This conjectured inequality was shown by Stephen Drury in 2012. Indeed, he proved formula_125 Finance: Link to geometric asset returns. In finance much research is concerned with accurately estimating the rate of return of an asset over multiple periods in the future. In the case of lognormal asset returns, there is an exact formula to compute the arithmetic asset return from the geometric asset return. For simplicity, assume we are looking at yearly geometric returns "r1, r2, ... , rN" over a time horizon of N years, i.e. formula_126 where: formula_127 = value of the asset at time formula_123, formula_128 = value of the asset at time formula_129. The geometric and arithmetic returns are respectively defined as formula_130 formula_131 When the yearly geometric asset returns are lognormally distributed, then the following formula can be used to convert the geometric average return to the arithemtic average return: formula_132 where formula_133 is the variance of the observed asset returns This implicit equation for aN can be solved exactly as follows. 
First, notice that by setting formula_134 we obtain a polynomial equation of degree 2: formula_135 Solving this equation for z and using the definition of z, we obtain 4 possible solutions for aN: formula_136 However, notice that formula_137 This implies that the only 2 possible solutions are (as asset returns are real numbers): formula_138 Finally, we expect the derivative of aN with respect to gN to be non-negative as an increase in the geometric return should never cause a decrease in the arithmetic return. Indeed, both measure the average growth of an asset's value and therefore should move in similar directions. This leaves us with one solution to the implicit equation for aN, namely formula_139 Therefore, under the assumption of lognormally distributed asset returns, the arithmetic asset return is fully determined by the geometric asset return. Other generalizations. Other generalizations of the inequality of arithmetic and geometric means include: Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
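A short numerical sketch of the finance relation above (illustrative only; the function names and the sample values are assumptions): it converts a geometric average return and a return variance into the arithmetic average return using the closed-form solution just derived, and checks that the result satisfies the original implicit equation.

```python
import math

def arithmetic_from_geometric(g, sigma2):
    """Closed-form arithmetic average return a_N from the geometric average return g_N
    and the return variance sigma^2, i.e. the positive root selected in the text."""
    inner = math.sqrt(1.0 + 4.0 * sigma2 / (1.0 + g) ** 2)
    return (1.0 + g) / math.sqrt(2.0) * math.sqrt(1.0 + inner) - 1.0

def geometric_from_arithmetic(a, sigma2):
    """The implicit relation 1 + g_N = (1 + a_N) / sqrt(1 + sigma^2 / (1 + a_N)^2)."""
    return (1.0 + a) / math.sqrt(1.0 + sigma2 / (1.0 + a) ** 2) - 1.0

g, sigma2 = 0.07, 0.04   # e.g. a 7% geometric average return with 20% volatility
a = arithmetic_from_geometric(g, sigma2)
print(a)                                                        # slightly above 0.07
print(abs(geometric_from_arithmetic(a, sigma2) - g) < 1e-12)    # the relation round-trips
```

As the AM–GM inequality suggests, the computed arithmetic return always lies above the geometric return unless the variance is zero.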
[ { "math_id": 0, "text": "\\frac{x+y}2 \\ge \\sqrt{xy}" }, { "math_id": 1, "text": "\\begin{align}\n0 & \\le (x-y)^2 \\\\\n& = x^2-2xy+y^2 \\\\\n& = x^2+2xy+y^2 - 4xy \\\\\n& = (x+y)^2 - 4xy.\n\\end{align}" }, { "math_id": 2, "text": "\\frac{x_1 + x_2 + \\cdots + x_n}{n}." }, { "math_id": 3, "text": "\\sqrt[n]{x_1 \\cdot x_2 \\cdots x_n}." }, { "math_id": 4, "text": "\\exp \\left( \\frac{\\ln {x_1} + \\ln {x_2} + \\cdots + \\ln {x_n}}{n} \\right)." }, { "math_id": 5, "text": "\\frac{x_1 + x_2 + \\cdots + x_n}{n} \\ge \\sqrt[n]{x_1 \\cdot x_2 \\cdots x_n}\\,," }, { "math_id": 6, "text": "\\sqrt[n]{x_1 x_2 \\cdots x_n}" }, { "math_id": 7, "text": "a,b,c>0" }, { "math_id": 8, "text": "(1+a)(1+b)(1+c)\\ge 2\\sqrt{1\\cdot{a}} \\cdot 2\\sqrt{1\\cdot{b}} \\cdot 2\\sqrt{1\\cdot{c}} = 8\\sqrt{abc}" }, { "math_id": 9, "text": "n!" }, { "math_id": 10, "text": "1+2+\\dots+n \\ge n\\sqrt[n]{n!}" }, { "math_id": 11, "text": "\\frac{n(n+1)}{2} \\ge n\\sqrt[n]{n!}" }, { "math_id": 12, "text": "\\left(\\frac{n+1}{2}\\right)^n \\ge n!" }, { "math_id": 13, "text": "n=1" }, { "math_id": 14, "text": "(n+1)^n \\ge 2^nn!" }, { "math_id": 15, "text": "f(x,y,z) = \\frac{x}{y} + \\sqrt{\\frac{y}{z}} + \\sqrt[3]{\\frac{z}{x}}" }, { "math_id": 16, "text": "\n\\begin{align}\nf(x,y,z)\n&= 6 \\cdot \\frac{ \\frac{x}{y} + \\frac{1}{2} \\sqrt{\\frac{y}{z}} + \\frac{1}{2} \\sqrt{\\frac{y}{z}} + \\frac{1}{3} \\sqrt[3]{\\frac{z}{x}} + \\frac{1}{3} \\sqrt[3]{\\frac{z}{x}} + \\frac{1}{3} \\sqrt[3]{\\frac{z}{x}} }{6}\\\\\n&=6\\cdot\\frac{x_1+x_2+x_3+x_4+x_5+x_6}{6}\n\\end{align}" }, { "math_id": 17, "text": " x_1=\\frac{x}{y},\\qquad x_2=x_3=\\frac{1}{2} \\sqrt{\\frac{y}{z}},\\qquad x_4=x_5=x_6=\\frac{1}{3} \\sqrt[3]{\\frac{z}{x}}." }, { "math_id": 18, "text": "\n\\begin{align}\nf(x,y,z)\n&\\ge 6 \\cdot \\sqrt[6]{ \\frac{x}{y} \\cdot \\frac{1}{2} \\sqrt{\\frac{y}{z}} \\cdot \\frac{1}{2} \\sqrt{\\frac{y}{z}} \\cdot \\frac{1}{3} \\sqrt[3]{\\frac{z}{x}} \\cdot \\frac{1}{3} \\sqrt[3]{\\frac{z}{x}} \\cdot \\frac{1}{3} \\sqrt[3]{\\frac{z}{x}} }\\\\\n&= 6 \\cdot \\sqrt[6]{ \\frac{1}{2 \\cdot 2 \\cdot 3 \\cdot 3 \\cdot 3} \\frac{x}{y} \\frac{y}{z} \\frac{z}{x} }\\\\\n&= 2^{2/3} \\cdot 3^{1/2}.\n\\end{align}" }, { "math_id": 19, "text": "f(x,y,z) = 2^{2/3} \\cdot 3^{1/2} \\quad \\mbox{when} \\quad \\frac{x}{y} = \\frac{1}{2} \\sqrt{\\frac{y}{z}} = \\frac{1}{3} \\sqrt[3]{\\frac{z}{x}}." }, { "math_id": 20, "text": "(x,y,z)=\\biggr(t,\\sqrt[3]{2}\\sqrt{3}\\,t,\\frac{3\\sqrt{3}}{2}\\,t\\biggr)\\quad\\mbox{with}\\quad t>0." }, { "math_id": 21, "text": "\\log \\left(\\frac { \\sum_i x_i}{n} \\right) \\geq \\sum \\frac{1}{n} \\log x_i = \\sum \\left( \\log x_i^{1/n}\\right) = \\log \\left( \\prod x_i^{1/n}\\right). 
" }, { "math_id": 22, "text": "\\alpha = \\frac{x_1+x_2+\\cdots+x_n}{n} \\ge \\sqrt[n]{x_1x_2 \\cdots x_n}=\\beta" }, { "math_id": 23, "text": "x_i,x_j" }, { "math_id": 24, "text": "x_i<\\alpha<x_j" }, { "math_id": 25, "text": "\\alpha" }, { "math_id": 26, "text": "(x_i+x_j-\\alpha)" }, { "math_id": 27, "text": "\\alpha(x_j+x_i-\\alpha)-x_ix_j=(\\alpha-x_i)(x_j-\\alpha)>0" }, { "math_id": 28, "text": "(n-1)" }, { "math_id": 29, "text": "\\sqrt[n]{\\alpha\\alpha \\cdots \\alpha}=\\alpha" }, { "math_id": 30, "text": "0<x_i<\\beta<x_j" }, { "math_id": 31, "text": "x_i" }, { "math_id": 32, "text": "\\beta" }, { "math_id": 33, "text": "x_j" }, { "math_id": 34, "text": "\\frac{x_ix_j}{\\beta}" }, { "math_id": 35, "text": "x_i + x_j - \\beta - \\frac{x_i x_j}\\beta = \\frac{(\\beta - x_i)(x_j - \\beta)}\\beta > 0" }, { "math_id": 36, "text": "\\alpha^n\\ge x_1 x_2 \\cdots x_n" }, { "math_id": 37, "text": " (n+1)\\alpha=\\ x_1 + \\cdots + x_n + x_{n+1}." }, { "math_id": 38, "text": "x_n - \\alpha > 0\\qquad \\alpha-x_{n+1}>0" }, { "math_id": 39, "text": "\\implies (x_n-\\alpha)(\\alpha-x_{n+1})>0\\,.\\qquad(*)" }, { "math_id": 40, "text": "y:=x_n+x_{n+1}-\\alpha\\ge x_n-\\alpha>0\\,," }, { "math_id": 41, "text": "(n+1)\\alpha=x_1 + \\cdots + x_{n-1} + x_n + x_{n+1}" }, { "math_id": 42, "text": "n\\alpha=x_1 + \\cdots + x_{n-1} + \\underbrace{x_n+x_{n+1}-\\alpha}_{=\\,y}," }, { "math_id": 43, "text": "\\alpha^{n+1}=\\alpha^n\\cdot\\alpha\\ge x_1x_2 \\cdots x_{n-1} y\\cdot\\alpha.\\qquad(**)" }, { "math_id": 44, "text": "(\\underbrace{x_n+x_{n+1}-\\alpha}_{=\\,y})\\alpha-x_nx_{n+1}=(x_n-\\alpha)(\\alpha-x_{n+1})>0," }, { "math_id": 45, "text": "y\\alpha>x_nx_{n+1}\\,,\\qquad({*}{*}{*})" }, { "math_id": 46, "text": "\\alpha^{n+1}>x_1x_2 \\cdots x_{n-1} x_nx_{n+1}\\,," }, { "math_id": 47, "text": " x_1 + x_2 > x_1x_2+1." }, { "math_id": 48, "text": " x_2 - x_1x_2 > 1 - x_1," }, { "math_id": 49, "text": "x_1 + \\cdots + x_n \\ge n." }, { "math_id": 50, "text": "(x_1 + \\cdots + x_{n-2}) + (x_{n-1} x_n ) > n - 1." }, { "math_id": 51, "text": "\\begin{align}\nx_1 + \\cdots + x_{n-2} + x_{n-1} + x_n & = (x_1 + \\cdots + x_{n-2}) + (x_{n-1} + x_n )\n\\\\ &> (x_1 + \\cdots + x_{n-2}) + x_{n-1} x_n + 1\n\\\\ & > n,\n\\end{align}" }, { "math_id": 52, "text": "x_1 = \\frac{a_1}{\\sqrt[n]{a_1\\cdots a_n}}, . . ., x_n = \\frac{a_n}{\\sqrt[n]{a_1\\cdots a_n}}. 
" }, { "math_id": 53, "text": "\\frac{a_1}{\\sqrt[n]{a_1\\cdots a_n}} + \\cdots + \\frac{a_n}{\\sqrt[n]{a_1\\cdots a_n}} \\ge n, " }, { "math_id": 54, "text": "\\frac{a_1 + \\cdots + a_n}n \\ge \\sqrt[n]{a_1\\cdots a_n}, " }, { "math_id": 55, "text": "x_1 = x_2 = \\cdots = x_n," }, { "math_id": 56, "text": "\\begin{align}\n\\Bigl(\\frac{x_1+x_2}{2}\\Bigr)^2-x_1x_2\n&=\\frac14(x_1^2+2x_1x_2+x_2^2)-x_1x_2\\\\\n&=\\frac14(x_1^2-2x_1x_2+x_2^2)\\\\\n&=\\Bigl(\\frac{x_1-x_2}{2}\\Bigr)^2>0,\n\\end{align} " }, { "math_id": 57, "text": "\n\\frac{x_1 + x_2}{2} > \\sqrt{x_1 x_2}" }, { "math_id": 58, "text": "\n\\begin{align}\n\\frac{x_1 + x_2 + \\cdots + x_{2^k}}{2^k} & {} =\\frac{\\frac{x_1 + x_2 + \\cdots + x_{2^{k-1}}}{2^{k-1}} + \\frac{x_{2^{k-1} + 1} + x_{2^{k-1} + 2} + \\cdots + x_{2^k}}{2^{k-1}}}{2} \\\\[7pt]\n& \\ge \\frac{\\sqrt[2^{k-1}]{x_1 x_2 \\cdots x_{2^{k-1}}} + \\sqrt[2^{k-1}]{x_{2^{k-1} + 1} x_{2^{k-1} + 2} \\cdots x_{2^k}}}{2} \\\\[7pt]\n& \\ge \\sqrt{\\sqrt[2^{k-1}]{x_1 x_2 \\cdots x_{2^{k-1}}} \\sqrt[2^{k-1}]{x_{2^{k-1} + 1} x_{2^{k-1} + 2} \\cdots x_{2^k}}} \\\\[7pt]\n& = \\sqrt[2^k]{x_1 x_2 \\cdots x_{2^k}}\n\\end{align}\n" }, { "math_id": 59, "text": "x_1 = x_2 = \\cdots = x_{2^{k-1}}" }, { "math_id": 60, "text": "x_{2^{k-1}+1} = x_{2^{k-1}+2} = \\cdots = x_{2^k}" }, { "math_id": 61, "text": "\\frac{x_1 + x_2 + \\cdots + x_{2^k}}{2^k} > \\sqrt[2^k]{x_1 x_2 \\cdots x_{2^k}}" }, { "math_id": 62, "text": "x_{n+1} = x_{n+2} = \\cdots = x_m = \\alpha." }, { "math_id": 63, "text": "\n\\begin{align}\n\\alpha & = \\frac{x_1 + x_2 + \\cdots + x_n}{n} \\\\[6pt]\n& = \\frac{\\frac{m}{n} \\left( x_1 + x_2 + \\cdots + x_n \\right)}{m} \\\\[6pt]\n& = \\frac{x_1 + x_2 + \\cdots + x_n + \\frac{(m-n)}{n} \\left( x_1 + x_2 + \\cdots + x_n \\right)}{m} \\\\[6pt]\n& = \\frac{x_1 + x_2 + \\cdots + x_n + \\left( m-n \\right) \\alpha}{m} \\\\[6pt]\n& = \\frac{x_1 + x_2 + \\cdots + x_n + x_{n+1} + \\cdots + x_m}{m} \\\\[6pt]\n& \\ge \\sqrt[m]{x_1 x_2 \\cdots x_n x_{n+1} \\cdots x_m} \\\\[6pt]\n& = \\sqrt[m]{x_1 x_2 \\cdots x_n \\alpha^{m-n}}\\,,\n\\end{align}\n" }, { "math_id": 64, "text": "\\alpha^m \\ge x_1 x_2 \\cdots x_n \\alpha^{m-n}" }, { "math_id": 65, "text": "\\alpha \\ge \\sqrt[n]{x_1 x_2 \\cdots x_n}" }, { "math_id": 66, "text": "\\frac{x_1 + \\cdots + x_n + x_{n+1}}{n+1} - ({x_1 \\cdots x_n x_{n+1}})^{\\frac{1}{n+1}}\\ge0" }, { "math_id": 67, "text": " f(t)=\\frac{x_1 + \\cdots + x_n + t}{n+1} - ({x_1 \\cdots x_n t})^{\\frac{1}{n+1}},\\qquad t>0." }, { "math_id": 68, "text": "f'(t)=\\frac{1}{n+1}-\\frac{1}{n+1}({x_1 \\cdots x_n})^{\\frac{1}{n+1}}t^{-\\frac{n}{n+1}},\\qquad t>0." }, { "math_id": 69, "text": "({x_1 \\cdots x_n})^{\\frac{1}{n+1}}t_0^{-\\frac{n}{n+1}}=1." 
}, { "math_id": 70, "text": "t_0^{\\frac{n}{n+1}}=({x_1 \\cdots x_n})^{\\frac{1}{n+1}}," }, { "math_id": 71, "text": "t_0=({x_1 \\cdots x_n})^{\\frac{1}n}," }, { "math_id": 72, "text": "\n\\begin{align}\nf(t_0) &= \\frac{x_1 + \\cdots + x_n + ({x_1 \\cdots x_n})^{1/n}}{n+1} - ({x_1 \\cdots x_n})^{\\frac{1}{n+1}}({x_1 \\cdots x_n})^{\\frac{1}{n(n+1)}}\\\\\n&= \\frac{x_1 + \\cdots + x_n}{n+1} + \\frac{1}{n+1}({x_1 \\cdots x_n})^{\\frac{1}n} - ({x_1 \\cdots x_n})^{\\frac{1}n}\\\\\n&= \\frac{x_1 + \\cdots + x_n}{n+1} - \\frac{n}{n+1}({x_1 \\cdots x_n})^{\\frac{1}n}\\\\\n&= \\frac{n}{n+1}\\Bigl(\\frac{x_1 + \\cdots + x_n}n - ({x_1 \\cdots x_n})^{\\frac{1}n}\\Bigr)\n\\\\ &\\ge0,\n\\end{align}" }, { "math_id": 73, "text": "\\begin{align}{ \\frac{x_1}{\\alpha} \\frac{x_2}{\\alpha} \\cdots \\frac{x_n}{\\alpha} } &\\le { e^{\\frac{x_1}{\\alpha} - 1} e^{\\frac{x_2}{\\alpha} - 1} \\cdots e^{\\frac{x_n}{\\alpha} - 1} }\\\\\n& = \\exp \\Bigl( \\frac{x_1}{\\alpha} - 1 + \\frac{x_2}{\\alpha} - 1 + \\cdots + \\frac{x_n}{\\alpha} - 1 \\Bigr), \\qquad (*)\n\\end{align}" }, { "math_id": 74, "text": "\\begin{align}\n\\frac{x_1}{\\alpha} - 1 + \\frac{x_2}{\\alpha} - 1 + \\cdots + \\frac{x_n}{\\alpha} - 1 & = \\frac{x_1 + x_2 + \\cdots + x_n}{\\alpha} - n \\\\\n& = \\frac{n \\alpha}{\\alpha} - n \\\\\n& = 0.\n\\end{align}" }, { "math_id": 75, "text": "\\frac{x_1 x_2 \\cdots x_n}{\\alpha^n} \\le e^0 = 1," }, { "math_id": 76, "text": "\\sqrt[n]{x_1 x_2 \\cdots x_n} \\le \\alpha." }, { "math_id": 77, "text": "0" }, { "math_id": 78, "text": "\\prod_{i=1}^n x_i = 1" }, { "math_id": 79, "text": "G(x_1,x_2,\\ldots,x_n)=\\prod_{i=1}^n x_i" }, { "math_id": 80, "text": "F(x_1,x_2,\\ldots,x_n) = \\frac{1}{n}\\sum_{i=1}^n x_i" }, { "math_id": 81, "text": "F(x_1,x_2,...,x_n)," }, { "math_id": 82, "text": "G(x_1,x_2,\\ldots,x_n) = 1," }, { "math_id": 83, "text": "1" }, { "math_id": 84, "text": "x_1 = x_2 = \\cdots = x_n = 1" }, { "math_id": 85, "text": "K = \\{(x_1,x_2,\\ldots,x_n) \\colon 0 \\leq x_1,x_2,\\ldots,x_n \\leq n\\}" }, { "math_id": 86, "text": "K \\cap \\{G = 1\\}" }, { "math_id": 87, "text": "F(x_1,x_2,...,x_n)" }, { "math_id": 88, "text": "G(x_1,x_2,\\ldots,x_n) = 1" }, { "math_id": 89, "text": " (x_1,x_2,\\ldots,x_n) \\in K " }, { "math_id": 90, "text": "K" }, { "math_id": 91, "text": "x_i > n" }, { "math_id": 92, "text": "F(x_1,x_2,\\ldots,x_n) > 1 " }, { "math_id": 93, "text": "F(1,1,\\ldots,1) = 1" }, { "math_id": 94, "text": "(1,1,\\ldots,1) \\in K \\cap \\{G = 1\\} " }, { "math_id": 95, "text": "F" }, { "math_id": 96, "text": "(y_1,y_2,\\ldots, y_n)" }, { "math_id": 97, "text": "(1,1,\\ldots,1)" }, { "math_id": 98, "text": "(x_1,x_2,\\ldots,x_n)" }, { "math_id": 99, "text": "F(x_1,x_2,\\ldots,x_n)" }, { "math_id": 100, "text": "\\lambda" }, { "math_id": 101, "text": "G(x_1,x_2,\\ldots,x_n)" }, { "math_id": 102, "text": "F(x_1,x_2,...,x_n) = 1." }, { "math_id": 103, "text": "\\frac{\\partial F}{\\partial x_i} = \\frac{1}{n}" }, { "math_id": 104, "text": "\\frac{\\partial G}{\\partial x_i} = \\prod_{j \\neq i}x_j = \\frac{G(x_1,x_2,\\ldots,x_n)}{x_i} = \\frac{1}{x_i}" }, { "math_id": 105, "text": "i" }, { "math_id": 106, "text": "\\frac{1}{n} = \\frac{\\lambda}{x_i}," }, { "math_id": 107, "text": "n\\lambda= x_i." 
}, { "math_id": 108, "text": "x_1 = x_2 = \\cdots = x_n" }, { "math_id": 109, "text": "G(x_1,x_2,\\ldots, x_n) = 1" }, { "math_id": 110, "text": " x_1 = x_2 = \\cdots = x_n = 1" }, { "math_id": 111, "text": "F(x_1,x_2,\\ldots,x_n) = 1" }, { "math_id": 112, "text": "\\frac{w_1 x_1 + w_2 x_2 + \\cdots + w_n x_n}{w} \\ge \\sqrt[w]{x_1^{w_1} x_2^{w_2} \\cdots x_n^{w_n}}" }, { "math_id": 113, "text": "\\sum_{i=1}^n w_ix_i \\geq \\prod_{i=1}^n x_i^{w_i} + \\sum_{i=1}^n w_i\\left(x_i^{\\frac{1}{2}} -\\sum_{i=1}^n w_ix_i^{\\frac{1}{2}} \\right)^2 " }, { "math_id": 114, "text": "\\begin{align}\n\\ln\\Bigl(\\frac{w_1x_1+\\cdots+w_nx_n}w\\Bigr) & >\\frac{w_1}w\\ln x_1+\\cdots+\\frac{w_n}w\\ln x_n \\\\\n& =\\ln \\sqrt[w]{x_1^{w_1} x_2^{w_2} \\cdots x_n^{w_n}}.\n\\end{align}" }, { "math_id": 115, "text": "\n\\frac{w_1x_1+\\cdots+w_nx_n}w\n>\\sqrt[w]{x_1^{w_1} x_2^{w_2} \\cdots x_n^{w_n}}.\n" }, { "math_id": 116, "text": "A" }, { "math_id": 117, "text": "B" }, { "math_id": 118, "text": "A B" }, { "math_id": 119, "text": "|||\\cdot|||" }, { "math_id": 120, "text": "\n|||AB|||\\leq \\frac{1}{2}|||A^2 + B^2|||\n" }, { "math_id": 121, "text": "\n|||AB||| \\leq \\frac{1}{4}|||(A+B)^2|||\n" }, { "math_id": 122, "text": "n=2" }, { "math_id": 123, "text": "n" }, { "math_id": 124, "text": "\n|||(AB)^{\\frac{1}{2}}|||\\leq \\frac{1}{2}|||A+B|||\n" }, { "math_id": 125, "text": "\\sqrt{\\sigma_j(AB)}\\leq \\frac{1}{2}\\lambda_j(A+B), \\ j=1, \\ldots, n." }, { "math_id": 126, "text": "r_n=\\frac{V_n - V_{n-1}}{V_{n-1}}," }, { "math_id": 127, "text": "V_n" }, { "math_id": 128, "text": "V_{n-1}" }, { "math_id": 129, "text": "n-1" }, { "math_id": 130, "text": "g_N=\\left(\\prod_{n = 1}^N(1+r_n)\\right)^{1/N}," }, { "math_id": 131, "text": "a_N=\\frac1N \\sum_{n = 1}^Nr_n." }, { "math_id": 132, "text": "1+g_N=\\frac{1+a_N}{\\sqrt{1+\\frac{\\sigma^2}{(1+a_N)^2}}}," }, { "math_id": 133, "text": "\\sigma^2" }, { "math_id": 134, "text": "z=(1+a_N)^2," }, { "math_id": 135, "text": "z^2 - (1+g)^2 - (1+g)^2\\sigma^2 = 0." }, { "math_id": 136, "text": "a_N = \\pm \\frac{1+g_N}{\\sqrt{2}}\\sqrt{1 \\pm \\sqrt{1+\\frac{4\\sigma^2}{(1+g_N)^2}}}-1." }, { "math_id": 137, "text": " \\sqrt{1+\\frac{4\\sigma^2}{(1+g_N)^2}} \\geq 1. " }, { "math_id": 138, "text": "a_N = \\pm \\frac{1+g_N}{\\sqrt{2}}\\sqrt{1 + \\sqrt{1+\\frac{4\\sigma^2}{(1+g_N)^2}}}-1." }, { "math_id": 139, "text": "a_N = \\frac{1+g_N}{\\sqrt{2}}\\sqrt{1 + \\sqrt{1+\\frac{4\\sigma^2}{(1+g_N)^2}}}-1." } ]
https://en.wikipedia.org/wiki?curid=605011
60504486
Local differential privacy
Local differential privacy (LDP) is a model of differential privacy with the added requirement that if an adversary has access to the personal responses of an individual in the database, that adversary will still be unable to learn much of the user's personal data. This is contrasted with global differential privacy, a model of differential privacy that incorporates a central aggregator with access to the raw data. Local differential privacy (LDP) is an approach to mitigate the concern of data fusion and analysis techniques used to expose individuals to attacks and disclosures. LDP is a well-known privacy model for distributed architectures that aims to provide privacy guarantees for each user while collecting and analyzing data, protecting from privacy leaks for the client and server. LDP has been widely adopted to alleviate contemporary privacy concerns in the era of big data. History. In 2003, Alexandre V. Evfimievski, Johannes Gehrke, and Ramakrishnan Srikant gave a definition equivalent to local differential privacy. In 2008, Kasiviswanathan et al. gave a formal definition conforming to the now-standard definition of differential privacy. The prototypical example of a mechanism with local differential privacy is the randomized response survey technique proposed by Stanley L. Warner in 1965. Warner's innovation was the introduction of the “untrusted curator” model, where the entity collecting the data may not be trustworthy. Before users' responses are sent to the curator, the answers are randomized in a controlled manner, guaranteeing differential privacy while still allowing valid population-wide statistical inferences. Applications. The era of big data exhibits a high demand for machine learning services that provide privacy protection for users. Demand for such services has pushed research into algorithmic paradigms that provably satisfy specific privacy requirements. Anomaly Detection. Anomaly detection is formally defined as the process of identifying unexpected items or events in data sets. The rise of social networking in the current era has led to many potential concerns related to information privacy. As more and more users rely on social networks, they are often threatened by privacy breaches, unauthorized access to personal information, and leakage of sensitive data. To attempt to solve this issue, the authors of "Anomaly Detection over Differential Preserved Privacy in Online Social Networks" have proposed a model using a social network utilizing restricted local differential privacy. By using this model, it aims for improved privacy preservation through anomaly detection. In this paper, the authors propose a privacy preserving model that sanitizes the collection of user information from a social network utilizing restricted local differential privacy (LDP) to save synthetic copies of collected data. This model uses reconstructed data to classify user activity and detect abnormal network behavior. The experimental results demonstrate that the proposed method achieves high data utility on the basis of improved privacy preservation. Furthermore, local differential privacy sanitized data are suitable for use in subsequent analyses, such as anomaly detection. Anomaly detection on the proposed method’s reconstructed data achieves a detection accuracy similar to that on the original data. Blockchain Technology. Potential combinations of blockchain technology with local differential privacy have received research attention. 
Blockchains implement distributed, secured, and shared ledgers used to record and track data within a decentralized network, and they have successfully replaced certain prior systems of economic transactions within and between organizations. Increased usage of blockchains has raised some questions regarding privacy and security of the data they store, and local differential privacy of various kinds has been proposed as a desirable property for blockchains containing sensitive data. Context-Free Privacy. Local differential privacy provides context-free privacy even in the absence of a trusted data collector, though often at the expense of a significant drop in utility. The classical definition of LDP assumes that all elements in the data domain are equally sensitive. However, in many applications, some symbols are more sensitive than others. A context-aware framework of local differential privacy can allow a privacy designer to incorporate the application’s context into the privacy definition. For binary data domains, algorithmic research has provided a universally optimal privatization scheme and highlighted its connections to Warner’s randomized response (RR) and Mangat’s improved response. For k-ary data domains, motivated by geolocation and web search applications, researchers have considered at least two special cases of context-aware LDP: block-structured LDP and high-low LDP (the latter has also been defined elsewhere in the literature). The research has provided communication-efficient, sample-optimal schemes and information theoretic lower bounds for both models. Facial Recognition. Facial recognition has become more and more widespread in recent years. Recent smartphones, for example, utilize facial recognition to unlock the user’s phone as well as to authorize payments with their credit card. Though this is convenient, it poses privacy concerns. It is a resource-intensive task that often involves third parties, often resulting in a gap where the user’s privacy could be compromised. Biometric information delivered to untrusted third-party servers in an uncontrolled manner can constitute a significant privacy leak, as biometrics can be correlated with sensitive data such as healthcare or financial records. In an academic article, Chamikara proposes a privacy-preserving technique for “controlled information release” that disguises an original face image and prevents leakage of the biometric features while still identifying a person. He introduces a new privacy-preserving face recognition protocol named PEEP (Privacy using Eigenface Perturbation) that utilizes local differential privacy. PEEP applies perturbation to Eigenfaces utilizing differential privacy and stores only the perturbed data on the third-party servers to run a standard Eigenface recognition algorithm. As a result, the trained model will not be vulnerable to privacy attacks such as membership inference and model memorization attacks. This model proposed by Chamikara shows a potential solution to this privacy leakage issue. Federated Learning (FL). Federated learning aims to protect data privacy through distributed learning methods that keep the data in its original storage location. Likewise, differential privacy (DP) aims to improve the protection of data privacy by measuring the privacy loss in the communication among the elements of federated learning.
The prospective matching of federated learning and differential privacy to the challenges of data privacy protection has led to the release of several software tools that support their functionalities, but these tools lack a unified vision of the two techniques and a methodological workflow that supports their usage. In a study sponsored by the Andalusian Research Institute in Data Science and Computational Intelligence, researchers developed Sherpa.ai FL, an open-research unified FL and DP framework that aims to foster the research and development of AI services at the edges and to preserve data privacy. The characteristics of FL and DP tested and summarized in the study suggest that they are good candidates to support AI services at the edges and to preserve data privacy; in particular, the study found that setting the value of formula_0 to lower values guarantees higher privacy at the cost of lower accuracy. Health Data Aggregation. The rise of technology not only changes the way we work and live our everyday lives; its changes to the health industry are also prominent in the big data era. With the rapid growth in the scale of health data, the limited storage and computation resources of wireless body area sensor networks are becoming a barrier to the health industry keeping up with this growth. Aiming to solve this, outsourcing encrypted health data to the cloud has been an appealing strategy. However, as with all choices, there are potential downsides: data aggregation becomes more difficult, and the sensitive information of patients in the healthcare industry becomes more vulnerable to data breaches. In their academic article, "Privacy-Enhanced and Multifunctional Health Data Aggregation under Differential Privacy Guarantees," Hao Ren and his team propose a privacy-enhanced and multifunctional health data aggregation scheme (PMHA-DP) under differential privacy. This aggregation function is designed to protect the aggregated data from cloud servers. The performance evaluation in their study shows that the proposal leads to less communication overhead than the existing data aggregation models currently in place. Internet Connected Vehicles. The idea of having internet access in one's car would have been only a dream in the last century. However, most up-to-date vehicles now contain this feature for the convenience of their users. Though convenient, this poses yet another threat to the user's privacy. The internet of connected vehicles (IoV) is expected to enable intelligent traffic management, intelligent dynamic information services, intelligent vehicle control, etc. However, vehicles’ data privacy is argued to be a major barrier toward the application and development of IoV, thus attracting a wide range of attention. Local differential privacy (LDP) is a relaxed version of the privacy standard of differential privacy, and it can protect users’ data privacy against an untrusted third party in the worst adversarial setting. The computational cost of using LDP is one concern among researchers, as it is quite expensive to implement for such a specific model, given that the model needs high mobility and short connection times. Furthermore, as the number of vehicles increases, the frequent communication between vehicles and the cloud server incurs unexpected amounts of communication cost.
To avoid the privacy threat and reduce the communication cost, researchers propose to integrate federated learning and local differential privacy (LDP) to facilitate the crowdsourcing applications to achieve the machine learning model. Phone Blacklisting. The topic of spam phone calls has been increasingly relevant, and though it has been a growing nuisance to the current digital world, researchers have been looking at potential solutions in minimizing this issue. To counter this increasingly successful attack vector, federal agencies such as the US Federal Trade Commission (FTC) have been working with telephone carriers to design systems for blocking robocalls. Furthermore, a number of commercial and smartphone apps that promise to block spam phone calls have been created, but they come with a subtle cost. The user’s privacy information that comes with giving the app the access to block spam calls may be leaked without the user’s consent or knowledge of it even occurring. In the study, the researchers analyze the challenges and trade-offs related to using local differential privacy, evaluate the LDP-based system on real-world user-reported call records collected by the FTC, and show that it is possible to learn a phone blacklist using a reasonable overall privacy budget and at the same time preserve users’ privacy while maintaining utility for the learned blacklist. Trajectory Cross-Correlation Constraint. Aiming to solve the problem of low data utilization and privacy protection, a personalized differential privacy protection method based on cross-correlation constraints is proposed by researcher Hu. By protecting sensitive location points on the trajectory and the sensitive points, this extended differential privacy protection model combines the sensitivity of the user’s trajectory location and user privacy protection requirements and privacy budget. Using autocorrelation Laplace transform, specific white noise is transformed into noise that is related to the user's real trajectory sequence in both time and space. This noise data is used to find the cross-correlation constraint mechanics of the trajectory sequence in the model. By proposing this model, the researcher Hu's personalized differential privacy protection method is broken down and addresses the issue of adding independent and uncorrelated noise and the same degree of scrambling results in low privacy protection and poor data availability. ε-local differential privacy. Definition of ε-local differential privacy. Let ε be a positive real number and formula_1 be a randomized algorithm that takes a user's private data as input. Let formula_2 denote the image of formula_1. The algorithm formula_1 is said to provide formula_3-local differential privacy if, for all pairs of users' possible private data formula_4 and formula_5 and all subsets formula_6 of formula_2: formula_7 where the probability is taken over the random measure implicit in the algorithm. The main difference between this definition of local differential privacy and the definition of standard (global) differential privacy is that in standard differential privacy the probabilities are of the outputs of an algorithm that takes all users' data and here it is on an algorithm that takes a single user's data. Other formal definitions of local differential privacy concern algorithms that categorize all users' data as input and output a collection of all responses (such as the definition in Raef Bassily, Kobbi Nissim, Uri Stemmer and Abhradeep Guha Thakurta's 2017 paper). 
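The randomized response technique mentioned in the History section is the standard illustration of a mechanism satisfying the definition above. The Python sketch below is illustrative, not from the article; the function names, ε value, and population fraction are assumptions. It reports a private bit truthfully with probability e^ε/(e^ε + 1), so the ratio of output probabilities for any two inputs is at most e^ε, and then debiases the aggregated reports.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability p = e^eps / (e^eps + 1), else flip it.
    Since p / (1 - p) = e^eps, the ratio of output probabilities for any two
    inputs is at most e^eps, so the mechanism is eps-locally differentially private."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def debiased_mean(reports, epsilon):
    """Unbiased estimate of the true fraction of 1s from the randomized reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

epsilon = 1.0
true_bits = [int(random.random() < 0.3) for _ in range(100_000)]   # 30% of users hold the bit
reports = [randomized_response(b, epsilon) for b in true_bits]
print(debiased_mean(reports, epsilon))                             # close to 0.3
```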
Deployment. Algorithms guaranteeing local differential privacy have been deployed in several internet companies: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\epsilon" }, { "math_id": 1, "text": "\\mathcal{A}" }, { "math_id": 2, "text": "\\textrm{im} \\mathcal{A}" }, { "math_id": 3, "text": "\\varepsilon" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "x^\\prime" }, { "math_id": 6, "text": "S" }, { "math_id": 7, "text": "\\frac{\\Pr[\\mathcal{A}(x) \\in S]}{\\Pr[\\mathcal{A}(x^\\prime) \\in S]} \\leq e^{\\varepsilon}" } ]
https://en.wikipedia.org/wiki?curid=60504486
60504494
Differentially private analysis of graphs
Method of graph analysis Differentially private analysis of graphs studies algorithms for computing accurate graph statistics while preserving differential privacy. Such algorithms are used for data represented in the form of a graph where nodes correspond to individuals and edges correspond to relationships between them. For example, edges could correspond to friendships, sexual relationships, or communication patterns. A party that collected sensitive graph data can process it using a differentially private algorithm and publish the output of the algorithm. The goal of differentially private analysis of graphs is to design algorithms that compute accurate global information about graphs while preserving the privacy of individuals whose data is stored in the graph. Variants. Differential privacy imposes a restriction on the algorithm. Intuitively, it requires that the algorithm has roughly the same output distribution on neighboring inputs. If the input is a graph, there are two natural notions of neighboring inputs, edge neighbors and node neighbors, which yield two natural variants of differential privacy for graph data. Let ε be a positive real number and formula_0 be a randomized algorithm that takes a graph as input and returns an output from a set formula_1. The algorithm formula_0 is formula_2-differentially private if, for all neighboring graphs formula_3 and formula_4 and all subsets formula_5 of formula_1, formula_6 where the probability is taken over the randomness used by the algorithm. Edge differential privacy. Two graphs are edge neighbors if they differ in one edge. An algorithm is formula_2-edge-differentially private if, in the definition above, the notion of edge neighbors is used. Intuitively, an edge differentially private algorithm has similar output distributions on any pair of graphs that differ in one edge, thus protecting changes to graph edges. Node differential privacy. Two graphs are node neighbors if one can be obtained from the other by deleting a node and its adjacent edges. An algorithm is formula_2-node-differentially private if, in the definition above, the notion of node neighbors is used. Intuitively, a node differentially private algorithm has similar output distributions on any pair of graphs that differ in one node and the edges adjacent to it, thus protecting information pertaining to each individual. Node differential privacy gives stronger privacy protection than edge differential privacy. Research history. The first edge differentially private algorithm was designed by Nissim, Raskhodnikova, and Smith. The distinction between edge and node differential privacy was first discussed by Hay, Miklau, and Jensen. However, it took several years before the first node differentially private algorithms were published in Blocki et al., Kasiviswanathan et al., and Chen and Zhou. In all three papers, the algorithms are for releasing a single statistic, like a triangle count or counts of other subgraphs. Raskhodnikova and Smith gave the first node differentially private algorithm for releasing a vector, specifically, the degree count and the degree distribution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
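As a minimal illustration of edge differential privacy (not an algorithm from the papers cited above), the following Python sketch releases the number of edges of a graph with Laplace noise; under edge neighbors the edge count has sensitivity 1, so noise of scale 1/ε suffices. The sampling helper, function names, and example graph are assumptions made for the example.

```python
import math
import random

def laplace_noise(scale):
    """Draw a Laplace(0, scale) sample by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def edge_private_edge_count(edges, epsilon):
    """Release the number of edges with eps-edge-differential privacy: under edge
    neighbors the count changes by at most 1, so Laplace noise of scale 1/eps suffices."""
    return len(edges) + laplace_noise(1.0 / epsilon)

edges = {(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)}
print(edge_private_edge_count(edges, epsilon=0.5))   # a noisy count near the true value 5
```

Under node neighbors the same statistic can change by up to the maximum degree, which is one reason node differentially private algorithms are harder to design.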
[ { "math_id": 0, "text": "\\mathcal{A}" }, { "math_id": 1, "text": "\\mathcal{O}" }, { "math_id": 2, "text": "\\epsilon" }, { "math_id": 3, "text": "G_1" }, { "math_id": 4, "text": "G_2" }, { "math_id": 5, "text": "S" }, { "math_id": 6, "text": "\\Pr[\\mathcal{A}(G_1) \\in S] \\leq e^{\\epsilon} \\times \\Pr[\\mathcal{A}(G_2) \\in S]," } ]
https://en.wikipedia.org/wiki?curid=60504494
60504564
Additive noise differential privacy mechanisms
Adding controlled noise from predetermined distributions is a way of designing differentially private mechanisms. This technique is useful for designing private mechanisms for real-valued functions on sensitive data. Some commonly used distributions for adding noise include the Laplace and Gaussian distributions. Definition. Let formula_0 be the collection of all datasets and formula_1 be a real-valued function. The "sensitivity" of a function, denoted formula_2, is defined by formula_3 where the maximum is over all pairs of datasets formula_4 and formula_5 in formula_0 differing in at most one element. For functions with higher dimensions, the sensitivity is usually measured under the formula_6 or formula_7 norm. Throughout this article, formula_8 is used to denote a randomized algorithm that releases a sensitive function formula_9 under formula_10- (or formula_11-) differential privacy. Real-valued functions. Laplace Mechanism. Introduced by Dwork et al., this mechanism adds noise drawn from a Laplace distribution: formula_12 where formula_13 is the expectation of the Laplace distribution and formula_14 is the scale parameter. Roughly speaking, small-scale noise suffices for a weak privacy constraint (corresponding to a large value of formula_10), while a greater level of noise provides a greater degree of uncertainty about the original input (corresponding to a small value of formula_10). To argue that the mechanism satisfies formula_10-differential privacy, it suffices to show that the output distribution of formula_15 is "close in a multiplicative sense" to formula_16 everywhere: formula_17 The first inequality follows from the triangle inequality and the second from the sensitivity bound. A similar argument gives a lower bound of formula_18. A discrete variant of the Laplace mechanism, called the geometric mechanism, is "universally utility-maximizing". This means that for any prior (such as auxiliary information or beliefs about data distributions) and any symmetric and monotone univariate loss function, the expected loss of any differentially private mechanism can be matched or improved by running the geometric mechanism followed by a data-independent post-processing transformation. The result also holds for minimax (risk-averse) consumers. No such universal mechanism exists for multi-variate loss functions. Gaussian Mechanism. Analogous to the Laplace mechanism, the Gaussian mechanism adds noise drawn from a Gaussian distribution whose variance is calibrated according to the sensitivity and privacy parameters. For any formula_19 and formula_20, the mechanism defined by: formula_21 provides formula_11-differential privacy. Note that, unlike the Laplace mechanism, formula_22 only satisfies formula_23-differential privacy with formula_24. To prove this, it is sufficient to show that, with probability at least formula_25, the distribution of formula_26 is close to formula_27 (see Appendix A in Dwork and Roth for a proof of this result). High dimensional functions. For high dimensional functions of the form formula_28, where formula_29, the sensitivity of formula_9 is measured under the formula_6 or formula_7 norm.
The equivalent Gaussian mechanism that satisfies formula_23-differential privacy for such a function (still under the assumption that formula_24) is formula_30 where formula_31 represents the sensitivity of formula_9 under the formula_7 norm and formula_32 represents a formula_33-dimensional vector in which each coordinate is noise sampled according to formula_34, independently of the other coordinates (see Appendix A in Dwork and Roth for a proof). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
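A minimal Python sketch of the two real-valued mechanisms defined above, using the noise scales stated in this article; the function names and the example query value are assumptions of the example, not part of the source.

```python
import math
import random

def laplace_mechanism(value, sensitivity, epsilon):
    """Release value + Lap(0, sensitivity / eps), which gives eps-differential privacy."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise

def gaussian_mechanism(value, sensitivity, epsilon, delta):
    """Release value + N(0, sigma^2) with sigma^2 = 2 ln(1.25/delta) (sensitivity/eps)^2,
    which gives (eps, delta)-differential privacy when eps < 1."""
    sigma = math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon
    return value + random.gauss(0.0, sigma)

true_answer = 42.0   # e.g. a counting query, whose sensitivity is 1
print(laplace_mechanism(true_answer, sensitivity=1.0, epsilon=0.5))
print(gaussian_mechanism(true_answer, sensitivity=1.0, epsilon=0.5, delta=1e-5))
```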
[ { "math_id": 0, "text": "\\mathcal{D}" }, { "math_id": 1, "text": "f\\colon \\mathcal{D} \\to \\R" }, { "math_id": 2, "text": "\\Delta f" }, { "math_id": 3, "text": "\\Delta f=\\max | f(x)-f(y) |," }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "y" }, { "math_id": 6, "text": "\\ell_1" }, { "math_id": 7, "text": "\\ell_2" }, { "math_id": 8, "text": "\\mathcal{M}" }, { "math_id": 9, "text": "f" }, { "math_id": 10, "text": "\\epsilon" }, { "math_id": 11, "text": "(\\epsilon,\\delta)" }, { "math_id": 12, "text": "\\mathcal{M}_\\mathrm{Lap}(x,f,\\epsilon) = f(x) + \\mathrm{Lap}\\left(\\mu = 0, b = \\frac{\\Delta f}{\\epsilon}\\right)," }, { "math_id": 13, "text": "\\mu" }, { "math_id": 14, "text": "b" }, { "math_id": 15, "text": "\\mathcal{M}_\\mathrm{Lap}(x,f,\\epsilon)" }, { "math_id": 16, "text": "\\mathcal{M}_\\mathrm{Lap}(y,f,\\epsilon)" }, { "math_id": 17, "text": "\n\\begin{align}\n\\frac{\\mathrm{Pr}(\\mathcal{M}_\\mathrm{Lap}(x,f,\\epsilon) = z)}{\\mathrm{Pr}(\\mathcal{M}_\\mathrm{Lap}(y,f,\\epsilon) = z)} &= \\frac{\\mathrm{Pr}(f(x) + \\mathrm{Lap}(0,\\frac{\\Delta f}{\\epsilon}) = z)}{\\mathrm{Pr}(f(y) + \\mathrm{Lap}(0,\\frac{\\Delta f}{\\epsilon}) = z)}\\\\\n&= \\frac{\\mathrm{Pr}(\\mathrm{Lap}(0, \\frac{\\Delta f}{\\epsilon}) = z-f(x))}{\\mathrm{Pr}(\\mathrm{Lap}(0,\\frac{\\Delta f}{\\epsilon}) = z-f(y))}\\\\\n&= \\frac{\\frac{1}{2b}\\exp\\left(- \\frac{|z-f(x)|}{b}\\right)}{\\frac{1}{2b}\\exp\\left(- \\frac{|z-f(y)|}{b}\\right)}\\\\\n&= \\exp\\left(\\frac{|z-f(y)|-|z-f(x)|}{b}\\right)\\\\\n&\\leq \\exp\\left(\\frac{|f(y)-f(x)|}{b}\\right)\\\\\n&\\leq \\exp\\left(\\frac{\\Delta f}{b}\\right) = \\exp(\\epsilon).\n\\end{align}\n" }, { "math_id": 18, "text": "\\exp(-\\epsilon)" }, { "math_id": 19, "text": "\\delta\\in(0,1)" }, { "math_id": 20, "text": "\\epsilon\\in(0,1)" }, { "math_id": 21, "text": "\\mathcal M_\\text{Gauss}(x, f, \\epsilon, \\delta) = f(x) + \\mathcal N\\left(\\mu = 0, \\sigma^2 = \\frac{2 \\ln(1.25/\\delta) \\cdot (\\Delta f)^2}{\\epsilon^2}\\right)" }, { "math_id": 22, "text": "\\mathcal{M}_\\text{Gauss}" }, { "math_id": 23, "text": "(\\epsilon, \\delta)" }, { "math_id": 24, "text": "\\epsilon<1" }, { "math_id": 25, "text": "1-\\delta" }, { "math_id": 26, "text": "\\mathcal{M}_\\text{Gauss}(x, f, \\epsilon, \\delta)" }, { "math_id": 27, "text": "\\mathcal{M}_\\text{Gauss}(y, f, \\epsilon, \\delta)" }, { "math_id": 28, "text": "f\\colon \\mathcal{D} \\to \\R^d" }, { "math_id": 29, "text": "d \\geq 2" }, { "math_id": 30, "text": "\\mathcal{M}_\\text{Gauss}(x, f, \\epsilon, \\delta) = f(x) + \\mathcal{N}^d \\left(\\mu = 0, \\sigma^2 = \\frac{2 \\ln (1.25/\\delta) \\cdot (\\Delta_2 f)^2}{\\epsilon^2}\\right), " }, { "math_id": 31, "text": "\\Delta_2 f" }, { "math_id": 32, "text": "\\mathcal{N}^d(0, \\sigma^2)" }, { "math_id": 33, "text": "d" }, { "math_id": 34, "text": "\\mathcal{N}(0, \\sigma^2)" } ]
https://en.wikipedia.org/wiki?curid=60504564
60504977
Reconstruction attack
A reconstruction attack is any method for partially reconstructing a private dataset from public aggregate information. Typically, the dataset contains sensitive information about individuals, whose privacy needs to be protected. The attacker has no or only partial access to the dataset, but has access to public aggregate statistics about the dataset, which could be exact or distorted, for example by adding noise. If the public statistics are not sufficiently distorted, the attacker is able to accurately reconstruct a large portion of the original private data. Reconstruction attacks are relevant to the analysis of private data, as they show that, in order to preserve even a very weak notion of individual privacy, any published statistics need to be sufficiently distorted. This phenomenon was called the Fundamental Law of Information Recovery by Dwork and Roth, and formulated as "overly accurate answers to too many questions will destroy privacy in a spectacular way." The Dinur-Nissim Attack. In 2003, Irit Dinur and Kobbi Nissim proposed a reconstruction attack based on noisy answers to multiple statistical queries. Their work was recognized by the 2013 ACM PODS Alberto O. Mendelzon Test-of-Time Award in part for being the seed for the development of differential privacy. Dinur and Nissim model a "private database" as a sequence of bits formula_0, where each bit is the private information of a single individual. A "database query" is specified by a subset formula_1, and is defined to equal formula_2. They show that, given approximate answers formula_3 to queries specified by sets formula_4, such that formula_5 for all formula_6, if formula_7 is sufficiently small and formula_8 is sufficiently large, then an attacker can reconstruct most of the private bits in formula_9. Here the error bound formula_7 can be a function of formula_8 and formula_10. Dinur and Nissim's attack works in two regimes: in one regime, formula_8 is exponential in formula_10, and the error formula_7 can be linear in formula_10; in the other regime, formula_8 is polynomial in formula_10, and the error formula_7 is on the order of formula_11.
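A simplified numerical illustration of this setting (ours; not the exact Dinur–Nissim procedure, which is based on linear programming): answer many random subset-sum queries with bounded noise and reconstruct the bits by least squares followed by rounding.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 800                          # database size and number of queries
d = rng.integers(0, 2, size=n)           # private bits d_1, ..., d_n

# Each query is a random subset S_i, answered with noise of magnitude at most E.
S = rng.integers(0, 2, size=(m, n)).astype(float)   # row i indicates S_i
E = 2.0
answers = S @ d + rng.uniform(-E, E, size=m)

# Reconstruct by least squares and rounding; a linear program, as in the
# Dinur-Nissim analysis, would also work and gives the stated guarantees.
d_hat, *_ = np.linalg.lstsq(S, answers, rcond=None)
d_hat = (d_hat > 0.5).astype(int)
print("fraction of bits recovered:", np.mean(d_hat == d))
```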
[ { "math_id": 0, "text": "D = (d_1, \\ldots, d_n)" }, { "math_id": 1, "text": "S\\subseteq \\{1, \\ldots, n\\}" }, { "math_id": 2, "text": "q_S(D) = \\sum_{i \\in S}{d_i}" }, { "math_id": 3, "text": "a_1, \\ldots, a_m" }, { "math_id": 4, "text": "S_1, \\ldots, S_m" }, { "math_id": 5, "text": "|a_i - q_{S_i}(D)| \\le \\mathcal{E}" }, { "math_id": 6, "text": "i \\in \\{1, \\ldots, m\\}" }, { "math_id": 7, "text": "\\mathcal{E}" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "D" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "\\sqrt{n}" } ]
https://en.wikipedia.org/wiki?curid=60504977
6050735
Step potential
In quantum mechanics and scattering theory, the one-dimensional step potential is an idealized system used to model incident, reflected and transmitted matter waves. The problem consists of solving the time-independent Schrödinger equation for a particle with a step-like potential in one dimension. Typically, the potential is modeled as a Heaviside step function. Calculation. Schrödinger equation and potential function. The time-independent Schrödinger equation for the wave function formula_0 is formula_1 where "Ĥ" is the Hamiltonian, "ħ" is the reduced Planck constant, "m" is the mass, and "E" is the energy of the particle. The step potential is simply the product of "V"0, the height of the barrier, and the Heaviside step function: formula_2 The barrier is positioned at "x" = 0, though any position "x"0 may be chosen without changing the results, simply by shifting the position of the step by −"x"0. The first term in the Hamiltonian, formula_3 is the kinetic energy of the particle. Solution. The step divides space into two parts: "x" &lt; 0 and "x" &gt; 0. In each of these parts the potential is constant, meaning the particle is quasi-free, and the solution of the Schrödinger equation can be written as a superposition of left- and right-moving waves (see free particle) formula_4 formula_5 where subscripts 1 and 2 denote the regions "x" &lt; 0 and "x" &gt; 0 respectively, and the subscripts (→) and (←) on the amplitudes "A" and "B" denote the direction of the particle's velocity vector: right and left respectively. The wave vectors in the respective regions are formula_6 formula_7 both of which have the same form as the de Broglie relation (in one dimension) formula_8. Boundary conditions. The coefficients "A", "B" have to be found from the boundary conditions of the wave function at "x" = 0. The wave function and its derivative have to be continuous everywhere, so: formula_9 formula_10 Inserting the wave functions, the boundary conditions give the following restrictions on the coefficients: formula_11 formula_12 Transmission and reflection. It is useful to compare the situation to the classical case. In both cases, the particle behaves as a free particle outside of the barrier region. A classical particle with energy "E" larger than the barrier height "V"0 will be slowed down but never reflected by the barrier, while a classical particle with "E" &lt; "V"0 incident on the barrier from the left would always be reflected. Once we have found the quantum-mechanical result we will return to the question of how to recover the classical limit. To study the quantum case, consider the following situation: a particle incident on the barrier from the left side with amplitude "A"→. It may be reflected ("A"←) or transmitted ("B"→). Here and in the following assume "E" &gt; "V"0. To find the amplitudes for reflection and transmission for incidence from the left, we set in the above equations "A"→ = 1 (incoming particle), "A"← = √"R" (reflection), "B"← = 0 (no incoming particle from the right) and "B"→ = √("T" "k"1/"k"2) (transmission). We then solve for "T" and "R". The result is: formula_13 formula_14 The model is symmetric under a parity transformation combined with an interchange of "k"1 and "k"2. For incidence from the right, the amplitudes for transmission and reflection are therefore formula_15 formula_16 Analysis of the expressions. Energy less than step height ("E" &lt; "V"0). For energies "E" &lt; "V"0, the wave function to the right of the step decays exponentially over a distance formula_17; in this regime "k"2 is imaginary, and its magnitude sets the decay length. 
Energy greater than step height ("E" &gt; "V"0). In this energy range the transmission and reflection coefficients differ from the classical case. They are the same for incidence from the left and from the right: formula_18 formula_19 In the limit of large energies "E" ≫ "V"0, we have "k"1 ≈ "k"2 and the classical result "T" = 1, "R" = 0 is recovered. Thus there is a finite probability for a particle with an energy larger than the step height to be reflected. Negative steps. A quantum particle likewise reflects off a large potential drop (just as it does off a large potential step). This can be understood in terms of impedance mismatches, but it is classically counter-intuitive. Classical limit. The result obtained for "R" depends only on the ratio "E"/"V"0. This seems superficially to violate the correspondence principle, since we obtain a finite probability of reflection regardless of the value of Planck's constant or the mass of the particle. For example, we seem to predict that when a marble rolls to the edge of a table, there can be a large probability that it is reflected back rather than falling off. Consistency with classical mechanics is restored by eliminating the unphysical assumption that the step potential is discontinuous. When the step function is replaced with a ramp that spans some finite distance "w", the probability of reflection approaches zero in the limit formula_20, where "k" is the wavenumber of the particle. Relativistic calculation. The relativistic calculation of a free particle colliding with a step potential can be obtained using relativistic quantum mechanics. For the case of spin-1/2 fermions, such as electrons and neutrinos, the solutions of the Dirac equation for high energy barriers produce transmission and reflection coefficients that are not bounded. This phenomenon is known as the Klein paradox. The apparent paradox disappears in the context of quantum field theory. Applications. The Heaviside step potential mainly serves as an exercise in introductory quantum mechanics, as the solution requires understanding of a variety of quantum mechanical concepts: wavefunction normalization, continuity, incident/reflection/transmission amplitudes, and probabilities. A problem similar to the one considered here appears in the physics of normal-metal-superconductor interfaces. Quasiparticles are scattered at the pair potential, which in the simplest model may be assumed to have a step-like shape. The solution of the Bogoliubov-de Gennes equation resembles that of the Heaviside step potential discussed here. In the superconductor-normal-metal case this gives rise to Andreev reflection. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
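For concreteness, a short numerical sketch (ours, not from the source) of the transmission and reflection probabilities for energies above the step height, in Python:

```python
import numpy as np

HBAR = 1.0545718e-34    # reduced Planck constant, J*s
M_E = 9.1093837e-31     # electron mass, kg

def step_coefficients(E, V0, m=M_E):
    """Transmission and reflection probabilities for a step potential, E > V0 >= 0."""
    k1 = np.sqrt(2 * m * E) / HBAR
    k2 = np.sqrt(2 * m * (E - V0)) / HBAR
    T = 4 * k1 * k2 / (k1 + k2) ** 2
    R = (k1 - k2) ** 2 / (k1 + k2) ** 2
    return T, R

# Example: a 2 eV electron hitting a 1 eV step; T + R == 1.
eV = 1.602176634e-19
T, R = step_coefficients(2 * eV, 1 * eV)
print(T, R, T + R)
```

Since the coefficients depend only on the ratio of the two wave vectors, the mass and Planck's constant cancel out of T and R, consistent with the remark above that R depends only on E/V0.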
[ { "math_id": 0, "text": "\\psi(x)" }, { "math_id": 1, "text": "\\hat H\\psi(x) = \\left[-\\frac{\\hbar^2}{2m} \\frac{d^2}{dx^2} + V(x)\\right]\\psi(x) = E\\psi(x)," }, { "math_id": 2, "text": "V(x) = \\begin{cases} 0, & x < 0 \\\\ V_0, & x \\ge 0 \\end{cases}" }, { "math_id": 3, "text": "-\\frac{\\hbar^2}{2m} \\frac{d^2}{dx^2}\\psi" }, { "math_id": 4, "text": "\\psi_1(x)= \\left(A_\\rightarrow e^{i k_1 x} + A_\\leftarrow e^{-ik_1x}\\right)\\quad x<0, " }, { "math_id": 5, "text": "\\psi_2(x)= \\left(B_\\rightarrow e^{i k_2 x} + B_\\leftarrow e^{-ik_2x}\\right)\\quad x>0" }, { "math_id": 6, "text": "k_1=\\sqrt{2m E/\\hbar^2}," }, { "math_id": 7, "text": "k_2=\\sqrt{2m (E-V_0)/\\hbar^2}" }, { "math_id": 8, "text": "p=\\hbar k" }, { "math_id": 9, "text": "\\psi_1(0)=\\psi_2(0)," }, { "math_id": 10, "text": "\\left.\\frac{d\\psi_1}{dx}\\right|_{x=0} = \\left.\\frac{d\\psi_2}{dx}\\right|_{x=0}." }, { "math_id": 11, "text": "(A_\\rightarrow+A_\\leftarrow)=(B_\\rightarrow+B_\\leftarrow)" }, { "math_id": 12, "text": "k_1(A_\\rightarrow-A_\\leftarrow)=k_2(B_\\rightarrow-B_\\leftarrow)" }, { "math_id": 13, "text": "\\sqrt{T}=\\frac{2\\sqrt{k_1 k_2}}{k_1+k_2}" }, { "math_id": 14, "text": "\\sqrt{R}=\\frac{k_1-k_2}{k_1+k_2}." }, { "math_id": 15, "text": "\\sqrt{T'}=\\sqrt{T}=\\frac{2\\sqrt{k_1k_2}}{k_1+k_2}" }, { "math_id": 16, "text": "\\sqrt{R'}=-\\sqrt{R}=\\frac{k_2-k_1}{k_1+k_2}." }, { "math_id": 17, "text": "1/(k_2)" }, { "math_id": 18, "text": "T=|T'|=\\frac{4k_1 k_2}{(k_1+k_2)^2}" }, { "math_id": 19, "text": "R=|R'|=1-T=\\frac{(k_1-k_2)^2}{(k_1+k_2)^2}" }, { "math_id": 20, "text": "wk \\to \\infty" } ]
https://en.wikipedia.org/wiki?curid=6050735
6050755
Stress–strain index
Measure of bone strength The stress–strain index (SSI) of a bone is a surrogate measure of bone strength determined from a cross-sectional scan by QCT or pQCT (radiological scan). The stress–strain index is used to compare the structural parameters determined by analysis of QCT/pQCT cross-sectional scans to the results of a three-point bending test. Definition. It is calculated using the following formula: formula_0 Where: History and relation to moments of inertia. It was developed by the manufacturer of a peripheral quantitative CT (pQCT) scanner, and is considered by some to be an improvement over the information provided by calculating the area moments of inertia and polar moments of inertia.
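The symbols in the formula are not spelled out above. On the common pQCT reading (an assumption here, not stated in the source), the sum runs over the cortical voxels of the cross-section, with r_i the distance of voxel i from the section's centre, a the voxel area, CD the measured cortical density of the voxel, ND a normal-density reference, and r_max the largest r_i. Under that reading, a sketch in Python:

```python
import numpy as np

def stress_strain_index(r, voxel_area, cd, nd=1200.0):
    """SSI = sum_i(r_i^2 * a * CD_i / ND) / r_max, under the assumed symbol meanings.

    r          : distances of cortical voxels from the section centre (mm)
    voxel_area : area of a single voxel (mm^2)
    cd         : measured cortical density per voxel (mg/cm^3)
    nd         : normal-density reference (mg/cm^3); 1200 is a commonly quoted value
    """
    r = np.asarray(r, dtype=float)
    cd = np.asarray(cd, dtype=float)
    return float(np.sum(r**2 * voxel_area * (cd / nd)) / r.max())

# Toy example with three cortical voxels.
ssi = stress_strain_index(r=[8.0, 9.0, 10.0], voxel_area=0.16, cd=[1100.0, 1150.0, 1180.0])
```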
[ { "math_id": 0, "text": "\\text{SSI} = \\sum_{i=0}^n {{r_i^2 a (\\frac{CD}{ND})} \\over {r_\\text{max}}}" } ]
https://en.wikipedia.org/wiki?curid=6050755
605155
Elwin Bruno Christoffel
German mathematician and physicist Elwin Bruno Christoffel (10 November 1829 – 15 March 1900) was a German mathematician and physicist. He introduced fundamental concepts of differential geometry, opening the way for the development of tensor calculus, which would later provide the mathematical basis for general relativity. Life. Christoffel was born on 10 November 1829 in Montjoie (now Monschau) in Prussia into a family of cloth merchants. He was initially educated at home in languages and mathematics, then attended the Jesuit Gymnasium and the Friedrich-Wilhelms Gymnasium in Cologne. In 1850 he went to the University of Berlin, where he studied mathematics with Gustav Dirichlet (who had a strong influence on him) among others, as well as attending courses in physics and chemistry. He received his doctorate in Berlin in 1856 for a thesis on the motion of electricity in homogeneous bodies, written under the supervision of Martin Ohm, Ernst Kummer and Heinrich Gustav Magnus. After receiving his doctorate, Christoffel returned to Montjoie, where he spent the following three years in isolation from the academic community. However, he continued to study mathematics (especially mathematical physics) from books by Bernhard Riemann, Dirichlet and Augustin-Louis Cauchy. He also continued his research, publishing two papers in differential geometry. In 1859 Christoffel returned to Berlin, earning his habilitation and becoming a Privatdozent at the University of Berlin. In 1862 he was appointed to a chair at the Polytechnic School in Zürich left vacant by Dedekind. At the young institution (it had been established only seven years earlier) he organised a new institute of mathematics that was highly appreciated. He also continued to publish research, and in 1868 he was elected a corresponding member of the Prussian Academy of Sciences and of the Istituto Lombardo in Milan. In 1869 Christoffel returned to Berlin as a professor at the Gewerbeakademie (now part of Technische Universität Berlin), with Hermann Schwarz succeeding him in Zürich. However, strong competition from the close proximity to the University of Berlin meant that the Gewerbeakademie could not attract enough students to sustain advanced mathematical courses, and Christoffel left Berlin again after three years. In 1872 Christoffel became a professor at the University of Strasbourg, a centuries-old institution that was being reorganized into a modern university after Prussia's annexation of Alsace-Lorraine in the Franco-Prussian War. Christoffel, together with his colleague Theodor Reye, built a reputable mathematics department at Strasbourg. He continued to publish research and had several doctoral students, including Rikitaro Fujisawa, Ludwig Maurer and Paul Epstein. Christoffel retired from the University of Strasbourg in 1894, being succeeded by Heinrich Weber. After retirement he continued to work and publish, with the last treatise finished just before his death and published posthumously. Christoffel died on 15 March 1900 in Strasbourg. He never married and left no family. Work. Differential geometry. Christoffel is mainly remembered for his seminal contributions to differential geometry. In a famous 1869 paper on the equivalence problem for differential forms in "n" variables, published in Crelle's Journal, he introduced the fundamental technique later called covariant differentiation and used it to define the Riemann–Christoffel tensor (the most common method used to express the curvature of Riemannian manifolds). 
In the same paper he introduced the Christoffel symbols formula_0 and formula_1, which express the components of the Levi-Civita connection with respect to a system of local coordinates. Christoffel's ideas were generalized and greatly developed by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita, who turned them into the concept of tensors and the absolute differential calculus. The absolute differential calculus, later named tensor calculus, forms the mathematical basis of the general theory of relativity. Complex analysis. Christoffel contributed to complex analysis, where the Schwarz–Christoffel mapping is the first nontrivial constructive application of the Riemann mapping theorem. The Schwarz–Christoffel mapping has many applications to the theory of elliptic functions and to areas of physics. In the field of elliptic functions he also published results concerning abelian integrals and theta functions. Numerical analysis. Christoffel generalized the Gaussian quadrature method for integration and, in connection with this, he also introduced the Christoffel–Darboux formula for Legendre polynomials (he later also published the formula for general orthogonal polynomials). Other research. Christoffel also worked on potential theory and the theory of differential equations; however, much of his research in these areas went unnoticed. He published two papers on the propagation of discontinuities in the solutions of partial differential equations which represent pioneering work in the theory of shock waves. He also studied physics and published research in optics; however, his contributions here quickly lost their utility with the abandonment of the concept of the luminiferous aether. Honours. Christoffel was elected as a corresponding member of several academies: Christoffel was also awarded two distinctions for his activity by the Kingdom of Prussia: Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Gamma_{kij}" }, { "math_id": 1, "text": "\\Gamma^{k}_{ij}" } ]
https://en.wikipedia.org/wiki?curid=605155
60531312
Error exponents in hypothesis testing
In statistical hypothesis testing, the error exponent of a hypothesis testing procedure is the rate at which the probabilities of Type I and Type II errors decay exponentially with the size of the sample used in the test. For example, if the probability of error formula_0 of a test decays as formula_1, where formula_2 is the sample size, the error exponent is formula_3. Formally, the error exponent of a test is defined as the limiting value of the ratio of the negative logarithm of the error probability to the sample size for large sample sizes: formula_4. Error exponents for different hypothesis tests are computed using Sanov's theorem and other results from large deviations theory. Error exponents in binary hypothesis testing. Consider a binary hypothesis testing problem in which observations are modeled as independent and identically distributed random variables under each hypothesis. Let formula_5 denote the observations. Let formula_6 denote the probability density function of each observation formula_7 under the null hypothesis formula_8 and let formula_9 denote the probability density function of each observation formula_7 under the alternate hypothesis formula_10. In this case there are two possible error events. An error of type 1, also called a false positive, occurs when the null hypothesis is true and it is wrongly rejected. An error of type 2, also called a false negative, occurs when the alternate hypothesis is true and the null hypothesis is not rejected. The probability of a type 1 error is denoted formula_11 and the probability of a type 2 error is denoted formula_12. Optimal error exponent for Neyman–Pearson testing. In the Neyman–Pearson version of binary hypothesis testing, one is interested in minimizing the probability of type 2 error formula_13 subject to the constraint that the probability of type 1 error formula_14 is less than or equal to a pre-specified level formula_15. In this setting, the optimal testing procedure is a likelihood-ratio test. Furthermore, the optimal test guarantees that the type 2 error probability decays exponentially in the sample size formula_2 according to formula_16. The error exponent formula_17 is the Kullback–Leibler divergence between the probability distributions of the observations under the two hypotheses. This exponent is also referred to as the Chernoff–Stein lemma exponent. Optimal error exponent for average error probability in Bayesian hypothesis testing. In the Bayesian version of binary hypothesis testing, one is interested in minimizing the average error probability under both hypotheses, assuming a prior probability of occurrence for each hypothesis. Let formula_18 denote the prior probability of hypothesis formula_19. In this case the average error probability is given by formula_20. In this setting, a likelihood-ratio test is again optimal, and the optimal error probability decays as formula_21, where formula_22 represents the Chernoff information between the two distributions, defined as formula_23. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
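A small numerical sketch (ours, using NumPy and SciPy) that evaluates both exponents for a pair of discrete distributions: the Kullback–Leibler divergence gives the Chernoff–Stein exponent and the Chernoff information gives the Bayesian average-error exponent.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_divergence(p, q):
    """D(p || q) for discrete distributions: the Chernoff-Stein exponent."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def chernoff_information(p, q):
    """C(p, q) = max over lambda in [0, 1] of -log sum_x p(x)^lambda q(x)^(1-lambda)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    def log_moment(lam):
        return np.log(np.sum(p**lam * q**(1.0 - lam)))
    res = minimize_scalar(log_moment, bounds=(0.0, 1.0), method="bounded")
    return float(-res.fun)

# Two hypotheses about a biased coin: H0 with heads probability 0.3, H1 with 0.6.
f0, f1 = [0.7, 0.3], [0.4, 0.6]
print(kl_divergence(f0, f1))         # Neyman-Pearson (Stein) exponent
print(chernoff_information(f0, f1))  # Bayesian average-error exponent
```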
[ { "math_id": 0, "text": "P_{\\mathrm{error}}" }, { "math_id": 1, "text": "e^{-n \\beta}" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "\\beta" }, { "math_id": 4, "text": "\\lim_{n \\to \\infty}\\frac{-\\ln P_\\text{error}}{n}" }, { "math_id": 5, "text": " Y_1, Y_2, \\ldots, Y_n " }, { "math_id": 6, "text": " f_0 " }, { "math_id": 7, "text": "Y_i" }, { "math_id": 8, "text": "H_0" }, { "math_id": 9, "text": " f_1 " }, { "math_id": 10, "text": "H_1" }, { "math_id": 11, "text": "P (\\mathrm{error}\\mid H_0)" }, { "math_id": 12, "text": "P (\\mathrm{error}\\mid H_1)" }, { "math_id": 13, "text": "P (\\text{error}\\mid H_1)" }, { "math_id": 14, "text": "P (\\text{error}\\mid H_0)" }, { "math_id": 15, "text": "\\alpha" }, { "math_id": 16, "text": "\\lim_{n \\to \\infty} \\frac{- \\ln P (\\mathrm{error}\\mid H_1)}{n} = D(f_0\\parallel f_1)" }, { "math_id": 17, "text": "D(f_0\\parallel f_1)" }, { "math_id": 18, "text": " \\pi_0 " }, { "math_id": 19, "text": " H_0 " }, { "math_id": 20, "text": " P_\\text{ave} = \\pi_0 P (\\text{error}\\mid H_0) + (1-\\pi_0)P (\\text{error}\\mid H_1)" }, { "math_id": 21, "text": " \\lim_{n \\to \\infty} \\frac{- \\ln P_\\text{ave} }{n} = C(f_0,f_1)" }, { "math_id": 22, "text": "C(f_0,f_1)" }, { "math_id": 23, "text": " C(f_0,f_1) = \\max_{\\lambda \\in [0,1]} \\left[-\\ln \\int (f_0(x))^\\lambda (f_1(x))^{(1-\\lambda)} \\, dx \\right]" } ]
https://en.wikipedia.org/wiki?curid=60531312