Dataset columns: id (string, 2–8 chars), title (string, 1–130 chars), text (string, 0–252k chars), formulas (list, 1–823 items), url (string, 38–44 chars).
10950483
Tubulin GTPase
Class of enzymes Tubulin GTPase (EC 3.6.5.6) is an enzyme with systematic name "GTP phosphohydrolase (microtubule-releasing)". This enzyme catalyses the following chemical reaction: GTP + H2O formula_0 GDP + phosphate. This enzyme participates in tubulin folding and division plane formation. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=10950483
10951180
Interest Equalization Tax
Interest Equalization Tax was a domestic tax measure implemented by U.S. President John F. Kennedy in July 1963. It was meant to make it less profitable for U.S. investors to invest abroad by taxing the purchase of foreign securities. The design of the tax was to reduce the balance-of-payments deficit. Originally intended to be a temporary tax, it lasted until 1974. Purpose. The purpose of the tax was to decrease the balance-of-payments deficit in the US. This was achieved conceptually by making investments in foreign securities less appealing. Because the tax raised the effective price of foreign securities, investors would buy fewer of them, all else equal. With fewer domestic investors purchasing foreign securities, capital outflows would be lower, thereby reducing the balance-of-payments deficit. The equation for the balance of payments is: formula_0 The identity for the capital account is: formula_1 So when capital outflows decrease, the capital account increases, and when the capital account increases, the balance of payments increases. Dates Effective. The tax was effective on purchases made after July 18, 1963. It was scheduled to expire on January 1, 1966, but was extended multiple times, and eventually abolished in January 1974. Estimated Revenue. The tax was expected to raise $30 million per year. Effect on the Deficit. As the original intent of the Interest Equalization Tax was to reduce the balance-of-payments deficit, a majority consider the tax successful. However, since many factors influence the balance-of-payments account, the tax's precise effect is unclear; there was nonetheless a positive trend in the years after it was enacted. Effect on Financial Markets. The interest equalization tax "brought American investment activity in foreign markets to a virtual standstill." However, financial markets responded over time with massive evasion of the tax, along with the development of the eurodollar market. References. <templatestyles src="Reflist/styles.css" />
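The identities above can be illustrated with a small numerical sketch in Python; the figures below are invented for illustration and are not historical data, but they show why a drop in capital outflows raises the capital account and, with it, the balance of payments:

# Minimal numerical sketch of the two identities above (formula_0 and formula_1),
# using invented illustrative figures (billions of USD), not historical data.

def capital_account(change_foreign_owned_domestic, change_domestic_owned_foreign):
    # Capital account = change in foreign ownership of domestic assets
    #                 - change in domestic ownership of foreign assets (capital outflows)
    return change_foreign_owned_domestic - change_domestic_owned_foreign

def balance_of_payments(current_account, capital_account, balancing_item=0.0):
    return current_account + capital_account + balancing_item

# Before the tax: larger capital outflows (domestic purchases of foreign securities).
before = balance_of_payments(-2.0, capital_account(3.0, 5.0))
# After the tax: outflows fall because foreign securities are less attractive.
after = balance_of_payments(-2.0, capital_account(3.0, 4.0))

print(before, after)   # -4.0 -3.0 -> smaller outflows raise the capital account and the BOP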
[ { "math_id": 0, "text": "\\text{BOP} = \\text{current account} + \\text{capital account} \\pm \\text{balancing item} \\," }, { "math_id": 1, "text": "\n\\begin{align}\n \\mbox{Capital account} & = \\mbox{Change in foreign ownership of domestic assets} \\\\\n & - \\mbox{Change in domestic ownership of foreign assets} \\\\\n \n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=10951180
10952021
Nitrate reductase (cytochrome)
Nitrate reductase (cytochrome) (EC 1.9.6.1, "respiratory nitrate reductase", "benzyl viologen-nitrate reductase") is an enzyme with systematic name "ferrocytochrome:nitrate oxidoreductase". This enzyme catalyses the following chemical reaction: 2 ferrocytochrome + 2 H+ + nitrate formula_0 2 ferricytochrome + nitrite. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=10952021
10952023
Nitrate reductase (NADPH)
Nitrate reductase (NADPH) (EC 1.7.1.3, "assimilatory nitrate reductase", "assimilatory reduced nicotinamide adenine dinucleotide phosphate-nitrate reductase", "NADPH-nitrate reductase", "assimilatory NADPH-nitrate reductase", "triphosphopyridine nucleotide-nitrate reductase", "NADPH:nitrate reductase", "nitrate reductase (NADPH2)", "NADPH2:nitrate oxidoreductase") is an enzyme with systematic name "nitrite:NADP+ oxidoreductase". This enzyme catalyses the following chemical reaction: nitrite + NADP+ + H2O formula_0 nitrate + NADPH + H+. Nitrate reductase is an iron-sulfur molybdenum flavoprotein. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=10952023
10952026
Nitrate reductase (NAD(P)H)
Nitrate reductase (NAD(P)H) (EC 1.7.1.2, "assimilatory nitrate reductase", "assimilatory NAD(P)H-nitrate reductase", "NAD(P)H bispecific nitrate reductase", "nitrate reductase (reduced nicotinamide adenine dinucleotide (phosphate))", "nitrate reductase NAD(P)H", "NAD(P)H-nitrate reductase", "nitrate reductase [NAD(P)H2]", "NAD(P)H2:nitrate oxidoreductase") is an enzyme with systematic name "nitrite:NAD(P)+ oxidoreductase". This enzyme catalyses the following chemical reaction: nitrite + NAD(P)+ + H2O formula_0 nitrate + NAD(P)H + H+. Nitrate reductase is an iron-sulfur molybdenum flavoprotein. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=10952026
10952027
Nitrate reductase (NADH)
Class of enzymes Nitrate reductase (NADH) (EC 1.7.1.1, "assimilatory nitrate reductase", "NADH-nitrate reductase", "NADH-dependent nitrate reductase", "assimilatory NADH: nitrate reductase", "nitrate reductase (NADH2)", "NADH2: nitrate oxidoreductase") is an enzyme with systematic name "nitrite: NAD+ oxidoreductase". This enzyme catalyzes the following chemical reaction: nitrite + NAD+ + H2O formula_0 nitrate + NADH + H+. Nitrate reductase is an iron-sulfur molybdenum flavoprotein. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=10952027
1095208
Trip generation
Trip generation is the first step in the conventional four-step transportation forecasting process used for forecasting travel demands. It predicts the number of trips originating in or destined for a particular traffic analysis zone (TAZ). Trip generation analysis focuses on residences, and residential trip generation is thought of as a function of the social and economic attributes of households. At the level of the traffic analysis zone, residential land uses "produce" or generate trips. Traffic analysis zones are also destinations of trips, that is, trip attractors. The analysis of attractors focuses on non-residential land uses. This process is followed by trip distribution, mode choice, and route assignment. Input data. A forecasting activity, such as one based on the concept of economic base analysis, provides aggregate measures of population and activity growth. Land use forecasting distributes forecast changes in activities in a disaggregate-spatial manner among zones. The next step in the transportation planning process addresses the question of the frequency of origins and destinations of trips in each zone: for short, trip generation. Analysis. Initial analysis. The first zonal trip generation (and its inverse, attraction) analysis in the Chicago Area Transportation Study (CATS) followed the “decay of activity intensity with distance from the central business district (CBD)” thinking current at the time. Data from extensive surveys were arrayed and interpreted on a distance-from-CBD scale. For example, commercial land use in ring 0 (the CBD and vicinity) was found to generate 728 vehicle trips per day in 1956. That same land use in ring 5, farther from the CBD, generated about 150 trips per day. The case of trip destinations illustrates how the concept of activity intensity declining with distance from the CBD was used. Destination data are arrayed: The land use analysis provides information on how land uses will change from an initial year (say t = 0) to some forecast year (say t = 20). Suppose we are examining a zone. We take the mix of land uses projected, say, for year t = 20 and apply the trip destination rates for the ring in which the zone is located. That is, there will be this many acres of commercial land use, that many acres of public open space, etc., in the zone. The acres of each use type are multiplied by the ring-specific destination rates. The result is summed to yield the zone’s trip destinations. The CATS assumed that trip destination rates would not change over time. Revisions to the analysis. As was true for land use analysis, the approach developed at CATS was considerably modified in later studies. The conventional four-step paradigm evolved as follows: Types of trips are considered. Home-based (residential) trips are divided into work and other, with major attention given to work trips. Movement associated with the home end of a trip is called trip production, whether the trip is leaving or coming to the home. Non-home-based or non-residential trips are those in which a home base is not involved. In this case, the term production is given to the origin of a trip and the term attraction refers to the destination of the trip. Residential trip generation analysis is often undertaken using statistical regression. 
Person, transit, walking, and auto trips per unit of time are regressed on variables thought to be explanatory, such as: household size, number of workers in the household, persons in an age group, type of residence (single family, apartment, etc.), and so on. Usually, measures on five to seven independent variables are available; additive causality is assumed. Regressions are also made at the aggregate/zone level. Variability among households within a zone isn’t measured when data are aggregated. High correlation coefficients are found when regressions are run on aggregate data, about 0.90, but lower coefficients, about 0.25, are found when regressions are made on observation units such as households. In short, there is much variability that is hidden by aggregation. Sometimes cross-classification techniques are applied to residential trip generation problems. The CATS procedure described above is a cross-classification procedure. Classification techniques are often used for non-residential trip generation. First, the type of land use is a factor influencing travel; it is regarded as a causal factor. A list of land uses and associated trip rates illustrated a simple version of the use of this technique: Such a list can be improved by adding information. Large, medium, and small might be defined for each activity and rates given by size. Number of employees might be used: for example, <10, 10-20, etc. Also, floor space is used to refine estimates. In other cases, regressions, usually of the form trip rate = f(number of employees, floor area of establishment), are made for land use types. Special treatment is often given to major trip generators: large shopping centers, airports, large manufacturing plants, and recreation facilities. The theoretical work related to trip generation analysis is grouped under the rubric travel demand theory, which treats trip generation-attraction, as well as mode choice, route selection, and other topics. Databases. The Institute of Transportation Engineers' "Trip Generation Manual" provides trip generation rates for various land use and building types. The planner can add local adjustment factors and treat mixes of uses with ease. Ongoing work is adding to the stockpile of numbers; over 4000 studies were aggregated for the latest edition. ITE Procedures estimate the number of trips entering and exiting a site at a given time. ITE Rates are functions of type of development based on independent variables such as square footage of the gross leasable area, number of gas pumps, number of dwelling units, or other standard measurable things, usually produced in site plans. They are typically of the form formula_0 or formula_1. They do not consider location, competitors, complements, the cost of transportation, or other factors. The trip generation estimates are provided through data analysis. Many localities require their use to ensure adequate public facilities for growth management and subdivision approval. In the United Kingdom and Ireland, the TRICS database is commonly used to calculate trip generation. References. <templatestyles src="Reflist/styles.css" />
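As a rough illustration of the ITE-style rate equations mentioned above (formula_0 and formula_1), the following Python sketch uses invented placeholder coefficients rather than published ITE rates:

import math

# Hypothetical illustration of the two ITE-style rate forms above:
# Trips = a + b * Area  and  Trips = a + b * ln(Area).
# The coefficients a and b below are invented placeholders, not published ITE rates.

def trips_linear(area_kft2, a=10.0, b=3.5):
    """Trip ends per day as a linear function of gross leasable area (1000 sq ft)."""
    return a + b * area_kft2

def trips_log(area_kft2, a=50.0, b=40.0):
    """Trip ends per day as a logarithmic function of gross leasable area."""
    return a + b * math.log(area_kft2)

for area in (10, 50, 100):   # e.g. small, medium and large retail sites
    print(area, round(trips_linear(area)), round(trips_log(area)))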
[ { "math_id": 0, "text": "Trips = a + b * Area" }, { "math_id": 1, "text": "Trips = a + b ln (Area) " } ]
https://en.wikipedia.org/wiki?curid=1095208
10952145
Cystathionine beta synthase
Mammalian protein found in humans Cystathionine-β-synthase, also known as CBS, is an enzyme (EC 4.2.1.22) that in humans is encoded by the "CBS" gene. It catalyzes the first step of the transsulfuration pathway, from homocysteine to cystathionine: L-serine + L-homocysteine formula_0 L-cystathionine + H2O. CBS uses the cofactor pyridoxal-phosphate (PLP) and can be allosterically regulated by effectors such as the ubiquitous cofactor "S"-adenosyl-L-methionine (adoMet). This enzyme belongs to the family of lyases, to be specific, the hydro-lyases, which cleave carbon-oxygen bonds. CBS is a multidomain enzyme composed of an N-terminal enzymatic domain and two CBS domains. The "CBS" gene is the most common locus for mutations associated with homocystinuria. Nomenclature. The systematic name of this enzyme class is L-serine hydro-lyase (adding homocysteine; L-cystathionine-forming). Other names in common use include: Methylcysteine synthase was assigned the EC number 4.2.1.23 in 1961 on the basis of what was a side-reaction of CBS; EC 4.2.1.23 was deleted in 1972. Structure. The human enzyme cystathionine β-synthase is a tetramer and comprises 551 amino acids with a subunit molecular weight of 61 kDa. It displays a modular organization of three modules, with the N-terminal heme domain followed by a core that contains the PLP cofactor. The cofactor is deep in the heme domain and is linked by a Schiff base. A Schiff base is a functional group containing a C=N bond with the nitrogen atom connected to an aryl or alkyl group. The heme domain is composed of 70 amino acids, and it appears that the heme only exists in mammalian CBS and is absent in yeast and protozoan CBS. At the C-terminus, the regulatory domain of CBS contains a tandem repeat of two CBS domains of β-α-β-β-α, a secondary structure motif found in other proteins. CBS has a C-terminal inhibitory domain. The C-terminal domain of cystathionine β-synthase regulates its activity via both intrasteric and allosteric effects and is important for maintaining the tetrameric state of the protein. This inhibition is alleviated by binding of the allosteric effector, adoMet, or by deletion of the regulatory domain; however, the magnitudes of the effects differ. Mutations in this domain are correlated with hereditary diseases. The heme domain contains an N-terminal loop that binds heme and provides the axial ligands C52 and H65. The distance of the heme from the PLP binding site suggests that it plays no direct role in catalysis; however, deletion of the heme domain causes loss of redox sensitivity, so it is hypothesized that the heme is a redox sensor. The presence of protoporphyrin IX makes CBS unique among PLP-dependent enzymes, and the heme is found only in mammalian CBS. "D. melanogaster" and "D. discoideum" have truncated N-terminal extensions and therefore lack the conserved histidine and cysteine heme ligand residues. However, the "Anopheles gambiae" sequence has a longer N-terminal extension than the human enzyme and contains the conserved histidine and cysteine heme ligand residues like the human enzyme. Therefore, it is possible that the CBS enzymes of slime molds and insects are hemeproteins, which would suggest that the heme domain is an early evolutionary innovation that arose before the separation of animals and slime molds. The PLP is bound as an internal aldimine, forming a Schiff base with K119 in the active site. 
Between the catalytic and regulatory domains exists a hypersensitive site that causes proteolytic cleavage and produces a truncated dimeric enzyme that is more active than the original enzyme. Neither the truncated enzyme nor the enzyme found in yeast is regulated by adoMet. The yeast enzyme is also activated by deletion of the C-terminal domain to produce the dimeric enzyme. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1JBQ and 1M54. Enzymatic activity. Transsulfuration, catalyzed by CBS, converts homocysteine to cystathionine, which cystathionine gamma-lyase converts to cysteine. CBS occupies a pivotal position in mammalian sulfur metabolism at the homocysteine junction, where the decision to conserve methionine or to convert it to cysteine via the transsulfuration pathway is made. Moreover, the transsulfuration pathway is the only pathway capable of removing sulfur-containing amino acids under conditions of excess. In analogy with other β-replacement enzymes, the reaction catalyzed by CBS is predicted to involve a series of adoMet-bound intermediates. Addition of serine results in a transaldimination reaction, which forms an external aldimine. The aldimine undergoes proton abstraction at the α-carbon followed by elimination to generate an aminoacrylate intermediate. Nucleophilic attack by the thiolate of homocysteine on the aminoacrylate and reprotonation at Cα generate the external aldimine of cystathionine. A final transaldimination reaction releases the final product, cystathionine. The final product, L-cystathionine, can also form an aminoacrylate intermediate, indicating that the entire reaction of CBS is reversible. The measured V0 of an enzyme-catalyzed reaction, in general, reflects the steady state (where [ES] is constant), even though V0 is limited to the early part of a reaction, and analysis of these initial rates is referred to as steady-state kinetics. Steady-state kinetic analysis of yeast CBS yields parallel lines. These results agree with the proposed ping-pong mechanism in which serine binding and release of water are followed by homocysteine binding and release of cystathionine. In contrast, the steady-state enzyme kinetics of rat CBS yields intersecting lines, indicating that the β-substituent of serine is not released from the enzyme prior to binding of homocysteine. One of the alternate reactions involving CBS is the condensation of cysteine with homocysteine to form cystathionine and hydrogen sulfide (H2S). H2S in the brain is produced from L-cysteine by CBS. This alternative metabolic pathway is also dependent on adoMet. CBS enzyme activity is not found in all tissues and cells. It is absent from heart, lung, testes, adrenal, and spleen in rats. In humans, it has been shown to be absent in heart muscle and primary cultures of human aortic endothelial cells. The lack of CBS in these tissues implies that these tissues are unable to synthesize cysteine and that cysteine must be supplied from extracellular sources. It also suggests that these tissues might have increased sensitivity to homocysteine toxicity because they cannot catabolize excess homocysteine via transsulfuration. Regulation. Allosteric activation of CBS by adoMet determines the metabolic fate of homocysteine. Mammalian CBS is activated 2.5-5-fold by AdoMet with a dissociation constant of 15 μM. AdoMet is an allosteric activator that increases the Vmax of the CBS reaction but does not affect the Km for the substrates. 
In other words, AdoMet stimulates CBS activity by increasing the turnover rate rather than the binding of substrates to the enzyme. This protein may use the morpheein model of allosteric regulation. Human CBS performs a crucial step in the biosynthetic pathway of cysteine by providing a regulatory control point for AdoMet. Homocysteine, after being methylated to methionine, can be converted to AdoMet, which donates methyl groups to a variety of substrates, e.g., neurotransmitters, proteins, and nucleic acids. AdoMet functions as an allosteric activator of CBS and exerts control on its biosynthesis: low concentrations of AdoMet result in low CBS activity, thereby funneling homocysteine into the transmethylation cycle toward AdoMet formation. In contrast, high adoMet concentrations funnel homocysteine into the transsulfuration pathway toward cysteine biosynthesis. In mammals, CBS is a highly regulated enzyme which contains a heme cofactor that functions as a redox sensor and can modulate its activity in response to changes in the redox potential. If the resting form of CBS in the cell has ferrous (Fe2+) heme, the potential exists for activating the enzyme under oxidizing conditions by conversion to the ferric (Fe3+) state. The Fe2+ form of the enzyme is inhibited upon binding CO or nitric oxide, whereas enzyme activity is doubled when the Fe2+ is oxidized to Fe3+. The redox state of the heme is pH dependent, with oxidation of Fe2+–CBS to Fe3+–CBS being favored at low pH conditions. Since mammalian CBS contains a heme cofactor, whereas the yeast enzyme and the protozoan enzyme from "Trypanosoma cruzi" do not, researchers have speculated that heme is not required for CBS activity. CBS is regulated at the transcriptional level by NF-Y, SP-1, and SP-3. In addition, it is upregulated transcriptionally by glucocorticoids and glycogen, and downregulated by insulin. Methionine upregulates CBS at the post-transcriptional level. Human disease. Down syndrome is a medical condition characterized by an overexpression of cystathionine beta synthase (CBS) and a low level of homocysteine in the blood. It has been speculated that cystathionine beta synthase overexpression could be the major culprit in this disease (along with dysfunction of GabaA and Dyrk1a). The phenotype of Down syndrome is the opposite of hyperhomocysteinemia (described below). Pharmacological inhibitors of CBS have been patented by the Jerome Lejeune Foundation (November 2011), and trials (animal and human) are planned. Hyperhomocysteinemia is a medical condition characterized by an abnormally high level of homocysteine in the blood. Mutations in CBS are the single most common cause of hereditary hyperhomocysteinemia. Genetic defects that affect the MTHFR, MTR, and MTRR/MS enzyme pathways can also contribute to high homocysteine levels. Inborn errors in CBS result in hyperhomocysteinemia with complications in the cardiovascular system leading to early and aggressive arterial disease. Hyperhomocysteinemia also affects three other major organ systems, including the ocular, central nervous, and skeletal systems. Homocystinuria due to CBS deficiency is a special type of hyperhomocysteinemia. It is a rare hereditary autosomal recessive disease, generally diagnosed during childhood. A total of 131 different homocystinuria-causing mutations have been identified. A common functional feature of the mutations in the CBS domains is that the mutations abolish or strongly reduce activation by adoMet. 
No specific cure has been discovered for homocystinuria; however, many people are treated using high doses of vitamin B6, which is a cofactor of CBS. Bioengineering. Cystathionine beta synthase (CBS) is involved in oocyte development. However, little is known about the regional and cellular expression patterns of CBS in the ovary, and research is now focused on determining its location and expression during follicle development in the ovaries. Absence of cystathionine beta synthase in mice causes infertility due to the loss of uterine protein expression. Mutations. The genes that control CBS enzyme expression may not operate at 100% efficiency in individuals who have one of the SNPs (single-nucleotide polymorphisms, a type of mutation) that affect this gene. Known variants include the A360A, C699T, I278T, N212N, and T42N SNPs (among others). These SNPs, which have varied effects on the effectiveness of the enzyme, can be detected with standard DNA testing methods. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=10952145
1095547
Hashlife
Algorithm for speeding up cellular automaton simulations Hashlife is a memoized algorithm for computing the long-term fate of a given starting configuration in Conway's Game of Life and related cellular automata, much more quickly than would be possible using alternative algorithms that simulate each time step of each cell of the automaton. The algorithm was first described by Bill Gosper in the early 1980s while he was engaged in research at the Xerox Palo Alto Research Center. Hashlife was originally implemented on Symbolics Lisp machines with the aid of the Flavors extension. Hashlife. Hashlife is designed to exploit large amounts of spatial and temporal redundancy in most Life rules. For example, in Conway's Life, many seemingly random patterns end up as collections of simple still lifes and oscillators. Hashlife does not, however, depend on patterns remaining in the same position; rather, it exploits the fact that large patterns tend to have subpatterns that appear in several places, possibly at different times. Representation. The field is typically treated as a theoretically infinite grid, with the pattern in question centered near the origin. A quadtree (with sharing of nodes) is used to represent the field. A node at the "k"th level of the tree represents a square of 2^(2"k") cells, 2^"k" on a side, by referencing the four "k"–1 level nodes that represent the four formula_0 quadrants of that level "k" square. For example, a level 3 node represents an 8×8 square, which decomposes into four 4×4 squares. Explicit cell contents are only stored at level 0. The root node has to be at a high enough level that all live cells are found within the square it represents. While a quadtree naively seems to require far more overhead than simpler representations (such as using a matrix of bits), it allows for various optimizations. Since each cell is either live or dead, there are only two possibilities for a node at level 0, so if nodes are allowed to be shared between parents, there is never a need for having more than 2 level 0 nodes in total. Likewise, the 4 cells of a 2×2 square can only exhibit formula_1 different combinations, so no more than that many level 1 nodes are needed either. Going to higher levels, the number of possible "k"th level squares grows as formula_2, but the number of distinct "k"th level squares occurring in any particular run is much lower, and very often the same square contents appear in several places. For maximal sharing of nodes in the quadtree (which is not so much a tree as a directed acyclic graph), we only want to use one node to represent all squares with the same content. Hashing. A hash table, or more generally any kind of associative array, may be used to map square contents to an already existing node representing those contents, so that, through the technique of hash consing, one may avoid creating a duplicate node representing the same contents. If this is applied consistently then it is sufficient to hash the four pointers to component nodes, as a bottom–up hashing of the square contents would always find those four nodes at the level below. It turns out that several operations on higher-level nodes can be carried out without explicitly producing the contents of those nodes; instead, it suffices to work with pointers to nodes a fixed number of levels down. Caching and superspeed. The quadtree can be augmented so that a node also caches the result of an update on its contents. 
There is not enough information in a formula_3 square to determine the next timestep contents on the whole of that square, but the contents of a formula_4 square centered at the same point determine the next timestep contents of the formula_3 square. This level "k" node for that next timestep is offset by formula_5 cells in both the horizontal and vertical directions, so even in the case of still life it would likely not be among the level "k" nodes that combine into the formula_4 square, but at level "k"–1 the squares are again in the same positions and will be shared if unchanged. Practically, computing the next timestep contents is a recursive operation that bottom–up populates the cache field of each level "k" node with a level "k"–1 node representing the contents of the updated center formula_6 square. Sharing of nodes can bring a significant speed-up to this operation, since the work required is proportional to the number of nodes, not to the number of cells as in a simpler representation. If nodes are being shared between quadtrees representing different timesteps, then only those nodes which were newly created during the previous timestep will need to have a cached value computed at all. Superspeed goes further, using the observation that the contents of a formula_4 square actually determine the contents of its central formula_3 square for the next formula_7 timesteps. Instead of having a level "k" node cache a level "k"–1 node for the contents 1 step ahead, we can have it cache one for the contents formula_8 steps ahead. Because updates at level "k" are computed from updates at level "k"–1, and since at level "k"–1 there are cached results for advancing formula_9 timesteps, a mere two rounds of advancing at level "k"–1 suffice for advancing by formula_8 steps at level "k". In the worst case 2 rounds at level "k"–1 may have to do 4 full rounds at level "k"–2, in turn calling for 8 full rounds at level "k"–3, etc., but in practice many subpatterns in the tree are identical to each other and most branches of the recursion are short. For example the pattern being studied may contain many copies of the same spaceship, and often large swathes of empty space. Each instance of these subpatterns will hash to the same quadtree node, and thus only need to be stored once. In addition, these subpatterns only need to be evaluated once, not once per copy as in other Life algorithms. For sparse or repetitive patterns such as the classical glider gun, this can result in tremendous speedups, allowing one to compute "bigger" patterns at "higher" generations "faster", sometimes exponentially. A generation of the various breeders and spacefillers, which grow at polynomial speeds, can be evaluated in Hashlife using logarithmic space and time. Since subpatterns of different sizes are effectively run at different speeds, some implementations, like Gosper's own hlife program, do not have an interactive display; they simply advance the starting pattern a given number of steps, and are usually run from the command line. More recent programs such as Golly, however, have a graphical interface that can drive a Hashlife-based engine. The typical behavior of a Hashlife program on a conducive pattern is as follows: first the algorithm runs slower compared to other algorithms because of the constant overhead associated with hashing and building the tree; but later, enough data will be gathered and its speed will increase tremendously – the rapid increase in speed is often described as "exploding". Drawbacks. 
Like many memoized codes, Hashlife can consume significantly more memory than other algorithms, especially on moderate-sized patterns with a lot of entropy, or which contain subpatterns poorly aligned to the bounds of the quadtree nodes (i.e. power-of-two sizes); the cache is a vulnerable component. It can also consume more time than other algorithms on these patterns. Golly, among other Life simulators, has options for toggling between Hashlife and conventional algorithms. Hashlife is also significantly more complex to implement. For example, it needs a dedicated garbage collector to remove unused nodes from the cache. Because Hashlife is designed for processing generally predictable patterns, chaotic and explosive rules generally perform much more poorly under it than they would under other implementations. References. <templatestyles src="Reflist/styles.css" />
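The hash-consing and per-node caching ideas described above can be sketched in a few lines of Python. This is not a full Hashlife implementation; to keep it short, the memoized per-node result is the population (live-cell count) rather than the evolved centre square, but the sharing mechanism is the same:

# Minimal sketch of hash consing and per-node caching in a Hashlife-style quadtree.
_canonical = {}          # (level, child ids) -> the unique shared node with those children
_population_cache = {}   # id(node) -> cached result for that shared node

class Node:
    __slots__ = ("level", "nw", "ne", "sw", "se")
    def __init__(self, level, nw, ne, sw, se):
        self.level, self.nw, self.ne, self.sw, self.se = level, nw, ne, sw, se

ALIVE, DEAD = object(), object()   # the only two level-0 "nodes"

def node(nw, ne, sw, se):
    """Return the unique shared node with these four children (hash consing)."""
    level = 1 if nw in (ALIVE, DEAD) else nw.level + 1
    key = (level, id(nw), id(ne), id(sw), id(se))   # hash the four child pointers
    if key not in _canonical:
        _canonical[key] = Node(level, nw, ne, sw, se)
    return _canonical[key]

def population(n):
    """Count live cells; work is proportional to distinct nodes, not cells."""
    if n is ALIVE:
        return 1
    if n is DEAD:
        return 0
    if id(n) not in _population_cache:
        _population_cache[id(n)] = sum(population(c) for c in (n.nw, n.ne, n.sw, n.se))
    return _population_cache[id(n)]

def empty(level):
    if level == 0:
        return DEAD
    e = empty(level - 1)
    return node(e, e, e, e)          # all four children are the SAME shared node

def corner(level):
    """A 2^level x 2^level square, dead except one live cell in its NW corner."""
    if level == 0:
        return ALIVE
    e = empty(level - 1)
    return node(corner(level - 1), e, e, e)

print(population(corner(30)))        # 1, computed from ~60 distinct nodes, not 2**60 cells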
[ { "math_id": 0, "text": "2^{k-1} \\times 2^{k-1}" }, { "math_id": 1, "text": " 2^4 = 16 " }, { "math_id": 2, "text": " 2^{2^k \\cdot 2^k} = 2^{2^{2k}} " }, { "math_id": 3, "text": " 2^k \\times 2^k " }, { "math_id": 4, "text": " 2^{k+1} \\times 2^{k+1} " }, { "math_id": 5, "text": "2^{k-1}" }, { "math_id": 6, "text": " 2^{k-1} \\times 2^{k-1} " }, { "math_id": 7, "text": " 2^{k-1} " }, { "math_id": 8, "text": "2^{k-2}" }, { "math_id": 9, "text": "2^{k-3}" } ]
https://en.wikipedia.org/wiki?curid=1095547
10956197
Balanced circuit
Electronic circuit with two signal transmission lines of matching impedance In electrical engineering, a balanced circuit is electronic circuitry for use with a balanced line, or the balanced line itself. Balanced lines are a common method of transmitting many types of electrical signals between two points on two wires. In a balanced line, the two signal lines are of a matched impedance to help ensure that interference, induced in the line, is common-mode and can be removed at the receiving end by circuitry with good common-mode rejection. To maintain the balance, circuit blocks which interface to the line or are connected in the line must also be balanced. Balanced lines work because the interfering noise from the surrounding environment induces equal noise voltages into both wires. By measuring the voltage difference between the two wires at the receiving end, the original signal is recovered while the noise is rejected. Any inequality in the noise induced in each wire is an imbalance and will result in the noise not being fully rejected. One requirement for balance is that both wires are an equal distance from the noise source. This is often achieved by placing the wires as close together as possible and twisting them together. Another requirement is that the impedance to ground (or to whichever reference point is being used by the difference detector) is the same for both conductors at all points along the length of the line. If one wire has a higher impedance to ground it will tend to have a higher noise induced, destroying the balance. Balance and symmetry. A balanced circuit will normally show a symmetry of its components about a horizontal line midway between the two conductors (example in figure 3). This is different from what is normally meant by a symmetrical circuit, which is a circuit showing symmetry of its components about a vertical line at its midpoint. An example of a symmetrical circuit is shown in figure 2. Circuits designed for use with balanced lines will often be designed to be both balanced and symmetrical as shown in figure 4. The advantages of symmetry are that the same impedance is presented at both ports and that the circuit has the same effect on signals travelling in both directions on the line. Balance and symmetry are usually associated with reflected horizontal and vertical physical symmetry respectively as shown in figures 1 to 4. However, physical symmetry is not a necessary requirement for these conditions. It is only necessary that the electrical impedances are symmetrical. It is possible to design circuits that are not physically symmetrical but which have equivalent impedances which are symmetrical. Balanced signals and balanced circuits. A balanced signal is one where the voltages on each wire are symmetrical with respect to ground (or some other reference). That is, the signals are inverted with respect to each other. A balanced circuit is a circuit where the two sides have identical transmission characteristics in all respects. A balanced line is a line in which the two wires will carry balanced currents (that is, equal and opposite currents) when balanced (symmetrical) voltages are applied. The condition for balance of lines and circuits will be met, in the case of passive circuitry, if the impedances are balanced. 
The line and circuit remain balanced, and the benefits of common-mode noise rejection continue to apply, whether or not the applied signal is itself balanced (symmetrical), always provided that the generator producing that signal maintains the impedance balance of the line. Driving and receiving circuits. There are a number of ways that a balanced line can be driven and the signal detected. In all methods, for the continued benefit of good noise immunity, it is essential that the driving and receiving circuit maintain the impedance balance of the line. It is also essential that the receiving circuit detects only differential signals and rejects common-mode signals. It is not essential (although it is often the case) that the transmitted signal is balanced, that is, symmetrical about ground. Transformer balance. The conceptually simplest way to connect to a balanced line is through transformers at each end, as shown in figure 5. Transformers were the original method of making such connections in telephony, and before the advent of active circuitry were the only way. In the telephony application they are known as repeating coils. Transformers have the additional advantage of completely isolating (or "floating") the line from earth and earth loop currents, which are an undesirable possibility with other methods. The side of the transformer facing the line, in a good quality design, will have the winding laid in two parts (often with a centre tap provided) which are carefully balanced to maintain the line balance. Line side and equipment side windings are more useful concepts than the more usual primary and secondary windings when discussing these kinds of transformers. At the sending end the line side winding is the secondary, but at the receiving end the line side winding is the primary. When discussing a two-wire circuit, primary and secondary cease to have any meaning at all, since signals are flowing in both directions at once. The equipment side winding of the transformer does not need to be so carefully balanced. In fact, one leg of the equipment side can be earthed without affecting the balance on the line, as shown in figure 5. With transformers, the sending and receiving circuitry can be entirely unbalanced, with the transformer providing the balancing. Active balance. Active balance is achieved using differential amplifiers at each end of the line. An op-amp implementation of this is shown in figure 6; other circuitry is possible. Unlike transformer balance, there is no isolation of the circuitry from the line. Each of the two wires is driven by an op-amp circuit; the two circuits are identical except that one is inverting and one is non-inverting. Each one produces an asymmetrical signal individually, but together they drive the line with a symmetrical signal. The output impedances of the two amplifiers are equal, so the impedance balance of the line is maintained. While it is not possible to create an isolated drive with op-amp circuitry alone, it is possible to create a floating output. This is important if one leg of the line might become grounded or connected to some other voltage reference. Grounding one leg of the line in the circuit of figure 6 will result in the line voltage being halved since only one op-amp is now providing signal. To achieve a floating output, additional feedback paths are required between the two op-amps, resulting in a more complex circuit than figure 6, but still avoiding the expense of a transformer. 
A floating op-amp output can only float within the limits of the op-amp's supply rails. An isolated output can be achieved without transformers with the addition of opto-isolators. Impedance balance. As noted above, it is possible to drive a balanced line with a single-ended signal and still maintain the line balance. This is represented in outline in figure 7. The amplifier driving one leg of the line through a resistor is assumed to be an ideal (that is, zero output impedance) single-ended output amp. The other leg is connected from ground through another resistor of the same value. The impedance to ground of both legs is the same and the line remains balanced. The receiving amplifier still rejects any common-mode noise as it has a differential input. On the other hand, the line signal is not symmetrical. The voltages at the input to the two legs, "V"+ and "V"− are given by; formula_0 formula_1 Where "Z"in is the input impedance of the line. These are clearly not symmetrical since "V"− is much smaller than "V"+. They are not even opposite polarities. In audio applications "V"− is usually so small it can be taken as zero. Balanced to unbalanced interfacing. A circuit that has the specific purpose of allowing interfacing between balanced and unbalanced circuits is called a balun. A balun could be a transformer with one leg earthed on the unbalanced side as described in the transformer balance section above. Other circuits are possible such as autotransformers or active circuits. Connectors. Common connectors used with balanced circuits include modular connectors on telephone instruments and broadband data, and XLR connectors for professional audio. 1/4" tip/ring/sleeve (TRS) phone connectors were once widely used on manual switchboards and other telephone infrastructure. Such connectors are now more commonly seen in miniature sizes (2.5 and 3.5 mm) being used for unbalanced stereo audio; however, professional audio equipment such as mixing consoles still commonly use balanced and unbalanced "line-level" connections with 1/4" phone jacks. References. <templatestyles src="Reflist/styles.css" />
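The asymmetry of the impedance-balanced drive can be checked numerically from the two formulas above; the component values below are purely illustrative:

# Numerical sketch of the impedance-balance case above (figure 7): the line is driven
# single-ended yet stays balanced because both legs see the same R1 to the (ideal,
# zero-output-impedance) source or ground. Component values are illustrative only.

def leg_voltages(v_in, r1, z_in):
    v_plus = v_in * (z_in + r1) / (z_in + 2 * r1)
    v_minus = v_in * r1 / (z_in + 2 * r1)
    return v_plus, v_minus

v_plus, v_minus = leg_voltages(v_in=1.0, r1=300.0, z_in=10_000.0)
print(round(v_plus, 4), round(v_minus, 4))   # 0.9717 0.0283 -> far from symmetrical
# The differential signal v_plus - v_minus still carries v_in * z_in / (z_in + 2*r1).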
[ { "math_id": 0, "text": "V_+ = V_\\mathrm {in} \\frac{Z_\\mathrm {in}+R_1}{Z_\\mathrm {in}+2R_1}" }, { "math_id": 1, "text": "V_- = V_\\mathrm {in} \\frac{R_1}{Z_\\mathrm {in}+2R_1}" } ]
https://en.wikipedia.org/wiki?curid=10956197
1095698
Mill (grinding)
Device that breaks solid materials into smaller pieces by grinding, crushing, or cutting A mill is a device, often a structure, machine or kitchen appliance, that breaks solid materials into smaller pieces by grinding, crushing, or cutting. Such comminution is an important unit operation in many processes. There are many different types of mills and many types of materials processed in them. Historically, mills were powered by hand (e.g., via a hand crank), working animal (e.g., horse mill), wind (windmill) or water (watermill). In the modern era, they are usually powered by electricity. The grinding of solid materials occurs through mechanical forces that break up the structure by overcoming the interior bonding forces. After grinding, the state of the solid is changed: the grain size, the grain size distribution and the grain shape. Milling also refers to the process of breaking down, separating, sizing, or classifying aggregate material (e.g. mining ore). For instance, rock crushing or grinding to produce uniform aggregate size for construction purposes, or separation of rock, soil or aggregate material for the purposes of structural fill or land reclamation activities. Aggregate milling processes are also used to remove or separate contamination or moisture from aggregate or soil and to produce "dry fills" prior to transport or structural filling. Grinding may serve the following purposes in engineering: Grinding laws. In spite of a great number of studies in the field of fracture schemes, there is no known formula which connects the technical grinding work with the grinding results. Mining engineers Peter von Rittinger, Friedrich Kick and Fred Chester Bond independently produced equations relating the grinding work needed to the grain size produced, and a fourth engineer, R. T. Hukki, suggested that these three equations might each describe a narrow range of grain sizes and proposed uniting them along a single curve describing what has come to be known as the "Hukki relationship". In stirred mills, the Hukki relationship does not apply; instead, experimentation has to be performed to determine any relationship. To evaluate the grinding results, the grain size distribution of the source material (1) and of the ground material (2) is needed. Grinding degree is the ratio of sizes from the grain size distributions. There are several definitions for this characteristic value: formula_0 Instead of the value of "d"80, "d"50 or another grain diameter can also be used. formula_1 The specific surface area referring to volume "S"v and the specific surface area referring to mass "S"m can be determined through experiments. formula_2 The discharge die gap "a" of the grinding machine is used as the size of the ground solid matter in this formula. Grinding machines. In materials processing, a grinder is a machine for producing fine particle size reduction through attrition and compressive forces at the grain size level. See also crusher for mechanisms producing larger particles. In general, grinding processes require a relatively large amount of energy; for this reason, an experimental method to measure the energy used locally during milling with different machines was recently proposed. Autogenous mill. Autogenous or autogenic mills are so-called due to the self-grinding of the ore: a rotating drum throws larger rocks of ore in a cascading motion which causes impact breakage of larger rocks and compressive grinding of finer particles. 
It is similar in operation to a SAG mill as described below but does not use steel balls in the mill. Also known as ROM or "Run Of Mine" grinding. Ball mill. A typical type of fine grinder is the ball mill. A slightly inclined or horizontal rotating cylinder is partially filled with balls, usually stone or metal, which grind material to the necessary fineness by friction and impact with the tumbling balls. Ball mills normally operate with an approximate ball charge of 30%. Ball mills are characterized by their smaller (comparatively) diameter and longer length, and often have a length 1.5 to 2.5 times the diameter. The feed is at one end of the cylinder and the discharge is at the other. Ball mills are commonly used in the manufacture of Portland cement and finer grinding stages of mineral processing. Industrial ball mills can be as large as 8.5 m (28 ft) in diameter with a 22 MW motor, drawing approximately 0.0011% of the total world's power (see List of countries by electricity consumption). However, small versions of ball mills can be found in laboratories where they are used for grinding sample material for quality assurance. The power predictions for ball mills typically use the following form of the Bond equation: formula_3 where "E" is the specific grinding energy (kWh per tonne of material), "W" is the Bond work index (kWh per tonne), and "P"80 and "F"80 are the sizes, in micrometres, which 80% of the product and of the feed, respectively, pass. Buhrstone mill. Another type of fine grinder commonly used is the French buhrstone mill, which is similar to old-fashioned flour mills. High pressure grinding rolls. A high pressure grinding roll, often referred to as HPGRs or roller press, consists of two rollers of the same dimensions, which rotate against each other at the same circumferential speed. The special feeding of bulk material through a hopper leads to a material bed between the two rollers. The bearing units of one roller can move linearly and are pressed against the material bed by springs or hydraulic cylinders. The pressures in the material bed are greater than 50 MPa (7,000 PSI). In general, they achieve 100 to 300 MPa. In this way the material bed is compacted to a solid volume fraction of more than 80%. The roller press has a certain similarity to roller crushers and roller presses for the compacting of powders, but purpose, construction and operation mode are different. Extreme pressure causes the particles inside the compacted material bed to fracture into finer particles and also causes microfracturing at the grain size level. Compared to ball mills, HPGRs achieve a 30 to 50% lower specific energy consumption, although they are not as common as ball mills since they are a newer technology. A similar type of intermediate crusher is the edge runner, which consists of a circular pan with two or more heavy wheels known as mullers rotating within it; material to be crushed is shoved underneath the wheels using attached plow blades. Pebble mill. A rotating drum causes friction and attrition between rock pebbles and ore particles. It may be used where product contamination by iron from steel balls must be avoided. Quartz or silica is commonly used because it is inexpensive to obtain. Rod mill. A rotating drum causes friction and attrition between steel rods and ore particles. But the term 'rod mill' is also used as a synonym for a slitting mill, which makes rods of iron or other metal. Rod mills are less common than ball mills for grinding minerals. The rods used in the mill, usually a high-carbon steel, can vary in both the length and the diameter. However, the smaller the rods, the larger the total surface area and hence the greater the grinding efficiency. SAG mill. 
SAG is an acronym for semi-autogenous grinding. SAG mills are autogenous mills that also use grinding balls like a ball mill. A SAG mill is usually a primary or first stage grinder. SAG mills use a ball charge of 8 to 21%. The largest SAG mill is 42' (12.8m) in diameter, powered by a 28 MW (38,000 HP) motor. A SAG mill with a 44' (13.4m) diameter and a power of 35 MW (47,000 HP) has been designed. Attrition between grinding balls and ore particles causes grinding of finer particles. SAG mills are characterized by their large diameter and short length as compared to ball mills. The inside of the mill is lined with lifting plates to lift the material inside the mill, where it then falls off the plates onto the rest of the ore charge. SAG mills are primarily used at gold, copper and platinum mines with applications also in the lead, zinc, silver, alumina and nickel industries. Tower mill. Tower mills, often called vertical mills, stirred mills or regrind mills, are a more efficient means of grinding material at smaller particle sizes, and can be used after ball mills in a grinding process. As in ball mills, grinding (steel) balls or pebbles are often added to stirred mills to help grind ore; however, these mills contain a large screw mounted vertically to lift and grind material. In tower mills, there is no cascading action as in standard grinding mills. Stirred mills are also common for mixing quicklime (CaO) into a lime slurry. There are several advantages to the tower mill: low noise, efficient energy usage, and low operating costs. Vertical shaft impactor mill (VSI mill). A VSI mill throws rock or ore particles against a wear plate by slinging them from a spinning center that rotates on a vertical shaft. This type of mill uses the same principle as a VSI crusher. Types of grinding mills. <templatestyles src="Div col/styles.css"/> See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
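As an illustration of the Bond equation given above (formula_3), the following sketch assumes the standard interpretation of its symbols (specific energy and work index in kWh per tonne, "P"80 and "F"80 as 80% passing sizes in micrometres) and uses invented feed, product and throughput values:

import math

def bond_specific_energy(work_index_kwh_per_t, f80_um, p80_um):
    """Bond equation: E = 10 * W * (1/sqrt(P80) - 1/sqrt(F80)), with W the Bond
    work index (kWh/t) and F80/P80 the 80% passing sizes of feed and product
    in micrometres (standard interpretation; values below are illustrative)."""
    return 10.0 * work_index_kwh_per_t * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

e = bond_specific_energy(12.0, f80_um=10_000.0, p80_um=100.0)
print(round(e, 2), "kWh per tonne")                 # 10.8 kWh/t
throughput_tph = 500.0                              # tonnes per hour, illustrative
print(round(e * throughput_tph / 1000, 2), "MW")    # about 5.4 MW of mill power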
[ { "math_id": 0, "text": "Z_d=\\frac{d_{80,1}}{d_{80,2}}\\," }, { "math_id": 1, "text": "Z_S=\\frac{S_{v,2}}{S_{v,1}}=\\frac{S_{m,2}}{S_{m,1}}\\," }, { "math_id": 2, "text": "Z_a=\\frac{d_1}{a}\\," }, { "math_id": 3, "text": "E= 10 W \\left(\\frac{1}{\\sqrt {P_{80}}} - \\frac{1}{\\sqrt {F_{80}}}\\right)\\," } ]
https://en.wikipedia.org/wiki?curid=1095698
1096151
Parabolic fractal distribution
In probability and statistics, the parabolic fractal distribution is a type of discrete probability distribution in which the logarithm of the frequency or size of entities in a population is a quadratic polynomial of the logarithm of the rank (with the largest example having rank 1). This can markedly improve the fit over a simple power-law relationship (see references below). In the Laherrère/Deheuvels paper below, examples include galaxy sizes (ordered by luminosity), towns (in the USA, France, and world), spoken languages (by number of speakers) in the world, and oil fields in the world (by size). They also mention utility for this distribution in fitting seismic events (no example). The authors assert the advantage of this distribution is that it can be fitted using the largest known examples of the population being modeled, which are often readily available and complete, then the fitted parameters found can be used to compute the size of the entire population. So, for example, the populations of the hundred largest cities on the planet can be sorted and fitted, and the parameters found used to extrapolate to the smallest villages, to estimate the population of the planet. Another example is estimating total world oil reserves using the largest fields. In a number of applications, there is a so-called King effect where the top-ranked item(s) have a significantly greater frequency or size than the model predicts on the basis of the other items. The Laherrère/Deheuvels paper shows the example of Paris, when sorting the sizes of towns in France. When the paper was written Paris was the largest city with about ten million inhabitants, but the next largest town had only about 1.5 million. Towns in France excluding Paris closely follow a parabolic distribution, well enough that the 56 largest gave a very good estimate of the population of the country. But that distribution would predict the largest city to have about two million inhabitants, not 10 million. The King Effect is named after the notion that a King must defeat all rivals for the throne and takes their wealth, estates and power, thereby creating a buffer between himself and the next-richest of his subjects. That specific effect (intentionally created) may apply to corporate sizes, where the largest businesses use their wealth to buy up smaller rivals. Absent intent, the King Effect may occur as a result of some persistent growth advantage due to scale, or to some unique advantage. Larger cities are more efficient connectors of people, talent and other resources. Unique advantages might include being a port city, or a Capital city where law is made, or a center of activity where physical proximity increases opportunity and creates a feedback loop. An example is the motion picture industry; where actors, writers and other workers move to where the most studios are, and new studios are founded in the same place because that is where the most talent resides. To test for the King Effect, the distribution must be fitted excluding the 'k' top-ranked items, but without assigning new rank numbers to the remaining members of the population. For example, in France the ranks are (as of 2010): A fitting algorithm would process pairs {(1,12.09), (2,2.12), (3,1.72), (4,1.20), (5,1.15)} and find the parameters for the best parabolic fit through those points. To test for the King Effect we just exclude the first pair (or first 'k' pairs), and find parabolic parameters that fit the remainder of the points. 
So for France we would fit the four points {(2,2.12), (3,1.72), (4,1.20), (5,1.15)}. Then we can use those parameters to estimate the size of cities ranked [1,k] and determine if they are King Effect members or normal members. By comparison, Zipf's law fits a line through the points (also using the log of the rank and log of the value). A parabola (with one more parameter) will fit better, but far from the vertex the parabola is "also" nearly linear. Thus, although it is a judgment call for the statistician, if the fitted parameters put the vertex far from the points fitted, or if the parabolic curve is not a significantly better fit than a line, those may be symptomatic of overfitting (aka over-parameterization). The line (with two parameters instead of three) is probably the better generalization. More parameters always fit better, but at the cost of adding unexplained parameters or unwarranted assumptions (such as the assumption that a slight parabolic curve is a more appropriate model than a line). Alternatively, it is possible to force the fitted parabola to have its vertex at the rank 1 position. In that case, it is not certain the parabola will fit better (have less error) than a straight line; and the choice might be made between the two based on which has the least error. Definition. The probability mass function is given, as a function of the rank "n", by formula_0 where "b" and "c" are parameters of the distribution.
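The fitting procedure described above can be sketched with NumPy, using the rounded French city figures quoted earlier; with only four points the fit is purely illustrative and will not reproduce the estimates of the Laherrère/Deheuvels paper:

import numpy as np

# City populations (millions) by rank, as quoted above for France (c. 2010).
ranks = np.array([1, 2, 3, 4, 5], dtype=float)
sizes = np.array([12.09, 2.12, 1.72, 1.20, 1.15])

# Parabolic fractal model: log(size) is a quadratic polynomial of log(rank).
# To test for a King effect, fit only ranks 2..5, keeping their original rank numbers.
x, y = np.log(ranks[1:]), np.log(sizes[1:])
coeffs = np.polyfit(x, y, 2)                       # highest-order coefficient first

predicted_rank1 = np.exp(np.polyval(coeffs, 0.0))  # model value at log(rank) = 0
print(round(predicted_rank1, 2), "million predicted vs", sizes[0], "million observed")
# A large gap between the predicted and observed rank-1 size signals a King effect.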
[ { "math_id": 0, "text": " f(n;b,c) \\propto n^{-b} \\exp(-c(\\log n)^2) ," } ]
https://en.wikipedia.org/wiki?curid=1096151
1096165
Fowler–Noll–Vo hash function
Non-cryptographic hash function Fowler–Noll–Vo (or FNV) is a non-cryptographic hash function created by Glenn Fowler, Landon Curt Noll, and Kiem-Phong Vo. The basis of the FNV hash algorithm was taken from an idea sent as reviewer comments to the IEEE POSIX P1003.2 committee by Glenn Fowler and Phong Vo in 1991. In a subsequent ballot round, Landon Curt Noll improved on their algorithm. In an email message to Landon, they named it the "Fowler/Noll/Vo" or FNV hash. Overview. The current versions are FNV-1 and FNV-1a, which supply a means of creating non-zero "FNV offset basis". FNV currently comes in 32-, 64-, 128-, 256-, 512-, and 1024-bit variants. For pure FNV implementations, this is determined solely by the availability of "FNV primes" for the desired bit length; however, the FNV webpage discusses methods of adapting one of the above versions to a smaller length that may or may not be a power of two. The FNV hash algorithms and reference FNV source code have been released into the public domain. The Python programming language previously used a modified version of the FNV scheme for its default codice_0 function. From Python 3.4, FNV has been replaced with SipHash to resist "hash flooding" denial-of-service attacks. FNV is not a cryptographic hash. The hash. One of FNV's key advantages is that it is very simple to implement. Start with an initial hash value of "FNV offset basis". For each byte in the input, multiply "hash" by the "FNV prime", then XOR it with the byte from the input. The alternate algorithm, FNV-1a, reverses the multiply and XOR steps. FNV-1 hash. The FNV-1 hash algorithm is as follows: algorithm fnv-1 is "hash" := FNV_offset_basis for each "byte_of_data" to be hashed do "hash" := "hash" × FNV_prime "hash" := "hash" XOR "byte_of_data" return "hash" In the above pseudocode, all variables are unsigned integers. All variables, except for "byte_of_data", have the same number of bits as the FNV hash. The variable, "byte_of_data", is an 8-bit unsigned integer. As an example, consider the 64-bit FNV-1 hash: FNV-1a hash. The FNV-1a hash differs from the FNV-1 hash only by the order in which the multiply and XOR is performed: algorithm fnv-1a is "hash" := FNV_offset_basis for each "byte_of_data" to be hashed do "hash" := "hash" XOR "byte_of_data" "hash" := "hash" × FNV_prime return "hash" The above pseudocode has the same assumptions that were noted for the FNV-1 pseudocode. The change in order leads to slightly better avalanche characteristics. FNV-0 hash (deprecated). The FNV-0 hash differs from the FNV-1 hash only by the initialisation value of the "hash" variable: algorithm fnv-0 is "hash" := 0 for each "byte_of_data" to be hashed do "hash" := "hash" × FNV_prime "hash" := "hash" XOR "byte_of_data" return "hash" The above pseudocode has the same assumptions that were noted for the FNV-1 pseudocode. A consequence of the initialisation of the hash to 0 is that empty messages and all messages consisting of only the byte 0, regardless of their length, hash to 0. Use of the FNV-0 hash is deprecated except for the computing of the FNV offset basis for use as the FNV-1 and FNV-1a hash parameters. FNV offset basis. There are several different FNV offset bases for various bit lengths. These offset bases are computed by computing the FNV-0 from the following 32 octets when expressed in ASCII: This is one of Landon Curt Noll's signature lines. This is the only current practical use for the deprecated FNV-0. FNV prime. 
An "FNV prime" is a prime number and is determined as follows: For a given integer s such that 4 < "s" < 11, let "n" = 2"s" and "t" = ⌊(5 + "n") / 12⌋; then the n-bit FNV prime is the smallest prime number p that is of the form formula_0 such that: * 0 < "b" < 28, * the number of one-bits in the binary representation of b is either 4 or 5, and * "p" mod (240 − 224 − 1) > 224 + 28 + 7. Experimentally, FNV primes matching the above constraints tend to have better dispersion properties. They improve the polynomial feedback characteristic when an FNV prime multiplies an intermediate hash value. As such, the hash values produced are more scattered throughout the n-bit hash space. FNV hash parameters. The above FNV prime constraints and the definition of the FNV offset basis yield the following table of FNV hash parameters: Non-cryptographic hash. The FNV hash was designed for fast hash table and checksum use, not cryptography. The authors have identified the following properties as making the algorithm unsuitable as a cryptographic hash function: References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "256^t + 2^8 + \\mathrm{b}\\," } ]
https://en.wikipedia.org/wiki?curid=1096165
10962546
Pseudo-Euclidean space
In mathematics and theoretical physics, a pseudo-Euclidean space of signature ("k", "n-k") is a finite-dimensional real "n"-space together with a non-degenerate quadratic form "q". Such a quadratic form can, given a suitable choice of basis ("e"1, …, "e""n"), be applied to a vector "x" = "x"1"e"1 + ⋯ + "x""n""e""n", giving formula_0 which is called the "scalar square" of the vector "x". For Euclidean spaces, "k" = "n", implying that the quadratic form is positive-definite. When 0 < "k" < "n", then "q" is an isotropic quadratic form. Note that if 1 ≤ "i" ≤ "k" < "j" ≤ "n", then "q"("e""i" + "e""j") = 0, so that "e""i" + "e""j" is a null vector. In a pseudo-Euclidean space with "k" < "n", unlike in a Euclidean space, there exist vectors with negative scalar square. As with the term "Euclidean space", the term "pseudo-Euclidean space" may be used to refer to an affine space or a vector space depending on the author, with the latter alternatively being referred to as a pseudo-Euclidean vector space (see point–vector distinction). Geometry. The geometry of a pseudo-Euclidean space is consistent despite some properties of Euclidean space not applying, most notably that it is not a metric space as explained below. The affine structure is unchanged, and thus also the concepts line, plane and, generally, of an affine subspace (flat), as well as line segments. Positive, zero, and negative scalar squares. A null vector is a vector for which the quadratic form is zero. Unlike in a Euclidean space, such a vector can be non-zero, in which case it is self-orthogonal. If the quadratic form is indefinite, a pseudo-Euclidean space has a linear cone of null vectors given by { "x" | "q"("x") 0 }. When the pseudo-Euclidean space provides a model for spacetime (see below), the null cone is called the light cone of the origin. The null cone separates two open sets, respectively for which "q"("x") > 0 and "q"("x") < 0. If "k" ≥ 2, then the set of vectors for which "q"("x") > 0 is connected. If "k" = 1, then it consists of two disjoint parts, one with "x"1 > 0 and another with "x"1 < 0. Similarly, if "n" − "k" ≥ 2, then the set of vectors for which "q"("x") < 0 is connected. If "n" − "k" = 1, then it consists of two disjoint parts, one with "x""n" > 0 and another with "x""n" < 0. Interval. The quadratic form "q" corresponds to the square of a vector in the Euclidean case. To define the vector norm (and distance) in an invariant manner, one has to get square roots of scalar squares, which leads to possibly imaginary distances; see square root of negative numbers. But even for a triangle with positive scalar squares of all three sides (whose square roots are real and positive), the triangle inequality does not hold in general. Hence terms "norm" and "distance" are avoided in pseudo-Euclidean geometry, which may be replaced with "scalar square" and "interval" respectively. Though, for a curve whose tangent vectors all have scalar squares of the same sign, the arc length is defined. It has important applications: see proper time, for example. Rotations and spheres. The rotations group of such space is the indefinite orthogonal group O("q"), also denoted as O("k", "n" − "k") without a reference to particular quadratic form. Such "rotations" preserve the form "q" and, hence, the scalar square of each vector including whether it is positive, zero, or negative. Whereas Euclidean space has a unit sphere, pseudo-Euclidean space has the hypersurfaces { "x" | "q"("x") 1 } and { "x" | "q"("x") −1 }. 
Such a hypersurface, called a quasi-sphere, is preserved by the appropriate indefinite orthogonal group. Symmetric bilinear form. The quadratic form "q" gives rise to a symmetric bilinear form defined as follows: formula_1 The quadratic form can be expressed in terms of the bilinear form: "q"("x") = ⟨"x", "x"⟩. When ⟨"x", "y"⟩ = 0, then "x" and "y" are orthogonal vectors of the pseudo-Euclidean space. This bilinear form is often referred to as the scalar product, and sometimes as "inner product" or "dot product", but it does not define an inner product space and it does not have the properties of the dot product of Euclidean vectors. If "x" and "y" are orthogonal and "q"("x")"q"("y") < 0, then "x" is hyperbolic-orthogonal to "y". The standard basis of the real "n"-space is orthogonal. There are no ortho"normal" bases in a pseudo-Euclidean space for which the bilinear form is indefinite, because it cannot be used to define a vector norm. Subspaces and orthogonality. For a (positive-dimensional) subspace "U" of a pseudo-Euclidean space, when the quadratic form "q" is restricted to "U", following three cases are possible: One of the most jarring properties (for a Euclidean intuition) of pseudo-Euclidean vectors and flats is their orthogonality. When two non-zero Euclidean vectors are orthogonal, they are not collinear. The intersections of any Euclidean linear subspace with its orthogonal complement is the {0} subspace. But the definition from the previous subsection immediately implies that any vector ν of zero scalar square is orthogonal to itself. Hence, the isotropic line "N" = ⟨ν⟩ generated by a null vector ν is a subset of its orthogonal complement "N"⊥. The formal definition of the orthogonal complement of a vector subspace in a pseudo-Euclidean space gives a perfectly well-defined result, which satisfies the equality dim "U" + dim "U"⊥ = "n" due to the quadratic form's non-degeneracy. It is just the condition "U" ∩ "U"⊥ = {0} or, equivalently, "U" + "U"⊥ = all space, which can be broken if the subspace "U" contains a null direction. While subspaces form a lattice, as in any vector space, this ⊥ operation is not an orthocomplementation, in contrast to inner product spaces. For a subspace "N" composed "entirely" of null vectors (which means that the scalar square "q", restricted to "N", equals to 0), always holds: "N" ⊂ "N"⊥ or, equivalently, "N" ∩ "N"⊥ = "N". Such a subspace can have up to min("k", "n" − "k") dimensions. For a (positive) Euclidean "k"-subspace its orthogonal complement is a ("n" − "k")-dimensional negative "Euclidean" subspace, and vice versa. Generally, for a ("d"+ + "d"− + "d"0)-dimensional subspace "U" consisting of "d"+ positive and "d"− negative dimensions (see Sylvester's law of inertia for clarification), its orthogonal "complement" "U"⊥ has ("k" − "d"+ − "d"0) positive and ("n" − "k" − "d"− − "d"0) negative dimensions, while the rest "d"0 ones are degenerate and form the "U" ∩ "U"⊥ intersection. Parallelogram law and Pythagorean theorem. The parallelogram law takes the form formula_2 Using the square of the sum identity, for an arbitrary triangle one can express the scalar square of the third side from scalar squares of two sides and their bilinear form product: formula_3 This demonstrates that, for orthogonal vectors, a pseudo-Euclidean analog of the Pythagorean theorem holds: formula_4 Angle. Generally, absolute value of the bilinear form on two vectors may be greater than , equal to it, or less. 
This causes problems with the definition of angle, similar to those that appeared above for distances. If "k" = 1 (only one positive term in "q"), then for vectors of positive scalar square: formula_5 which permits the definition of the hyperbolic angle, an analog of the angle between these vectors, through the inverse hyperbolic cosine: formula_6 It corresponds to the distance on an ("n" − 1)-dimensional hyperbolic space. This is known as rapidity in the context of the theory of relativity discussed below. Unlike the Euclidean angle, it takes values from [0, +∞) and equals 0 for antiparallel vectors. There is no reasonable definition of the angle between a null vector and another vector (either null or non-null). Algebra and tensor calculus. Like Euclidean spaces, every pseudo-Euclidean vector space generates a Clifford algebra. Unlike the properties above, where replacing "q" with −"q" changed numbers but not geometry, the sign reversal of the quadratic form results in a distinct Clifford algebra, so for example Cl1,2(R) and Cl2,1(R) are not isomorphic. Just as over any vector space, there are pseudo-Euclidean tensors. As with a Euclidean structure, there are index-raising and index-lowering operators but, unlike the case with Euclidean tensors, there are no bases in which these operations leave the values of the components unchanged. If there is a vector "v""β", the corresponding covariant vector is: formula_7 and with the standard form formula_8 the first "k" components of "v""α" are numerically the same as those of "v""β", but the remaining "n" − "k" have opposite signs. The correspondence between contravariant and covariant tensors makes a tensor calculus on pseudo-Riemannian manifolds a generalization of the one on Riemannian manifolds. Examples. A very important pseudo-Euclidean space is Minkowski space, which is the mathematical setting in which the theory of special relativity is formulated. For Minkowski space, "n" = 4 and "k" = 3 so that formula_9 The geometry associated with this pseudo-metric was investigated by Poincaré. Its rotation group is the Lorentz group. The Poincaré group also includes translations and plays the same role as the Euclidean groups of ordinary Euclidean spaces. Another pseudo-Euclidean space is the plane "z" = "x" + "yj" consisting of split-complex numbers, equipped with the quadratic form formula_10 This is the simplest case of an indefinite pseudo-Euclidean space ("n" = 2, "k" = 1) and the only one where the null cone dissects the remaining space into "four" open sets. The group SO+(1, 1) consists of the so-called hyperbolic rotations. Footnotes. <templatestyles src="Reflist/styles.css" />
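To make the formulas above concrete, here is a small Python sketch; the function names and the sample vectors are invented for illustration, and the signature convention follows the quadratic form given at the start of the article (first "k" squares positive, the remaining "n" − "k" negative). It evaluates the scalar square and the bilinear form, exhibits a null vector of the form "e""i" + "e""j", and computes a hyperbolic angle between two vectors of positive scalar square in the "k" = 1 case.

import math

def scalar_square(x, k):
    # q(x) = x_1^2 + ... + x_k^2 - x_{k+1}^2 - ... - x_n^2
    return sum(v * v for v in x[:k]) - sum(v * v for v in x[k:])

def bilinear(x, y, k):
    # <x, y>, equal to (q(x + y) - q(x) - q(y)) / 2, written out directly
    return sum(a * b for a, b in zip(x[:k], y[:k])) - \
           sum(a * b for a, b in zip(x[k:], y[k:]))

k = 1                                   # one positive term, as in the angle discussion
e1 = [1.0, 0.0, 0.0, 0.0]
e2 = [0.0, 1.0, 0.0, 0.0]
null = [a + b for a, b in zip(e1, e2)]
print(scalar_square(null, k))           # 0.0: e1 + e2 is a null vector (here i = 1 <= k < j = 2)

x = [2.0, 1.0, 0.0, 0.0]                # q(x) = 3 > 0
y = [3.0, 0.0, 1.0, 1.0]                # q(y) = 7 > 0
ratio = abs(bilinear(x, y, k)) / math.sqrt(scalar_square(x, k) * scalar_square(y, k))
print(math.acosh(ratio))                # hyperbolic angle; defined because ratio >= 1

The printed ratio is at least 1 for vectors of positive scalar square when "k" = 1, which is exactly the inequality quoted above, so the inverse hyperbolic cosine is well defined.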
[ { "math_id": 0, "text": "q(x) = \\left(x_1^2 + \\dots + x_k^2\\right) - \\left( x_{k+1}^2 + \\dots + x_n^2\\right)" }, { "math_id": 1, "text": "\\langle x, y\\rangle = \\tfrac12[q(x + y) - q(x) - q(y)] = \\left(x_1 y_1 + \\ldots + x_k y_k\\right) - \\left(x_{k+1}y_{k+1} + \\ldots + x_n y_n\\right)." }, { "math_id": 2, "text": "q(x) + q(y) = \\tfrac12(q(x + y) + q(x - y))." }, { "math_id": 3, "text": "q(x + y) = q(x) + q(y) + 2\\langle x, y \\rangle." }, { "math_id": 4, "text": "\\langle x, y \\rangle = 0 \\Rightarrow q(x) + q(y) = q(x + y)." }, { "math_id": 5, "text": "|\\langle x, y\\rangle| \\ge \\sqrt{q(x)q(y)}\\,," }, { "math_id": 6, "text": "\\operatorname{arcosh}\\frac{|\\langle x, y\\rangle|}{\\sqrt{q(x)q(y)}}\\,." }, { "math_id": 7, "text": "v_\\alpha = q_{\\alpha\\beta} v^\\beta\\,," }, { "math_id": 8, "text": "q_{\\alpha\\beta} = \\begin{pmatrix}\n I_{k\\times k} & 0 \\\\\n 0 & -I_{(n-k)\\times(n-k)}\n\\end{pmatrix}" }, { "math_id": 9, "text": "q(x) = x_1^2 + x_2^2 + x_3^2 - x_4^2," }, { "math_id": 10, "text": "\\lVert z \\rVert = z z^* = z^* z = x^2 - y^2." } ]
https://en.wikipedia.org/wiki?curid=10962546
10963217
Subtract a square
Subtraction game Subtract-a-square (also referred to as take-a-square) is a two-player mathematical subtraction game. It is played by two people with a pile of coins (or other tokens) between them. The players take turns removing coins from the pile, always removing a non-zero square number of coins. The game is usually played as a "normal play" game, which means that the player who removes the last coin wins. It is an impartial game, meaning that the set of moves available from any position does not depend on whose turn it is. Solomon W. Golomb credits the invention of this game to Richard A. Epstein. Example. A normal play game starting with 13 coins is a win for the first player provided they start with a subtraction of 1: player 1: 13 - 1*1 = 12 Player 2 now has three choices: subtract 1, 4 or 9. In each of these cases, player 1 can ensure that within a few moves the number 2 gets passed on to player 2: player 2: 12 - 1*1 = 11 player 2: 12 - 2*2 = 8 player 2: 12 - 3*3 = 3 player 1: 11 - 3*3 = 2 player 1: 8 - 1*1 = 7 player 1: 3 - 1*1 = 2 player 2: 7 - 1*1 = 6 or: 7 - 2*2 = 3 player 1: 6 - 2*2 = 2 3 - 1*1 = 2 Now player 2 has to subtract 1, and player 1 subsequently does the same: player 2: 2 - 1*1 = 1 player 1: 1 - 1*1 = 0 player 2 loses Mathematical theory. In the above example, the number '13' represents a winning or 'hot' position, whilst the number '2' represents a losing or 'cold' position. Given an integer list with each integer labeled 'hot' or 'cold', the strategy of the game is simple: try to pass on a 'cold' number to your opponent. This is always possible provided you are being presented a 'hot' number. Which numbers are 'hot' and which numbers are 'cold' can be determined recursively: Using this algorithm, a list of cold numbers is easily derived: 0, 2, 5, 7, 10, 12, 15, 17, 20, 22, 34, 39, 44, … (sequence in the OEIS) A faster divide and conquer algorithm can compute the same sequence of numbers, up to any threshold formula_0, in time formula_1. There are infinitely many cold numbers. More strongly, the number of cold numbers up to some threshold formula_0 must be at least proportional to the square root of formula_0, for otherwise there would not be enough of them to provide winning moves from all the hot numbers. Cold numbers tend to end in 0, 2, 4, 5, 7, or 9. Cold values that end with other digits are quite uncommon. This holds in particular for cold numbers ending in 6. Out of all the over 180,000 cold numbers less than 40 million, only one ends in a 6: 11,356. No two cold numbers can differ by a square, because if they did then a move from the larger of the two to the smaller would be winning, contradicting the assumption that they are both cold. Therefore, by the Furstenberg–Sárközy theorem, the natural density of the cold numbers is zero. That is, for every formula_2, and for all sufficiently large formula_0, the fraction of the numbers up to formula_0 that are cold is less than formula_3. More strongly, for every formula_0 there are formula_4 cold numbers up to formula_0. The exact growth rate of the cold numbers remains unknown, but experimentally the number of cold positions up to any given threshold formula_0 appears to be roughly formula_5. Extensions. The game subtract-a-square can also be played with multiple numbers. At each turn the player to make a move first selects one of the numbers, and then subtracts a square from it. Such a 'sum of normal games' can be analysed using the Sprague–Grundy theorem. 
This theorem states that each position in the game subtract-a-square may be mapped onto an equivalent nim heap size. Optimal play consists of moving to a collection of numbers such that the nim-sum of their equivalent nim heap sizes is zero, when this is possible. The equivalent nim heap size of a position may be calculated as the minimum excluded value of the equivalent sizes of the positions that can be reached by a single move. For subtract-a-square positions of values 0, 1, 2, ... the equivalent nim heap sizes are 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 3, 2, 3, 4, … (sequence in the OEIS). In particular, a position of subtract-a-square is cold if and only if its equivalent nim heap size is zero. It is also possible to play variants of this game using other allowed moves than the square numbers. For instance, Golomb defined an analogous game based on the Moser–de Bruijn sequence, a sequence that grows at a similar asymptotic rate to the squares, for which it is possible to determine more easily the set of cold positions and to define an easily computed optimal move strategy. Additional goals may also be added for the players without changing the winning conditions. For example, the winner can be given a "score" based on how many moves it took to win (the goal being to obtain the lowest possible score) and the loser given the goal to force the winner to take as long as possible to reach victory. With this additional pair of goals and an assumption of optimal play by both players, the scores for starting positions 0, 1, 2, ... are 0, 1, 2, 3, 1, 2, 3, 4, 5, 1, 4, 3, 6, 7, 3, 4, 1, 8, 3, 5, 6, 3, 8, 5, 5, 1, 5, 3, 7, … (sequence in the OEIS). Misère game. Subtract-a-square can also be played as a "misère" game, in which the player to make the last subtraction loses. The recursive algorithm to determine 'hot' and 'cold' numbers for the misère game is the same as that for the normal game, except that for the misère game the number 1 is 'cold' whilst 2 is 'hot'. It follows that the cold numbers for the misère variant are the cold numbers for the normal game shifted by 1: Misère play 'cold' numbers: 1, 3, 6, 8, 11, 13, 16, 18, 21, 23, 35, 40, 45, ... References. <templatestyles src="Reflist/styles.css" />
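The recursive rule described above (a position is cold exactly when every allowed move, i.e. every subtraction of a square, leads to a hot position) is easy to turn into a short dynamic program. The sketch below is illustrative rather than taken from any reference implementation; it computes both the cold positions and the equivalent nim-heap sizes for normal play, so its output can be compared against the two sequences quoted in the article.

def analyse(limit):
    # Grundy value of each position 0..limit: the minimum excluded value over
    # the Grundy values of the positions reachable by subtracting a square.
    grundy = []
    for n in range(limit + 1):
        reachable = set()
        s = 1
        while s * s <= n:
            reachable.add(grundy[n - s * s])
            s += 1
        g = 0
        while g in reachable:          # minimum excluded value (mex)
            g += 1
        grundy.append(g)
    cold = [n for n, g in enumerate(grundy) if g == 0]   # cold <=> nim-value 0
    return cold, grundy

cold, grundy = analyse(60)
print(cold)          # 0, 2, 5, 7, 10, 12, 15, 17, 20, 22, 34, 39, 44, ...
print(grundy[:25])   # 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, ...

Adding 1 to each entry of the cold list reproduces the misère-play cold numbers listed above, as expected from the shift described there.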
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "O(n\\log^2 n)" }, { "math_id": 2, "text": "\\epsilon > 0" }, { "math_id": 3, "text": "\\epsilon" }, { "math_id": 4, "text": "O(n/(\\log n)^{\\frac{1}{4}\\log\\log\\log\\log n})" }, { "math_id": 5, "text": "n^{0.7}" } ]
https://en.wikipedia.org/wiki?curid=10963217
1096323
Morley's trisector theorem
3 intersections of any triangle's adjacent angle trisectors form an equilateral triangle In plane geometry, Morley's trisector theorem states that in any triangle, the three points of intersection of the adjacent angle trisectors form an equilateral triangle, called the first Morley triangle or simply the Morley triangle. The theorem was discovered in 1899 by Anglo-American mathematician Frank Morley. It has various generalizations; in particular, if all the trisectors are intersected, one obtains four other equilateral triangles. Proofs. There are many proofs of Morley's theorem, some of which are very technical. Several early proofs were based on delicate trigonometric calculations. Recent proofs include an algebraic proof by Alain Connes (1998, 2004) extending the theorem to general fields other than characteristic three, and John Conway's elementary geometry proof. The latter starts with an equilateral triangle and shows that a triangle may be built around it which will be similar to any selected triangle. Morley's theorem does not hold in spherical and hyperbolic geometry. One proof uses the trigonometric identity which, by using of the sum of two angles identity, can be shown to be equal to formula_0 The last equation can be verified by applying the sum of two angles identity to the left side twice and eliminating the cosine. Points formula_1 are constructed on formula_2 as shown. We have formula_3, the sum of any triangle's angles, so formula_4 Therefore, the angles of triangle formula_5 are formula_6 and formula_7 From the figure and Also from the figure formula_8 and The law of sines applied to triangles formula_9 and formula_10 yields and Express the height of triangle formula_11 in two ways formula_12 and formula_13 where equation (1) was used to replace formula_14 and formula_15 in these two equations. Substituting equations (2) and (5) in the formula_16 equation and equations (3) and (6) in the formula_17 equation gives formula_18 and formula_19 Since the numerators are equal formula_20 or formula_21 Since angle formula_22 and angle formula_23 are equal and the sides forming these angles are in the same ratio, triangles formula_5 and formula_24 are similar. Similar angles formula_25 and formula_26 equal formula_27, and similar angles formula_24 and formula_5 equal formula_28 Similar arguments yield the base angles of triangles formula_29 and formula_30 In particular angle formula_31 is found to be formula_32 and from the figure we see that formula_33 Substituting yields formula_34 where equation (4) was used for angle formula_10 and therefore formula_35 Similarly the other angles of triangle formula_36 are found to be formula_37 Side and area. The first Morley triangle has side lengths formula_38 where "R" is the circumradius of the original triangle and "A, B," and "C" are the angles of the original triangle. Since the area of an equilateral triangle is formula_39 the area of Morley's triangle can be expressed as formula_40 Morley's triangles. Morley's theorem entails 18 equilateral triangles. 
The triangle described in the trisector theorem above, called the first Morley triangle, has vertices given in trilinear coordinates relative to a triangle "ABC" as follows: formula_41 Another of Morley's equilateral triangles that is also a central triangle is called the second Morley triangle and is given by these vertices: formula_42 The third of Morley's 18 equilateral triangles that is also a central triangle is called the third Morley triangle and is given by these vertices: formula_43 The first, second, and third Morley triangles are pairwise homothetic. Another homothetic triangle is formed by the three points "X" on the circumcircle of triangle "ABC" at which the line "XX" −1 is tangent to the circumcircle, where "X" −1 denotes the isogonal conjugate of "X". This equilateral triangle, called the circumtangential triangle, has these vertices: formula_44 A fifth equilateral triangle, also homothetic to the others, is obtained by rotating the circumtangential triangle π/6 about its center. Called the circumnormal triangle, its vertices are as follows: formula_45 An operation called "extraversion" can be used to obtain one of the 18 Morley triangles from another. Each triangle can be extraverted in three different ways; the 18 Morley triangles and 27 extravert pairs of triangles form the 18 vertices and 27 edges of the Pappus graph. Related triangle centers. The Morley center, "X"(356), centroid of the first Morley triangle, is given in trilinear coordinates by formula_46 1st Morley–Taylor–Marr center, "X"(357): The first Morley triangle is perspective to triangle formula_47: the lines each connecting a vertex of the original triangle with the opposite vertex of the Morley triangle concur at the point formula_48 Notes. <templatestyles src="Reflist/styles.css" />
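Because the statement is purely geometric, it can be checked numerically. The sketch below is an illustration, not a proof: the helper names and the particular triangle are invented, points are represented as complex numbers, and each Morley vertex is obtained by intersecting the two angle trisectors adjacent to one side. The three mutual distances should agree with each other and with the closed-form side length 8"R" sin(A/3) sin(B/3) sin(C/3) quoted above, up to floating-point error.

import cmath, math

def trisector_intersection(P, Q, R):
    # Morley vertex nearest side PQ: intersection of the trisector of angle P
    # adjacent to PQ with the trisector of angle Q adjacent to QP.
    def ray_angle(apex, towards, third):
        base = cmath.phase(towards - apex)                        # direction of the side
        interior = cmath.phase((third - apex) / (towards - apex)) # signed angle to the other side
        return base + interior / 3.0                              # rotate one third of the way
    d1 = cmath.exp(1j * ray_angle(P, Q, R))
    d2 = cmath.exp(1j * ray_angle(Q, P, R))
    # Solve P + t*d1 = Q + s*d2 for real t (Cramer's rule on a 2x2 system).
    det = d1.real * (-d2.imag) - (-d2.real) * d1.imag
    rhs = Q - P
    t = (rhs.real * (-d2.imag) - (-d2.real) * rhs.imag) / det
    return P + t * d1

def angle_at(P, Q, R):
    return abs(cmath.phase((Q - P) / (R - P)))   # interior angle of the triangle at P

A, B, C = complex(0, 0), complex(5, 0), complex(1.5, 3.5)
X = trisector_intersection(B, C, A)              # vertex near side BC
Y = trisector_intersection(C, A, B)              # vertex near side CA
Z = trisector_intersection(A, B, C)              # vertex near side AB
print(abs(X - Y), abs(Y - Z), abs(Z - X))        # three (nearly) equal side lengths

R_circ = abs(B - C) / (2 * math.sin(angle_at(A, B, C)))   # circumradius via the law of sines
print(8 * R_circ * math.sin(angle_at(A, B, C) / 3)
                 * math.sin(angle_at(B, C, A) / 3)
                 * math.sin(angle_at(C, A, B) / 3))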
[ { "math_id": 0, "text": "\\sin(3\\theta)=-4\\sin^3\\theta+3\\sin\\theta." }, { "math_id": 1, "text": "D, E, F" }, { "math_id": 2, "text": "\\overline{BC}" }, { "math_id": 3, "text": "3\\alpha+3\\beta+3\\gamma=180^\\circ" }, { "math_id": 4, "text": "\\alpha+\\beta+\\gamma=60^\\circ." }, { "math_id": 5, "text": "XEF" }, { "math_id": 6, "text": "\\alpha, (60^\\circ+\\beta)," }, { "math_id": 7, "text": "(60^\\circ+\\gamma)." }, { "math_id": 8, "text": "\\angle{AYC}=180^\\circ-\\alpha-\\gamma=120^\\circ+\\beta" }, { "math_id": 9, "text": "AYC" }, { "math_id": 10, "text": "AZB" }, { "math_id": 11, "text": "ABC" }, { "math_id": 12, "text": "h=\\overline{AB} \\sin(3\\beta)=\\overline{AB}\\cdot 4\\sin\\beta\\sin(60^\\circ+\\beta)\\sin(120^\\circ+\\beta)" }, { "math_id": 13, "text": "h=\\overline{AC} \\sin(3\\gamma)=\\overline{AC}\\cdot 4\\sin\\gamma\\sin(60^\\circ+\\gamma)\\sin(120^\\circ+\\gamma)." }, { "math_id": 14, "text": "\\sin(3\\beta)" }, { "math_id": 15, "text": "\\sin(3\\gamma)" }, { "math_id": 16, "text": "\\beta" }, { "math_id": 17, "text": "\\gamma" }, { "math_id": 18, "text": "h=4\\overline{AB}\\sin\\beta\\cdot\\frac{\\overline{DX}}{\\overline{XE}}\\cdot\\frac{\\overline{AC}}{\\overline{AY}}\\sin\\gamma" }, { "math_id": 19, "text": "h=4\\overline{AC}\\sin\\gamma\\cdot\\frac{\\overline{DX}}{\\overline{XF}}\\cdot\\frac{\\overline{AB}}{\\overline{AZ}}\\sin\\beta" }, { "math_id": 20, "text": "\\overline{XE}\\cdot\\overline{AY}=\\overline{XF}\\cdot\\overline{AZ}" }, { "math_id": 21, "text": "\\frac{\\overline{XE}}{\\overline{XF}}=\\frac{\\overline{AZ}}{\\overline{AY}}." }, { "math_id": 22, "text": "EXF" }, { "math_id": 23, "text": "ZAY" }, { "math_id": 24, "text": "AZY" }, { "math_id": 25, "text": "AYZ" }, { "math_id": 26, "text": "XFE" }, { "math_id": 27, "text": "(60^\\circ+\\gamma)" }, { "math_id": 28, "text": "(60^\\circ+\\beta)." }, { "math_id": 29, "text": "BXZ" }, { "math_id": 30, "text": "CYX." }, { "math_id": 31, "text": "BZX" }, { "math_id": 32, "text": "(60^\\circ+\\alpha)" }, { "math_id": 33, "text": "\\angle{AZY}+\\angle{AZB}+\\angle{BZX}+\\angle{XZY}=360^\\circ." }, { "math_id": 34, "text": "(60^\\circ+\\beta)+(120^\\circ+\\gamma)+(60^\\circ+\\alpha)+\\angle{XZY}=360^\\circ" }, { "math_id": 35, "text": "\\angle{XZY}=60^\\circ." }, { "math_id": 36, "text": "XYZ" }, { "math_id": 37, "text": "60^\\circ." 
}, { "math_id": 38, "text": "\na^\\prime=b^\\prime=c^\\prime=8R\\,\\sin\\tfrac13A\\,\\sin\\tfrac13B\\,\\sin\\tfrac13C,\n" }, { "math_id": 39, "text": "\\tfrac{\\sqrt{3}}{4}a'^2," }, { "math_id": 40, "text": "\n\\text{Area} = 16 \\sqrt{3}R^2\\, \\sin^2\\!\\tfrac13A\\, \\sin^2\\!\\tfrac13B\\, \\sin^2\\!\\tfrac13C.\n" }, { "math_id": 41, "text": "\\begin{array}{ccccccc}\n A \\text{-vertex} &=& 1 &:& 2 \\cos\\tfrac13 C &:& 2 \\cos\\tfrac13 B \\\\[5mu]\n B \\text{-vertex} &=& 2 \\cos\\tfrac13 C &:& 1 &:& 2 \\cos\\tfrac13 A \\\\[5mu]\n C \\text{-vertex} &=& 2 \\cos\\tfrac13 B &:& 2 \\cos\\tfrac13 A &:& 1\n\\end{array}" }, { "math_id": 42, "text": "\\begin{array}{ccccccc}\n A \\text{-vertex} &=& 1 &:& 2 \\cos\\tfrac13(C - 2\\pi) &:& 2 \\cos\\tfrac13(B - 2\\pi) \\\\[5mu]\n B \\text{-vertex} &=& 2 \\cos\\tfrac13(C - 2\\pi) &:& 1 &:& 2 \\cos\\tfrac13(A - 2\\pi) \\\\[5mu]\n C \\text{-vertex} &=& 2 \\cos\\tfrac13(B - 2\\pi) &:& 2 \\cos\\tfrac13(A - 2\\pi) &:& 1\n\\end{array}" }, { "math_id": 43, "text": "\\begin{array}{ccccccc}\n A \\text{-vertex} &=& 1 &:& 2 \\cos\\tfrac13(C + 2\\pi) &:& 2 \\cos\\tfrac13(B + 2\\pi) \\\\[5mu]\n B \\text{-vertex} &=& 2 \\cos\\tfrac13(C + 2\\pi) &:& 1 &:& 2 \\cos\\tfrac13(A + 2\\pi) \\\\[5mu]\n C \\text{-vertex} &=& 2 \\cos\\tfrac13(B + 2\\pi) &:& 2 \\cos\\tfrac13(A + 2\\pi) &:& 1\n\\end{array}" }, { "math_id": 44, "text": "\\begin{array}{lllllll}\n A \\text{-vertex} &=& \\phantom{-}\\csc\\tfrac13(C - B) &:& \\phantom{-}\\csc\\tfrac13(2C + B) &:& -\\csc\\tfrac13(C + 2B) \\\\[5mu]\n B \\text{-vertex} &=& -\\csc\\tfrac13(A + 2C) &:& \\phantom{-}\\csc\\tfrac13(A - C) &:& \\phantom{-}\\csc\\tfrac13(2A + C) \\\\[5mu]\n C \\text{-vertex} &=& \\phantom{-}\\csc\\tfrac13(2B + A) &:& -\\csc\\tfrac13(B + 2A) &:& \\phantom{-}\\csc\\tfrac13(B - A)\n\\end{array}" }, { "math_id": 45, "text": "\\begin{array}{lllllll}\n A \\text{-vertex} &=& \\phantom{-}\\sec\\tfrac13(C - B) &:& -\\sec\\tfrac13(2C + B) &:& -\\sec\\tfrac13(C + 2B) \\\\[5mu]\n B \\text{-vertex} &=& -\\sec\\tfrac13(A + 2C) &:& \\phantom{-}\\sec\\tfrac13(A - C) &:& -\\sec\\tfrac13(2A + C) \\\\[5mu]\n C \\text{-vertex} &=& -\\sec\\tfrac13(2B + A) &:& -\\sec\\tfrac13(B + 2A) &:& \\phantom{-}\\sec\\tfrac13(B - A)\n\\end{array}" }, { "math_id": 46, "text": "\n\\cos\\tfrac13A + 2\\cos\\tfrac13B\\,\\cos\\tfrac13C \\,:\\, \\cos\\tfrac13B + 2\\cos\\tfrac13C\\,\\cos\\tfrac13A \\,:\\, \\cos\\tfrac13C + 2\\cos\\tfrac13A\\,\\cos\\tfrac13B\n" }, { "math_id": 47, "text": "\\triangle ABC" }, { "math_id": 48, "text": "\n\\sec\\tfrac13A \\,:\\, \\sec\\tfrac13B \\,:\\, \\sec\\tfrac13C\n" } ]
https://en.wikipedia.org/wiki?curid=1096323
1096354
Trip distribution
Trip distribution (or destination choice or zonal interchange analysis) is the second component (after trip generation, but before mode choice and route assignment) in the traditional four-step transportation forecasting model. This step matches tripmakers’ origins and destinations to develop a “trip table”, a matrix that displays the number of trips going from each origin to each destination. Historically, this component has been the least developed component of the transportation planning model. Where: "T" "ij" = trips from origin "i" to destination "j". Note that the practical value of trips on the diagonal, e.g. from zone 1 to zone 1, is zero since no intra-zonal trip occurs. Work trip distribution is the way that travel demand models understand how people take jobs. There are trip distribution models for other (non-work) activities such as the choice of location for grocery shopping, which follow the same structure. History. Over the years, modelers have used several different formulations of trip distribution. The first was the Fratar or Growth model (which did not differentiate trips by purpose). This structure extrapolated a base year trip table to the future based on growth, but took no account of changing spatial accessibility due to increased supply or changes in travel patterns and congestion. (Simple Growth factor model, Furness Model and Detroit model are models developed at the same time period) The next models developed were the gravity model and the intervening opportunities model. The most widely used formulation is still the gravity model. While studying traffic in Baltimore, Maryland, Alan Voorhees developed a mathematical formula to predict traffic patterns based on land use. This formula has been instrumental in the design of numerous transportation and public works projects around the world. He wrote "A General Theory of Traffic Movement," (Voorhees, 1956) which applied the gravity model to trip distribution, which translates trips generated in an area to a matrix that identifies the number of trips from each origin to each destination, which can then be loaded onto the network. Evaluation of several model forms in the 1960s concluded that "the gravity model and intervening opportunity model proved of about equal reliability and utility in simulating the 1948 and 1955 trip distribution for Washington, D.C." (Heanue and Pyers 1966). The Fratar model was shown to have weakness in areas experiencing land use changes. As comparisons between the models showed that either could be calibrated equally well to match observed conditions, because of computational ease, gravity models became more widely spread than intervening opportunities models. Some theoretical problems with the intervening opportunities model were discussed by Whitaker and West (1968) concerning its inability to account for all trips generated in a zone which makes it more difficult to calibrate, although techniques for dealing with the limitations have been developed by Ruiter (1967). With the development of logit and other discrete choice techniques, new, demographically disaggregate approaches to travel demand were attempted. By including variables other than travel time in determining the probability of making a trip, it is expected to have a better prediction of travel behavior. The logit model and gravity model have been shown by Wilson (1967) to be of essentially the same form as used in statistical mechanics, the entropy maximization model. 
The application of these models differs in concept in that the gravity model uses impedance by travel time, perhaps stratified by socioeconomic variables, in determining the probability of trip making, while a discrete choice approach brings those variables inside the utility or impedance function. Discrete choice models require more information to estimate and more computational time. Ben-Akiva and Lerman (1985) have developed combination destination choice and mode choice models using a logit formulation for work and non-work trips. Because of computational intensity, these formulations tended to aggregate traffic zones into larger districts or rings in estimation. In current application, some models, including for instance the transportation planning model used in Portland, Oregon, use a logit formulation for destination choice. Allen (1984) used utilities from a logit based mode choice model in determining composite impedance for trip distribution. However, that approach, using mode choice log-sums implies that destination choice depends on the same variables as mode choice. Levinson and Kumar (1995) employ mode choice probabilities as a weighting factor and develop a specific impedance function or “f-curve” for each mode for work and non-work trip purposes. Mathematics. At this point in the transportation planning process, the information for zonal interchange analysis is organized in an origin-destination table. On the left is listed trips produced in each zone. Along the top are listed the zones, and for each zone we list its attraction. The table is "n" x "n", where "n" = the number of zones. Each cell in our table is to contain the number of trips from zone "i" to zone "j". We do not have these within-cell numbers yet, although we have the row and column totals. With data organized this way, our task is to fill in the cells for tables headed "t" = 1 through say "t" = "n". Actually, from home interview travel survey data and attraction analysis we have the cell information for "t" = 1. The data are a sample, so we generalize the sample to the universe. The techniques used for zonal interchange analysis explore the empirical rule that fits the "t" = 1 data. That rule is then used to generate cell data for "t" = 2, "t" = 3, "t" = 4, etc., to "t" = "n". The first technique developed to model zonal interchange involves a model such as this: formula_0 where: Zone "i" generates "T" "i" trips; how many will go to zone "j"? That depends on the attractiveness of "j" compared to the attractiveness of all places; attractiveness is tempered by the distance a zone is from zone "i". We compute the fraction comparing "j" to all places and multiply "T" ;"i" by it. The rule is often of a gravity form: formula_7 where: But in the zonal interchange mode, we use numbers related to trip origins ("T" ;"i") and trip destinations ("T" ;"j") rather than populations. There are many model forms because we may use weights and special calibration parameters, e.g., one could write say: formula_10 or formula_11 where: Gravity model. The gravity model illustrates the macroscopic relationships between places (say homes and workplaces). It has long been posited that the interaction between two locations declines with increasing (distance, time, and cost) between them, but is positively associated with the amount of activity at each location (Isard, 1956). In analogy with physics, Reilly (1929) formulated Reilly's law of retail gravitation, and J. Q. 
Stewart (1948) formulated definitions of demographic gravitation, force, energy, and potential, now called accessibility (Hansen, 1959). The distance decay factor of 1/distance has been updated to a more comprehensive function of generalized cost, which is not necessarily linear - a negative exponential tends to be the preferred form. The gravity model has been corroborated many times as a basic underlying aggregate relationship (Scott 1988, Cervero 1989, Levinson and Kumar 1995). The rate of decline of the interaction (called alternatively, the impedance or friction factor, or the utility or propensity function) has to be empirically measured, and varies by context. Limiting the usefulness of the gravity model is its aggregate nature. Though policy also operates at an aggregate level, more accurate analyses will retain the most detailed level of information as long as possible. While the gravity model is very successful in explaining the choice of a large number of individuals, the choice of any given individual varies greatly from the predicted value. As applied in an urban travel demand context, the disutilities are primarily time, distance, and cost, although discrete choice models with the application of more expansive utility expressions are sometimes used, as is stratification by income or vehicle ownership. Mathematically, the gravity model often takes the form: formula_14 formula_15 formula_16 where It is doubly constrained, in the sense that for any "i" the total number of trips from "i" predicted by the model always (mechanically, for any parameter values) equals the real total number of trips from "i". Similarly, the total number of trips to "j" predicted by the model equals the real total number of trips to "j", for any "j". Entropy analysis. Wilson (1970) gives another way to think about zonal interchange problem. This section treats Wilson’s methodology to give a grasp of central ideas. To start, consider some trips where there are seven people in origin zones commuting to seven jobs in destination zones. One configuration of such trips will be: formula_19 where 0! = 1. That configuration can appear in 1,260 ways. We have calculated the number of ways that configuration of trips might have occurred, and to explain the calculation, let’s recall those coin tossing experiments talked about so much in elementary statistics. The number of ways a two-sided coin can come up is formula_20, where n is the number of times we toss the coin. If we toss the coin once, it can come up heads or tails, formula_21. If we toss it twice, it can come up HH, HT, TH, or TT, four ways, and formula_22. To ask the specific question about, say, four coins coming up all heads, we calculate formula_23. Two heads and two tails would be formula_24. We are solving the equation: formula_25 An important point is that as "n" gets larger, our distribution gets more and more peaked, and it is more and more reasonable to think of a most likely state. However, the notion of most likely state comes not from this thinking; it comes from statistical mechanics, a field well known to Wilson and not so well known to transportation planners. The result from statistical mechanics is that a descending series is most likely. Think about the way the energy from lights in the classroom is affecting the air in the classroom. If the effect resulted in an ascending series, many of the atoms and molecules would be affected a lot and a few would be affected a little. 
The descending series would have many affected not at all or not much and only a few affected very much. We could take a given level of energy and compute excitation levels in ascending and descending series. Using the formula above, we would compute the ways particular series could occur, and we would conclude that descending series dominate. That is more-or-less Boltzmann's Law, formula_26 That is, the particles at any particular excitation level "j" will be a negative exponential function of the particles in the ground state, formula_27, the excitation level, formula_28, and a parameter formula_29, which is a function of the (average) energy available to the particles in the system. The two paragraphs above have to do with ensemble methods of calculation developed by Gibbs, a topic well beyond the reach of these notes. Returning to the O-D matrix, note that we have not used as much information as we would have from an O and D survey and from our earlier work on trip generation. For the same travel pattern in the O-D matrix used before, we would have row and column totals, i.e.: Consider the way the four folks might travel, 4!/(2!1!1!) = 12; consider three folks, 3!/(0!2!1!) = 3. All travel can be combined in 12×3 = 36 ways. The possible configuration of trips is, thus, seen to be much constrained by the column and row totals. We put this point together with the earlier work with our matrix and the notion of most likely state to say that we want to formula_30 subject to formula_31 where: formula_32 and this is the problem that we have solved above. Wilson adds another consideration; he constrains the system to the amount of energy available (i.e., money), and we have the additional constraint, formula_33 where "C" is the quantity of resources available and formula_12 is the travel cost from "i" to "j". The discussion thus far contains the central ideas in Wilson’s work, but we are not yet to the place where the reader will recognize the model as it is formulated by Wilson. First, writing the formula_34 function to be maximized using Lagrangian multipliers, we have: formula_35 where formula_36 and formula_29 are the Lagrange multipliers, formula_29 having an energy sense. Second, it is convenient to maximize the natural log (ln) rather than formula_37, for then we may use Stirling's approximation. formula_38 so formula_39 Third, evaluating the maximum, we have formula_40 with solution formula_41 formula_42 Finally, substituting this value of formula_1 back into our constraint equations, we have: formula_43 and, taking the constant multiples outside of the summation sign formula_44 Let formula_45 we have formula_46 which says that the most probable distribution of trips has a gravity model form, formula_1 is proportional to trip origins and destinations. The constants formula_47, formula_48, and formula_29 ensure that constraints are met. Turning now to computation, we have a large problem. First, we do not know the value of "C", which earlier on we said had to do with the money available, it was a cost constraint. Consequently, we have to set formula_29 to different values and then find the best set of values for formula_47 and formula_48. We know what formula_29 means – the greater the value of formula_29, the less the cost of average distance traveled. (Compare formula_29 in Boltzmann's Law noted earlier.) Second, the values of formula_47 and formula_48 depend on each other. So for each value of formula_29, we must use an iterative solution. There are computer programs to do this. 
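A minimal sketch of such a program is given below, assuming made-up zone totals and travel costs; the function and variable names are invented for illustration. For a fixed value of the cost-sensitivity parameter formula_29, it alternately updates the balancing factors formula_47 and formula_48 and then assembles the doubly constrained gravity model formula_46, so that the row sums reproduce the origin totals and the column sums reproduce the destination totals.

import numpy as np

def gravity_model(origins, destinations, cost, beta, iterations=50):
    # Balance T_ij = A_i * B_j * T_i * T_j * exp(-beta * c_ij) so that row sums
    # match the origin totals and column sums match the destination totals.
    f = np.exp(-beta * cost)                  # deterrence (impedance) function
    A = np.ones(len(origins))
    B = np.ones(len(destinations))
    for _ in range(iterations):
        A = 1.0 / (f @ (B * destinations))    # A_i = 1 / sum_j B_j T_j f(c_ij)
        B = 1.0 / (f.T @ (A * origins))       # B_j = 1 / sum_i A_i T_i f(c_ij)
    return np.outer(A * origins, B * destinations) * f

origins = np.array([400.0, 300.0, 300.0])      # trips produced by each zone
destinations = np.array([500.0, 350.0, 150.0]) # trips attracted to each zone
cost = np.array([[1.0, 3.0, 5.0],
                 [3.0, 1.0, 4.0],
                 [5.0, 4.0, 1.0]])             # assumed travel costs c_ij
T = gravity_model(origins, destinations, cost, beta=0.5)
print(T.round(1))
print(T.sum(axis=1), T.sum(axis=0))            # approximately reproduce the zone totals

In a full model, formula_29 itself would then be adjusted (for example by bisection) until the modelled average travel cost matches the observed value, which is the iterative search over formula_29 described above.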
Wilson's method has been applied to the Lowry model. Issues. Congestion. One of the key drawbacks to the application of many early models was the inability to take account of congested travel time on the road network in determining the probability of making a trip between two locations. Although Wohl noted as early as 1963 research into the feedback mechanism or the “interdependencies among assigned or distributed volume, travel time (or travel ‘resistance’) and route or system capacity”, this work has yet to be widely adopted with rigorous tests of convergence, or with a so-called “equilibrium” or “combined” solution (Boyce et al. 1994). Haney (1972) suggests internal assumptions about travel time used to develop demand should be consistent with the output travel times of the route assignment of that demand. While small methodological inconsistencies are necessarily a problem for estimating base year conditions, forecasting becomes even more tenuous without an understanding of the feedback between supply and demand. Initially heuristic methods were developed by Irwin and Von Cube and others, and later formal mathematical programming techniques were established by Suzanne Evans. Stability of travel times. A key point in analyzing feedback is the finding in earlier research that commuting times have remained stable over the past thirty years in the Washington Metropolitan Region, despite significant changes in household income, land use pattern, family structure, and labor force participation. Similar results have been found in the Twin Cities The stability of travel times and distribution curves over the past three decades gives a good basis for the application of aggregate trip distribution models for relatively long term forecasting. This is not to suggest that there exists a constant travel time budget. Footnotes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\nT_{ij} = T_i\\frac{{A_j f\\left( {C_{ij} } \\right)K_{ij} }}\n{{\\sum_{j' = 1}^n {A_{j'} f\\left( {C_{ij'} } \\right)K_{ij'} } }}\n" }, { "math_id": 1, "text": "T_{ij}" }, { "math_id": 2, "text": "T_i" }, { "math_id": 3, "text": "A_j" }, { "math_id": 4, "text": "f(C_{ij})" }, { "math_id": 5, "text": "C_{ij}^b" }, { "math_id": 6, "text": "K_{ij}" }, { "math_id": 7, "text": "\nT_{ij} = a\\frac{{P_i P_j }}\n{{C_{ij}^b }}\n" }, { "math_id": 8, "text": "P_i; P_j" }, { "math_id": 9, "text": "a; b" }, { "math_id": 10, "text": "\nT_{ij} = a\\frac{{T_i^c T_j^d }}\n{{C_{ij}^b }}\n" }, { "math_id": 11, "text": "\nT_{ij} = \\frac{{cT_i dT_j }}\n{{C_{ij}^b }}\n" }, { "math_id": 12, "text": "C_{ij}" }, { "math_id": 13, "text": "T_j" }, { "math_id": 14, "text": "\nT_{ij} = K_i K_j T_i T_j f(C_{ij} )\n" }, { "math_id": 15, "text": "\n\\sum_j {T_{ij} = T_i } ,\\sum_i {T_{ij} = T_j } \n" }, { "math_id": 16, "text": "\nK_i = \\frac{1}\n{{\\sum_j {K_j T_j f(C_{ij} )} }},K_j = \\frac{1}\n{{\\sum_i {K_i T_i f(C_{ij} )} }}\n" }, { "math_id": 17, "text": "K_i, K_j" }, { "math_id": 18, "text": "f" }, { "math_id": 19, "text": "\nw\\left( {T_{ij} } \\right) = \\frac{{7!}}\n{{2!1!1!0!2!1!}} = 1260\n" }, { "math_id": 20, "text": "2^n" }, { "math_id": 21, "text": "2^1 = 2" }, { "math_id": 22, "text": "2^2 = 4" }, { "math_id": 23, "text": "4!/(4!0!) = 1" }, { "math_id": 24, "text": "4!/(2!2!) = 6" }, { "math_id": 25, "text": "\nw = \\frac{{n!}}\n{{\\prod_{i = 1}^n {n_i !} }}\n" }, { "math_id": 26, "text": "\np_j = p_0 e^{\\beta e_j } \n" }, { "math_id": 27, "text": "p_0" }, { "math_id": 28, "text": "e_j" }, { "math_id": 29, "text": "\\beta" }, { "math_id": 30, "text": "\n\\max w\\left( {T_{ij} } \\right) = \\frac{{T!}}\n{{\\prod_{ij} {T_{ij}!} }} \n" }, { "math_id": 31, "text": " \n\\sum_j {T_{ij} = T_i } ;\n\\sum_i {T_{ij} = T_j } \n" }, { "math_id": 32, "text": "\nT = \\sum_j {\\sum_i {T_{ij} } } = \\sum_i {T_i } = \\sum_j {T_j } \n" }, { "math_id": 33, "text": "\n\\sum_i {\\sum_j {T_{ij} C_{ij} = C} } \n" }, { "math_id": 34, "text": "\\Lambda" }, { "math_id": 35, "text": "\n\\Lambda(T_{ij},\\lambda_i,\\lambda_j) = \\frac{{T!}}\n{{\\prod_{ij} {Tij!} }} + \\sum_i {\\lambda _i \\left( {T_i - \\sum_j {T_{ij} } } \\right)} + \\sum_j {\\lambda _j \\left( {T_j - \\sum_i {T_{ij} } } \\right) + \\beta \\left( {C - \\sum_i {\\sum_j {T_{ij} C_{ij} } } } \\right)} \n" }, { "math_id": 36, "text": "\\lambda_i, \\lambda_j" }, { "math_id": 37, "text": "w(T_{ij})" }, { "math_id": 38, "text": "\n\\ln N! 
\\approx N\\ln N - N\n" }, { "math_id": 39, "text": "\n\\frac{{\\partial \\ln N!}}\n{{\\partial N}} \\approx \\ln N\n" }, { "math_id": 40, "text": "\n\\frac{{\\partial \\Lambda(T_{ij},\\lambda_i,\\lambda_j) }}\n{{\\partial T_{ij} }} = - \\ln T_{ij} - \\lambda _i - \\lambda _j - \\beta C_{ij} = 0\n" }, { "math_id": 41, "text": "\n\\ln T_{ij} = - \\lambda _i - \\lambda _j - \\beta C_{ij} \n" }, { "math_id": 42, "text": "\nT_{ij} = e^{ - \\lambda _i - \\lambda _j - \\beta C_{ij} } \n" }, { "math_id": 43, "text": "\n\\sum_j {e^{ - \\lambda _i - \\lambda _j - \\beta C_{ij} } } = T_i;\n\\sum_i {e^{ - \\lambda _i - \\lambda _j - \\beta C_{ij} } } = T_j\n" }, { "math_id": 44, "text": "\ne^{ - \\lambda _i } = \\frac{{T_i }}\n{{\\sum_j {e^{ - \\lambda _j - \\beta C_{ij} } } }};e^{ - \\lambda _j } = \\frac{{T_j }}\n{{\\sum_i {e^{ - \\lambda _i - \\beta C_{ij} } } }}\n" }, { "math_id": 45, "text": "\n\\frac{{e^{ - \\lambda _i } }}\n{{T_i }} = A_i ;\\frac{{e^{ - \\lambda _j } }}\n{{T_j }} = B_j \n" }, { "math_id": 46, "text": "\nT_{ij} = A_i B_j T_i T_j e^{ - \\beta C_{ij} } \n" }, { "math_id": 47, "text": "A_i" }, { "math_id": 48, "text": "B_j" } ]
https://en.wikipedia.org/wiki?curid=1096354
1096396
Categorical theory
In mathematical logic, a theory is categorical if it has exactly one model (up to isomorphism). Such a theory can be viewed as "defining" its model, uniquely characterizing the model's structure. In first-order logic, only theories with a finite model can be categorical. Higher-order logic contains categorical theories with an infinite model. For example, the second-order Peano axioms are categorical, having a unique model whose domain is the set of natural numbers formula_0 In model theory, the notion of a categorical theory is refined with respect to cardinality. A theory is "κ"-categorical (or categorical in "κ") if it has exactly one model of cardinality "κ" up to isomorphism. Morley's categoricity theorem is a theorem of Michael D. Morley (1965) stating that if a first-order theory in a countable language is categorical in some uncountable cardinality, then it is categorical in all uncountable cardinalities. Saharon Shelah (1974) extended Morley's theorem to uncountable languages: if the language has cardinality "κ" and a theory is categorical in some uncountable cardinal greater than or equal to "κ" then it is categorical in all cardinalities greater than "κ". History and motivation. Oswald Veblen in 1904 defined a theory to be categorical if all of its models are isomorphic. It follows from the definition above and the Löwenheim–Skolem theorem that any first-order theory with a model of infinite cardinality cannot be categorical. One is then immediately led to the more subtle notion of "κ"-categoricity, which asks: for which cardinals "κ" is there exactly one model of cardinality "κ" of the given theory "T" up to isomorphism? This is a deep question and significant progress was only made in 1954 when Jerzy Łoś noticed that, at least for complete theories "T" over countable languages with at least one infinite model, he could only find three ways for "T" to be "κ"-categorical at some "κ": In other words, he observed that, in all the cases he could think of, "κ"-categoricity at any one uncountable cardinal implied "κ"-categoricity at all other uncountable cardinals. This observation spurred a great amount of research into the 1960s, eventually culminating in Michael Morley's famous result that these are in fact the only possibilities. The theory was subsequently extended and refined by Saharon Shelah in the 1970s and beyond, leading to stability theory and Shelah's more general programme of classification theory. Examples. There are not many natural examples of theories that are categorical in some uncountable cardinal. The known examples include: There are also examples of theories that are categorical in "ω" but not categorical in uncountable cardinals. The simplest example is the theory of an equivalence relation with exactly two equivalence classes, both of which are infinite. Another example is the theory of dense linear orders with no endpoints; Cantor proved that any such countable linear order is isomorphic to the rational numbers: see Cantor's isomorphism theorem. Properties. Every categorical theory is complete. However, the converse does not hold. Any theory "T" categorical in some infinite cardinal "κ" is very close to being complete. More precisely, the Łoś–Vaught test states that if a satisfiable theory has no finite models and is categorical in some infinite cardinal "κ" at least equal to the cardinality of its language, then the theory is complete. 
The reason is that all infinite models are first-order equivalent to some model of cardinality "κ" by the Löwenheim–Skolem theorem, and so are all equivalent, as the theory is categorical in "κ". Therefore, the theory is complete, as all models are equivalent. The assumption that the theory has no finite models is necessary. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbb{N}." } ]
https://en.wikipedia.org/wiki?curid=1096396
10964966
List of NBA teams by single season win percentage
This is a list of the all-time best regular season winning percentages in the NBA. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mbox{Average point differential}=\\frac{\\mbox{Total points for}-\\mbox{Total points against}}{\\mbox{Total games played}}" } ]
https://en.wikipedia.org/wiki?curid=10964966
10967
Forgetting curve
Decline of memory retention in time The forgetting curve hypothesizes the decline of memory retention in time. This curve shows how information is lost over time when there is no attempt to retain it. A related concept is the strength of memory that refers to the durability that memory traces in the brain. The stronger the memory, the longer period of time that a person is able to recall it. A typical graph of the forgetting curve purports to show that humans tend to halve their memory of newly learned knowledge in a matter of days or weeks unless they consciously review the learned material. The forgetting curve supports one of the seven kinds of memory failures: transience, which is the process of forgetting that occurs with the passage of time. History. From 1880 to 1885, Hermann Ebbinghaus ran a limited, incomplete study on himself and published his hypothesis in 1885 as "" (later translated into English as "Memory: A Contribution to Experimental Psychology"). Ebbinghaus studied the memorisation of nonsense syllables, such as "WID" and "ZOF" (CVCs or Consonant–Vowel–Consonant) by repeatedly testing himself after various time periods and recording the results. He plotted these results on a graph creating what is now known as the "forgetting curve". Ebbinghaus investigated the rate of forgetting, but not the effect of spaced repetition on the increase in retrievability of memories. Ebbinghaus's publication also included an equation to approximate his forgetting curve: formula_0 Here, formula_1 represents 'Savings' expressed as a percentage, and formula_2 represents time in minutes, counting from one minute before end of learning. The constants c and k are 1.25 and 1.84 respectively. Savings is defined as the relative amount of time saved on the second learning trial as a result of having had the first. A savings of 100% would indicate that all items were still known from the first trial. A 75% savings would mean that relearning missed items required 25% as long as the original learning session (to learn all items). 'Savings' is thus, analogous to retention rate. In 2015, an attempt to replicate the forgetting curve with one study subject has shown the experimental results similar to Ebbinghaus' original data. Ebbinghaus' experiment has significantly contributed to experimental psychology. He was the first to carry out a series of well-designed experiments on the subject of forgetting, and he was one of the first to choose artificial stimuli in the research of experimental psychology. Since his introduction of nonsense syllables, a large number of experiments in experimental psychology has been based on highly controlled artificial stimuli. Increasing rate of learning. Hermann Ebbinghaus hypothesized that the speed of forgetting depends on a number of factors such as the difficulty of the learned material (e.g. how meaningful it is), its representation and other physiological factors such as stress and sleep. He further hypothesized that the basal forgetting rate differs little between individuals. He concluded that the difference in performance can be explained by mnemonic representation skills. He went on to hypothesize that basic training in mnemonic techniques can help overcome those differences in part. 
He asserted that the best methods for increasing the strength of memory are: His premise was that each repetition in learning increases the optimum interval before the next repetition is needed (for near-perfect retention, initial repetitions may need to be made within days, but later they can be made after years). He discovered that information is easier to recall when it's built upon things you already know, and the forgetting curve was flattened by every repetition. It appeared that by applying frequent training in learning, the information was solidified by repeated recalling. Later research also suggested that, other than the two factors Ebbinghaus proposed, higher original learning would also produce slower forgetting. The more information was originally learned, the slower the forgetting rate would be. Spending time each day to remember information will greatly decrease the effects of the forgetting curve. Some learning consultants claim reviewing material in the first 24 hours after learning information is the optimum time to actively recall the content and reset the forgetting curve. Evidence suggests waiting 10–20% of the time towards when the information will be needed is the optimum time for a single review. Some memories remain free from the detrimental effects of interference and do not necessarily follow the typical forgetting curve as various noise and outside factors influence what information would be remembered. There is debate among supporters of the hypothesis about the shape of the curve for events and facts that are more significant to the subject. Some supporters, for example, suggest that memories of shocking events such as the Kennedy Assassination or 9/11 are vividly imprinted in memory (flashbulb memory). Others have compared contemporaneous written recollections with recollections recorded years later, and found considerable variations as the subject's memory incorporates after-acquired information. There is considerable research in this area as it relates to eyewitness identification testimony, and eyewitness accounts are found demonstrably unreliable. Equations. Many equations have since been proposed to approximate forgetting, perhaps the simplest being an exponential curve described by the equation formula_3 where formula_4 is retrievability (a measure of how easy it is to retrieve a piece of information from memory), formula_5 is stability of memory (determines how fast formula_4 falls over time in the absence of training, testing or other recall), and formula_2 is time. Simple equations such as this one were not found to provide a good fit to the available data. Notes. <templatestyles src="Reflist/styles.css" />
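The two equations quoted above are simple enough to evaluate directly. The short Python sketch below is illustrative only: the function names are invented, and it assumes the logarithm in Ebbinghaus' savings formula is the common (base-10) logarithm, since the equation as quoted does not state the base.

import math

def ebbinghaus_savings(t_minutes, c=1.25, k=1.84):
    # b = 100*k / ((log10 t)^c + k), with the constants quoted above.
    return 100.0 * k / (math.log10(t_minutes) ** c + k)

def exponential_retention(t, S):
    # R = exp(-t/S): retrievability after time t for memory stability S.
    return math.exp(-t / S)

for minutes in (20, 60, 24 * 60, 6 * 24 * 60, 31 * 24 * 60):
    print(minutes, round(ebbinghaus_savings(minutes), 1))

# Doubling the stability S halves the decay constant 1/S in the exponential curve.
print(exponential_retention(2.0, 1.0), exponential_retention(2.0, 2.0))

With the base-10 assumption the savings drop to roughly 57% after 20 minutes and to roughly 21% after a month, which is close to the figures usually attributed to Ebbinghaus' experiment.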
[ { "math_id": 0, "text": "b = \\frac{100k}{(\\log(t))^c +k}" }, { "math_id": 1, "text": "b" }, { "math_id": 2, "text": "t" }, { "math_id": 3, "text": "R = e^{-\\frac{t}{S}}," }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "S" } ]
https://en.wikipedia.org/wiki?curid=10967
10968553
Methylmalonyl CoA epimerase
Methylmalonyl CoA epimerase (EC 5.1.99.1, "methylmalonyl-CoA racemase", "methylmalonyl coenzyme A racemase", "DL-methylmalonyl-CoA racemase", "2-methyl-3-oxopropanoyl-CoA 2-epimerase [incorrect]") is an enzyme involved in fatty acid catabolism that is encoded in humans by the "MCEE" gene located on chromosome 2. It is routinely and incorrectly labeled as "methylmalonyl-CoA racemase". It is not a racemase because the CoA moiety has 5 other stereocenters. Structure. The "MCEE" gene is located in the 2p13 region and contains 4 exons; it encodes a protein that is approximately 18 kDa in size and localizes to the mitochondrial matrix. Several natural variants of the amino acid sequence exist. The structure of the MCEE protein has been solved by X-ray crystallography at 1.8-angstrom resolution. Function. The MCEE gene encodes an enzyme that interconverts D- and L-methylmalonyl-CoA during the degradation of branched-chain amino acids, odd chain-length fatty acids, and other metabolites. In biochemical terms, it catalyzes the reaction that converts ("S")-methylmalonyl-CoA to the ("R") form. This enzyme catalyzes the following chemical reaction ("S")-methylmalonyl-CoA formula_0 ("R")-methylmalonyl-CoA Methylmalonyl CoA epimerase plays an important role in the catabolism of fatty acids with odd-length carbon chains. In the catabolism of even-chain saturated fatty acids, the β-oxidation pathway breaks down fatty acyl-CoA molecules in repeated sequences of four reactions to yield one acetyl-CoA per repeated sequence. This means that, for each round of β-oxidation, the fatty acyl-CoA is shortened by two carbons. If the fatty acid began with an even number of carbons, this process could break down an entire saturated fatty acid into acetyl-CoA units. If the fatty acid began with an odd number of carbons, however, β-oxidation would break the fatty acyl-CoA down until the three-carbon propionyl-CoA is formed. In order to convert this to the metabolically useful succinyl-CoA, three reactions are needed. The propionyl-CoA is first carboxylated to ("S")-methylmalonyl-CoA by the enzyme propionyl-CoA carboxylase. Methylmalonyl CoA epimerase then catalyzes the rearrangement of ("S")-methylmalonyl-CoA to the ("R") form in a reaction that proceeds through a resonance-stabilized carbanion intermediate. The ("R")-methylmalonyl-CoA is then converted to succinyl-CoA in a reaction catalyzed by the vitamin B12-dependent enzyme methylmalonyl-CoA mutase. Acting as a general base, the enzyme abstracts the proton from the chiral carbon (the carbon α to the thioester carbonyl) of ("R")-methylmalonyl-CoA. This results in the formation of a carbanion intermediate that is stabilized by resonance with the adjacent carbonyl group. The enzyme then acts as a general acid to reprotonate this carbon from the opposite face, resulting in the formation of ("S")-methylmalonyl-CoA. Clinical significance. Mutations in the MCEE gene cause methylmalonyl-CoA epimerase deficiency (MCEED), a rare autosomal recessive inborn error of metabolism involving the branched-chain amino acids valine, leucine, and isoleucine. Patients with MCEED may present with life-threatening neonatal metabolic acidosis, hyperammonemia, feeding difficulties, and coma. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=10968553
10969277
Rank product
The rank product is a biologically motivated rank test for the detection of differentially expressed genes in replicated microarray experiments. It is a simple non-parametric statistical method based on ranks of fold changes. In addition to its use in expression profiling, it can be used to combine ranked lists in various application domains, including proteomics, metabolomics, statistical meta-analysis, and general feature selection. Calculation of the rank product. Given "n" genes and "k" replicates, let formula_0 be the rank of gene "g" in the "i"-th replicate. Compute the rank product via the geometric mean: formula_1 Determination of significance levels. Simple permutation-based estimation is used to determine how likely a given RP value or better is to be observed in a random experiment: rank products are recomputed for "p" random permutations in which the ranks within each replicate are assigned at random. For each gene "g", let "c" be the total number of permuted rank products that are smaller than or equal to the observed value; the expected number of equally good or better rank products in a random experiment is then estimated as formula_2, and the estimated percentage of false positives when gene "g" and all better-ranked genes are called significant is formula_3, where formula_4 is the position of gene "g" in the list of all genes sorted by increasing formula_5. Exact probability distribution and accurate approximation. Permutation re-sampling requires a computationally demanding number of permutations to get reliable estimates of the "p"-values for the most differentially expressed genes, if "n" is large. Eisinga, Breitling and Heskes (2013) provide the exact probability mass distribution of the rank product statistic. Calculation of the exact "p"-values offers a substantial improvement over permutation approximation, most significantly for the part of the distribution that rank product analysis is most interested in, i.e., the thin right tail. However, exact statistical significance of large rank products may take unacceptably long to compute. Heskes, Eisinga and Breitling (2014) provide a method to determine accurate approximate "p"-values of the rank product statistic in a computationally fast manner.
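As a concrete illustration, the following Python sketch computes the rank product and a permutation-based estimate of formula_2 and formula_3 as described above. It is a minimal sketch with hypothetical function names, not the reference implementation of the cited papers, and it assumes that larger fold changes should receive smaller (better) ranks.

```python
import numpy as np

def rank_product(data):
    """data: (n_genes, k_replicates) array of fold changes. Genes are ranked
    within each replicate (rank 1 = largest fold change); the rank product
    is the geometric mean of these ranks."""
    n, k = data.shape
    ranks = np.empty((n, k))
    for j in range(k):
        order = np.argsort(-data[:, j])        # descending fold change
        ranks[order, j] = np.arange(1, n + 1)
    return np.exp(np.log(ranks).mean(axis=1))  # geometric mean of the ranks

def rank_product_pfp(data, n_perm=1000, seed=0):
    """Permutation-based estimates of E_RP(g) = c/p and pfp(g) = E_RP(g)/rank(g)."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    rp_obs = rank_product(data)
    c = np.zeros(n)
    for _ in range(n_perm):
        # random experiment: independent random rank vectors per replicate
        rand_ranks = np.column_stack([rng.permutation(n) + 1 for _ in range(k)])
        rp_rand = np.exp(np.log(rand_ranks).mean(axis=1))
        # count random rank products smaller than or equal to each observed one
        c += np.searchsorted(np.sort(rp_rand), rp_obs, side="right")
    e_rp = c / n_perm
    order = np.argsort(rp_obs)
    rank_g = np.empty(n)
    rank_g[order] = np.arange(1, n + 1)        # rank of each gene by observed RP
    return rp_obs, e_rp, e_rp / rank_g

# Small synthetic example: 500 genes, 4 replicates, the first 10 genes up-regulated.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 4))
data[:10] += 2.0
rp, e_rp, pfp = rank_product_pfp(data, n_perm=200)
print(np.argsort(rp)[:10])   # indices of the genes with the smallest rank products
```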
[ { "math_id": 0, "text": "r_{g,i}" }, { "math_id": 1, "text": "RP(g)=(\\Pi_{i=1}^kr_{g,i})^{1/k}" }, { "math_id": 2, "text": "\\mathrm{E}_{\\mathrm{RP}}(g)=c/p" }, { "math_id": 3, "text": "\\mathrm{pfp}(g)=\\mathrm{E}_{RP}(g)/\\mathrm{rank}(g)" }, { "math_id": 4, "text": "\\mathrm{rank}(g)" }, { "math_id": 5, "text": "\\mathrm{RP}" } ]
https://en.wikipedia.org/wiki?curid=10969277
10971756
Broyden's method
Quasi-Newton root-finding method for the multivariable case In numerical analysis, Broyden's method is a quasi-Newton method for finding roots in "k" variables. It was originally described by C. G. Broyden in 1965. Newton's method for solving f(x) = 0 uses the Jacobian matrix, J, at every iteration. However, computing this Jacobian is a difficult and expensive operation. The idea behind Broyden's method is to compute the whole Jacobian at most only at the first iteration and to do rank-one updates at other iterations. In 1979 Gay proved that when Broyden's method is applied to a linear system of size "n" × "n", it terminates in 2 "n" steps, although like all quasi-Newton methods, it may not converge for nonlinear systems. Description of the method. Solving single-variable equation. In the secant method, we replace the first derivative "f"′ at "x""n" with the finite-difference approximation: formula_0 and proceed similarly to Newton's method: formula_1 where "n" is the iteration index. Solving a system of nonlinear equations. Consider a system of "k" nonlinear equations formula_2 where f is a vector-valued function of vector x: formula_3 formula_4 For such problems, Broyden gives a generalization of the one-dimensional Newton's method, replacing the derivative with the Jacobian J. The Jacobian matrix is determined iteratively, based on the secant equation in the finite-difference approximation: formula_5 where "n" is the iteration index. For clarity, let us define: formula_6 formula_7 formula_8 so the above may be rewritten as formula_9 The above equation is underdetermined when "k" is greater than one. Broyden suggests using the current estimate of the Jacobian matrix J"n"−1 and improving upon it by taking the solution to the secant equation that is a minimal modification to J"n"−1: formula_10 This minimizes the following Frobenius norm: formula_11 We may then proceed in the Newton direction: formula_12 Broyden also suggested using the Sherman–Morrison formula to directly update the inverse of the Jacobian matrix: formula_13 This first method is commonly known as the "good Broyden's method". A similar technique can be derived by using a slightly different modification to J"n"−1. This yields a second method, the so-called "bad Broyden's method" (but see): formula_14 This minimizes a different Frobenius norm: formula_15 Many other quasi-Newton schemes have been suggested in optimization, where one seeks a maximum or minimum by finding the root of the first derivative (gradient in multiple dimensions). The Jacobian of the gradient is called the Hessian and is symmetric, adding further constraints to its update. The Broyden class of methods. In addition to the two methods described above, Broyden defined a whole class of related methods. In general, methods in the "Broyden class" are given in the form formula_16 where formula_17, formula_18, formula_19, and formula_20 for each formula_21. The choice of formula_22 determines the method. Other methods in the Broyden class have been introduced by other authors. References. <templatestyles src="Reflist/styles.css" />
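The "good Broyden" update in its Sherman–Morrison form lends itself to a compact implementation. The following Python sketch is a minimal illustration under simplifying assumptions (the initial Jacobian comes from a single finite-difference evaluation, and there is no line search or other safeguard), not a robust library routine; the function names and the test system are illustrative.

```python
import numpy as np

def finite_difference_jacobian(f, x, eps=1e-7):
    """One-time Jacobian estimate; Broyden's method only needs it at the start."""
    n = len(x)
    fx = f(x)
    J = np.empty((n, n))
    for i in range(n):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (f(xp) - fx) / eps
    return J

def broyden_good(f, x0, tol=1e-10, max_iter=100):
    """'Good Broyden': the inverse Jacobian approximation is updated at each
    step with the Sherman-Morrison formula."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    J_inv = np.linalg.inv(finite_difference_jacobian(f, x))
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = -J_inv @ fx                  # quasi-Newton step
        x_new = x + dx
        fx_new = f(x_new)
        df = fx_new - fx
        denom = dx @ (J_inv @ df)
        if abs(denom) > 1e-14:            # guard against division by ~0
            J_inv += np.outer(dx - J_inv @ df, dx @ J_inv) / denom
        x, fx = x_new, fx_new
    return x

# Illustrative system: x0^2 + x1^2 = 5 and x0*x1 = 2 (one root is (1, 2)).
g = lambda z: np.array([z[0]**2 + z[1]**2 - 5.0, z[0]*z[1] - 2.0])
print(broyden_good(g, [1.0, 2.5]))
```

After the first iteration the Jacobian is never recomputed; only the rank-one inverse update is applied, which is the point of the method.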
[ { "math_id": 0, "text": "f'(x_n) \\simeq \\frac{f(x_n) - f(x_{n-1})}{x_n - x_{n - 1}}," }, { "math_id": 1, "text": "x_{n + 1} = x_n - \\frac{f(x_n)}{f^\\prime(x_n)}" }, { "math_id": 2, "text": "\\mathbf f(\\mathbf x) = \\mathbf 0 ," }, { "math_id": 3, "text": "\\mathbf x = (x_1, x_2, x_3, \\dotsc, x_k)," }, { "math_id": 4, "text": "\\mathbf f(\\mathbf x) = \\big(f_1(x_1, x_2, \\dotsc, x_k), f_2(x_1, x_2, \\dotsc, x_k), \\dotsc, f_k(x_1, x_2, \\dotsc, x_k)\\big)." }, { "math_id": 5, "text": "\\mathbf J_n (\\mathbf x_n - \\mathbf x_{n - 1}) \\simeq \\mathbf f(\\mathbf x_n) - \\mathbf f(\\mathbf x_{n - 1})," }, { "math_id": 6, "text": "\\mathbf f_n = \\mathbf f(\\mathbf x_n)," }, { "math_id": 7, "text": "\\Delta \\mathbf x_n = \\mathbf x_n - \\mathbf x_{n - 1}," }, { "math_id": 8, "text": "\\Delta \\mathbf f_n = \\mathbf f_n - \\mathbf f_{n - 1}," }, { "math_id": 9, "text": "\\mathbf J_n \\Delta \\mathbf x_n \\simeq \\Delta \\mathbf f_n." }, { "math_id": 10, "text": "\\mathbf J_n = \\mathbf J_{n - 1} + \\frac{\\Delta \\mathbf f_n - \\mathbf J_{n - 1} \\Delta \\mathbf x_n}{\\|\\Delta \\mathbf x_n\\|^2} \\Delta \\mathbf x_n^{\\mathrm T}." }, { "math_id": 11, "text": "\\|\\mathbf J_n - \\mathbf J_{n - 1}\\|_{\\rm F} ." }, { "math_id": 12, "text": "\\mathbf x_{n + 1} = \\mathbf x_n - \\mathbf J_n^{-1} \\mathbf f(\\mathbf x_n) ." }, { "math_id": 13, "text": "\\mathbf J_n^{-1} = \\mathbf J_{n - 1}^{-1} + \\frac{\\Delta \\mathbf x_n - \\mathbf J^{-1}_{n - 1} \\Delta \\mathbf f_n}{\\Delta \\mathbf x_n^{\\mathrm T} \\mathbf J^{-1}_{n - 1} \\Delta \\mathbf f_n} \\Delta \\mathbf x_n^{\\mathrm T} \\mathbf J^{-1}_{n - 1}." }, { "math_id": 14, "text": "\\mathbf J_n^{-1} = \\mathbf J_{n - 1}^{-1} + \\frac{\\Delta \\mathbf x_n - \\mathbf J^{-1}_{n - 1} \\Delta \\mathbf f_n}{\\|\\Delta \\mathbf f_n\\|^2} \\Delta \\mathbf f_n^{\\mathrm T}." }, { "math_id": 15, "text": "\\|\\mathbf J_n^{-1} - \\mathbf J_{n - 1}^{-1}\\|_{\\rm F}." }, { "math_id": 16, "text": "\n\\mathbf{J}_{k+1}=\\mathbf{J}_k-\\frac{\\mathbf{J}_k s_k s_k^T \\mathbf{J}_k}{s_k^T \\mathbf{J}_k s_k}+\\frac{y_k y_k^T}{y_k^T s_k}+\\phi_k\\left(s_k^T \\mathbf{J}_k s_k\\right) v_k v_k^T,\n" }, { "math_id": 17, "text": "y_k := \\mathbf{f}(\\mathbf{x}_{k+1}) - \\mathbf{f}(\\mathbf{x}_{k})," }, { "math_id": 18, "text": "s_k := \\mathbf{x}_{k+1} - \\mathbf{x}_k," }, { "math_id": 19, "text": "v_k = \\left[\\frac{y_k}{y_k^T s_k} - \\frac{\\mathbf{J}_k s_k}{s_k^T \\mathbf{J}_k s_k}\\right]," }, { "math_id": 20, "text": "\\phi_k \\in \\mathbb{R}" }, { "math_id": 21, "text": "k = 1, 2, ..." }, { "math_id": 22, "text": "\\phi_k" }, { "math_id": 23, "text": "\\phi_k = 1" } ]
https://en.wikipedia.org/wiki?curid=10971756
10972761
Mathematics of cyclic redundancy checks
Methods of error detection and correction in communications The cyclic redundancy check (CRC) is based on division in the ring of polynomials over the finite field GF(2) (the integers modulo 2), that is, the set of polynomials where each coefficient is either zero or one, and arithmetic operations wrap around. Any string of bits can be interpreted as the coefficients of a message polynomial of this sort, and to find the CRC, we multiply the message polynomial by formula_0 and then find the remainder when dividing by a degree-formula_1 generator polynomial. The coefficients of the remainder polynomial are the bits of the CRC. Formulation. In general, computation of CRC corresponds to Euclidean division of polynomials over GF(2): formula_2 Here formula_3 is the original message polynomial and formula_4 is the degree-formula_1 generator polynomial. The bits of formula_5 are the original message with formula_1 zeroes added at the end. The CRC 'checksum' is formed by the coefficients of the remainder polynomial formula_6 whose degree is strictly less than formula_1. The quotient polynomial formula_7 is of no interest. Using modulo operation, it can be stated that formula_8 In communication, the sender attaches the formula_1 bits of R after the original message bits of M, which could be shown to be equivalent to sending out formula_9 (the "codeword".) The receiver, knowing formula_4 and therefore formula_1, separates M from R and repeats the calculation, verifying that the received and computed R are equal. If they are, then the receiver assumes the received message bits are correct. In practice CRC calculations most closely resemble long division in binary, except that the subtractions involved do not borrow from more significant digits, and thus become exclusive or operations. A CRC is a checksum in a strict mathematical sense, as it can be expressed as the weighted modulo-2 sum of per-bit syndromes, but that word is generally reserved more specifically for sums computed using larger moduli, such as 10, 256, or 65535. CRCs can also be used as part of error-correcting codes, which allow not only the detection of transmission errors, but the reconstruction of the correct message. These codes are based on closely related mathematical principles. Polynomial arithmetic modulo 2. Since the coefficients are constrained to a single bit, any math operation on CRC polynomials must map the coefficients of the result to either zero or one. For example, in addition: formula_10 Note that formula_11 is equivalent to zero in the above equation because addition of coefficients is performed modulo 2: formula_12 Polynomial addition modulo 2 is the same as bitwise XOR. Since XOR is the inverse of itself, polynominal subtraction modulo 2 is the same as bitwise XOR too. Multiplication is similar (a carry-less product): formula_13 We can also divide polynomials mod 2 and find the quotient and remainder. For example, suppose we're dividing formula_14 by formula_15. We would find that formula_16 In other words, formula_17 The division yields a quotient of "x"2 + 1 with a remainder of −1, which, since it is odd, has a last bit of 1. In the above equations, formula_14 represents the original message bits codice_0, formula_18 is the generator polynomial, and the remainder formula_19 (equivalently, formula_20) is the CRC. The degree of the generator polynomial is 1, so we first multiplied the message by formula_21 to get formula_14. Variations. 
There are several standard variations on CRCs, any or all of which may be used with any CRC polynomial. "Implementation variations" such as endianness and CRC presentation only affect the mapping of bit strings to the coefficients of formula_3 and formula_6, and do not impact the properties of the algorithm. This simplifies many implementations by avoiding the need to treat the last few bytes of the message specially when checking CRCs. The reason this method is used is because an unmodified CRC does not distinguish between two messages which differ only in the number of leading zeroes, because leading zeroes do not affect the value of formula_3. When this inversion is done, the CRC does distinguish between such messages. In practice, the last two variations are invariably used together. They change the transmitted CRC, so must be implemented at both the transmitter and the receiver. While presetting the shift register to ones is straightforward to do at both ends, inverting affects receivers implementing the first variation, because the CRC of a full codeword that already includes a CRC is no longer zero. Instead, it is a fixed non-zero pattern, the CRC of the inversion pattern of formula_1 ones. The CRC thus may be checked either by the obvious method of computing the CRC on the message, inverting it, and comparing with the CRC in the message stream, or by calculating the CRC on the entire codeword and comparing it with an expected fixed value formula_27, called the check polynomial, residue or magic number. This may be computed as formula_28, or equivalently by computing the unmodified CRC of a message consisting of formula_1 ones, formula_29. These inversions are extremely common but not universally performed, even in the case of the CRC-32 or CRC-16-CCITT polynomials. Reversed representations and reciprocal polynomials. Polynomial representations. Example of CCITT 16-bit Polynomial in the forms described (bits inside square brackets are included in the word representation; bits outside are implied 1 bits; vertical bars designate nibble boundaries): 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 coefficient 1 [0 0 0 1 |0 0 0 0 |0 0 1 0 |0 0 0 1] Normal [ 1 | 0 | 2 | 1 ] Nibbles of Normal 0x1021 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [1 0 0 0 |0 1 0 0 |0 0 0 0 |1 0 0 0] 1 Reverse [ 8 | 4 | 0 | 8 ] Nibbles of Reverse 0x8408 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 1 [0 0 0 0 |1 0 0 0 |0 0 0 1 |0 0 0 1] Reciprocal [ 0 | 8 | 1 | 1 ] Nibbles of Reciprocal 0x0811 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Reverse reciprocal 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 Koopman [1 0 0 0 |1 0 0 0 |0 0 0 1 |0 0 0 0] 1 [ 8 | 8 | 1 | 0 ] Nibbles 0x8810 All the well-known CRC generator polynomials of degree formula_1 have two common hexadecimal representations. In both cases, the coefficient of formula_0 is omitted and understood to be 1. The msbit-first form is often referred to in the literature as the "normal" representation, while the lsbit-first is called the "reversed" representation. It is essential to use the correct form when implementing a CRC. If the coefficient of formula_30 happens to be zero, the forms can be distinguished at a glance by seeing which end has the bit set. To further confuse the matter, the paper by P. Koopman and T. Chakravarty converts CRC generator polynomials to hexadecimal numbers in yet another way: msbit-first, but including the formula_0 coefficient and omitting the formula_20 coefficient. 
This "Koopman" representation has the advantage that the degree can be determined from the hexadecimal form and the coefficients are easy to read off in left-to-right order. However, it is not used anywhere else and is not recommended due to the risk of confusion. Reciprocal polynomials. A reciprocal polynomial is created by assigning the formula_31 through formula_20 coefficients of one polynomial to the formula_20 through formula_31 coefficients of a new polynomial. That is, the reciprocal of the degree formula_1 polynomial formula_4 is formula_32. The most interesting property of reciprocal polynomials, when used in CRCs, is that they have exactly the same error-detecting strength as the polynomials they are reciprocals of. The reciprocal of a polynomial generates the same "codewords", only bit reversed — that is, if all but the first formula_1 bits of a codeword under the original polynomial are taken, reversed and used as a new message, the CRC of that message under the reciprocal polynomial equals the reverse of the first formula_1 bits of the original codeword. But the reciprocal polynomial is not the same as the original polynomial, and the CRCs generated using it are not the same (even modulo bit reversal) as those generated by the original polynomial. Error detection strength. The error-detection ability of a CRC depends on the degree of its key polynomial and on the specific key polynomial used. The "error polynomial" formula_33 is the symmetric difference of the received message codeword and the correct message codeword. An error will go undetected by a CRC algorithm if and only if the error polynomial is divisible by the CRC polynomial. (As an aside, there is never reason to use a polynomial with a zero formula_20 term. Recall that a CRC is the remainder of the message polynomial times formula_0 divided by the CRC polynomial. A polynomial with a zero formula_20 term always has formula_26 as a factor. So if formula_41 is the original CRC polynomial and formula_42, then formula_43 formula_44 formula_45 That is, the CRC of any message with the formula_41 polynomial is the same as that of the same message with the formula_46 polynomial with a zero appended. It is just a waste of a bit.) The combination of these factors means that good CRC polynomials are often primitive polynomials (which have the best 2-bit error detection) or primitive polynomials of degree formula_47, multiplied by formula_18 (which detects all odd numbers of bit errors, and has half the two-bit error detection ability of a primitive polynomial of degree formula_1). Bitfilters. Analysis technique using bitfilters allows one to very efficiently determine the properties of a given generator polynomial. The results are the following: External links. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "x^n" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "M(x) \\cdot x^n = Q(x) \\cdot G(x) + R(x)." }, { "math_id": 3, "text": "M(x)" }, { "math_id": 4, "text": "G(x)" }, { "math_id": 5, "text": "M(x) \\cdot x^n" }, { "math_id": 6, "text": "R(x)" }, { "math_id": 7, "text": "Q(x)" }, { "math_id": 8, "text": "R(x) = M(x) \\cdot x^n \\,\\bmod\\, G(x)." }, { "math_id": 9, "text": "M(x) \\cdot x^n - R(x)" }, { "math_id": 10, "text": "(x^3 + x) + (x + 1) = x^3 + 2x + 1 \\equiv x^3 + 1 \\pmod 2." }, { "math_id": 11, "text": "2x" }, { "math_id": 12, "text": "2x = x + x = x\\times(1 + 1) \\equiv x\\times0 = 0 \\pmod 2." }, { "math_id": 13, "text": "(x^2 + x)(x + 1) = x^3 + 2x^2 + x \\equiv x^3 + x \\pmod 2." }, { "math_id": 14, "text": "x^3 + x^2 + x" }, { "math_id": 15, "text": "x + 1" }, { "math_id": 16, "text": "\\frac{x^3 + x^2 + x}{x+1} = (x^2 + 1) - \\frac{1}{x+1}." }, { "math_id": 17, "text": "(x^3 + x^2 + x) = (x^2 + 1)(x + 1) - 1 \\equiv (x^2 + 1)(x + 1) + 1 \\pmod 2." }, { "math_id": 18, "text": "x+1" }, { "math_id": 19, "text": "1" }, { "math_id": 20, "text": "x^0" }, { "math_id": 21, "text": "x^1" }, { "math_id": 22, "text": "M(x) \\cdot x^n - R(x) = Q(x) \\cdot G(x)" }, { "math_id": 23, "text": "M(x) \\cdot x^n + \\sum_{i=m}^{m+n-1} x^i = Q(x) \\cdot G(x) + R (x)" }, { "math_id": 24, "text": "m > \\deg(M(x))" }, { "math_id": 25, "text": "\\left ( \\sum_{i=m}^{m+n-1} x^i \\right ) \\,\\bmod\\, G(x)" }, { "math_id": 26, "text": "x" }, { "math_id": 27, "text": "C(x)" }, { "math_id": 28, "text": "C(x) = \\left ( \\sum_{i=n}^{2n-1} x^i \\right )\\,\\bmod\\,G(x)" }, { "math_id": 29, "text": "M(x) = \\sum_{i=0}^{n-1} x^i" }, { "math_id": 30, "text": "x^{n-1}" }, { "math_id": 31, "text": "x^i" }, { "math_id": 32, "text": "x^nG(x^{-1})" }, { "math_id": 33, "text": "E(x)" }, { "math_id": 34, "text": "x^k" }, { "math_id": 35, "text": "i \\le k" }, { "math_id": 36, "text": "E(x) = x^i + x^k = x^k \\cdot (x^{i-k} + 1), \\; i > k" }, { "math_id": 37, "text": "x^{i-k} + 1" }, { "math_id": 38, "text": "{i-k}" }, { "math_id": 39, "text": "2^n - 1" }, { "math_id": 40, "text": "n-i" }, { "math_id": 41, "text": "K(x)" }, { "math_id": 42, "text": "K(x) = x \\cdot K'(x)" }, { "math_id": 43, "text": "M(x) \\cdot x^{n-1} = Q(x) \\cdot K'(x) + R(x)" }, { "math_id": 44, "text": "M(x) \\cdot x^n = Q(x) \\cdot x \\cdot K'(x) + x \\cdot R(x)" }, { "math_id": 45, "text": "M(x) \\cdot x^n = Q(x) \\cdot K(x) + x \\cdot R(x)" }, { "math_id": 46, "text": "K'(x)" }, { "math_id": 47, "text": "n-1" }, { "math_id": 48, "text": "1+\\cdots+X^n" }, { "math_id": 49, "text": "n+1" }, { "math_id": 50, "text": "2^{n-1}-1" }, { "math_id": 51, "text": "n=16" }, { "math_id": 52, "text": "2^n-1" } ]
https://en.wikipedia.org/wiki?curid=10972761
1097654
Udny Yule
British statistician and geneticist George Udny Yule, CBE, FRS (18 February 1871 – 26 June 1951), usually known as Udny Yule, was a British statistician, particularly known for the Yule distribution and proposing the preferential attachment model for random graphs. Personal life. Yule was born at Beech Hill, a house in Morham near Haddington, Scotland and died in Cambridge, England. He came from an established Scottish family composed of army officers, civil servants, scholars, and administrators. His father, Sir George Udny Yule (1813–1886) was a brother of the noted orientalist Sir Henry Yule (1820–1889). His great uncle was the botanist John Yule. In 1899, Yule married May Winifred Cummings. The marriage was annulled in 1912, producing no children. Education and teaching. Udny Yule was educated at Winchester College and at the age of 16 at University College London where he read engineering. After a year in Bonn doing research in experimental physics under Heinrich Rudolf Hertz, Yule returned to University College in 1893 to work as a demonstrator for Karl Pearson, one of his former teachers. Pearson was beginning to work in statistics and Yule followed him into this new field. Yule progressed to an assistant professorship but he left in 1899 to a better-paid position as secretary to an examination board, working under Philip Magnus at the City and Guilds Institute. In 1902 Yule became Newmarch lecturer in statistics at University College, a position he held together with his post at the City and Guilds Institute. He continued to publish articles and also wrote an influential textbook, "Introduction to the Theory of Statistics" (1911), based on his lectures. In 1912 Yule moved to Cambridge University to a newly created Lectureship in Statistics and he remained in Cambridge for the rest of his life. During the First World War Yule worked for the army and then for the Ministry of Food. A heart attack in 1931 left him semi-invalided and led to his early retirement. His flow of publications almost ceased but, in the 1940s he found new interests, one of which led to a book, "The Statistical Study of Literary Vocabulary". Scholarship. Yule was a prolific writer, the highlight of his publications being perhaps the textbook "Introduction to the Theory of Statistics", which went through fourteen editions and was published in several languages. He was active in the Royal Statistical Society, was awarded its Guy Medal in Gold in 1911, and served as its president in 1924–26. Yule's first paper on statistics appeared in 1895: "On the Correlation of Total Pauperism with Proportion of Out-relief". Yule was interested in applying statistical techniques to social problems and he quickly became a member of the Royal Statistical Society. For many years the only members interested in mathematical statistics were Yule, Edgeworth and Bowley. In 1897–99 Yule wrote important papers on correlation and regression. After 1900 he worked on a parallel theory of association. His approach to association was quite different from Pearson's and relations between them deteriorated. Yule had broad interests and his collaborators included the agricultural meteorologist R. H. Hooker, the medical statistician Major Greenwood and the agricultural scientist Sir Frank Engledow. Yule's sympathy towards the newly rediscovered Mendelian theory of genetics led to several papers. 
In the 1920s Yule wrote three influential papers on time series analysis: "On the time-correlation problem" (1921), a critique of the variate difference method; an investigation of a form of spurious correlation (1926); and "On a Method of Investigating Periodicities in Disturbed Series, with Special Reference to Wolfer's Sunspot Numbers" (1927), which used an autoregressive model to model the sunspot time series instead of the established periodogram method of Schuster. Yule distribution. In 1922, J. C. Willis published "Age and Area", based on botanical field work in Ceylon, where he studied the distributional patterns of the Ceylonese vascular plants in great detail. He compiled a table of the number of existent species in each genus of flowering plants, and the same for the Rubiaceae, and for the Chrysomelid beetles. Let formula_0 be the number of genera with formula_1 existent species. When formula_0 is plotted on a log-log plot, each of these follows a straight line. This shows that formula_2 for some formula_3. That is, the distribution has a power-law tail. The relevant figures appear on pages 241 and 242. In 1925 Yule published the paper "A Mathematical Theory of Evolution, based on the Conclusions of Dr. J. C. Willis, F.R.S.", where he proposed a stochastic process which reproduces the power-law tail. This was later called the Yule process, but is now better known as preferential attachment. Herbert A. Simon dubbed the resulting distribution the Yule distribution in his honour. Assessment. Frank Yates concluded his 1952 obituary of Yule by saying: “To summarize we may, I think, justly conclude that though Yule did not fully develop any completely new branches of statistical theory, he took the first steps in many directions which were later to prove fruitful lines for further progress… He can indeed rightly claim to be one of the pioneers of modern statistics”. Yule made important contributions to the theory and practice of correlation, regression, and association, as well as to time series analysis. He pioneered the use of preferential attachment stochastic processes to explain the origin of power-law distributions. The Yule distribution, a discrete power law, is named after him. Although Yule taught at Cambridge for twenty years, he had little impact on the development of statistics there. M. S. Bartlett recalled him as a "mentor", but his famous association with Maurice Kendall, who revised the "Introduction to the Theory of Statistics", only came about after Kendall had graduated. References. <templatestyles src="Reflist/styles.css" />
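The preferential-attachment mechanism behind the Yule distribution is easy to simulate. The following Python sketch is a simplified Simon/Yule-type model rather than Yule's original continuous-time process: each new species either founds a new genus or joins an existing genus with probability proportional to that genus's current size. The parameter values and function name are illustrative; the resulting genus-size counts develop the power-law tail described above.

```python
import random
from collections import Counter

def preferential_attachment(n_species=100_000, p_new_genus=0.1, seed=1):
    """Each new species founds a new genus with probability p_new_genus;
    otherwise it joins an existing genus chosen with probability proportional
    to that genus's current number of species."""
    random.seed(seed)
    genus_sizes = [1]          # one genus containing the first species
    species_to_genus = [0]     # one entry per species: the index of its genus
    for _ in range(n_species - 1):
        if random.random() < p_new_genus:
            genus_sizes.append(1)
            species_to_genus.append(len(genus_sizes) - 1)
        else:
            # picking a uniformly random existing species selects its genus
            # with probability proportional to the genus size
            g = random.choice(species_to_genus)
            genus_sizes[g] += 1
            species_to_genus.append(g)
    return genus_sizes

sizes = preferential_attachment()
counts = Counter(sizes)        # y(x): number of genera with x species
for x in (1, 2, 4, 8, 16, 32, 64):
    print(x, counts.get(x, 0)) # roughly a straight line on a log-log plot
```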
[ { "math_id": 0, "text": "y(x)" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "y(x) \\propto x^a" }, { "math_id": 3, "text": "a" } ]
https://en.wikipedia.org/wiki?curid=1097654
1097849
Relativistic beaming
Change in luminosity of a moving object due to special relativity In physics, relativistic beaming (also known as Doppler beaming, Doppler boosting, or the headlight effect) is the process by which relativistic effects modify the apparent luminosity of emitting matter that is moving at speeds close to the speed of light. In an astronomical context, relativistic beaming commonly occurs in two oppositely-directed relativistic jets of plasma that originate from a central compact object that is accreting matter. Accreting compact objects and relativistic jets are invoked to explain x-ray binaries, gamma-ray bursts, and, on a much larger scale, active galactic nuclei (of which quasars are a particular variety). Beaming affects the apparent brightness of a moving object. Consider a cloud of gas moving relative to the observer and emitting electromagnetic radiation. If the gas is moving towards the observer, it will be brighter than if it were at rest, but if the gas is moving away, it will appear fainter. The magnitude of the effect is illustrated by the AGN jets of the galaxies M87 and 3C 31 (see images at right). M87 has twin jets aimed almost directly towards and away from Earth; the jet moving towards Earth is clearly visible (the long, thin blueish feature in the top image), while the other jet is so much fainter it is not visible. In 3C 31, both jets (labeled in the lower figure) are at roughly right angles to our line of sight, and thus, both are visible. The upper jet actually points slightly more in Earth's direction and is therefore brighter. Relativistically moving objects are beamed due to a variety of physical effects. Light aberration causes most of the photons to be emitted along the object's direction of motion. The Doppler effect changes the energy of the photons by red- or blue-shifting them. Finally, time intervals as measured by clocks moving alongside the emitting object are different from those measured by an observer on Earth due to time dilation and photon arrival time effects. How all of these effects modify the brightness, or apparent luminosity, of a moving object is determined by the equation describing the relativistic Doppler effect (which is why relativistic beaming is also known as Doppler beaming). A simple jet model. The simplest model for a jet is one where a single, homogeneous sphere is travelling towards the Earth at nearly the speed of light. This simple model is also an unrealistic one, but it does illustrate the physical process of beaming quite well. Synchrotron spectrum and the spectral index. Relativistic jets emit most of their energy via synchrotron emission. In our simple model the sphere contains highly relativistic electrons and a steady magnetic field. Electrons inside the blob travel at speeds just a tiny fraction below the speed of light and are whipped around by the magnetic field. Each change in direction by an electron is accompanied by the release of energy in the form of a photon. With enough electrons and a powerful enough magnetic field the relativistic sphere can emit a huge number of photons, ranging from those at relatively weak radio frequencies to powerful X-ray photons. The figure of the sample spectrum shows basic features of a simple synchrotron spectrum. At low frequencies the jet sphere is opaque and its luminosity increases with frequency until it peaks and begins to decline. In the sample image this "peak frequency" occurs at formula_0. At frequencies higher than this the jet sphere is transparent. 
The luminosity decreases with frequency until a "break frequency" is reached, after which it declines more rapidly. In the same image the break frequency occurs when formula_1. The sharp break frequency occurs because at very high frequencies the electrons which emit the photons lose most of their energy very rapidly. A sharp decrease in the number of high energy electrons means a sharp decrease in the spectrum. The changes in slope in the synchrotron spectrum are parameterized with a "spectral index". The spectral index, α, over a given frequency range is simply the slope on a diagram of formula_2 vs. formula_3. (Of course for α to have real meaning the spectrum must be very nearly a straight line across the range in question.) Beaming equation. In the simple jet model of a single homogeneous sphere the observed luminosity is related to the intrinsic luminosity as formula_4 where formula_5 The observed luminosity therefore depends on the speed of the jet and the angle to the line of sight through the Doppler factor, formula_6, and also on the properties inside the jet, as shown by the exponent with the spectral index. The beaming equation can be broken down into a series of three effects: Aberration. Aberration is the change in an object's apparent direction caused by the relative transverse motion of the observer. In inertial systems it is equal and opposite to the light time correction. In everyday life aberration is a well-known phenomenon. Consider a person standing in the rain on a day when there is no wind. If the person is standing still, then the rain drops will follow a path that is straight down to the ground. However, if the person is moving, for example in a car, the rain will appear to be approaching at an angle. This apparent change in the direction of the incoming raindrops is aberration. The amount of aberration depends on the speed of the emitted object or wave relative to the observer. In the example above this would be the speed of a car compared to the speed of the falling rain. This does not change when the object is moving at a speed close to formula_7. Like the classic and relativistic effects, aberration depends on: 1) the speed of the emitter at the time of emission, and 2) the speed of the observer at the time of absorption. In the case of a relativistic jet, beaming (emission aberration) will make it appear as if more energy is sent forward, along the direction the jet is traveling. In the simple jet model a homogeneous sphere will emit energy equally in all directions in the rest frame of the sphere. In the rest frame of Earth the moving sphere will be observed to be emitting most of its energy along its direction of motion. The energy, therefore, is ‘beamed’ along that direction. Quantitatively, aberration accounts for a change in luminosity of formula_8 Time dilation. Time dilation is a well-known consequence of special relativity and accounts for a change in observed luminosity of formula_9 Blue- or redshifting. Blue- or redshifting can change the observed luminosity at a particular frequency, but this is not a beaming effect. Blueshifting accounts for a change in observed luminosity of formula_10 Lorentz invariants. A more-sophisticated method of deriving the beaming equations starts with the quantity formula_11. This quantity is a Lorentz invariant, so the value is the same in different reference frames. References. <templatestyles src="Reflist/styles.css" />
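For the simple single-sphere jet model above, the beaming equation is straightforward to evaluate numerically. The following Python sketch is illustrative only: the jet speed, viewing angles, and spectral index are made-up values chosen to mimic a two-sided jet like those described in the introduction.

```python
import math

def doppler_factor(beta, theta):
    """D = 1 / (gamma * (1 - beta*cos(theta))); theta is the angle between
    the jet velocity and the line of sight, in radians."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

def observed_luminosity(S_emitted, beta, theta, alpha):
    """Beaming equation for a single homogeneous sphere: S_o = S_e * D**p,
    with p = 3 - alpha and alpha the spectral index (S proportional to nu**alpha)."""
    return S_emitted * doppler_factor(beta, theta) ** (3.0 - alpha)

# Two identical, oppositely directed jets viewed 10 degrees from the jet axis.
beta, alpha = 0.99, -0.7
approaching = observed_luminosity(1.0, beta, math.radians(10.0), alpha)
receding = observed_luminosity(1.0, beta, math.radians(170.0), alpha)
print(approaching / receding)   # about 1e7: the receding jet is effectively invisible
```

With these numbers the approaching jet outshines the receding one by roughly seven orders of magnitude, which is why only one jet is visible in sources such as M87.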
[ { "math_id": 0, "text": "\\log \\nu = 3" }, { "math_id": 1, "text": "\\log \\nu = 7" }, { "math_id": 2, "text": "\\log S" }, { "math_id": 3, "text": "\\log \\nu" }, { "math_id": 4, "text": "S_o = S_e D^p\\,," }, { "math_id": 5, "text": "p = 3 - \\alpha\\,." }, { "math_id": 6, "text": "D" }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "D^2\\,." }, { "math_id": 9, "text": "D^1\\,." }, { "math_id": 10, "text": "\\frac{1}{D^\\alpha}\\,." }, { "math_id": 11, "text": "\\frac{S}{\\nu^3}" }, { "math_id": 12, "text": "\\theta\\,\\!" }, { "math_id": 13, "text": "v_j\\,\\!" }, { "math_id": 14, "text": "S_e\\,\\!" }, { "math_id": 15, "text": "S_o\\,\\!" }, { "math_id": 16, "text": "\\alpha\\,\\!" }, { "math_id": 17, "text": "S \\propto \\nu^{\\alpha}\\,\\!" }, { "math_id": 18, "text": "c\\,\\! = 2.9979 \\times 10^{8}" }, { "math_id": 19, "text": "\\beta = \\frac{v_j}{c}" }, { "math_id": 20, "text": "\\gamma = \\frac{1}{\\sqrt{1 - \\beta^2}}" }, { "math_id": 21, "text": "D = \\frac{1}{\\gamma (1 - \\beta \\cos\\theta)}" } ]
https://en.wikipedia.org/wiki?curid=1097849
10978612
Effective dimension
In mathematics, effective dimension is a modification of Hausdorff dimension and other fractal dimensions that places it in a computability theory setting. There are several variations (various notions of effective dimension) of which the most common is effective Hausdorff dimension. Dimension, in mathematics, is a particular way of describing the size of an object (contrasting with measure and other, different, notions of size). Hausdorff dimension generalizes the well-known integer dimensions assigned to points, lines, planes, etc. by allowing one to distinguish between objects of intermediate size between these integer-dimensional objects. For example, fractal subsets of the plane may have intermediate dimension between 1 and 2, as they are "larger" than lines or curves, and yet "smaller" than filled circles or rectangles. Effective dimension modifies Hausdorff dimension by requiring that objects with small effective dimension be not only small but also locatable (or partially locatable) in a computable sense. As such, objects with large Hausdorff dimension also have large effective dimension, and objects with small effective dimension have small Hausdorff dimension, but an object can have small Hausdorff but large effective dimension. An example is an algorithmically random point on a line, which has Hausdorff dimension 0 (since it is a point) but effective dimension 1 (because, roughly speaking, it can't be effectively localized any better than a small interval, which has Hausdorff dimension 1). Rigorous definitions. This article will define effective dimension for subsets of Cantor space 2ω; closely related definitions exist for subsets of Euclidean space R"n". We will move freely between considering a set "X" of natural numbers, the infinite sequence formula_0 given by the characteristic function of "X", and the real number with binary expansion 0."X". Martingales and other gales. A "martingale" on Cantor space 2ω is a function "d": 2ω → R≥ 0 from Cantor space to nonnegative reals which satisfies the fairness condition: formula_1 A martingale is thought of as a betting strategy, and the function formula_2 gives the capital of the better after seeing a sequence σ of 0s and 1s. The fairness condition then says that the capital after a sequence σ is the average of the capital after seeing σ0 and σ1; in other words the martingale gives a betting scheme for a bookie with 2:1 odds offered on either of two "equally likely" options, hence the name fair. A "supermartingale" on Cantor space is a function "d" as above which satisfies a modified fairness condition: formula_3 A supermartingale is a betting strategy where the expected capital after a bet is no more than the capital before a bet, in contrast to a martingale where the two are always equal. This allows more flexibility, and is very similar in the non-effective case, since whenever a supermartingale "d" is given, there is a modified function "d"' which wins at least as much money as "d" and which is actually a martingale. However it is useful to allow the additional flexibility once one starts talking about actually giving algorithms to determine the betting strategy, as some algorithms lend themselves more naturally to producing supermartingales than martingales. An "s"-"gale" is a function "d" as above of the form formula_4 for "e" some martingale. An "s"-"supergale" is a function "d" as above of the form formula_4 for "e" some supermartingale. 
An "s"-(super)gale is a betting strategy where some amount of capital is lost to inflation at each step. Note that "s"-gales and "s"-supergales are examples of supermartingales, and the 1-gales and 1-supergales are precisely the martingales and supermartingales. Collectively, these objects are known as "gales". A gale "d" "succeeds" on a subset "X" of the natural numbers if formula_5 where formula_6 denotes the "n"-digit string consisting of the first "n" digits of "X". A gale "d" "succeeds strongly" on "X" if formula_7. All of these notions of various gales have no effective content, but one must necessarily restrict oneself to a small class of gales, since some gale can be found which succeeds on any given set. After all, if one knows a sequence of coin flips in advance, it is easy to make money by simply betting on the known outcomes of each flip. A standard way of doing this is to require the gales to be either computable or close to computable: A gale "d" is called "constructive", "c.e.", or "lower semi-computable" if the numbers formula_2 are uniformly left-c.e. reals (i.e. can uniformly be written as the limit of an increasing computable sequence of rationals). The effective Hausdorff dimension of a set of natural numbers "X" is formula_8. The effective packing dimension of "X" is formula_9. Kolmogorov complexity definition. Kolmogorov complexity can be thought of as a lower bound on the algorithmic compressibility of a finite sequence (of characters or binary digits). It assigns to each such sequence "w" a natural number "K(w)" that, intuitively, measures the minimum length of a computer program (written in some fixed programming language) that takes no input and will output "w" when run. The effective Hausdorff dimension of a set of natural numbers "X" is formula_10. The effective packing dimension of a set "X" is formula_11. From this one can see that both the effective Hausdorff dimension and the effective packing dimension of a set are between 0 and 1, with the effective packing dimension always at least as large as the effective Hausdorff dimension. Every random sequence will have effective Hausdorff and packing dimensions equal to 1, although there are also nonrandom sequences with effective Hausdorff and packing dimensions of 1. Comparison to classical dimension. If "Z" is a subset of 2ω, its Hausdorff dimension is formula_12. The packing dimension of "Z" is formula_13. Thus the effective Hausdorff and packing dimensions of a set formula_14 are simply the classical Hausdorff and packing dimensions of formula_15 (respectively) when we restrict our attention to c.e. gales. Define the following: formula_16 formula_17 formula_18 formula_19 formula_20 formula_21 A consequence of the above is that these all have Hausdorff dimension formula_22. formula_23 and formula_24 all have packing dimension 1. formula_25 and formula_26 all have packing dimension formula_22. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\chi_X" }, { "math_id": 1, "text": "d(\\sigma)=\\frac12 (d(\\sigma 0)+d(\\sigma 1))" }, { "math_id": 2, "text": "d(\\sigma)" }, { "math_id": 3, "text": "d(\\sigma) \\geq \\frac12 (d(\\sigma 0)+d(\\sigma 1))" }, { "math_id": 4, "text": "d(\\sigma) = \\frac{e(\\sigma)}{2^{(1-s)|\\sigma|}}" }, { "math_id": 5, "text": "\\limsup_n d(X|n)=\\infty" }, { "math_id": 6, "text": "X|n" }, { "math_id": 7, "text": "\\liminf_n d(X|n)=\\infty" }, { "math_id": 8, "text": "\\inf \\{s : \\mathrm{some\\ c.e.}\\ s\\mathrm{-gale\\ succeeds\\ on\\ } X \\}" }, { "math_id": 9, "text": "\\inf \\{s : \\mathrm{some\\ c.e.}\\ s\\mathrm{-gale\\ succeeds\\ strongly\\ on\\ } X\\}" }, { "math_id": 10, "text": "\\liminf_n \\frac{K(X|n)}n" }, { "math_id": 11, "text": "\\limsup_n \\frac{K(X|n)}n" }, { "math_id": 12, "text": "\\inf \\{s : \\mathrm{some}\\ s\\mathrm{-gale\\ succeeds\\ on\\ all\\ elements\\ of\\ } Z \\}" }, { "math_id": 13, "text": "\\inf \\{s : \\mathrm{some}\\ s\\mathrm{-gale\\ succeeds\\ strongly\\ on\\ all\\ elements\\ of\\ } Z \\}" }, { "math_id": 14, "text": "X" }, { "math_id": 15, "text": "\\{X\\}" }, { "math_id": 16, "text": "H_{\\beta} := \\{X \\in 2^\\omega : X\\ \\mathrm{has\\ effective\\ Hausdorff\\ dimension\\ } \\beta \\}" }, { "math_id": 17, "text": "H_{\\leq \\beta} := \\{X \\in 2^\\omega : X\\ \\mathrm{has\\ effective\\ Hausdorff\\ dimension\\ } \\leq \\beta \\}" }, { "math_id": 18, "text": "H_{< \\beta} := \\{X \\in 2^\\omega : X\\ \\mathrm{has\\ effective\\ Hausdorff\\ dimension\\ } < \\beta \\}" }, { "math_id": 19, "text": "P_{\\beta} := \\{X \\in 2^\\omega : X\\ \\mathrm{has\\ effective\\ packing\\ dimension\\ } \\beta \\}" }, { "math_id": 20, "text": "P_{\\leq \\beta} := \\{X \\in 2^\\omega : X\\ \\mathrm{has\\ effective\\ packing\\ dimension\\ } \\leq \\beta \\}" }, { "math_id": 21, "text": "P_{< \\beta} := \\{X \\in 2^\\omega : X\\ \\mathrm{has\\ effective\\ packing\\ dimension\\ } < \\beta \\}" }, { "math_id": 22, "text": "\\beta" }, { "math_id": 23, "text": "H_{\\beta}, H_{\\leq \\beta}" }, { "math_id": 24, "text": "H_{< \\beta}" }, { "math_id": 25, "text": "P_{\\beta}, P_{\\leq \\beta}" }, { "math_id": 26, "text": "P_{< \\beta}" } ]
https://en.wikipedia.org/wiki?curid=10978612
1097925
Conserved current
Current in physics and mathematics that satisfies the continuity equation In physics a conserved current is a current, formula_0, that satisfies the continuity equation formula_1. The continuity equation represents a conservation law, hence the name. Indeed, integrating the continuity equation over a volume formula_2, large enough to have no net currents through its surface, leads to the conservation law formula_3 where formula_4 is the conserved quantity. In gauge theories the gauge fields couple to conserved currents. For example, the electromagnetic field couples to the conserved electric current. Conserved quantities and symmetries. A conserved current is the flow of the canonical conjugate of a quantity possessing a continuous translational symmetry. The continuity equation for the conserved current is a statement of a "conservation law". Examples of canonical conjugate quantities are time and energy, position and momentum, and angle and angular momentum: continuous symmetry under translation in time, translation in space, and rotation gives rise to the conservation of energy, momentum, and angular momentum, respectively. Conserved currents play an extremely important role in theoretical physics, because Noether's theorem connects the existence of a conserved current to the existence of a symmetry of some quantity in the system under study. In practical terms, all conserved currents are Noether currents, as the existence of a conserved current implies the existence of a symmetry. Conserved currents play an important role in the theory of partial differential equations, as the existence of a conserved current points to the existence of constants of motion, which are required to define a foliation and thus an integrable system. The conservation law is expressed as the vanishing of a 4-divergence, where the Noether charge forms the zeroth component of the 4-current. Examples. Electromagnetism. The "conservation of charge", for example, in the notation of Maxwell's equations, reads formula_5 where ρ is the electric charge density and formula_6 is the electric current density, v being the velocity of the charges. The equation would apply equally to masses (or other conserved quantities), where the word "mass" is substituted for the words "electric charge" above. Complex scalar field. The Lagrangian density formula_7 of a complex scalar field formula_8 is invariant under the symmetry transformation formula_9 Defining formula_10 we find the Noether current formula_11 which satisfies the continuity equation.
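As a quick check (a sketch using the sign conventions of the Lagrangian formula_7 above, with the potential written as a function of formula_8 times its complex conjugate), the current formula_11 is conserved once the Euler–Lagrange equations are imposed:

```latex
\partial_\mu j^\mu
  = i\,\partial_\mu\phi\,\partial^\mu\phi^* + i\,\phi\,\Box\phi^*
    - i\,\partial_\mu\phi^*\,\partial^\mu\phi - i\,\phi^*\,\Box\phi
  = i\left(\phi\,\Box\phi^* - \phi^*\,\Box\phi\right),
```

and the Euler–Lagrange equations for this Lagrangian, □φ = V′(φ*φ) φ and □φ* = V′(φ*φ) φ*, reduce the right-hand side to i V′(φ*φ)(φφ* − φ*φ) = 0, i.e. the current is conserved on shell.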
[ { "math_id": 0, "text": "j^\\mu" }, { "math_id": 1, "text": "\\partial_\\mu j^\\mu=0" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": " \\frac{\\partial}{\\partial t}Q = 0\\;," }, { "math_id": 4, "text": "Q = \\int_V j^0 dV" }, { "math_id": 5, "text": "\\frac{\\partial \\rho} {\\partial t} + \\nabla \\cdot \\mathbf{J} = 0" }, { "math_id": 6, "text": " \\mathbf J = \\rho \\mathbf v " }, { "math_id": 7, "text": " \\mathcal{L}=\\partial_\\mu\\phi^*\\,\\partial^\\mu\\phi +V(\\phi^*\\,\\phi)" }, { "math_id": 8, "text": " \\phi:\\mathbb{R}^{n+1}\\mapsto\\mathbb{C} " }, { "math_id": 9, "text": " \\phi\\mapsto\\phi'=\\phi\\,e^{i\\alpha}\\, . " }, { "math_id": 10, "text": " \\delta\\phi=\\phi'-\\phi " }, { "math_id": 11, "text": " j^\\mu:=\\frac{d\\mathcal{L}}{d(\\partial_\\mu)\\phi}\\,\\frac{d(\\delta\\phi)}{d\\alpha}\\bigg|_{\\alpha=0}+\\frac{d\\mathcal{L}}{d(\\partial_\\mu)\\phi^*}\\,\\frac{d(\\delta\\phi^*)}{d\\alpha}\\bigg|_{\\alpha=0}= i\\,\\phi\\,(\\partial^\\mu\\phi^*)-i\\,\\phi^*\\,(\\partial^\\mu\\phi)" } ]
https://en.wikipedia.org/wiki?curid=1097925
10980467
Unate function
A unate function is a type of Boolean function which has monotonic properties in each of its variables. Unate functions have been studied extensively in switching theory. A function formula_0 is said to be positive unate in formula_1 if for all possible values of formula_2, formula_3 formula_4 Likewise, it is negative unate in formula_1 if formula_5 If for every formula_1 "f" is either positive or negative unate in the variable formula_1 then it is said to be unate (note that some formula_1 may be positive unate and others negative unate while still satisfying the definition of a unate function). A function is binate if it is not unate, i.e., it is neither positive unate nor negative unate in at least one of its variables. For example, with 1 used for true and 0 for false, the logical disjunction function "or" is positive unate in both of its inputs. Conversely, exclusive or is binate: changing input x0 from 0 to 1 can either raise or lower the output, depending on the value of input x1, so the function is neither positive nor negative unate in x0 (and, by symmetry, the same holds for x1). Informally, positive unateness in a variable means that increasing that input can never decrease the output, negative unateness means that increasing that input can never increase the output, and a binate variable is one whose effect on the output can go in either direction depending on the values of the other inputs.
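Unateness can be checked directly from the definition by comparing the two cofactors of each variable. The following Python sketch does this by exhaustive enumeration, so it is only practical for small numbers of inputs; the function and label names are illustrative.

```python
from itertools import product

def unateness(f, n):
    """Classify each of the n inputs of a Boolean function f (which takes a
    tuple of 0/1 values and returns 0 or 1) by comparing f with x_i = 0
    against f with x_i = 1 over all assignments to the other variables."""
    labels = []
    for i in range(n):
        positive = negative = True
        for rest in product((0, 1), repeat=n - 1):
            lo = f(rest[:i] + (0,) + rest[i:])
            hi = f(rest[:i] + (1,) + rest[i:])
            if hi < lo:
                positive = False   # raising x_i lowered the output somewhere
            if hi > lo:
                negative = False   # raising x_i raised the output somewhere
        if positive and negative:
            labels.append('independent (trivially unate)')
        elif positive:
            labels.append('positive unate')
        elif negative:
            labels.append('negative unate')
        else:
            labels.append('binate')
    return labels

print(unateness(lambda x: x[0] | x[1], 2))   # ['positive unate', 'positive unate']
print(unateness(lambda x: x[0] ^ x[1], 2))   # ['binate', 'binate']
```

The whole function is unate exactly when no variable is labelled binate.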
[ { "math_id": 0, "text": "f(x_1,x_2,\\ldots,x_n)" }, { "math_id": 1, "text": "x_i" }, { "math_id": 2, "text": "x_j" }, { "math_id": 3, "text": "j\\neq i" }, { "math_id": 4, "text": "f(x_1,x_2,\\ldots,x_{i-1},1,x_{i+1},\\ldots,x_n) \\ge f(x_1,x_2,\\ldots,x_{i-1},0,x_{i+1},\\ldots,x_n).\\," }, { "math_id": 5, "text": "f(x_1,x_2,\\ldots,x_{i-1},0,x_{i+1},\\ldots,x_n) \\ge f(x_1,x_2,\\ldots,x_{i-1},1,x_{i+1},\\ldots,x_n).\\," } ]
https://en.wikipedia.org/wiki?curid=10980467
10983
First-order logic
Type of logical system First-order logic—also called predicate logic, predicate calculus, quantificational logic—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists""" is a quantifier, while "x" is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic. A theory about a topic, such as set theory, a theory for groups, or a formal theory of arithmetic, is usually a first-order logic together with a specified domain of discourse (over which the quantified variables range), finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold about them. "Theory" is sometimes understood in a more formal sense as just a set of sentences in first-order logic. The term "first-order" distinguishes first-order logic from higher-order logic, in which there are predicates having predicates or functions as arguments, or in which quantification over predicates, functions, or both, are permitted. In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets. There are many deductive systems for first-order logic which are both sound, i.e. all provable statements are true in all models, and complete, i.e. all statements which are true in all models are provable. Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem. First-order logic is the standard for the formalization of mathematics into axioms, and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures, i.e. categorical axiom systems, can be obtained in stronger logics such as second-order logic. The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce. For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001). Introduction. While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. A predicate evaluates to true or false for an entity or entities in the domain of discourse. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences themselves are viewed as the individuals of study, and might be denoted, for example, by variables such as "p" and "q". 
They are not viewed as an application of a predicate, such as formula_0, to any particular objects in the domain of discourse, instead viewing them as purely an utterance which is either true or false. However, in first-order logic, these two sentences may be framed as statements that a certain individual or non-logical object has a property. In this example, both sentences happen to have the common form formula_1 for some individual formula_2, in the first sentence the value of the variable "x" is "Socrates", and in the second sentence it is "Plato". Due to the ability to speak about non-logical individuals along with the original logical connectives, first-order logic includes propositional logic. The truth of a formula such as ""x" is a philosopher" depends on which object is denoted by "x" and on the interpretation of the predicate "is a philosopher". Consequently, ""x" is a philosopher" alone does not have a definite truth value of true or false, and is akin to a sentence fragment. Relationships between predicates can be stated using logical connectives. For example, the first-order formula "if "x" is a philosopher, then "x" is a scholar", is a conditional statement with ""x" is a philosopher" as its hypothesis, and ""x" is a scholar" as its conclusion, which again needs specification of "x" in order to have a definite truth value. Quantifiers can be applied to variables in a formula. The variable "x" in the previous formula can be universally quantified, for instance, with the first-order sentence "For every "x", if "x" is a philosopher, then "x" is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if "x" is a philosopher, then "x" is a scholar" holds for "all" choices of "x". The "negation" of the sentence "For every "x", if "x" is a philosopher, then "x" is a scholar" is logically equivalent to the sentence "There exists "x" such that "x" is a philosopher and "x" is not a scholar". The existential quantifier "there exists" expresses the idea that the claim ""x" is a philosopher and "x" is not a scholar" holds for "some" choice of "x". The predicates "is a philosopher" and "is a scholar" each take a single variable. In general, predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables. An interpretation (or model) of a first-order formula specifies what each predicate means, and the entities that can instantiate the variables. These entities form the domain of discourse or universe, which is usually required to be a nonempty set. For example, in an interpretation with the domain of discourse consisting of all human beings and the predicate "is a philosopher" understood as "was the author of the "Republic"", the sentence "There exists "x" such that "x" is a philosopher" is seen as being true, as witnessed by Plato. There are two key parts of first-order logic. The syntax determines which finite sequences of symbols are well-formed expressions in first-order logic, while the semantics determines the meanings behind these expressions. Syntax. Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is well formed. There are two key types of well-formed expressions: "terms", which intuitively represent objects, and "formulas", which intuitively express statements that can be true or false. 
The terms and formulas of first-order logic are strings of "symbols", where all the symbols together form the "alphabet" of the language. Alphabet. As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols. It is common to divide the symbols of the alphabet into "logical symbols", which always have the same meaning, and "non-logical symbols", whose meaning varies by interpretation. For example, the logical symbol formula_3 always represents "and"; it is never interpreted as "or", which is represented by the logical symbol formula_4. However, a non-logical predicate symbol such as Phil("x") could be interpreted to mean ""x" is a philosopher", ""x" is a man named Philip", or any other unary predicate depending on the interpretation at hand. Logical symbols. Logical symbols are a set of characters that vary by author, but usually include the following: (see below). Not all of these symbols are required in first-order logic. Either one of the quantifiers along with negation, conjunction (or disjunction), variables, brackets, and equality suffices. Other logical symbols include the following: Non-logical symbols. Non-logical symbols represent predicates (relations), functions and constants. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes: When the arity of a predicate symbol or function symbol is clear from context, the superscript "n" is often omitted. In this traditional approach, there is only one language of first-order logic. This approach is still common, especially in philosophically oriented books. A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore, it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a "signature". Typical signatures in mathematics are {1, ×} or just {×} for groups, or {0, 1, +, ×, &lt;} for ordered fields. There are no restrictions on the number of non-logical symbols. The signature can be empty, finite, or infinite, even uncountable. Uncountable signatures occur for example in modern proofs of the Löwenheim–Skolem theorem. Though signatures might in some cases imply how non-logical symbols are to be interpreted, interpretation of the non-logical symbols in the signature is separate (and not necessarily fixed). Signatures concern syntax rather than semantics. In this approach, every non-logical symbol is of one of the following types: The traditional approach can be recovered in the modern approach, by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols. Formation rules. The formation rules define the terms and formulas of first-order logic. When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms. Terms. The set of "terms" is inductively defined by the following rules: Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term. Formulas. 
The set of "formulas" (also called "well-formed formulas" or "WFFs") is inductively defined by the following rules: Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be "atomic formulas". For example: formula_12 is a formula, if "f" is a unary function symbol, "P" a unary predicate symbol, and Q a ternary predicate symbol. However, formula_13 is not a formula, although it is a string of symbols from the alphabet. The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way—by following the inductive definition (i.e., there is a unique parse tree for each formula). This property is known as "unique readability" of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability. Notational conventions. For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is: Moreover, extra punctuation not required by the definition may be inserted—to make formulas easier to read. Thus the formula: formula_16 might be written as: formula_17 In some fields, it is common to use infix notation for binary relations and functions, instead of the prefix notation defined above. For example, in arithmetic, one typically writes "2 + 2 = 4" instead of "=(+(2,2),4)". It is common to regard formulas in infix notation as abbreviations for the corresponding formulas in prefix notation, cf. also term structure vs. representation. The definitions above use infix notation for binary connectives such as formula_15. A less common convention is Polish notation, in which one writes formula_18, formula_19 and so on in front of their arguments rather than between them. This convention is advantageous in that it allows all punctuation symbols to be discarded. As such, Polish notation is compact and elegant, but rarely used in practice because it is hard for humans to read. In Polish notation, the formula: formula_12 becomes "∀x∀y→Pfx¬→ PxQfyxz". Free and bound variables. In a formula, a variable may occur "free" or "bound" (or both). One formalization of this notion is due to Quine, first the concept of a variable occurrence is defined, then whether a variable occurrence is free or bound, then whether a variable symbol overall is free or bound. In order to distinguish different occurrences of the identical symbol "x", each occurrence of a variable symbol "x" in a formula φ is identified with the initial substring of φ up to the point at which said instance of the symbol "x" appears.p.297 Then, an occurrence of "x" is said to be bound if that occurrence of "x" lies within the scope of at least one of either formula_20 or formula_21. Finally, "x" is bound in φ if all occurrences of "x" in φ are bound.pp.142--143 Intuitively, a variable symbol is free in a formula if at no point is it quantified:pp.142--143 in ∀"y" "P"("x", "y"), the sole occurrence of variable "x" is free while that of "y" is bound. The free and bound variable occurrences in a formula are defined inductively as follows. 
For example, in ∀"x" ∀"y" ("P"("x") → "Q"("x","f"("x"),"z")), "x" and "y" occur only bound, "z" occurs only free, and "w" is neither because it does not occur in the formula. Free and bound variables of a formula need not be disjoint sets: in the formula "P"("x") → ∀"x" "Q"("x"), the first occurrence of "x", as argument of "P", is free while the second one, as argument of "Q", is bound. A formula in first-order logic with no free variable occurrences is called a "first-order sentence". These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil("x") is true must depend on what "x" represents. But the sentence ∃"x" Phil("x") will be either true or false in a given interpretation. Example: ordered abelian groups. In mathematics, the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then: The axioms for ordered abelian groups can be expressed as a set of sentences in the language. For example, the axiom stating that the group is commutative is usually written formula_24 Semantics. An interpretation of a first-order language assigns a denotation to each non-logical symbol (predicate symbol, function symbol, or constant symbol) in that language. It also determines a domain of discourse that specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, each predicate is assigned a property of objects, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms, predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic. (It is also possible to define game semantics for first-order logic, but aside from requiring the axiom of choice, game semantics agree with Tarskian semantics for first-order logic, so game semantics will not be elaborated herein.) First-order structures. The most common way of specifying an interpretation (especially in mathematics) is to specify a "structure" (also called a "model"; see below). The structure consists of a domain of discourse "D" and an interpretation function I mapping non-logical symbols to predicates, functions, and constants. The domain of discourse "D" is a nonempty set of "objects" of some kind. Intuitively, given an interpretation, a first-order formula becomes a statement about these objects; for example, formula_25 states the existence of some object in "D" for which the predicate "P" is true (or, more precisely, for which the predicate assigned to the predicate symbol "P" by the interpretation is true). For example, one can take "D" to be the set of integers. Non-logical symbols are interpreted as follows: Evaluation of truth values. A formula evaluates to true or false given an interpretation and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as formula_30. The truth value of this formula changes depending on the values that "x" and "y" denote. First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. 
The following rules are used to make this assignment: Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema. If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to "M" and formula_47 if and only if it is true according to "M" and every other variable assignment formula_48. There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation "M", one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in "M"; say that for each "d" in the domain the constant symbol "c""d" is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows: This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments. Validity, satisfiability, and logical consequence. If a sentence φ evaluates to "true" under a given interpretation "M", one says that "M" "satisfies" φ; this is denoted formula_52. A sentence is "satisfiable" if there is some interpretation under which it is true. This is a bit different from the symbol formula_51 from model theory, where formula_53 denotes satisfiability in a model, i.e. "there is a suitable assignment of values in formula_54's domain to variable symbols of formula_55". Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula φ with free variables formula_56, ..., formula_57 is said to be satisfied by an interpretation if the formula φ remains true regardless which individuals from the domain of discourse are assigned to its free variables formula_56, ..., formula_57. This has the same effect as saying that a formula φ is satisfied if and only if its universal closure formula_58 is satisfied. A formula is "logically valid" (or simply "valid") if it is true in every interpretation. These formulas play a role similar to tautologies in propositional logic. A formula φ is a "logical consequence" of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ. Algebraizations. An alternate approach to the semantics of first-order logic proceeds via abstract algebra. This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable binding term operators: These algebras are all lattices that properly extend the two-element Boolean algebra. Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra. This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory, including the canonical ZFC. They also prove that first-order logic with a primitive ordered pair is equivalent to a relation algebra with two ordered pair projection functions. First-order theories, models, and elementary classes. 
A "first-order theory" of a particular signature is a set of axioms, which are sentences consisting of symbols from that signature. The set of axioms is often finite or recursively enumerable, in which case the theory is called "effective". Some authors require theories to also include all logical consequences of the axioms. The axioms are considered to hold within the theory and from them other sentences that hold within the theory can be derived. A first-order structure that satisfies all sentences in a given theory is said to be a "model" of the theory. An "elementary class" is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory. Many theories have an "intended interpretation", a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other, nonstandard models. A theory is "consistent" if it is not possible to prove a contradiction from the axioms of the theory. A theory is "complete" if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory. Gödel's incompleteness theorem shows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete. Empty domains. The definition above requires that the domain of discourse of any interpretation must be nonempty. There are settings, such as inclusive logic, where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an empty poset), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class. There are several difficulties with empty domains, however: Thus, when the empty domain is permitted, it must often be treated as a special case. Most authors, however, simply exclude the empty domain by definition. Deductive systems. A "deductive system" is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method, and resolution. These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often called "derivations" in proof theory. They are also often called "proofs" but are completely formalized unlike natural-language mathematical proofs. A deductive system is "sound" if any formula that can be derived in the system is logically valid. Conversely, a deductive system is "complete" if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called "effective". A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. 
Thus, a sound argument is correct in every possible interpretation of the language, regardless of whether that interpretation is about mathematics, economics, or some other area. In general, logical consequence in first-order logic is only semidecidable: if a sentence A logically implies a sentence B then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B. There is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B. Rules of inference. A "rule of inference" states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion. For example, one common rule of inference is the "rule of substitution". If "t" is a term and φ is a formula possibly containing the variable "x", then φ["t"/"x"] is the result of replacing all free instances of "x" by "t" in φ. The substitution rule states that for any φ and any term "t", one can conclude φ["t"/"x"] from φ provided that no free variable of "t" becomes bound during the substitution process. (If some free variable of "t" becomes bound, then to substitute "t" for "x" it is first necessary to change the bound variables of φ to differ from the free variables of "t".) To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by formula_61, in the signature of (0,1,+,×,=) of arithmetic. If "t" is the term "x + 1", the formula φ["t"/"y"] is formula_62, which will be false in many interpretations. The problem is that the free variable "x" of "t" became bound during the substitution. The intended replacement can be obtained by renaming the bound variable "x" of φ to something else, say "z", so that the formula after substitution is formula_63, which is again logically valid. The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule. Hilbert-style systems and natural deduction. A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a "logical axiom", a hypothesis that has been assumed for the derivation at hand or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemas of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference. 
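The restriction in the substitution rule above can be made concrete in code. The following sketch (using an invented tuple encoding of terms and formulas; it is an illustration under stated assumptions, not a standard implementation) performs capture-avoiding substitution φ["t"/"x"], renaming a bound variable whenever a free variable of "t" would otherwise be captured:

```python
import itertools

def term_vars(t):
    """All variables occurring in a term ('var', x) or ('func', f, args...)."""
    if t[0] == "var":
        return {t[1]}
    return set().union(*[term_vars(a) for a in t[2:]]) if t[2:] else set()

def formula_vars(phi):
    """All variable names occurring anywhere in a formula (free or bound)."""
    tag = phi[0]
    if tag == "pred":
        return set().union(*[term_vars(a) for a in phi[2:]]) if phi[2:] else set()
    if tag == "not":
        return formula_vars(phi[1])
    if tag == "imp":
        return formula_vars(phi[1]) | formula_vars(phi[2])
    if tag in ("forall", "exists"):
        return {phi[1]} | formula_vars(phi[2])
    raise ValueError(phi[0])

def subst_term(t, x, s):
    """Replace the variable x by the term s inside term t."""
    if t[0] == "var":
        return s if t[1] == x else t
    return (t[0], t[1]) + tuple(subst_term(a, x, s) for a in t[2:])

def fresh(avoid):
    """Pick a variable name not in 'avoid' (illustrative renaming scheme)."""
    return next(f"v{i}" for i in itertools.count() if f"v{i}" not in avoid)

def subst(phi, x, s):
    """Capture-avoiding substitution phi[s/x]."""
    tag = phi[0]
    if tag == "pred":
        return (tag, phi[1]) + tuple(subst_term(a, x, s) for a in phi[2:])
    if tag == "not":
        return (tag, subst(phi[1], x, s))
    if tag == "imp":
        return (tag, subst(phi[1], x, s), subst(phi[2], x, s))
    if tag in ("forall", "exists"):
        y, body = phi[1], phi[2]
        if y == x:                      # x only occurs bound below: no change
            return phi
        if y in term_vars(s):           # rename the bound variable to avoid capture
            z = fresh(term_vars(s) | formula_vars(body) | {x})
            body = subst(body, y, ("var", z))
            y = z
        return (tag, y, subst(body, x, s))
    raise ValueError(tag)

# Substituting the term x+1 for y in  exists x (x = y)  first renames the bound
# x, yielding  exists v0 (v0 = x+1)  instead of the unintended  exists x (x = x+1).
phi = ("exists", "x", ("pred", "=", ("var", "x"), ("var", "y")))
t = ("func", "+", ("var", "x"), ("func", "1"))
print(subst(phi, "y", t))
# -> ('exists', 'v0', ('pred', '=', ('var', 'v0'), ('func', '+', ('var', 'x'), ('func', '1'))))
```

The example at the end reproduces the ∃"x" ("x" = "y") case discussed earlier: substituting "x" + 1 for "y" forces the bound "x" to be renamed before the replacement is made.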
Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof. Sequent calculus. The sequent calculus was developed to study the properties of natural deduction systems. Instead of working with one formula at a time, it uses "sequents", which are expressions of the form: formula_64 where A1, ..., A"n", B1, ..., B"k" are formulas and the turnstile symbol formula_65 is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that formula_66 implies formula_67. Tableaux method. Unlike the methods just described the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has formula_68 at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that formula_69 is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent formula_69 and children C and D. Resolution. The resolution rule is a single rule of inference that, together with unification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving. The resolution method works only with formulas that are disjunctions of atomic formulas; arbitrary formulas must first be converted to this form through Skolemization. The resolution rule states that from the hypotheses formula_70 and formula_71, the conclusion formula_72 can be obtained. Provable identities. Many identities can be proved, which establish equivalences between particular formulas. These identities allow for rearranging formulas by moving quantifiers across other connectives and are useful for putting formulas in prenex normal form. Some provable identities include: formula_73 formula_74 formula_75 formula_76 formula_77 formula_78 formula_79 (where formula_2 must not occur free in formula_39) formula_80 (where formula_2 must not occur free in formula_39) Equality and its axioms. There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known as first-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, such that the "two" given members are the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are: These are axiom schemas, each of which specifies an infinite set of axioms. The third schema is known as "Leibniz's law", "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second schema, involving the function symbol "f", is (equivalent to) a special case of the third schema, using the formula: φ(z): "f"(..., "x", ...) = "f"(..., "z", ...) Then "x" = "y" → ("f"(..., "x", ...) = "f"(..., "x", ...) → "f"(..., "x", ...) = "f"(..., "y", ...)). Since "x" = "y" is given, and "f"(..., "x", ...) = "f"(..., "x", ...) 
true by reflexivity, we have "f"(..., "x", ...) = "f"(..., "y", ...) Many other properties of equality are consequences of the axioms above, for example: First-order logic without equality. An alternate approach considers the equality relation to be a non-logical symbol. This convention is known as "first-order logic without equality". If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation. When this second convention is followed, the term "normal model" is used to refer to an interpretation where no distinct individuals "a" and "b" satisfy "a" = "b". In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered. First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted. Defining equality within a theory. If a theory has a binary formula "A"("x","y") which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemas as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms "s" and "t" to be equal if any relation is unchanged by changing "s" to "t" in any argument. Some theories allow other "ad hoc" definitions of equality: Metalogical properties. One motivation for the use of first-order logic, rather than higher-order logic, is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories. Completeness and undecidability. Gödel's completeness theorem, proved by Kurt Gödel in 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these models will in general be of arbitrarily large cardinality, and so logical consequence cannot be effectively verified by checking every model. However, it is possible to enumerate all finite derivations and search for a derivation of ψ from φ. If ψ is logically implied by φ, such a derivation will eventually be found. 
Thus first-order logical consequence is semidecidable: it is possible to make an effective enumeration of all pairs of sentences (φ,ψ) such that ψ is a logical consequence of φ. Unlike propositional logic, first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality). This means that there is no decision procedure that determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by David Hilbert and Wilhelm Ackermann in 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for first-order logic and the unsolvability of the halting problem. There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic and monadic predicate logic, which is first-order logic restricted to unary predicate symbols and no function symbols. Other logics with no function symbols which are decidable are the guarded fragment of first-order logic, as well as two-variable logic. The Bernays–Schönfinkel class of first-order formulas is also decidable. Decidable subsets of first-order logic are also studied in the framework of description logics. The Löwenheim–Skolem theorem. The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has an infinite model, then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory, it implies that it is not possible to characterize countability or uncountability in a first-order language with a countable signature. That is, there is no first-order formula φ("x") such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable). The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models. When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known as Skolem's paradox. The compactness theorem. The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models. The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus, the class of all finite graphs is not an elementary class (the same holds for many other algebraic structures). 
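As a sketch of the standard argument behind that last claim (a reconstruction for illustration, not a quotation of any source), suppose a theory "T" has arbitrarily large finite models. Enrich the signature with fresh constant symbols "c"1, "c"2, "c"3, ... and add, for each "n" ≥ 2, the sentence:

```latex
\lambda_n \;:=\; \bigwedge_{1 \le i < j \le n} \lnot\,(c_i = c_j)
\qquad \text{(``there are at least $n$ distinct elements'')}
```

Every finite subset of "T" ∪ {λ"n" : "n" ≥ 2} mentions only finitely many of the λ"n", so it is satisfied by a sufficiently large finite model of "T" with the "c""i" interpreted as distinct elements; by the compactness theorem the whole set has a model, and any such model must be infinite.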
There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as a directed graph of states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus, one seeks to determine if the good and bad states are in different connected components of the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic, and there is no formula φ("x","y") of first-order logic, in the logic of graphs, that expresses the idea that there is a path from "x" to "y". Connectedness can be expressed in second-order logic, however, but not with only existential set quantifiers, as formula_83 also enjoys compactness. Lindström's theorem. Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, and a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type: Limitations. Although first-order logic is sufficient for formalizing much of mathematics and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe. For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for provability is impossible. This has led to the study of interesting decidable fragments, such as C2: first-order logic with two variables and the counting quantifiers formula_84 and formula_85. Expressiveness. The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical. Thus, there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order. Formalizing natural languages. First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". Hence, first-order logic is used as a basis for knowledge representation languages, such as FO(.). Still, there are complicated features of natural language that cannot be expressed in first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic". Restrictions, extensions, and variations. There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. 
Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity. Restricted languages. First-order logic can be studied in languages with fewer logical symbols than were described above: Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemas in deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system. It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include a pairing function. This is a function of arity 2 that takes pairs of elements of the domain and returns an ordered pair containing them. It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied. Many-sorted logic. Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. "Many-sorted first-order logic" allows variables to have different "sorts", which have different domains. This is also called "typed first-order logic", and the sorts called "types" (as in data type), but it is not the same as first-order type theory. Many-sorted first-order logic is often used in the study of second-order arithmetic. When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic. One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbols formula_104 and formula_105 and the axiom: formula_106. Then the elements satisfying formula_107 are thought of as elements of the first sort, and elements satisfying formula_108 as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formula formula_109, one writes: formula_110. Additional quantifiers. Additional quantifiers can be added to first-order logic. Infinitary logics. Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory. 
Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus, formulas are, essentially, identified with their parse trees, rather than with the strings being parsed. The most commonly studied infinitary logics are denoted "L"αβ, where α and β are each either cardinal numbers or the symbol ∞. In this notation, ordinary first-order logic is "L"ωω. In the logic "L"∞ω, arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with less than κ constituents is known as "L"κω. For example, "L"ω1ω permits countable conjunctions and disjunctions. The set of free variables in a formula of "L"κω can have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another. In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, in "L"κ∞, a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic "L"κλ permits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ. Fixpoint logic. Fixpoint logic extends first-order logic by adding the closure under the least fixed points of positive operators. Higher-order logics. The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus formula_111 is a legal first-order formula, but formula_112 is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified. Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known as "full semantics". The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics. Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. 
The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics. Automated theorem proving and formal methods. Automated theorem proving refers to the development of computer programs that search and find derivations (formal proofs) of mathematical theorems. Finding derivations is a difficult task because the search space can be very large; an exhaustive search of every possible derivation is theoretically possible but computationally infeasible for many systems of interest in mathematics. Thus complicated heuristic functions are developed to attempt to find a derivation in less time than a blind search. The related area of automated proof verification uses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier is needed to give confidence that any derivation labeled as "correct" is actually correct. Some proof verifiers, such as Metamath, insist on having a complete derivation as input. Others, such as Mizar and Isabelle, take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a small core "kernel". Many such systems are primarily intended for interactive use by human mathematicians: these are known as proof assistants. They may also use formal logics that are stronger than first-order logic, such as type theory. Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for a human to write, results are often formalized as a series of lemmas, for which derivations can be constructed separately. Automated theorem provers are also used to implement formal verification in computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification. Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences. For the problem of model checking, efficient algorithms are known to decide whether an input finite structure satisfies a first-order formula, in addition to computational complexity bounds: see . See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{isPhil}" }, { "math_id": 1, "text": "\\text{isPhil}(x)" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "\\land" }, { "math_id": 4, "text": "\\lor" }, { "math_id": 5, "text": "\\|" }, { "math_id": 6, "text": "+" }, { "math_id": 7, "text": "\\varphi" }, { "math_id": 8, "text": "\\lnot\\varphi" }, { "math_id": 9, "text": "\\varphi\\rightarrow\\psi" }, { "math_id": 10, "text": "\\forall x \\varphi" }, { "math_id": 11, "text": "\\exists x \\varphi" }, { "math_id": 12, "text": "\\forall x \\forall y (P(f(x)) \\rightarrow\\neg (P(x) \\rightarrow Q(f(y),x,z)))" }, { "math_id": 13, "text": "\\forall x\\, x \\rightarrow" }, { "math_id": 14, "text": "\\lnot" }, { "math_id": 15, "text": "\\to" }, { "math_id": 16, "text": "\\lnot \\forall x P(x) \\to \\exists x \\lnot P(x)" }, { "math_id": 17, "text": "(\\lnot [\\forall x P(x)]) \\to \\exists x [\\lnot P(x)]." }, { "math_id": 18, "text": "\\rightarrow" }, { "math_id": 19, "text": "\\wedge" }, { "math_id": 20, "text": "\\exists x" }, { "math_id": 21, "text": "\\forall x" }, { "math_id": 22, "text": "(\\forall x \\forall y \\, [\\mathop{\\leq}(\\mathop{+}(x, y), z) \\to \\forall x\\, \\forall y\\, \\mathop{+}(x, y) = 0)]" }, { "math_id": 23, "text": "\\forall x \\forall y ( x + y \\leq z) \\to \\forall x \\forall y (x+y = 0)." }, { "math_id": 24, "text": "(\\forall x)(\\forall y)[x+ y = y + x]." }, { "math_id": 25, "text": "\\exists x P(x)" }, { "math_id": 26, "text": "I(c)=10" }, { "math_id": 27, "text": "c" }, { "math_id": 28, "text": "I(P)" }, { "math_id": 29, "text": "\\{\\mathrm{true, false}\\}" }, { "math_id": 30, "text": "y = x" }, { "math_id": 31, "text": "t_1, \\ldots, t_n" }, { "math_id": 32, "text": "d_1, \\ldots, d_n" }, { "math_id": 33, "text": "f(t_1, \\ldots, t_n)" }, { "math_id": 34, "text": "(I(f))(d_1,\\ldots,d_n)" }, { "math_id": 35, "text": "P(t_1,\\ldots,t_n)" }, { "math_id": 36, "text": "\\langle v_1,\\ldots,v_n \\rangle \\in I(P)" }, { "math_id": 37, "text": "v_1,\\ldots,v_n" }, { "math_id": 38, "text": "t_1,\\ldots,t_n" }, { "math_id": 39, "text": "P" }, { "math_id": 40, "text": "D^n" }, { "math_id": 41, "text": "t_1 = t_2" }, { "math_id": 42, "text": "t_1" }, { "math_id": 43, "text": "t_2" }, { "math_id": 44, "text": "\\neg \\varphi" }, { "math_id": 45, "text": "\\varphi \\rightarrow\n\\psi" }, { "math_id": 46, "text": "\\exists x \\varphi(x)" }, { "math_id": 47, "text": "\\mu" }, { "math_id": 48, "text": "\\mu'" }, { "math_id": 49, "text": "\\forall x \\varphi(x)" }, { "math_id": 50, "text": "\\varphi(c_d)" }, { "math_id": 51, "text": "\\vDash" }, { "math_id": 52, "text": "M \\vDash \\varphi" }, { "math_id": 53, "text": "M\\vDash\\phi" }, { "math_id": 54, "text": "M" }, { "math_id": 55, "text": "\\phi" }, { "math_id": 56, "text": "x_1" }, { "math_id": 57, "text": "x_n" }, { "math_id": 58, "text": "\\forall x_1 \\dots \\forall x_n \\phi (x_1, \\dots, x_n)" }, { "math_id": 59, "text": "\\varphi \\lor \\exists x \\psi" }, { "math_id": 60, "text": "\\exists x (\\varphi \\lor \\psi)" }, { "math_id": 61, "text": "\\exists x (x = y)" }, { "math_id": 62, "text": "\\exists x ( x = x+1)" }, { "math_id": 63, "text": "\\exists z ( z = x+1)" }, { "math_id": 64, "text": "A_1, \\ldots, A_n \\vdash B_1, \\ldots, B_k," }, { "math_id": 65, "text": "\\vdash" }, { "math_id": 66, "text": "(A_1 \\land \\cdots\\land A_n)" }, { "math_id": 67, "text": "(B_1\\lor\\cdots\\lor B_k)" }, { "math_id": 68, "text": "\\lnot A" }, { "math_id": 69, "text": "C \\lor D" }, { "math_id": 70, "text": "A_1 \\lor\\cdots\\lor 
A_k \\lor C" }, { "math_id": 71, "text": "B_1\\lor\\cdots\\lor B_l\\lor\\lnot C" }, { "math_id": 72, "text": "A_1\\lor\\cdots\\lor A_k\\lor B_1\\lor\\cdots\\lor B_l" }, { "math_id": 73, "text": "\\lnot \\forall x \\, P(x) \\Leftrightarrow \\exists x \\, \\lnot P(x)" }, { "math_id": 74, "text": "\\lnot \\exists x \\, P(x) \\Leftrightarrow \\forall x \\, \\lnot P(x)" }, { "math_id": 75, "text": "\\forall x \\, \\forall y \\, P(x,y) \\Leftrightarrow \\forall y \\, \\forall x \\, P(x,y)" }, { "math_id": 76, "text": "\\exists x \\, \\exists y \\, P(x,y) \\Leftrightarrow \\exists y \\, \\exists x \\, P(x,y)" }, { "math_id": 77, "text": "\\forall x \\, P(x) \\land \\forall x \\, Q(x) \\Leftrightarrow \\forall x \\, (P(x) \\land Q(x)) " }, { "math_id": 78, "text": "\\exists x \\, P(x) \\lor \\exists x \\, Q(x) \\Leftrightarrow \\exists x \\, (P(x) \\lor Q(x)) " }, { "math_id": 79, "text": "P \\land \\exists x \\, Q(x) \\Leftrightarrow \\exists x \\, (P \\land Q(x)) " }, { "math_id": 80, "text": "P \\lor \\forall x \\, Q(x) \\Leftrightarrow \\forall x \\, (P \\lor Q(x)) " }, { "math_id": 81, "text": "\\forall x \\forall y [ \\forall z (z \\in x \\Leftrightarrow z \\in y) \\Rightarrow x = y]" }, { "math_id": 82, "text": "\\forall x \\forall y [ \\forall z (z \\in x \\Leftrightarrow z \\in y) \\Rightarrow \\forall z (x \\in z \\Leftrightarrow y \\in z) ]" }, { "math_id": 83, "text": "\\Sigma_1^1" }, { "math_id": 84, "text": "\\exists^{\\ge n}" }, { "math_id": 85, "text": "\\exists^{\\le n}" }, { "math_id": 86, "text": "\\neg \\forall x \\neg \\varphi(x)" }, { "math_id": 87, "text": "\\neg \\exists x \\neg \\varphi(x)" }, { "math_id": 88, "text": "\\exists" }, { "math_id": 89, "text": "\\forall" }, { "math_id": 90, "text": "\\varphi \\lor \\psi" }, { "math_id": 91, "text": "\\lnot (\\lnot \\varphi \\land \\lnot \\psi)" }, { "math_id": 92, "text": "\\varphi \\land \\psi" }, { "math_id": 93, "text": "\\lnot(\\lnot \\varphi \\lor \\lnot \\psi)" }, { "math_id": 94, "text": "\\vee" }, { "math_id": 95, "text": "\\neg" }, { "math_id": 96, "text": " \\; 0 " }, { "math_id": 97, "text": " \\; 0(x) " }, { "math_id": 98, "text": " \\; x=0 " }, { "math_id": 99, "text": "\\; P(0,y) " }, { "math_id": 100, "text": " \\forall x \\;(0(x) \\rightarrow P(x,y)) " }, { "math_id": 101, "text": " f(x_1,x_2,...,x_n) " }, { "math_id": 102, "text": " F(x_1,x_2,...,x_n,y) " }, { "math_id": 103, "text": " y = f(x_1,x_2,...,x_n) " }, { "math_id": 104, "text": "P_1(x)" }, { "math_id": 105, "text": "P_2(x)" }, { "math_id": 106, "text": "\\forall x ( P_1(x) \\lor P_2(x)) \\land \\lnot \\exists x (P_1(x) \\land P_2(x))" }, { "math_id": 107, "text": "P_1" }, { "math_id": 108, "text": "P_2" }, { "math_id": 109, "text": "\\varphi(x)" }, { "math_id": 110, "text": "\\exists x (P_1(x) \\land \\varphi(x))" }, { "math_id": 111, "text": "\\exists a ( \\text{Phil}(a))" }, { "math_id": 112, "text": "\\exists \\text{Phil} ( \\text{Phil}(a))" } ]
https://en.wikipedia.org/wiki?curid=10983
10983365
Database storage structures
Database tables and indexes may be stored on disk in one of a number of forms, including ordered/unordered flat files, ISAM, heap files, hash buckets, or B+ trees. Each form has its own particular advantages and disadvantages. The most commonly used forms are B-trees and ISAM. Such forms or structures are one aspect of the overall schema used by a database engine to store information. Unordered. Unordered storage typically stores the records in the order they are inserted. Such storage offers good insertion efficiency (formula_0), but inefficient retrieval times (formula_1). In practice, retrieval is usually faster than this bound suggests, because most databases also maintain indexes on the primary keys, giving retrieval times of formula_2, or formula_0 for keys that coincide with the database row offsets within the storage system. Ordered. Ordered storage keeps the records sorted and may have to rearrange records or grow the file when a new record is inserted, resulting in lower insertion efficiency. However, ordered storage provides more efficient retrieval, since the records are pre-sorted, giving a retrieval complexity of formula_2. Structured files. Heap files. Heap files are lists of unordered records of variable size. Despite the similar name, heap files have little in common with in-memory heaps: in-memory heaps maintain an ordering property, whereas heap files are unordered. B+ trees. These are the most commonly used structures in practice. Data orientation. Most conventional relational databases use "row-oriented" storage, meaning that all data associated with a given row is stored together. By contrast, column-oriented DBMS store all data from a given column together in order to serve data warehouse-style queries more quickly. Correlation databases are similar to row-based databases, but apply a layer of indirection that maps multiple instances of the same value to the same numerical identifier.
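The complexity claims above can be made concrete with a small sketch; the class and method names here are invented for illustration and do not correspond to any particular database engine.

```python
import bisect

# Illustrative only: an "unordered" (heap-file-like) store appends in O(1) but
# must scan every record to find a key, while an "ordered" store keeps records
# sorted so lookups are O(log n) at the cost of O(n) work per insertion.

class UnorderedStore:
    def __init__(self):
        self.records = []                 # records kept in insertion order

    def insert(self, key, value):         # O(1): append at the end
        self.records.append((key, value))

    def find(self, key):                  # O(n): linear scan
        return next((v for k, v in self.records if k == key), None)

class OrderedStore:
    def __init__(self):
        self.keys, self.values = [], []   # kept sorted by key

    def insert(self, key, value):         # O(n): shift elements to keep order
        i = bisect.bisect_left(self.keys, key)
        self.keys.insert(i, key)
        self.values.insert(i, value)

    def find(self, key):                  # O(log n): binary search
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.values[i]
        return None

heap, ordered = UnorderedStore(), OrderedStore()
for k, v in [(42, "a"), (7, "b"), (19, "c")]:
    heap.insert(k, v)
    ordered.insert(k, v)
print(heap.find(19), ordered.find(19))    # c c
```

Real engines avoid the linear insertion cost of fully ordered files by using B+ trees, which keep keys in order while bounding both insertion and lookup to O(log "n").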
[ { "math_id": 0, "text": "O\\left(1\\right)" }, { "math_id": 1, "text": "O\\left(n\\right)" }, { "math_id": 2, "text": "O\\left(\\log n\\right)" } ]
https://en.wikipedia.org/wiki?curid=10983365
10985744
Rotational diffusion
Mechanics concept Rotational diffusion is the rotational movement which acts upon any object such as particles, molecules, atoms when present in a fluid, by random changes in their orientations. Whilst the directions and intensities of these changes are statistically random, they do not arise randomly and are instead the result of interactions between particles. One example occurs in colloids, where relatively large insoluble particles are suspended in a greater amount of fluid. The changes in orientation occur from collisions between the particle and the many molecules forming the fluid surrounding the particle, which each transfer kinetic energy to the particle, and as such can be considered random due to the varied speeds and amounts of fluid molecules incident on each individual particle at any given time. The analogue to translational diffusion which determines the particle's position in space, rotational diffusion randomises the orientation of any particle it acts on. Anything in a solution will experience rotational diffusion, from the microscopic scale where individual atoms may have an effect on each other, to the macroscopic scale. Applications. Rotational diffusion has multiple applications in chemistry and physics, and is heavily involved in many biology based fields. For example, protein-protein interaction is a vital step in the communication of biological signals. In order to communicate, the proteins must both come into contact with each other and be facing the appropriate way to interact with each other's binding site, which relies on the proteins ability to rotate. As an example concerning physics, rotational Brownian motion in astronomy can be used to explain the orientations of the orbital planes of binary stars, as well as the seemingly random spin axes of supermassive black holes. The random re-orientation of molecules (or larger systems) is an important process for many biophysical probes. Due to the equipartition theorem, larger molecules re-orient more slowly than do smaller objects and, hence, measurements of the rotational diffusion constants can give insight into the overall mass and its distribution within an object. Quantitatively, the mean square of the angular velocity about each of an object's principal axes is inversely proportional to its moment of inertia about that axis. Therefore, there should be three rotational diffusion constants - the eigenvalues of the rotational diffusion tensor - resulting in five rotational time constants. If two eigenvalues of the diffusion tensor are equal, the particle diffuses as a spheroid with two unique diffusion rates and three time constants. And if all eigenvalues are the same, the particle diffuses as a sphere with one time constant. The diffusion tensor may be determined from the Perrin friction factors, in analogy with the Einstein relation of translational diffusion, but often is inaccurate and direct measurement is required. The rotational diffusion tensor may be determined experimentally through fluorescence anisotropy, flow birefringence, dielectric spectroscopy, NMR relaxation and other biophysical methods sensitive to picosecond or slower rotational processes. In some techniques such as fluorescence it may be very difficult to characterize the full diffusion tensor, for example measuring two diffusion rates can sometimes be possible when there is a great difference between them, e.g., for very long, thin ellipsoids such as certain viruses. 
This is however not the case of the extremely sensitive, atomic resolution technique of NMR relaxation that can be used to fully determine the rotational diffusion tensor to very high precision. Rotational diffusion of macromolecules in complex biological fluids (i.e., cytoplasm) is slow enough to be measurable by techniques with microsecond time resolution, i.e. fluorescence correlation spectroscopy. Relation to translational diffusion. Much like translational diffusion in which particles in one area of high concentration slowly spread position through random walks until they are near-equally distributed over the entire space, in rotational diffusion, over long periods of time the directions which these particles face will spread until they follow a completely random distribution with a near-equal amount facing in all directions. As impacts from surrounding particles rarely, if ever, occur directly in the centre of mass of a 'target' particle, each impact will occur off-centre and as such it is important to note that the same collisions that cause translational diffusion cause rotational diffusion as some of the impact energy is transferred to translational kinetic energy and some is transferred into torque. Rotational version of Fick's law. A rotational version of Fick's law of diffusion can be defined. Let each rotating molecule be associated with a unit vector formula_0; for example, formula_0 might represent the orientation of an electric or magnetic dipole moment. Let "f"("θ, φ, t") represent the probability density distribution for the orientation of formula_0 at time "t". Here, "θ" and "φ" represent the spherical angles, with "θ" being the polar angle between formula_0 and the "z"-axis and "φ" being the azimuthal angle of formula_0 in the "x-y" plane. The rotational version of Fick's law states formula_1. This partial differential equation (PDE) may be solved by expanding "f(θ, φ, t)" in spherical harmonics formula_2 for which the mathematical identity holds formula_3. Thus, the solution of the PDE may be written formula_4, where "Clm" are constants fitted to the initial distribution and the time constants equal formula_5. Two-dimensional rotational diffusion. A sphere rotating around a fixed axis will rotate in two dimensions only and can be viewed from above the fixed axis as a circle. In this example, a sphere which is fixed on the vertical axis rotates around that axis only, meaning that the particle can have a θ value of 0 through 360 degrees, or 2π Radians, before having a net rotation of 0 again. These directions can be placed onto a graph which covers the entirety of the possible positions for the face to be at relative to the starting point, through 2π radians, starting with -π radians through 0 to π radians. Assuming all particles begin with single orientation of 0, the first measurement of directions taken will resemble a delta function at 0 as all particles will be at their starting, or 0th, position and therefore create an infinitely steep single line. Over time, the increasing amount of measurements taken will cause a spread in results; the initial measurements will see a thin peak form on the graph as the particle can only move slightly in a short time. Then as more time passes, the chance for the molecule to rotate further from its starting point increases which widens the peak, until enough time has passed that the measurements will be evenly distributed across all possible directions. 
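The spreading just described can be reproduced with a short Monte Carlo sketch. The diffusion coefficient, time step and particle count below are arbitrary illustrative choices rather than values taken from the text, and the variance of the unwrapped angle is compared against the mean-square angular deviation 2"D"r"t" quoted in the "Basic equations" section below.

```python
import numpy as np

# Illustrative simulation of two-dimensional rotational diffusion: every
# particle starts at theta = 0 and takes Gaussian angular steps of variance
# 2*D_r*dt, so the orientation histogram spreads from a sharp peak at 0
# toward the uniform distribution on [-pi, pi).

rng = np.random.default_rng(0)
D_r = 1.0            # rotational diffusion coefficient, rad^2/s (assumed)
dt = 1.0e-3          # time step, s (assumed)
n_particles = 100_000

theta = np.zeros(n_particles)                  # all particles start at 0
checkpoints = {10, 100, 1_000, 10_000}
for step in range(1, 10_001):
    theta += rng.normal(0.0, np.sqrt(2.0 * D_r * dt), n_particles)
    if step in checkpoints:
        t = step * dt
        wrapped = (theta + np.pi) % (2.0 * np.pi) - np.pi   # map to [-pi, pi)
        print(f"t = {t:7.3f} s   "
              f"var(unwrapped) = {theta.var():7.3f} (expect {2.0 * D_r * t:7.3f})   "
              f"std(wrapped) = {wrapped.std():5.3f}")
```

At the earliest checkpoint the wrapped angles form a narrow peak around the starting orientation; by the last one their standard deviation approaches π/√3 ≈ 1.81, the value for a uniform distribution over the full circle.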
The distribution of orientations will reach a point where they become uniform as they all randomly disperse to be nearly equal in all directions. This can be visualized in two ways. Basic equations. For rotational diffusion about a single axis, the mean-square angular deviation in time formula_6 is formula_7, where formula_8 is the rotational diffusion coefficient (in units of radians2/s). The angular drift velocity formula_9 in response to an external torque formula_10 (assuming that the flow stays non-turbulent and that inertial effects can be neglected) is given by formula_11, where formula_12 is the frictional drag coefficient. The relationship between the rotational diffusion coefficient and the rotational frictional drag coefficient is given by the Einstein relation (or Einstein–Smoluchowski relation): formula_13, where formula_14 is the Boltzmann constant and formula_15 is the absolute temperature. These relationships are in complete analogy to translational diffusion. The rotational frictional drag coefficient for a sphere of radius formula_16 is formula_17 where formula_18 is the dynamic (or shear) viscosity. The rotational diffusion of spheres, such as nanoparticles, may deviate from what is expected when in complex environments, such as in polymer solutions or gels. This deviation can be explained by the formation of a depletion layer around the nanoparticle. Langevin dynamics. Collisions with the surrounding fluid molecules will create a fluctuating torque on the sphere due to the varied speeds, numbers, and directions of impact. When trying to rotate a sphere via an externally applied torque, there will be a systematic drag resistance to rotation. With these two facts combined, it is possible to write the Langevin-like equation: formula_19 Where: The overall Torque on the particle will be the difference between: formula_21 and formula_22. This equation is the rotational version of Newtons second equation of motion. For example, in standard translational terms, a rocket will experience a boosting force from the engine whilst simultaneously experiencing a resistive force from the air it is travelling through. The same can be said for an object which is rotating. Due to the random nature of rotation of the particle, the "average" Brownian torque is equal in both directions of rotation. symbolised as: formula_23 This means the equation can be averaged to get: formula_24 Which is to say that the first derivative with respect to time of the average Angular momentum is equal to the negative of the Rotational friction coefficient divided by the moment of inertia, all multiplied by the average of the angular momentum. As formula_25 is the rate of change of angular momentum over time, and is equal to a negative value of a coefficient multiplied by formula_26, this shows that the angular momentum is decreasing over time, or decaying with a decay time of: formula_27. For a sphere of mass "m", uniform density "ρ" and radius "a", the moment of inertia is: formula_28. As mentioned above, the rotational drag is given by the Stokes friction for rotation: formula_29 Combining all of the equations and formula from above, we get: formula_30 where: Example: Spherical particle in water. 
Let's say there is a virus which can be modelled as a perfect sphere with the following conditions: First, the mass of the virus particle can be calculated: formula_32 From this, we now know all the variables to calculate moment of inertia: formula_33 Simultaneous to this, we can also calculate the rotational drag: formula_34 Combining these equations we get: formula_35 As the SI units for Pascal are formula_36 the units in the answer can be reduced to read: formula_37 For this example, the decay time of the virus is in the order of nanoseconds. Smoluchowski description of rotation. To write the Smoluchowski equation for a particle rotating in two dimensions, we introduce a probability density P(θ, t) to find the vector u at an angle θ and time t. This can be done by writing a continuity equation: formula_38 where the current can be written as: formula_39 Which can be combined to give the rotational diffusion equation: formula_40 We can express the current in terms of an angular velocity which is a result of Brownian torque TB through a rotational mobility with the equation: formula_41 Where: The only difference between rotational and translational diffusion in this case is that in the rotational diffusion, we have periodicity in the angle θ. As the particle is modelled as a sphere rotating in two dimensions, the space the particle can take is compact and finite, as the particle can rotate a distance of 2π before returning to its original position formula_45 We can create a conditional probability density, which is the probability of finding the vector u at the angle θ and time t given that it was at angle θ0 at time t=0 This is written as such: formula_46 The solution to this equation can be found through a Fourier series: formula_47 Where formula_48 is the Jacobian theta function of the third kind. By using the equation formula_49 The conditional probability density function can be written as : formula_50 For short times after the starting point where t ≈ t0 and θ ≈ θ0, the formula becomes: formula_51 The terms included in the are exponentially small and make little enough difference to not be included here. This means that at short times the conditional probability looks similar to translational diffusion, as both show extremely small perturbations near t0. However at long times, t » t0 , the behaviour of rotational diffusion is different to translational diffusion: formula_52 The main difference between rotational diffusion and translational diffusion is that rotational diffusion has a periodicity of formula_53, meaning that these two angles are identical. This is because a circle can rotate entirely once before being at the same angle as it was in the beginning, meaning that all the possible orientations can be mapped within the space of formula_54. This is opposed to translational diffusion, which has no such periodicity. The conditional probability of having the angle be θ is approximately formula_55 . This is because over long periods of time, the particle has had time rotate throughout the entire range of angles possible and as such, the angle θ could be any amount between θ0 and θ0 + 2 π. The probability is near-evenly distributed through each angle as at large enough times. This can be proven through summing the probability of all possible angles. As there are 2π possible angles, each with the probability of formula_55 , the total probability sums to 1, which means there is a certainty of finding the angle at some point on the circle. References. 
<templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\hat{n}" }, { "math_id": 1, "text": "\n\\frac{1}{D_{\\mathrm{rot}}} \\frac{\\partial f}{\\partial t} = \\nabla^{2} f = \n\\frac{1}{\\sin\\theta} \\frac{\\partial}{\\partial \\theta}\\left( \\sin\\theta \\frac{\\partial f}{\\partial \\theta} \\right) + \n\\frac{1}{\\sin^{2} \\theta} \\frac{\\partial^{2} f}{\\partial \\phi^{2}}\n" }, { "math_id": 2, "text": "\nY^{m}_{l}\n" }, { "math_id": 3, "text": "\n\\frac{1}{\\sin\\theta} \\frac{\\partial}{\\partial \\theta}\\left( \\sin\\theta \\frac{\\partial Y^{m}_{l}}{\\partial \\theta} \\right) + \n\\frac{1}{\\sin^{2} \\theta} \\frac{\\partial^{2} Y^{m}_{l}}{\\partial \\phi^{2}} = -l(l+1) Y^{m}_{l}(\\theta,\\phi)\n" }, { "math_id": 4, "text": "\nf(\\theta, \\phi, t) = \\sum_{l=0}^{\\infty} \\sum_{m=-l}^{l} C_{lm} Y^{m}_{l}(\\theta, \\phi) e^{-t/\\tau_{l}}\n" }, { "math_id": 5, "text": "\n\\tau_{l} = \\frac{1}{D_{\\mathrm{rot}}l(l+1)}\n" }, { "math_id": 6, "text": " t " }, { "math_id": 7, "text": "\\left\\langle\\theta^2\\right\\rangle = 2 D_r t" }, { "math_id": 8, "text": " D_r " }, { "math_id": 9, "text": "\\Omega_d = (d\\theta/dt)_{\\rm drift} " }, { "math_id": 10, "text": " \\Gamma_{\\theta} " }, { "math_id": 11, "text": "\\Omega_d = \\frac{\\Gamma_\\theta}{f_r}" }, { "math_id": 12, "text": " f_r " }, { "math_id": 13, "text": "D_r = \\frac{k_{\\rm B} T}{f_r}" }, { "math_id": 14, "text": " k_{\\rm B} " }, { "math_id": 15, "text": " T " }, { "math_id": 16, "text": " R " }, { "math_id": 17, "text": "f_{r, \\textrm{sphere}} = 8 \\pi \\eta R^3" }, { "math_id": 18, "text": " \\eta " }, { "math_id": 19, "text": "\\frac{dL}{dt} = {I}\\, \\cdot \\frac{d^2{\\phi}}{dt^2} = - {\\zeta}^{r} \\cdot \\frac{d{\\theta}}{dt} + TB(t)" }, { "math_id": 20, "text": "\\frac{dL}{dt}" }, { "math_id": 21, "text": "TB(t)" }, { "math_id": 22, "text": "({\\zeta}^{r} \\cdot \\frac{d{\\theta}}{dt}) " }, { "math_id": 23, "text": " \\left \\langle TB(t) \\right \\rangle = 0 " }, { "math_id": 24, "text": "\\frac{d \\left \\langle L \\right \\rangle}{dt} = - {\\zeta}^{r} \\cdot \\left \\langle \\frac{d{\\theta}}{dt} \\right \\rangle = -\\frac{\\zeta^r}{I} \\left \\langle L \\right \\rangle" }, { "math_id": 25, "text": " \\frac{d \\left \\langle L \\right \\rangle}{dt} " }, { "math_id": 26, "text": " \\left \\langle L \\right \\rangle " }, { "math_id": 27, "text": " {\\tau{_L}} = \\frac{I}{\\zeta^r} " }, { "math_id": 28, "text": " I = \\frac{2ma^2}{5} = \\frac{8{\\pi}{\\rho}a^5}{15} " }, { "math_id": 29, "text": " {\\zeta^r} = 8\\pi\\eta a^3 " }, { "math_id": 30, "text": " {\\tau{_L}} = \\frac{\\rho a^2}{15\\eta} = \\frac{3}{10}\\tau_p " }, { "math_id": 31, "text": " \\tau_p " }, { "math_id": 32, "text": " m = \\frac {4\\rho\\pi a^{3}} {3} = \\frac {4 \\times 1500 \\times \\pi \\times (10^{-7})^3} {3} = 6.3 \\times 10^{-18} kg " }, { "math_id": 33, "text": " I = \\frac {2ma^{2}} {5} = \\frac {2 \\times (6.3\\times10^{-18}) \\times (10^{-7})^2} {5} = 2.5 \\times 10^{-32} kg \\cdot m^2 " }, { "math_id": 34, "text": " \\zeta^{r} = 8 \\pi \\eta a^{3} = 8 \\times \\pi \\times (8.9\\times10^{-4}) \\times (10^{-7})^3 = 2.237 \\times 10^{-23} Pa \\cdot s \\cdot m^3 " }, { "math_id": 35, "text": " \\tau_L = \\frac {I} {\\zeta^r} = \\frac {2.5 \\times 10^{-32} kg \\cdot m^2} {2.2 \\times 10^{-23} Pa \\cdot s \\cdot m^3} = 1.1 \\times 10^{-9} kg \\cdot Pa^{-1} \\cdot s^{-1} \\cdot m^{-1} " }, { "math_id": 36, "text": " kg \\cdot m^{-1} \\cdot s^{-2} " }, { "math_id": 37, "text": " \\tau_L = 1.1 \\times 10^{-9} s " }, { "math_id": 38, "text": " {\\partial 
P(\\theta,t)\\over\\partial t} = - {\\partial j(\\theta,t)\\over\\partial \\theta} " }, { "math_id": 39, "text": " j(\\theta,t) = - D^r {\\partial P(\\theta,t)\\over\\partial \\theta} " }, { "math_id": 40, "text": " {\\partial P(\\theta,t)\\over\\partial t} = D^r {\\partial^2 P(\\theta,t)\\over\\partial \\theta^2} = D^rP(\\theta,t) " }, { "math_id": 41, "text": " j_B(\\theta,t) = \\dot{\\theta}_B P(\\theta,t) " }, { "math_id": 42, "text": " \\dot{\\theta}_B = \\mu^rT_B " }, { "math_id": 43, "text": " T_B = - {\\partial V_B \\over \\partial \\theta} " }, { "math_id": 44, "text": " V_B(\\theta,t) = k_BT \\ln P(\\theta,t) " }, { "math_id": 45, "text": " P(\\theta + 2\\pi , t ) = {P(\\theta,t)} " }, { "math_id": 46, "text": " P(\\theta,0 \\mid \\theta_0) = \\delta (\\theta - \\theta_0) " }, { "math_id": 47, "text": " P(\\theta,t\\mid\\theta_0) = \\frac {1} {2\\pi} \\left [1+ 2\\sum_{m=1}^\\infty e^{-D^rm^2t}\\cos m(\\theta - \\theta_0) \\right ] = \\frac{1}{2\\pi} \\Theta_3 (\\frac {1}{2} (\\theta - \\theta_0), e^{-D^rt}) " }, { "math_id": 48, "text": " \\Theta_3(z,\\tau) " }, { "math_id": 49, "text": " \\Theta_3(z,\\tau) = (-i\\tau)^{-1/2}\\exp\\biggl(\\frac{z^2}{i\\pi\\tau}\\biggl) \\Theta_3 \\biggl(\\frac{z}{\\tau}, - \\frac{1}{\\tau}\\biggl) " }, { "math_id": 50, "text": " P(\\theta,t \\mid \\theta_0) = \\frac {1}{\\sqrt{4\\pi D^rt}} \\sum_{n=-\\infty}^\\infty \\exp \\left [- \\frac{(\\theta-\\theta_0-2n\\pi)^2}{4D^rt} \\right ] " }, { "math_id": 51, "text": " P(\\theta,t \\mid \\theta_0) \\approx \\frac {1}{\\sqrt{4\\pi D^rt}} \\exp \\left [ - \\frac{(\\theta-\\theta_0)^2} {4D^rt} \\right ] + \\cdots " }, { "math_id": 52, "text": " P(\\theta,t \\mid \\theta_0) \\approx \\frac{1}{2\\pi}, t \\rightarrow \\infty " }, { "math_id": 53, "text": " \\theta + (2 \\pi) = \\theta " }, { "math_id": 54, "text": " 2 \\pi " }, { "math_id": 55, "text": " \\frac {1}{2\\pi} " } ]
https://en.wikipedia.org/wiki?curid=10985744
10986798
Molecular symmetry
Symmetry of molecules of chemical compounds In chemistry, molecular symmetry describes the symmetry present in molecules and the classification of these molecules according to their symmetry. Molecular symmetry is a fundamental concept in chemistry, as it can be used to predict or explain many of a molecule's chemical properties, such as whether or not it has a dipole moment, as well as its allowed spectroscopic transitions. To do this it is necessary to use group theory. This involves classifying the states of the molecule using the irreducible representations from the character table of the symmetry group of the molecule. Symmetry is useful in the study of molecular orbitals, with applications to the Hückel method, to ligand field theory, and to the Woodward-Hoffmann rules. Many university level textbooks on physical chemistry, quantum chemistry, spectroscopy and inorganic chemistry discuss symmetry. Another framework on a larger scale is the use of crystal systems to describe crystallographic symmetry in bulk materials. There are many techniques for determining the symmetry of a given molecule, including X-ray crystallography and various forms of spectroscopy. Spectroscopic notation is based on symmetry considerations. Point group symmetry concepts. Elements. The point group symmetry of a molecule is defined by the presence or absence of 5 types of symmetry element. Operations. The five symmetry elements have associated with them five types of symmetry operation, which leave the geometry of the molecule indistinguishable from the starting geometry. They are sometimes distinguished from symmetry elements by a caret or circumflex. Thus, "Ĉ""n" is the rotation of a molecule around an axis and "Ê" is the identity operation. A symmetry element can have more than one symmetry operation associated with it. For example, the "C"4 axis of the square xenon tetrafluoride (XeF4) molecule is associated with two "Ĉ"4 rotations in opposite directions (90° and 270°), a "Ĉ"2 rotation (180°) and "Ĉ"1 (0° or 360°). Because "Ĉ"1 is equivalent to "Ê", "Ŝ"1 to σ and "Ŝ"2 to "î", all symmetry operations can be classified as either proper or improper rotations. For linear molecules, either clockwise or counterclockwise rotation about the molecular axis by any angle Φ is a symmetry operation. Symmetry groups. Groups. The symmetry operations of a molecule (or other object) form a group. In mathematics, a group is a set with a binary operation that satisfies the four properties listed below. In a symmetry group, the group elements are the symmetry operations (not the symmetry elements), and the binary combination consists of applying first one symmetry operation and then the other. An example is the sequence of a "C"4 rotation about the z-axis and a reflection in the xy-plane, denoted σ(xy)"C"4. By convention the order of operations is from right to left. A symmetry group obeys the defining properties of any group. The "order" of a group is the number of elements in the group. For groups of small orders, the group properties can be easily verified by considering its composition table, a table whose rows and columns correspond to elements of the group and whose entries correspond to their products. Point groups and permutation-inversion groups. The successive application (or "composition") of one or more symmetry operations of a molecule has an effect equivalent to that of some single symmetry operation of the molecule. 
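This closure can be checked directly with matrices. The short script below is a sketch; it assumes the common orientation convention with the "C"2 axis along z and the mirror planes taken as xz and yz, matching the "C"2v matrices quoted later in this article. It multiplies every pair of "C"2v operations and reports which single operation each product equals:

```python
import numpy as np

# C2v operations as 3x3 Cartesian matrices (C2 along z; mirror planes xz and yz).
ops = {
    "E":       np.diag([ 1,  1, 1]),
    "C2":      np.diag([-1, -1, 1]),
    "sv(xz)":  np.diag([ 1, -1, 1]),
    "sv'(yz)": np.diag([-1,  1, 1]),
}

def name_of(matrix):
    """Return the name of the C2v operation equal to the given matrix."""
    for name, m in ops.items():
        if np.array_equal(m, matrix):
            return name
    raise ValueError("product is not a C2v operation")

# Composition table: the entry in row A, column B is the single operation equal to A applied after B.
for a_name, a in ops.items():
    products = [name_of(a @ b) for b in ops.values()]
    print(f"{a_name:>8} : {products}")
```

Every product lands back inside the set of four operations, which is the closure property required of a group.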
For example, a "C"2 rotation followed by a σv reflection is seen to be a σv' symmetry operation: σv*"C"2 = σv'. ("Operation "A" followed by "B" to form "C"" is written "BA" = "C"). Moreover, the set of all symmetry operations (including this composition operation) obeys all the properties of a group, given above. So ("S","*") is a group, where "S" is the set of all symmetry operations of some molecule, and * denotes the composition (repeated application) of symmetry operations. This group is called the point group of that molecule, because the set of symmetry operations leave at least one point fixed (though for some symmetries an entire axis or an entire plane remains fixed). In other words, a point group is a group that summarises all symmetry operations that all molecules in that category have. The symmetry of a crystal, by contrast, is described by a space group of symmetry operations, which includes translations in space. One can determine the symmetry operations of the point group for a particular molecule by considering the geometrical symmetry of its molecular model. However, when one uses a point group to classify molecular states, the operations in it are not to be interpreted in the same way. Instead the operations are interpreted as rotating and/or reflecting the vibronic (vibration-electronic) coordinates and these operations commute with the vibronic Hamiltonian. They are "symmetry operations" for that vibronic Hamiltonian. The point group is used to classify by symmetry the vibronic eigenstates of a rigid molecule. The symmetry classification of the rotational levels, the eigenstates of the full (rotation-vibration-electronic) Hamiltonian, requires the use of the appropriate permutation-inversion group as introduced by Longuet-Higgins. Point groups describe the geometrical symmetry of a molecule whereas permutation-inversion groups describe the energy-invariant symmetry. Examples of point groups. Assigning each molecule a point group classifies molecules into categories with similar symmetry properties. For example, PCl3, POF3, XeO3, and NH3 all share identical symmetry operations. They all can undergo the identity operation "E", two different "C"3 rotation operations, and three different σv plane reflections without altering their identities, so they are placed in one point group, "C"3v, with order 6. Similarly, water (H2O) and hydrogen sulfide (H2S) also share identical symmetry operations. They both undergo the identity operation "E", one "C"2 rotation, and two σv reflections without altering their identities, so they are both placed in one point group, "C"2v, with order 4. This classification system helps scientists to study molecules more efficiently, since chemically related molecules in the same point group tend to exhibit similar bonding schemes, molecular bonding diagrams, and spectroscopic properties. Point group symmetry describes the symmetry of a molecule when fixed at its equilibrium configuration in a particular electronic state. It does not allow for tunneling between minima nor for the change in shape that can come about from the centrifugal distortion effects of molecular rotation. Common point groups. The following table lists many of the point groups applicable to molecules, labelled using the Schoenflies notation, which is common in chemistry and molecular spectroscopy. The descriptions include common shapes of molecules, which can be explained by the VSEPR model. 
In each row, the descriptions and examples have no higher symmetries, meaning that the named point group captures "all" of the point symmetries. Representations. A set of matrices that multiply together in a way that mimics the multiplication table of the elements of a group is called a representation of the group. For example, for the "C"2v point group, the following three matrices are part of a representation of the group: formula_1 Although an infinite number of such representations exist, the irreducible representations (or "irreps") of the group are all that are needed as all other representations of the group can be described as a direct sum of the irreducible representations. Also, the irreducibile representations are those matrix representations in which the matrices are in their most diagonal form possible. Character tables. For any group, its character table gives a tabulation (for the classes of the group) of the characters (the sum of the diagonal elements) of the matrices of all the irreducible representations of the group. As the number of irreducible representations equals the number of classes, the character table is square. The representations are labeled according to a set of conventions: The tables also capture information about how the Cartesian basis vectors, rotations about them, and quadratic functions of them transform by the symmetry operations of the group, by noting which irreducible representation transforms in the same way. These indications are conventionally on the righthand side of the tables. This information is useful because chemically important orbitals (in particular "p" and "d" orbitals) have the same symmetries as these entities. The character table for the "C"2v symmetry point group is given below: Consider the example of water (H2O), which has the "C"2v symmetry described above. The 2"p"x orbital of oxygen has B1 symmetry as in the fourth row of the character table above, with x in the sixth column). It is oriented perpendicular to the plane of the molecule and switches sign with a "C"2 and a σv'(yz) operation, but remains unchanged with the other two operations (obviously, the character for the identity operation is always +1). This orbital's character set is thus {1, −1, 1, −1}, corresponding to the B1 irreducible representation. Likewise, the 2"p"z orbital is seen to have the symmetry of the A1 irreducible representation ("i.e".: none of the symmetry operations change it), 2"p"y B2, and the 3"d"xy orbital A2. These assignments and others are noted in the rightmost two columns of the table. Historical background. Hans Bethe used characters of point group operations in his study of ligand field theory in 1929, and Eugene Wigner used group theory to explain the selection rules of atomic spectroscopy. The first character tables were compiled by László Tisza (1933), in connection to vibrational spectra. Robert Mulliken was the first to publish character tables in English (1933), and E. Bright Wilson used them in 1934 to predict the symmetry of vibrational normal modes. The complete set of 32 crystallographic point groups was published in 1936 by Rosenthal and Murphy. Symmetry of vibrational modes. Each normal mode of molecular vibration has a symmetry which forms a basis for one irreducible representation of the molecular symmetry group. 
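The irreducible representations spanned by the normal modes can be found with the standard reduction formula, n(Γi) = (1/h) Σ g(R) χ(R) χi(R). The short script below applies it to the 3N Cartesian displacements of the water molecule in "C"2v and then removes the translations and rotations. It is a sketch only; the characters of the displacement representation are quoted for the molecule lying in the yz plane, the same orientation convention as used for the orbitals above.

```python
import numpy as np

# C2v character table; classes E, C2(z), sigma_v(xz), sigma_v'(yz); group order h = 4.
irreps = {
    "A1": np.array([1,  1,  1,  1]),
    "A2": np.array([1,  1, -1, -1]),
    "B1": np.array([1, -1,  1, -1]),
    "B2": np.array([1, -1, -1,  1]),
}
h = 4

# Characters of the 3N = 9 Cartesian displacements of H2O lying in the yz plane:
# (unmoved atoms per operation: 3, 1, 1, 3) times (per-atom contribution: 3, -1, 1, 1).
gamma_3N = np.array([9, -1, 1, 3])

def reduce_rep(chi):
    """n_i = (1/h) * sum over operations of chi(R)*chi_i(R); every class has size 1 in C2v."""
    return {name: int(np.dot(chi, chi_i)) // h for name, chi_i in irreps.items()}

n_3N = reduce_rep(gamma_3N)                            # {'A1': 3, 'A2': 1, 'B1': 2, 'B2': 3}
translations = {"A1": 1, "A2": 0, "B1": 1, "B2": 1}    # z, x, y
rotations    = {"A1": 0, "A2": 1, "B1": 1, "B2": 1}    # Rz, Ry, Rx

gamma_vib = {k: n_3N[k] - translations[k] - rotations[k] for k in irreps}
print("Gamma_vib =", gamma_vib)                        # -> 2 A1 + 1 B2
```

The script reproduces the decomposition discussed next, namely two vibrational modes of A1 symmetry and one of B2 symmetry.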
For example, the water molecule has three normal modes of vibration: symmetric stretch in which the two O-H bond lengths vary in phase with each other, asymmetric stretch in which they vary out of phase, and bending in which the bond angle varies. The molecular symmetry of water is C2v with four irreducible representations A1, A2, B1 and B2. The symmetric stretching and the bending modes have symmetry A1, while the asymmetric mode has symmetry B2. The overall symmetry of the three vibrational modes is therefore Γvib = 2A1 + B2. Vibrational modes of ammonia. The molecular symmetry of ammonia (NH3) is C3v, with symmetry operations E, C3 and σv. For N = 4 atoms, the number of vibrational modes for a non-linear molecule is 3N-6 = 6, due to the relative motion of the nitrogen atom and the three hydrogen atoms. All three hydrogen atoms travel symmetrically along the N-H bonds, either in the direction of the nitrogen atom or away from it. This mode is known as symmetric stretch (v₁) and reflects the symmetry in the N-H bond stretching. Of the three vibrational modes, this one has the highest frequency. In the Bending (ν₂) vibration, the nitrogen atom stays on the axis of symmetry, while the three hydrogen atoms move in different directions from one another, leading to changes in the bond angles. The hydrogen atoms move like an umbrella, so this mode is often referred to as the "umbrella mode". There is also an Asymmetric Stretch mode (ν₃) in which one hydrogen atom approaches the nitrogen atom while the other two hydrogens move away. The total number of degrees of freedom for each symmetry species (or irreducible representation) can be determined. Ammonia has four atoms, and each atom is associated with three vector components. The symmetry group C3v for NH3 has the three symmetry species A1, A2 and E. The modes of vibration include the vibrational, rotational and translational modes. Total modes = 3A1 + A2 + 4E. This is a total of 12 modes because each E corresponds to 2 degenerate modes (at the same energy). Rotational modes = A2 + E (3 modes) Translational modes = A1 + E Vibrational modes = Total modes - Rotational modes - Translational modes = 3A1 + A2 + 4E - A2 - E - A1 - E = 2A1 + 2E (6 modes). Symmetry of molecular orbitals. Each molecular orbital also has the symmetry of one irreducible representation. For example, ethylene (C2H4) has symmetry group D2h, and its highest occupied molecular orbital (HOMO) is the bonding pi orbital which forms a basis for its irreducible representation B1u. Molecular rotation and molecular nonrigidity. As discussed above in the section Point groups and permutation-inversion groups, point groups are useful for classifying the vibrational and electronic states of "rigid" molecules (sometimes called "semi-rigid" molecules) which undergo only small oscillations about a single equilibrium geometry. Longuet-Higgins introduced a more general type of symmetry group suitable not only for classifying the vibrational and electronic states of rigid molecules but also for classifying their rotational and nuclear spin states. Further, such groups can be used to classify the states of "non-rigid" (or "fluxional") molecules that tunnel between equivalent geometries (called "versions") and to allow for the distorting effects of molecular rotation. 
These groups are known as "permutation-inversion" groups, because the symmetry operations in them are energetically feasible permutations of identical nuclei, or inversion with respect to the center of mass (the parity operation), or a combination of the two. For example, ethane (C2H6) has three equivalent staggered conformations. Tunneling between the conformations occurs at ordinary temperatures by internal rotation of one methyl group relative to the other. This is not a rotation of the entire molecule about the "C"3 axis. Although each conformation has "D"3d symmetry, as in the table above, description of the internal rotation and associated quantum states and energy levels requires the more complete permutation-inversion group "G"36. Similarly, ammonia (NH3) has two equivalent pyramidal ("C"3v) conformations which are interconverted by the process known as nitrogen inversion. This is not the point group inversion operation i used for centrosymmetric rigid molecules (i.e., the inversion of vibrational displacements and electronic coordinates in the nuclear center of mass) since NH3 has no inversion center and is not centrosymmetric. Rather, it is the inversion of the nuclear and electronic coordinates in the molecular center of mass (sometimes called the parity operation), which happens to be energetically feasible for this molecule. The appropriate permutation-inversion group to be used in this situation is "D"3h(M) which is isomorphic with the point group "D"3h. Additionally, as examples, the methane (CH4) and H3+ molecules have highly symmetric equilibrium structures with "T"d and "D"3h point group symmetries respectively; they lack permanent electric dipole moments but they do have very weak pure rotation spectra because of rotational centrifugal distortion. The permutation-inversion groups required for the complete study of CH4 and H3+ are "T"d(M) and "D"3h(M), respectively. In its ground (N) electronic state the ethylene molecule C2H4 has "D"2h point group symmetry whereas in the excited (V) state it has "D"2d symmetry. To treat these two states together it is necessary to allow torsion and to use the double group of the permutation-inversion group "G"16. A second and less general approach to the symmetry of nonrigid molecules is due to Altmann. In this approach the symmetry groups are known as "Schrödinger supergroups" and consist of two types of operations (and their combinations): (1) the geometric symmetry operations (rotations, reflections, inversions) of rigid molecules, and (2) "isodynamic operations", which take a nonrigid molecule into an energetically equivalent form by a physically reasonable process such as rotation about a single bond (as in ethane) or a molecular inversion (as in ammonia). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\tfrac{360^\\circ} {n} " }, { "math_id": 1, "text": "\n \\underbrace{\n \\begin{bmatrix}\n -1 & 0 & 0 \\\\\n 0 & -1 & 0 \\\\\n 0 & 0 & 1 \\\\\n \\end{bmatrix}\n }_{C_{2}} \\times\n \\underbrace{\n \\begin{bmatrix}\n 1 & 0 & 0 \\\\\n 0 & -1 & 0 \\\\\n 0 & 0 & 1 \\\\\n \\end{bmatrix}\n }_{\\sigma_\\text{v}} = \n \\underbrace{\n \\begin{bmatrix}\n -1 & 0 & 0 \\\\\n 0 & 1 & 0 \\\\\n 0 & 0 & 1 \\\\\n \\end{bmatrix}\n }_{\\sigma'_\\text{v}}\n" } ]
https://en.wikipedia.org/wiki?curid=10986798
10987
Functor
Mapping between categories In mathematics, specifically category theory, a functor is a mapping between categories. Functors were first considered in algebraic topology, where algebraic objects (such as the fundamental group) are associated to topological spaces, and maps between these algebraic objects are associated to continuous maps between spaces. Nowadays, functors are used throughout modern mathematics to relate various categories. Thus, functors are important in all areas within mathematics to which category theory is applied. The words "category" and "functor" were borrowed by mathematicians from the philosophers Aristotle and Rudolf Carnap, respectively. The latter used "functor" in a linguistic context; see function word. Definition. Let "C" and "D" be categories. A functor "F" from "C" to "D" is a mapping that That is, functors must preserve identity morphisms and composition of morphisms. Covariance and contravariance. There are many constructions in mathematics that would be functors but for the fact that they "turn morphisms around" and "reverse composition". We then define a contravariant functor "F" from "C" to "D" as a mapping that Variance of functor (composite) Note that contravariant functors reverse the direction of composition. Ordinary functors are also called covariant functors in order to distinguish them from contravariant ones. Note that one can also define a contravariant functor as a "covariant" functor on the opposite category formula_17. Some authors prefer to write all expressions covariantly. That is, instead of saying formula_18 is a contravariant functor, they simply write formula_19 (or sometimes formula_20) and call it a functor. Contravariant functors are also occasionally called "cofunctors". There is a convention which refers to "vectors"—i.e., vector fields, elements of the space of sections formula_21 of a tangent bundle formula_22—as "contravariant" and to "covectors"—i.e., 1-forms, elements of the space of sections formula_23 of a cotangent bundle formula_24—as "covariant". This terminology originates in physics, and its rationale has to do with the position of the indices ("upstairs" and "downstairs") in expressions such as formula_25 for formula_26 or formula_27 for formula_28 In this formalism it is observed that the coordinate transformation symbol formula_29 (representing the matrix formula_30) acts on the "covector coordinates" "in the same way" as on the basis vectors: formula_31—whereas it acts "in the opposite way" on the "vector coordinates" (but "in the same way" as on the basis covectors: formula_32). This terminology is contrary to the one used in category theory because it is the covectors that have "pullbacks" in general and are thus "contravariant", whereas vectors in general are "covariant" since they can be "pushed forward". See also Covariance and contravariance of vectors. Opposite functor. Every functor formula_18 induces the opposite functor formula_33, where formula_17 and formula_34 are the opposite categories to formula_35 and formula_36. By definition, formula_37 maps objects and morphisms in the identical way as does formula_0. Since formula_17 does not coincide with formula_35 as a category, and similarly for formula_36, formula_37 is distinguished from formula_0. For example, when composing formula_38 with formula_39, one should use either formula_40 or formula_41. Note that, following the property of opposite category, formula_42. Bifunctors and multifunctors. 
A bifunctor (also known as a binary functor) is a functor whose domain is a product category. For example, the Hom functor is of the type "Cop" × "C" → Set. It can be seen as a functor in "two" arguments. The Hom functor is a natural example; it is contravariant in one argument, covariant in the other. A multifunctor is a generalization of the functor concept to "n" variables. So, for example, a bifunctor is a multifunctor with "n" = 2. Properties. Two important consequences of the functor axioms are: One can compose functors, i.e. if "F" is a functor from "A" to "B" and "G" is a functor from "B" to "C" then one can form the composite functor "G" ∘ "F" from "A" to "C". Composition of functors is associative where defined. Identity of composition of functors is the identity functor. This shows that functors can be considered as morphisms in categories of categories, for example in the category of small categories. A small category with a single object is the same thing as a monoid: the morphisms of a one-object category can be thought of as elements of the monoid, and composition in the category is thought of as the monoid operation. Functors between one-object categories correspond to monoid homomorphisms. So in a sense, functors between arbitrary categories are a kind of generalization of monoid homomorphisms to categories with more than one object. Relation to other categorical concepts. Let "C" and "D" be categories. The collection of all functors from "C" to "D" forms the objects of a category: the functor category. Morphisms in this category are natural transformations between functors. Functors are often defined by universal properties; examples are the tensor product, the direct sum and direct product of groups or vector spaces, construction of free groups and modules, direct and inverse limits. The concepts of limit and colimit generalize several of the above. Universal constructions often give rise to pairs of adjoint functors. Computer implementations. Functors sometimes appear in functional programming. For instance, the programming language Haskell has a class codice_0 where codice_1 is a polytypic function used to map functions ("morphisms" on "Hask", the category of Haskell types) between existing types to functions between some new types. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "F(X)" }, { "math_id": 4, "text": "f \\colon X \\to Y" }, { "math_id": 5, "text": "F(f) \\colon F(X) \\to F(Y)" }, { "math_id": 6, "text": "F(\\mathrm{id}_{X}) = \\mathrm{id}_{F(X)}\\,\\!" }, { "math_id": 7, "text": "F(g \\circ f) = F(g) \\circ F(f)" }, { "math_id": 8, "text": "f \\colon X \\to Y\\,\\!" }, { "math_id": 9, "text": "g \\colon Y\\to Z" }, { "math_id": 10, "text": "f \\colon X\\to Y" }, { "math_id": 11, "text": "F(f) \\colon F(Y) \\to F(X)" }, { "math_id": 12, "text": "F(\\mathrm{id}_X) = \\mathrm{id}_{F(X)}\\,\\!" }, { "math_id": 13, "text": "F(g \\circ f) = F(f) \\circ F(g)" }, { "math_id": 14, "text": "\\mathrm{Covariant} \\circ \\mathrm{Covariant} \\to \\mathrm{Covariant}" }, { "math_id": 15, "text": "\\mathrm{Contravariant} \\circ \\mathrm{Contravariant} \\to \\mathrm{Covariant}" }, { "math_id": 16, "text": "\\mathrm{Covariant} \\circ \\mathrm{Contravariant} \\to \\mathrm{Contravariant}\n" }, { "math_id": 17, "text": "C^\\mathrm{op}" }, { "math_id": 18, "text": "F \\colon C\\to D" }, { "math_id": 19, "text": "F \\colon C^{\\mathrm{op}} \\to D" }, { "math_id": 20, "text": "F \\colon C \\to D^{\\mathrm{op}}" }, { "math_id": 21, "text": "\\Gamma(TM)" }, { "math_id": 22, "text": "TM" }, { "math_id": 23, "text": "\\Gamma\\mathord\\left(T^*M\\right)" }, { "math_id": 24, "text": "T^*M" }, { "math_id": 25, "text": "{x'}^{\\,i} = \\Lambda^i_j x^j" }, { "math_id": 26, "text": "\\mathbf{x}' = \\boldsymbol{\\Lambda}\\mathbf{x}" }, { "math_id": 27, "text": "\\omega'_i = \\Lambda^j_i \\omega_j" }, { "math_id": 28, "text": "\\boldsymbol{\\omega}' = \\boldsymbol{\\omega}\\boldsymbol{\\Lambda}^\\textsf{T}." }, { "math_id": 29, "text": "\\Lambda^j_i" }, { "math_id": 30, "text": "\\boldsymbol{\\Lambda}^\\textsf{T}" }, { "math_id": 31, "text": "\\mathbf{e}_i = \\Lambda^j_i\\mathbf{e}_j" }, { "math_id": 32, "text": "\\mathbf{e}^i = \\Lambda^i_j \\mathbf{e}^j" }, { "math_id": 33, "text": "F^\\mathrm{op} \\colon C^\\mathrm{op}\\to D^\\mathrm{op}" }, { "math_id": 34, "text": "D^\\mathrm{op}" }, { "math_id": 35, "text": "C" }, { "math_id": 36, "text": "D" }, { "math_id": 37, "text": "F^\\mathrm{op}" }, { "math_id": 38, "text": "F \\colon C_0\\to C_1" }, { "math_id": 39, "text": "G \\colon C_1^\\mathrm{op}\\to C_2" }, { "math_id": 40, "text": "G\\circ F^\\mathrm{op}" }, { "math_id": 41, "text": "G^\\mathrm{op}\\circ F" }, { "math_id": 42, "text": "\\left(F^\\mathrm{op}\\right)^\\mathrm{op} = F" }, { "math_id": 43, "text": "D \\colon J\\to C" }, { "math_id": 44, "text": "D \\colon C\\to J" }, { "math_id": 45, "text": "U \\subseteq V" }, { "math_id": 46, "text": " f \\colon X \\to Y" }, { "math_id": 47, "text": "U \\in \\mathcal{P}(X)" }, { "math_id": 48, "text": "f(U) \\in \\mathcal{P}(Y)" }, { "math_id": 49, "text": " f \\colon X \\to Y " }, { "math_id": 50, "text": "V \\subseteq Y" }, { "math_id": 51, "text": "f^{-1}(V) \\subseteq X." 
}, { "math_id": 52, "text": "X = \\{0,1\\}" }, { "math_id": 53, "text": "F(X) = \\mathcal{P}(X) = \\{\\{\\}, \\{0\\}, \\{1\\}, X\\}" }, { "math_id": 54, "text": "f(0) = \\{\\}" }, { "math_id": 55, "text": "f(1) = X" }, { "math_id": 56, "text": "F(f)" }, { "math_id": 57, "text": "U" }, { "math_id": 58, "text": "f(U)" }, { "math_id": 59, "text": "\\{\\} \\mapsto f(\\{\\}) = \\{\\}" }, { "math_id": 60, "text": "\\mapsto" }, { "math_id": 61, "text": "(F(f))(\\{\\})= \\{\\}" }, { "math_id": 62, "text": "\n \\{0\\} \\mapsto f(\\{0\\}) = \\{f(0)\\} = \\{\\{\\}\\},\\ " }, { "math_id": 63, "text": "\n \\{1\\} \\mapsto f(\\{1\\}) = \\{f(1)\\} = \\{X\\},\\ " }, { "math_id": 64, "text": "\n \\{0,1\\} \\mapsto f(\\{0,1\\}) = \\{f(0), f(1)\\} = \\{\\{\\}, X\\}.\n" }, { "math_id": 65, "text": "f(\\{0, 1\\})" }, { "math_id": 66, "text": "V \\otimes W" } ]
https://en.wikipedia.org/wiki?curid=10987
10987985
Exact category
In mathematics, specifically in category theory, an exact category is a category equipped with short exact sequences. The concept is due to Daniel Quillen and is designed to encapsulate the properties of short exact sequences in abelian categories without requiring that morphisms actually possess kernels and cokernels, which is necessary for the usual definition of such a sequence. Definition. An exact category E is an additive category possessing a class "E" of "short exact sequences": triples of objects connected by arrows formula_0 satisfying the following axioms inspired by the properties of short exact sequences in an abelian category: formula_1 Admissible monomorphisms are generally denoted formula_13 and admissible epimorphisms are denoted formula_14 These axioms are not minimal; in fact, the last one has been shown by Bernhard Keller (1990) to be redundant. One can speak of an exact functor between exact categories exactly as in the case of exact functors of abelian categories: an exact functor formula_15 from an exact category D to another one E is an additive functor such that if formula_16 is exact in D, then formula_17 is exact in E. If D is a subcategory of E, it is an exact subcategory if the inclusion functor is fully faithful and exact. Motivation. Exact categories come from abelian categories in the following way. Suppose A is abelian and let E be any strictly full additive subcategory which is closed under taking extensions in the sense that given an exact sequence formula_18 in A, then if formula_19 are in E, so is formula_20. We can take the class "E" to be simply the sequences in E which are exact in A; that is, formula_0 is in "E" iff formula_18 is exact in A. Then E is an exact category in the above sense. We verify the axioms: formula_21 and a map formula_3 with formula_4 in E, one verifies that the following sequence is also exact; since E is stable under extensions, this means that formula_22 is in E: formula_23 Conversely, if E is any exact category, we can take A to be the category of left-exact functors from E into the category of abelian groups, which is itself abelian and in which E is a natural subcategory (via the Yoneda embedding, since Hom is left exact), stable under extensions, and in which a sequence is in "E" if and only if it is exact in A. formula_24 is a short exact sequence of abelian groups in which formula_25 are torsion-free, then formula_26 is seen to be torsion-free by the following argument: if formula_27 is a torsion element, then its image in formula_28 is zero, since formula_28 is torsion-free. Thus formula_27 lies in the kernel of the map to formula_28, which is formula_29, but that is also torsion-free, so formula_30. By the construction of #Motivation, Abtf is an exact category; some examples of exact sequences in it are: formula_31 formula_32 where the last example is inspired by de Rham cohomology (formula_33 and formula_34 are the closed and exact differential forms on the circle group); in particular, it is known that the cohomology group is isomorphic to the real numbers. This category is not abelian. formula_24 is an exact sequence in which formula_25 have torsion, then formula_26 naturally has all the torsion elements of formula_29. Thus it is an exact category.
[ { "math_id": 0, "text": "M' \\to M \\to M''\\ " }, { "math_id": 1, "text": " M' \\to M' \\oplus M''\\to M'';" }, { "math_id": 2, "text": "M \\to M''" }, { "math_id": 3, "text": "N \\to M''" }, { "math_id": 4, "text": "N" }, { "math_id": 5, "text": "M' \\to M" }, { "math_id": 6, "text": "M' \\to N" }, { "math_id": 7, "text": "N \\to M" }, { "math_id": 8, "text": "N \\to M \\to M''" }, { "math_id": 9, "text": "M \\to M''." }, { "math_id": 10, "text": "M \\to N" }, { "math_id": 11, "text": "M' \\to M \\to N" }, { "math_id": 12, "text": "M' \\to M." }, { "math_id": 13, "text": "\\rightarrowtail" }, { "math_id": 14, "text": "\\twoheadrightarrow." }, { "math_id": 15, "text": "F" }, { "math_id": 16, "text": "M' \\rightarrowtail M \\twoheadrightarrow M''" }, { "math_id": 17, "text": "F(M') \\rightarrowtail F(M) \\twoheadrightarrow F(M'')" }, { "math_id": 18, "text": "0 \\to M' \\to M \\to M'' \\to 0\\ " }, { "math_id": 19, "text": "M', M''" }, { "math_id": 20, "text": "M" }, { "math_id": 21, "text": "0 \\to M' \\xrightarrow{f} M \\to M'' \\to 0,\\ " }, { "math_id": 22, "text": "M \\times_{M''} N" }, { "math_id": 23, "text": "0 \\to M' \\xrightarrow{(f,0)} M \\times_{M''} N \\to N \\to 0.\\ " }, { "math_id": 24, "text": "0 \\to A \\to B \\to C \\to 0\\ " }, { "math_id": 25, "text": "A, C" }, { "math_id": 26, "text": "B" }, { "math_id": 27, "text": "b" }, { "math_id": 28, "text": "C" }, { "math_id": 29, "text": "A" }, { "math_id": 30, "text": "b = 0" }, { "math_id": 31, "text": "0 \\to \\mathbb{Z} \\xrightarrow{\\left(\\begin{smallmatrix} 1 \\\\ 2 \\end{smallmatrix}\\right)} \\mathbb{Z}^2 \\xrightarrow{(-2, 1)} \\mathbb{Z} \\to 0," }, { "math_id": 32, "text": "0 \\to d\\Omega^0(S^1) \\to \\Omega^1_c(S^1) \\to H^1_{\\text{dR}}(S^1) \\to 0," }, { "math_id": 33, "text": "\\Omega^1_c(S^1)" }, { "math_id": 34, "text": "d\\Omega^0(S^1)" } ]
https://en.wikipedia.org/wiki?curid=10987985
1098818
Gene expression programming
Evolutionary algorithm In computer programming, gene expression programming (GEP) is an evolutionary algorithm that creates computer programs or models. These computer programs are complex tree structures that learn and adapt by changing their sizes, shapes, and composition, much like a living organism. And like living organisms, the computer programs of GEP are also encoded in simple linear chromosomes of fixed length. Thus, GEP is a genotype–phenotype system, benefiting from a simple genome to keep and transmit the genetic information and a complex phenotype to explore the environment and adapt to it. Background. Evolutionary algorithms use populations of individuals, select individuals according to fitness, and introduce genetic variation using one or more genetic operators. Their use in artificial computational systems dates back to the 1950s where they were used to solve optimization problems (e.g. Box 1957 and Friedman 1959). But it was with the introduction of evolution strategies by Rechenberg in 1965 that evolutionary algorithms gained popularity. A good overview text on evolutionary algorithms is the book "An Introduction to Genetic Algorithms" by Mitchell (1996). Gene expression programming belongs to the family of evolutionary algorithms and is closely related to genetic algorithms and genetic programming. From genetic algorithms it inherited the linear chromosomes of fixed length; and from genetic programming it inherited the expressive parse trees of varied sizes and shapes. In gene expression programming the linear chromosomes work as the genotype and the parse trees as the phenotype, creating a genotype/phenotype system. This genotype/phenotype system is multigenic, thus encoding multiple parse trees in each chromosome. This means that the computer programs created by GEP are composed of multiple parse trees. Because these parse trees are the result of gene expression, in GEP they are called expression trees. Masood Nekoei, et al. utilized this expression programming style in ABC optimization to conduct ABCEP as a method that outperformed other evolutionary algorithms.ABCEP Encoding: the genotype. The genome of gene expression programming consists of a linear, symbolic string or chromosome of fixed length composed of one or more genes of equal size. These genes, despite their fixed length, code for expression trees of different sizes and shapes. An example of a chromosome with two genes, each of size 9, is the string (position zero indicates the start of each gene): codice_0 codice_1 where “L” represents the natural logarithm function and “a”, “b”, “c”, and “d” represent the variables and constants used in a problem. Expression trees: the phenotype. As shown , the genes of gene expression programming have all the same size. However, these fixed length strings code for expression trees of different sizes. This means that the size of the coding regions varies from gene to gene, allowing for adaptation and evolution to occur smoothly. For example, the mathematical expression: formula_0 can also be represented as an expression tree: where "Q” represents the square root function. This kind of expression tree consists of the phenotypic expression of GEP genes, whereas the genes are linear strings encoding these complex structures. For this particular example, the linear string corresponds to: codice_2 codice_3 which is the straightforward reading of the expression tree from top to bottom and from left to right. These linear strings are called k-expressions (from Karva notation). 
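Because a k-expression is just the level-order (breadth-first) listing of its expression tree, decoding and evaluating one takes only a few lines. The sketch below uses the hypothetical k-expression "Q*+-abcd", an illustrative string of the same kind as the one above rather than a quotation of it, which encodes the square root of (a+b)*(c-d):

```python
import math

# Function set: arity and implementation; "Q" is the square root function.
FUNCTIONS = {
    "Q": (1, math.sqrt),
    "+": (2, lambda x, y: x + y),
    "-": (2, lambda x, y: x - y),
    "*": (2, lambda x, y: x * y),
    "/": (2, lambda x, y: x / y),
}

def evaluate_kexpression(kexpr, terminals):
    """Decode a Karva string breadth-first and evaluate the encoded expression tree."""
    # 1. Find the expressed (coding) region: keep reading while read symbols still need arguments.
    needed, used = 1, 0
    while used < needed:
        symbol = kexpr[used]
        needed += FUNCTIONS[symbol][0] if symbol in FUNCTIONS else 0
        used += 1
    coding = kexpr[:used]

    # 2. Assign children level by level: node i's children are the next `arity` positions.
    children, next_child = {}, 1
    for i, symbol in enumerate(coding):
        arity = FUNCTIONS[symbol][0] if symbol in FUNCTIONS else 0
        children[i] = list(range(next_child, next_child + arity))
        next_child += arity

    # 3. Evaluate recursively from the root at position 0.
    def value(i):
        symbol = coding[i]
        if symbol in FUNCTIONS:
            return FUNCTIONS[symbol][1](*(value(c) for c in children[i]))
        return terminals[symbol]

    return value(0)

print(evaluate_kexpression("Q*+-abcd", {"a": 2.0, "b": 7.0, "c": 6.0, "d": 1.0}))
# sqrt((2 + 7) * (6 - 1)) = sqrt(45), roughly 6.708
```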
Going from k-expressions to expression trees is also very simple. For example, the following k-expression: codice_4 codice_5 is composed of two different terminals (the variables “a” and “b”), two different functions of two arguments (“*” and “+”), and a function of one argument (“Q”). Its expression gives: K-expressions and genes. The k-expressions of gene expression programming correspond to the region of genes that gets expressed. This means that there might be sequences in the genes that are not expressed, which is indeed true for most genes. The reason for these noncoding regions is to provide a buffer of terminals so that all k-expressions encoded in GEP genes correspond always to valid programs or expressions. The genes of gene expression programming are therefore composed of two different domains – a head and a tail – each with different properties and functions. The head is used mainly to encode the functions and variables chosen to solve the problem at hand, whereas the tail, while also used to encode the variables, provides essentially a reservoir of terminals to ensure that all programs are error-free. For GEP genes the length of the tail is given by the formula: formula_1 where "h" is the head's length and "n"max is maximum arity. For example, for a gene created using the set of functions F = {Q, +, −, ∗, /} and the set of terminals T = {a, b}, "n"max = 2. And if we choose a head length of 15, then "t" = 15 (2–1) + 1 = 16, which gives a gene length "g" of 15 + 16 = 31. The randomly generated string below is an example of one such gene: codice_6 codice_7 It encodes the expression tree: which, in this case, only uses 8 of the 31 elements that constitute the gene. It's not hard to see that, despite their fixed length, each gene has the potential to code for expression trees of different sizes and shapes, with the simplest composed of only one node (when the first element of a gene is a terminal) and the largest composed of as many nodes as there are elements in the gene (when all the elements in the head are functions with maximum arity). It's also not hard to see that it is trivial to implement all kinds of genetic modification (mutation, inversion, insertion, recombination, and so on) with the guarantee that all resulting offspring encode correct, error-free programs. Multigenic chromosomes. The chromosomes of gene expression programming are usually composed of more than one gene of equal length. Each gene codes for a sub-expression tree (sub-ET) or sub-program. Then the sub-ETs can interact with one another in different ways, forming a more complex program. The figure shows an example of a program composed of three sub-ETs. In the final program the sub-ETs could be linked by addition or some other function, as there are no restrictions to the kind of linking function one might choose. Some examples of more complex linkers include taking the average, the median, the midrange, thresholding their sum to make a binomial classification, applying the sigmoid function to compute a probability, and so on. These linking functions are usually chosen a priori for each problem, but they can also be evolved elegantly and efficiently by the cellular system of gene expression programming. Cells and code reuse. In gene expression programming, homeotic genes control the interactions of the different sub-ETs or modules of the main program. 
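The structural guarantee provided by the head/tail organisation is easy to verify numerically. The sketch below reuses the function set {Q, +, -, *, /} and terminal set {a, b} of the example above (the helper names are illustrative); it generates random 31-symbol genes with h = 15 and checks that the expressed region never runs past the end of the gene:

```python
import random

functions = {"Q": 1, "+": 2, "-": 2, "*": 2, "/": 2}   # symbol -> arity
terminals = ["a", "b"]
h = 15                                  # head length
n_max = max(functions.values())         # maximum arity (2)
t = h * (n_max - 1) + 1                 # tail length (16)
g = h + t                               # gene length (31)

def random_gene():
    head = [random.choice(list(functions) + terminals) for _ in range(h)]
    tail = [random.choice(terminals) for _ in range(t)]
    return "".join(head + tail)

def coding_length(gene):
    """Length of the expressed region (the k-expression) of a gene."""
    needed, used = 1, 0
    while used < needed:
        needed += functions.get(gene[used], 0)
        used += 1
    return used

random.seed(7)
for _ in range(5):
    gene = random_gene()
    n = coding_length(gene)
    assert n <= g                       # every random gene encodes a complete program
    print(gene, "-> expressed symbols:", n)
```

Whatever mix of functions lands in the head, the tail always supplies enough terminals to close the tree.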
The expression of such genes results in different main programs or cells, that is, they determine which genes are expressed in each cell and how the sub-ETs of each cell interact with one another. In other words, homeotic genes determine which sub-ETs are called upon and how often in which main program or cell and what kind of connections they establish with one another. Homeotic genes and the cellular system. Homeotic genes have exactly the same kind of structural organization as normal genes and they are built using an identical process. They also contain a head domain and a tail domain, with the difference that the heads contain now linking functions and a special kind of terminals – genic terminals – that represent the normal genes. The expression of the normal genes results as usual in different sub-ETs, which in the cellular system are called ADFs (automatically defined functions). As for the tails, they contain only genic terminals, that is, derived features generated on the fly by the algorithm. For example, the chromosome in the figure has three normal genes and one homeotic gene and encodes a main program that invokes three different functions a total of four times, linking them in a particular way. From this example it is clear that the cellular system not only allows the unconstrained evolution of linking functions but also code reuse. And it shouldn't be hard to implement recursion in this system. Multiple main programs and multicellular systems. Multicellular systems are composed of more than one homeotic gene. Each homeotic gene in this system puts together a different combination of sub-expression trees or ADFs, creating multiple cells or main programs. For example, the program shown in the figure was created using a cellular system with two cells and three normal genes. The applications of these multicellular systems are multiple and varied and, like the multigenic systems, they can be used both in problems with just one output and in problems with multiple outputs. Other levels of complexity. The head/tail domain of GEP genes (both normal and homeotic) is the basic building block of all GEP algorithms. However, gene expression programming also explores other chromosomal organizations that are more complex than the head/tail structure. Essentially these complex structures consist of functional units or genes with a basic head/tail domain plus one or more extra domains. These extra domains usually encode random numerical constants that the algorithm relentlessly fine-tunes in order to find a good solution. For instance, these numerical constants may be the weights or factors in a function approximation problem (see the GEP-RNC algorithm below); they may be the weights and thresholds of a neural network (see the GEP-NN algorithm below); the numerical constants needed for the design of decision trees (see the GEP-DT algorithm below); the weights needed for polynomial induction; or the random numerical constants used to discover the parameter values in a parameter optimization task. The basic gene expression algorithm. The fundamental steps of the basic gene expression algorithm are listed below in pseudocode: The first four steps prepare all the ingredients that are needed for the iterative loop of the algorithm (steps 5 through 10). Of these preparative steps, the crucial one is the creation of the initial population, which is created randomly using the elements of the function and terminal sets. Populations of programs. 
Like all evolutionary algorithms, gene expression programming works with populations of individuals, which in this case are computer programs. Therefore, some kind of initial population must be created to get things started. Subsequent populations are descendants, via selection and genetic modification, of the initial population. In the genotype/phenotype system of gene expression programming, it is only necessary to create the simple linear chromosomes of the individuals without worrying about the structural soundness of the programs they code for, as their expression always results in syntactically correct programs. Fitness functions and the selection environment. Fitness functions and selection environments (called training datasets in machine learning) are the two facets of fitness and are therefore intricately connected. Indeed, the fitness of a program depends not only on the cost function used to measure its performance but also on the training data chosen to evaluate fitness The selection environment or training data. The selection environment consists of the set of training records, which are also called fitness cases. These fitness cases could be a set of observations or measurements concerning some problem, and they form what is called the training dataset. The quality of the training data is essential for the evolution of good solutions. A good training set should be representative of the problem at hand and also well-balanced, otherwise the algorithm might get stuck at some local optimum. In addition, it is also important to avoid using unnecessarily large datasets for training as this will slow things down unnecessarily. A good rule of thumb is to choose enough records for training to enable a good generalization in the validation data and leave the remaining records for validation and testing. Fitness functions. Broadly speaking, there are essentially three different kinds of problems based on the kind of prediction being made: The first type of problem goes by the name of regression; the second is known as classification, with logistic regression as a special case where, besides the crisp classifications like "Yes" or "No", a probability is also attached to each outcome; and the last one is related to Boolean algebra and logic synthesis. Fitness functions for regression. In regression, the response or dependent variable is numeric (usually continuous) and therefore the output of a regression model is also continuous. So it's quite straightforward to evaluate the fitness of the evolving models by comparing the output of the model to the value of the response in the training data. There are several basic fitness functions for evaluating model performance, with the most common being based on the error or residual between the model output and the actual value. Such functions include the mean squared error, root mean squared error, mean absolute error, relative squared error, root relative squared error, relative absolute error, and others. All these standard measures offer a fine granularity or smoothness to the solution space and therefore work very well for most applications. But some problems might require a coarser evolution, such as determining if a prediction is within a certain interval, for instance less than 10% of the actual value. 
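As a concrete sketch of these two styles of fitness (the function names, the 1000-point scaling and the 10% tolerance are illustrative choices rather than part of any particular GEP implementation):

```python
def mse_fitness(predictions, targets):
    """Smooth fitness: larger is better, based on the mean squared error."""
    mse = sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
    return 1000.0 / (1.0 + mse)              # a perfect model scores 1000

def hits_fitness(predictions, targets, tolerance=0.10):
    """Coarse fitness: the number of predictions within 10% of the actual value."""
    return sum(1 for p, t in zip(predictions, targets)
               if abs(p - t) <= tolerance * abs(t))

targets     = [10.0, 20.0, 30.0, 40.0]
predictions = [11.0, 19.0, 36.0, 41.0]
print(mse_fitness(predictions, targets))     # about 93: penalises every residual smoothly
print(hits_fitness(predictions, targets))    # 3: only counts predictions inside the interval
```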
However, even if one is only interested in counting the hits (that is, a prediction that is within the chosen interval), making populations of models evolve based on just the number of hits each program scores is usually not very efficient due to the coarse granularity of the fitness landscape. Thus the solution usually involves combining these coarse measures with some kind of smooth function such as the standard error measures listed above. Fitness functions based on the correlation coefficient and R-square are also very smooth. For regression problems, these functions work best by combining them with other measures because, by themselves, they only tend to measure correlation, not caring for the range of values of the model output. So by combining them with functions that work at approximating the range of the target values, they form very efficient fitness functions for finding models with good correlation and good fit between predicted and actual values. Fitness functions for classification and logistic regression. The design of fitness functions for classification and logistic regression takes advantage of three different characteristics of classification models. The most obvious is just counting the hits, that is, if a record is classified correctly it is counted as a hit. This fitness function is very simple and works well for simple problems, but for more complex problems or datasets highly unbalanced it gives poor results. One way to improve this type of hits-based fitness function consists of expanding the notion of correct and incorrect classifications. In a binary classification task, correct classifications can be 00 or 11. The "00" representation means that a negative case (represented by "0”) was correctly classified, whereas the "11" means that a positive case (represented by "1”) was correctly classified. Classifications of the type "00" are called true negatives (TN) and "11" true positives (TP). There are also two types of incorrect classifications and they are represented by 01 and 10. They are called false positives (FP) when the actual value is 0 and the model predicts a 1; and false negatives (FN) when the target is 1 and the model predicts a 0. The counts of TP, TN, FP, and FN are usually kept on a table known as the confusion matrix. So by counting the TP, TN, FP, and FN and further assigning different weights to these four types of classifications, it is possible to create smoother and therefore more efficient fitness functions. Some popular fitness functions based on the confusion matrix include sensitivity/specificity, recall/precision, F-measure, Jaccard similarity, Matthews correlation coefficient, and cost/gain matrix which combines the costs and gains assigned to the 4 different types of classifications. These functions based on the confusion matrix are quite sophisticated and are adequate to solve most problems efficiently. But there is another dimension to classification models which is key to exploring more efficiently the solution space and therefore results in the discovery of better classifiers. This new dimension involves exploring the structure of the model itself, which includes not only the domain and range, but also the distribution of the model output and the classifier margin. By exploring this other dimension of classification models and then combining the information about the model with the confusion matrix, it is possible to design very sophisticated fitness functions that allow the smooth exploration of the solution space. 
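The sketch below illustrates one of the confusion-matrix-based measures listed above, sensitivity multiplied by specificity, for a binary classifier. The function names and the choice of this particular combination are illustrative assumptions; as noted, many other combinations (F-measure, Matthews correlation coefficient, cost/gain matrices) are equally valid.

```python
def confusion_counts(predicted, actual):
    """Tally true/false positives and negatives for 0/1 class labels."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    tn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 0)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    return tp, tn, fp, fn

def sensitivity_specificity_fitness(predicted, actual):
    """Sensitivity (true positive rate) times specificity (true negative
    rate); both must be high, which penalizes one-sided classifiers."""
    tp, tn, fp, fn = confusion_counts(predicted, actual)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity * specificity
```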
For instance, one can combine some measure based on the confusion matrix with the mean squared error evaluated between the raw model outputs and the actual values. Or combine the F-measure with the R-square evaluated for the raw model output and the target; or the cost/gain matrix with the correlation coefficient, and so on. More exotic fitness functions that explore model granularity include the area under the ROC curve and rank measure. Also related to this new dimension of classification models, is the idea of assigning probabilities to the model output, which is what is done in logistic regression. Then it is also possible to use these probabilities and evaluate the mean squared error (or some other similar measure) between the probabilities and the actual values, then combine this with the confusion matrix to create very efficient fitness functions for logistic regression. Popular examples of fitness functions based on the probabilities include maximum likelihood estimation and hinge loss. Fitness functions for Boolean problems. In logic there is no model structure (as defined above for classification and logistic regression) to explore: the domain and range of logical functions comprises only 0's and 1's or false and true. So, the fitness functions available for Boolean algebra can only be based on the hits or on the confusion matrix as explained in the section above. Selection and elitism. Roulette-wheel selection is perhaps the most popular selection scheme used in evolutionary computation. It involves mapping the fitness of each program to a slice of the roulette wheel proportional to its fitness. Then the roulette is spun as many times as there are programs in the population in order to keep the population size constant. So, with roulette-wheel selection programs are selected both according to fitness and the luck of the draw, which means that some times the best traits might be lost. However, by combining roulette-wheel selection with the cloning of the best program of each generation, one guarantees that at least the very best traits are not lost. This technique of cloning the best-of-generation program is known as simple elitism and is used by most stochastic selection schemes. Reproduction with modification. The reproduction of programs involves first the selection and then the reproduction of their genomes. Genome modification is not required for reproduction, but without it adaptation and evolution won't take place. Replication and selection. The selection operator selects the programs for the replication operator to copy. Depending on the selection scheme, the number of copies one program originates may vary, with some programs getting copied more than once while others are copied just once or not at all. In addition, selection is usually set up so that the population size remains constant from one generation to another. The replication of genomes in nature is very complex and it took scientists a long time to discover the DNA double helix and propose a mechanism for its replication. But the replication of strings is trivial in artificial evolutionary systems, where only an instruction to copy strings is required to pass all the information in the genome from generation to generation. The replication of the selected programs is a fundamental piece of all artificial evolutionary systems, but for evolution to occur it needs to be implemented not with the usual precision of a copy instruction, but rather with a few errors thrown in. 
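A compact sketch of fitness-proportionate (roulette-wheel) selection combined with simple elitism is shown below. Individuals are abstracted as parallel lists of programs and fitness values, and the exact way the best-of-generation program is cloned is an assumption of this illustration.

```python
import random

def roulette_wheel_select(population, fitnesses, n):
    """Draw n programs with probability proportional to their fitness."""
    return random.choices(population, weights=fitnesses, k=n)

def next_generation(population, fitnesses):
    """Simple elitism: the best program is cloned unchanged and the
    remaining slots are filled by roulette-wheel selection; the selected
    programs would then be handed to the genetic operators."""
    best_index = max(range(len(population)), key=fitnesses.__getitem__)
    selected = roulette_wheel_select(population, fitnesses,
                                     len(population) - 1)
    return [population[best_index]] + selected
```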
Indeed, genetic diversity is created with genetic operators such as mutation, recombination, transposition, inversion, and many others. Mutation. In gene expression programming mutation is the most important genetic operator. It changes genomes by changing an element by another. The accumulation of many small changes over time can create great diversity. In gene expression programming mutation is totally unconstrained, which means that in each gene domain any domain symbol can be replaced by another. For example, in the heads of genes any function can be replaced by a terminal or another function, regardless of the number of arguments in this new function; and a terminal can be replaced by a function or another terminal. Recombination. Recombination usually involves two parent chromosomes to create two new chromosomes by combining different parts from the parent chromosomes. And as long as the parent chromosomes are aligned and the exchanged fragments are homologous (that is, occupy the same position in the chromosome), the new chromosomes created by recombination will always encode syntactically correct programs. Different kinds of crossover are easily implemented either by changing the number of parents involved (there's no reason for choosing only two); the number of split points; or the way one chooses to exchange the fragments, for example, either randomly or in some orderly fashion. For example, gene recombination, which is a special case of recombination, can be done by exchanging homologous genes (genes that occupy the same position in the chromosome) or by exchanging genes chosen at random from any position in the chromosome. Transposition. Transposition involves the introduction of an insertion sequence somewhere in a chromosome. In gene expression programming insertion sequences might appear anywhere in the chromosome, but they are only inserted in the heads of genes. This method guarantees that even insertion sequences from the tails result in error-free programs. For transposition to work properly, it must preserve chromosome length and gene structure. So, in gene expression programming transposition can be implemented using two different methods: the first creates a shift at the insertion site, followed by a deletion at the end of the head; the second overwrites the local sequence at the target site and therefore is easier to implement. Both methods can be implemented to operate between chromosomes or within a chromosome or even within a single gene. Inversion. Inversion is an interesting operator, especially powerful for combinatorial optimization. It consists of inverting a small sequence within a chromosome. In gene expression programming it can be easily implemented in all gene domains and, in all cases, the offspring produced is always syntactically correct. For any gene domain, a sequence (ranging from at least two elements to as big as the domain itself) is chosen at random within that domain and then inverted. Other genetic operators. Several other genetic operators exist and in gene expression programming, with its different genes and gene domains, the possibilities are endless. For example, genetic operators such as one-point recombination, two-point recombination, gene recombination, uniform recombination, gene transposition, root transposition, domain-specific mutation, domain-specific inversion, domain-specific transposition, and so on, are easily implemented and widely used. The GEP-RNC algorithm. 
Numerical constants are essential elements of mathematical and statistical models and therefore it is important to allow their integration in the models designed by evolutionary algorithms. Gene expression programming solves this problem very elegantly through the use of an extra gene domain – the Dc – for handling random numerical constants (RNC). By combining this domain with a special terminal placeholder for the RNCs, a richly expressive system can be created. Structurally, the Dc comes after the tail, has a length equal to the size of the tail "t", and is composed of the symbols used to represent the RNCs. For example, below is shown a simple chromosome composed of only one gene with a head size of 7 (the Dc stretches over positions 15–22): codice_8 codice_9 where the terminal "?” represents the placeholder for the RNCs. This kind of chromosome is expressed exactly as shown, giving: Then the ?'s in the expression tree are replaced from left to right and from top to bottom by the symbols (for simplicity represented by numerals) in the Dc, giving: The values corresponding to these symbols are kept in an array. (For simplicity, the number represented by the numeral indicates the order in the array.) For instance, for the following 10-element array of RNCs: C = {0.611, 1.184, 2.449, 2.98, 0.496, 2.286, 0.93, 2.305, 2.737, 0.755} the expression tree above gives: This elegant structure for handling random numerical constants is at the heart of different GEP systems, such as GEP neural networks and GEP decision trees. Like the basic gene expression algorithm, the GEP-RNC algorithm is also multigenic and its chromosomes are decoded as usual by expressing one gene after another and then linking them all together by the same kind of linking process. The genetic operators used in the GEP-RNC system are an extension to the genetic operators of the basic GEP algorithm (see above), and they all can be straightforwardly implemented in these new chromosomes. On the other hand, the basic operators of mutation, inversion, transposition, and recombination are also used in the GEP-RNC algorithm. Furthermore, special Dc-specific operators such as mutation, inversion, and transposition are also used to aid in a more efficient circulation of the RNCs among individual programs. In addition, there is also a special mutation operator that allows the permanent introduction of variation in the set of RNCs. The initial set of RNCs is randomly created at the beginning of a run, which means that, for each gene in the initial population, a specified number of numerical constants, chosen from a certain range, are randomly generated. Then their circulation and mutation are enabled by the genetic operators. Neural networks. An artificial neural network (ANN or NN) is a computational device that consists of many simple connected units or neurons. The connections between the units are usually weighted by real-valued weights. These weights are the primary means of learning in neural networks and a learning algorithm is usually used to adjust them. Structurally, a neural network has three different classes of units: input units, hidden units, and output units. An activation pattern is presented at the input units and then spreads in a forward direction from the input units through one or more layers of hidden units to the output units. The activation coming into one unit from another unit is multiplied by the weights on the links over which it spreads. 
All incoming activation is then added together and the unit becomes activated only if the incoming result is above the unit's threshold. In summary, the basic components of a neural network are the units, the connections between the units, the weights, and the thresholds. So, in order to fully simulate an artificial neural network one must somehow encode these components in a linear chromosome and then be able to express them in a meaningful way. In GEP neural networks (GEP-NN or GEP nets), the network architecture is encoded in the usual structure of a head/tail domain. The head contains special functions/neurons that activate the hidden and output units (in the GEP context, all these units are more appropriately called functional units) and terminals that represent the input units. The tail, as usual, contains only terminals/input units. Besides the head and the tail, these neural network genes contain two additional domains, Dw and Dt, for encoding the weights and thresholds of the neural network. Structurally, the Dw comes after the tail and its length "dw" depends on the head size "h" and maximum arity "n"max and is evaluated by the formula: formula_2 The Dt comes after Dw and has a length "dt" equal to "t". Both domains are composed of symbols representing the weights and thresholds of the neural network. For each NN-gene, the weights and thresholds are created at the beginning of each run, but their circulation and adaptation are guaranteed by the usual genetic operators of mutation, transposition, inversion, and recombination. In addition, special operators are also used to allow a constant flow of genetic variation in the set of weights and thresholds. For example, below is shown a neural network with two input units ("i"1 and "i"2), two hidden units ("h"1 and "h"2), and one output unit ("o"1). It has a total of six connections with six corresponding weights represented by the numerals 1–6 (for simplicity, the thresholds are all equal to 1 and are omitted): This representation is the canonical neural network representation, but neural networks can also be represented by a tree, which, in this case, corresponds to: where "a” and "b” represent the two inputs "i"1 and "i"2 and "D” represents a function with connectivity two. This function adds all its weighted arguments and then thresholds this activation in order to determine the forwarded output. This output (zero or one in this simple case) depends on the threshold of each unit, that is, if the total incoming activation is equal to or greater than the threshold, then the output is one, zero otherwise. The above NN-tree can be linearized as follows: codice_10 codice_11 where the structure in positions 7–12 (Dw) encodes the weights. The values of each weight are kept in an array and retrieved as necessary for expression. As a more concrete example, below is shown a neural net gene for the exclusive-or problem. It has a head size of 3 and Dw size of 6: codice_10 codice_13 Its expression results in the following neural network: which, for the set of weights: "W" = {−1.978, 0.514, −0.465, 1.22, −1.686, −1.797, 0.197, 1.606, 0, 1.753} it gives: which is a perfect solution to the exclusive-or function. Besides simple Boolean functions with binary inputs and binary outputs, the GEP-nets algorithm can handle all kinds of functions or neurons (linear neuron, tanh neuron, atan neuron, logistic neuron, limit neuron, radial basis and triangular basis neurons, all kinds of step neurons, and so on). 
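As a small illustration of the thresholded unit described above, the sketch below implements a "D"-style neuron (a weighted sum followed by a threshold of 1) and wires two hidden units and one output unit into an exclusive-or network. The particular weights are hypothetical values chosen for this illustration; they are not the weight array quoted earlier, and in GEP-nets such weights would of course be evolved rather than hand-picked.

```python
def D(inputs, weights, threshold=1.0):
    """Weighted sum of the incoming activations, thresholded to 0 or 1."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor_net(a, b):
    """One hypothetical weight assignment that solves exclusive-or:
    h1 acts as OR, h2 as AND, and the output fires for OR-but-not-AND."""
    h1 = D([a, b], [1.0, 1.0])       # OR: fires if either input is 1
    h2 = D([a, b], [0.6, 0.6])       # AND: needs both inputs to reach 1
    return D([h1, h2], [1.1, -2.0])  # fires when h1 is on and h2 is off

for a in (0, 1):
    for b in (0, 1):
        assert xor_net(a, b) == (a ^ b)
```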
Also interesting is that the GEP-nets algorithm can use all these neurons together and let evolution decide which ones work best to solve the problem at hand. So, GEP-nets can be used not only in Boolean problems but also in logistic regression, classification, and regression. In all cases, GEP-nets can be implemented not only with multigenic systems but also cellular systems, both unicellular and multicellular. Furthermore, multinomial classification problems can also be tackled in one go by GEP-nets both with multigenic systems and multicellular systems. Decision trees. Decision trees (DT) are classification models where a series of questions and answers are mapped using nodes and directed edges. Decision trees have three types of nodes: a root node, internal nodes, and leaf or terminal nodes. The root node and all internal nodes represent test conditions for different attributes or variables in a dataset. Leaf nodes specify the class label for all different paths in the tree. Most decision tree induction algorithms involve selecting an attribute for the root node and then making the same kind of informed decision about all the other nodes in the tree. Decision trees can also be created by gene expression programming, with the advantage that all the decisions concerning the growth of the tree are made by the algorithm itself without any kind of human input. There are basically two different types of DT algorithms: one for inducing decision trees with only nominal attributes and another for inducing decision trees with both numeric and nominal attributes. This aspect of decision tree induction also carries to gene expression programming and there are two GEP algorithms for decision tree induction: the evolvable decision trees (EDT) algorithm for dealing exclusively with nominal attributes and the EDT-RNC (EDT with random numerical constants) for handling both nominal and numeric attributes. In the decision trees induced by gene expression programming, the attributes behave as function nodes in the basic gene expression algorithm, whereas the class labels behave as terminals. This means that attribute nodes also have associated with them a specific arity or number of branches that will determine their growth and, ultimately, the growth of the tree. Class labels behave like terminals, which means that for a "k"-class classification task, a terminal set with "k" terminals is used, representing the "k" different classes. The rules for encoding a decision tree in a linear genome are very similar to the rules used to encode mathematical expressions (see above). So, for decision tree induction the genes also have a head and a tail, with the head containing attributes and terminals and the tail containing only terminals. This again ensures that all decision trees designed by GEP are always valid programs. Furthermore, the size of the tail "t" is also dictated by the head size "h" and the number of branches of the attribute with the most branches, "n"max, and is evaluated by the equation: formula_3 For example, consider the decision tree below to decide whether to play outside: It can be linearly encoded as: codice_2 codice_15 where “H” represents the attribute Humidity, “O” the attribute Outlook, “W” the attribute Windy, and “a” and “b” the class labels "Yes" and "No" respectively. Note that the edges connecting the nodes are properties of the data, specifying the type and number of branches of each attribute, and therefore don't have to be encoded. 
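A sketch of how such a linear gene is turned into a tree is given below: the gene is read from left to right and the tree is filled level by level (breadth-first), with each attribute consuming as many symbols as it has branches. The concrete gene string and the attribute arities used here are assumptions for illustration; they mirror, but do not reproduce, the play-outside example above.

```python
from collections import deque

# Assumed branching factors: Humidity and Windy split two ways,
# Outlook three ways; the class labels "a" and "b" are leaves.
ARITY = {"H": 2, "O": 3, "W": 2, "a": 0, "b": 0}

def gene_length(h, n_max):
    """Head of size h plus a tail of size t = h*(n_max - 1) + 1."""
    return h + h * (n_max - 1) + 1

def decode(gene):
    """Build the decision tree level by level from the linear gene."""
    symbols = iter(gene)
    root = {"label": next(symbols), "children": []}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for _ in range(ARITY[node["label"]]):
            child = {"label": next(symbols), "children": []}
            node["children"].append(child)
            queue.append(child)
    return root

# A hypothetical gene with head "HOW" (h = 3, n_max = 3, so length 10).
assert gene_length(3, 3) == 10
tree = decode("HOWabbaaba")
```

The same breadth-first reading is what guarantees that any head/tail gene, whatever the genetic operators have done to it, decodes into a syntactically valid tree.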
The process of decision tree induction with gene expression programming starts, as usual, with an initial population of randomly created chromosomes. Then the chromosomes are expressed as decision trees and their fitness evaluated against a training dataset. According to fitness they are then selected to reproduce with modification. The genetic operators are exactly the same that are used in a conventional unigenic system, for example, mutation, inversion, transposition, and recombination. Decision trees with both nominal and numeric attributes are also easily induced with gene expression programming using the framework described above for dealing with random numerical constants. The chromosomal architecture includes an extra domain for encoding random numerical constants, which are used as thresholds for splitting the data at each branching node. For example, the gene below with a head size of 5 (the Dc starts at position 16): codice_16 codice_17 encodes the decision tree shown below: In this system, every node in the head, irrespective of its type (numeric attribute, nominal attribute, or terminal), has associated with it a random numerical constant, which for simplicity in the example above is represented by a numeral 0–9. These random numerical constants are encoded in the Dc domain and their expression follows a very simple scheme: from top to bottom and from left to right, the elements in Dc are assigned one-by-one to the elements in the decision tree. So, for the following array of RNCs: "C" = {62, 51, 68, 83, 86, 41, 43, 44, 9, 67} the decision tree above results in: which can also be represented more colorfully as a conventional decision tree: Criticism. GEP has been criticized for not being a major improvement over other genetic programming techniques. In many experiments, it did not perform better than existing methods. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sqrt{(a-b)(c+d)} \\, " }, { "math_id": 1, "text": "t = h(n_\\max-1)+1" }, { "math_id": 2, "text": "d_{w} = hn_\\max" }, { "math_id": 3, "text": "t = h(n_\\max-1)+1 \\, " } ]
https://en.wikipedia.org/wiki?curid=1098818
10989
Felix Hausdorff
German mathematician Felix Hausdorff (November 8, 1868 – January 26, 1942) was a German mathematician, pseudonym Paul Mongré ("à mon gré" (Fr.) = "according to my taste"), who is considered to be one of the founders of modern topology and who contributed significantly to set theory, descriptive set theory, measure theory, and functional analysis. Life became difficult for Hausdorff and his family after the Kristallnacht of 1938. The next year he initiated efforts to emigrate to the United States, but was unable to make arrangements to receive a research fellowship. On 26 January 1942, Felix Hausdorff, along with his wife and his sister-in-law, died by suicide by taking an overdose of veronal, rather than comply with German orders to move to the Endenich camp, and there suffer the likely implications, about which he held no illusions. Life. Childhood and youth. Hausdorff's father, the Jewish merchant Louis Hausdorff (1843–1896), moved with his young family to Leipzig in the autumn of 1870, and over time worked at various companies, including a linen- and cotton-goods factory. He was an educated man and had become a Morenu at the age of 14. He wrote several treatises, including a long work on the Aramaic translations of the Bible from the perspective of Talmudic law. Hausdorff's mother, Hedwig (1848–1902), who is also referred to in various documents as Johanna, came from the Jewish Tietz family. From another branch of this family came Hermann Tietz, founder of the first department store, and later co-owner of the department store chain called "Hermann Tietz". During the period of Nazi dictatorship the name was "Aryanised" to Hertie. From 1878 to 1887 Felix Hausdorff attended the Nicolai School in Leipzig, a facility that had a reputation as a hotbed of humanistic education. He was an excellent student, class leader for many years and often recited self-written Latin or German poems at school celebrations. In his later years of high school, choosing a main subject of study was not easy for Hausdorff. Magda Dierkesmann, who was often a guest in the home of Hausdorff in the years 1926–1932, reported in 1967 that: <templatestyles src="Template:Blockquote/styles.css" />His versatile musical talent was so great that only the insistence of his father made him give up his plan to study music and become a composer. He decided to study the natural sciences, and in his graduating class of 1887 he was the only one who achieved the highest possible grade. Degree, doctorate and Habilitation. From 1887 to 1891 Hausdorff studied mathematics and astronomy, mainly in his native city of Leipzig, interrupted by one semester in Freiburg (summer 1888) and Berlin (winter 1888/1889). Surviving testimony from other students depicts him as an extremely versatile and interested young man, who, in addition to the mathematical and astronomical lectures, attended lectures in physics, chemistry and geography, and also lectures on philosophy and history of philosophy, as well as on issues of language, literature and social sciences. In Leipzig he attended lectures on the history of music from musicologist Oscar Paul. His early love of music lasted a lifetime; in Hausdorff's home he held impressive musical evenings with the host at the piano, according to witness statements made by various participants. Even as a student in Leipzig, he was an admirer and connoisseur of the music of Richard Wagner. In later semesters of his studies, Hausdorff was close to Heinrich Bruns (1848–1919). 
Bruns was professor of astronomy and director of the observatory at the University of Leipzig. Under his supervision, Hausdorff graduated in 1891 with a work on the theory of astronomical refraction of light in the atmosphere. Two publications on the same subject followed, and in 1895 his Habilitation also followed with a thesis on the absorbance of light in the atmosphere. These early astronomical works of Hausdorff, despite their excellent mathematical formulation, were ultimately of little importance to the scientific community. For one, the underlying idea of Bruns was later shown to not be viable (there was a need for refraction observations near the astronomical horizon, and as Julius Bauschinger would show, this could not be obtained with the required accuracy). And further, the progress in the direct measurement of atmospheric data (from weather balloon ascents) has since made the painstaking accuracy of this data from refraction observations unnecessary. In the time between defending his PhD and his Habilitation, Hausdorff completed his yearlong military requirement, and worked for two years as a human computer at the observatory in Leipzig. Lecturer in Leipzig. After his Habilitation, Hausdorff became a lecturer at the University of Leipzig where he began extensive teaching in a variety of mathematical areas. In addition to teaching and research in mathematics, he also pursued his literary and philosophical inclinations. A man of varied interests, he often associated with a number of famous writers, artists and publishers such as Hermann Conradi, Richard Dehmel, Otto Erich Hartleben, Gustav Kirstein, Max Klinger, Max Reger and Frank Wedekind. The years of 1897 to 1904 mark the high point of his literary and philosophical creativity, during which time 18 of his 22 pseudonymous works were published, including a book of poetry, a play, an epistemological book and a volume of aphorisms. In 1899 Hausdorff married Charlotte Goldschmidt, the daughter of Jewish doctor Siegismund Goldschmidt. Her stepmother was the famous suffragist and preschool teacher Henriette Goldschmidt. Hausdorff's only child, his daughter Lenore (Nora), was born in 1900; she survived the era of National Socialism and enjoyed a long life, dying in Bonn in 1991. First professorship. In December 1901 Hausdorff was appointed as adjunct associate professor at the University of Leipzig. An often-repeated factoid, that Hausdorff got a call from Göttingen and rejected it, cannot be verified and is most likely wrong. After considering Hausdorff's application to Leipzig, the Dean Kirchner felt compelled to make the following addition to the very positive vote from his colleagues, written by Heinrich Bruns: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The faculty, however, considers itself obliged to report to the Royal Ministry that the above application, considered on November 2nd of this year when a faculty meeting had taken place, was not accepted by all, but with 22 votes to 7. The minority was opposed, because Dr. Hausdorff is of the Mosaic faith. This quote emphasizes the undisguised antisemitism present, which especially took a sharp upturn throughout the German Reich after the stock market crash of 1873. Leipzig was a focus of antisemitic sentiment, especially among the student body, which may well be the reason that Hausdorff did not feel at ease in Leipzig. Another contributing factor may also have been the stresses due to the hierarchical posturing of the Leipzig professors. 
After his Habilitation, Hausdorff wrote other works on optics, on non-Euclidean geometry, and on hypercomplex number systems, as well as two papers on probability theory. However, his main area of work soon became set theory, especially the theory of ordered sets. Initially, it was only out of philosophical interest that Hausdorff began to study Georg Cantor's work, beginning around 1897, but already in 1901 Hausdorff began lecturing on set theory. His was one of the first ever lectures on set theory; only Ernst Zermelo's lectures in Göttingen College during the winter of 1900/1901 were earlier. That same year, he published his first paper on order types in which he examined a generalization of well-orderings called graded order types, where a linear order is graded if no two of its segments share the same order type. He generalized the Cantor–Bernstein theorem, which said the collection of countable order types has the cardinality of the continuum and showed that the collection of all graded types of an idempotent cardinality m has a cardinality of 2m. For the summer semester of 1910 Hausdorff was appointed as professor to the University of Bonn. There he began a lecture series on set theory, which he substantially revised and expanded for the summer semester of 1912. In the summer of 1912 he also began work on his magnum opus, the book "Basics of set theory". It was completed in Greifswald, where Hausdorff had been appointed for the summer semester as full professor in 1913, and was released in April 1914. The University of Greifswald was the smallest of the Prussian universities. The mathematical institute there was also small; during the summer of 1916 and the winter of 1916/17, Hausdorff was the only mathematician in Greifswald. This meant that he was almost fully occupied in teaching basic courses. It was thus a substantial improvement for his academic career when Hausdorff was appointed in 1921 to Bonn. There he was free to teach about wider ranges of topics, and often lectured on his latest research. He gave a particularly noteworthy lecture on probability theory (NL Hausdorff: Capsule 21: Fasz 64) in the summer semester of 1923, in which he grounded the theory of probability in measure-theoretic axiomatic theory, ten years before A. N. Kolmogorov's "Basic concepts of probability theory" (reprinted in full in the collected works, Volume V). In Bonn, Hausdorff was friends and colleagues with Eduard Study, and later with Otto Toeplitz, who were both outstanding mathematicians. Under the Nazi dictatorship and suicide. After the takeover by the National Socialist party, antisemitism became state doctrine. Hausdorff was not initially concerned by the "Law for the Restoration of the Professional Civil Service", adopted in 1933, because he had been a German public servant since before 1914. However, he was not completely spared, as one of his lectures was interrupted by National Socialist student officials. In the winter semester of 1934/1935, there was a working session of the National Socialist German Student Union (NSDStB) at the University of Bonn, which chose "Race and Ethnicity" as their theme for the semester. Hausdorff cancelled his 1934/1935 winter semester Calculus III course on 20 November, and it is assumed that the choice of theme was related to the cancellation of Hausdorff's class, since in his long career as a university lecturer he had always taught his courses through to their end. On March 31, 1935, after some back and forth, Hausdorff was finally given emeritus status. 
No words of thanks were given for his 40 years of successful work in the German higher education system. His academic legacy shows that Hausdorff was still working mathematically during these increasingly difficult times, and continued to follow current developments of interest. He wrote, in addition to the expanded edition of his work on set theory, seven works on topology and descriptive set theory. These were published in Polish magazines: one in "Studia Mathematica", the others in "Fundamenta Mathematicae". He was supported at this time by Erich Bessel-Hagen, a loyal friend to the Hausdorff family who obtained books and magazines from the academic library, which Hausdorff was no longer allowed to enter. A great deal is known about the humiliations to which Hausdorff and his family especially were exposed to after Kristallnacht in 1938. There are many sources, including the letters of Bessel-Hagen. In 1939, Hausdorff asked the mathematician Richard Courant, in vain, for a research fellowship to be able to emigrate into the USA. In mid-1941, the Bonn Jews began to be deported to the "Monastery for Eternal Adoration" in Endenich, Bonn, from which the nuns had been expelled. Transports to death camps in the east occurred later. After Hausdorff, his wife, and his wife's sister, Edith Pappenheim (who was living with them), were ordered in January 1942 to move to the Endenich camp, the three died by suicide on 26 January 1942 by taking an overdose of veronal. Their final resting place is located on the cemetery Poppelsdorf in Bonn. In the time between their placement in temporary camps and his suicide, he gave his handwritten "Nachlass" to the Egyptologist and presbyter Hans Bonnet, who saved as much of them as possible, even despite the destruction of his house by a bomb. Some of his fellow Jews may have had illusions about the camp Endenich, but not Hausdorff. In the estate of Bessel-Hagen, E. Neuenschwander discovered the farewell letter that Hausdorff wrote to his lawyer Hans Wollstein, who was also Jewish. Here is the beginning and end of the letter: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Dear friend Wollstein! If you receive these lines, we (three) have solved the problem in a different manner — in the manner of which you have constantly tried to dissuade us. The feeling of security that you have predicted for us once we would overcome the difficulties of the move, is still eluding us; on the contrary, Endenich may not even be the end! What has happened in recent months against the Jews evokes justified fear that they will not let us live to see a more bearable situation. After thanking friends and, in great composure, expressing his last wishes regarding his funeral and his will, Hausdorff writes: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I am sorry that we cause you yet more effort beyond death, and I am convinced that you are doing what you can do (which perhaps is not very much). Forgive us our desertion! We wish you and all our friends to experience better times. Your truly devoted Felix Hausdorff Unfortunately, this desire was not fulfilled. Hausdorff's lawyer, Wollstein, was murdered in Auschwitz. Hausdorff's library was sold by his son-in-law and sole heir, Arthur König. The portions of Hausdorff's "Nachlass" which could be saved by Hans Bonnet are now in the university and State Library of Bonn. The "Nachlass" is catalogued. Work and reception. Hausdorff as philosopher and writer (Paul Mongré). 
Hausdorff's volume of aphorisms, published in 1897, was his first work published under the pseudonym Paul Mongré. It is entitled "Sant' Ilario: Thoughts from the landscape of Zarathustra". The subtitle plays first on the fact that Hausdorff had completed his book during a recovery stay on the Ligurian coast by Genoa and that in this same area, Friedrich Nietzsche wrote the first two parts of "Thus Spoke Zarathustra"; he also alludes to his spiritual closeness to Nietzsche. In an article on Sant' Ilario in the weekly paper Die Zukunft, Hausdorff acknowledged in his debt to Nietzsche. Hausdorff was not trying to copy or even exceed Nietzsche. "Of Nietzsche imitation no trace", says a contemporary review. He follows Nietzsche in an attempt to liberate individual thinking, to take the liberty of questioning outdated standards. Hausdorff maintained critical distance to the late works of Nietzsche. In his essay on the book "The Will to Power" compiled from notes left in the Nietzsche Archive he says: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;In Nietzsche glows a fanatic. His morality of breeding, erected on our present biological and physiological foundations of knowledge: that could be a world historical scandal against which the Inquisition and witch trials fade into harmless aberrations. His critical standard he took from Nietzsche himself, &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;From the kind, modest, understanding Nietzsche and from the free spirit of the cool, dogma-free, unsystematic skeptic Nietzsche ... In 1898—also under the pseudonym Paul Mongré—Hausdorff published an epistemological experiment titled "Chaos in cosmic selection". The critique of metaphysics put forward in this book had its starting point in Hausdorff's confrontation with Nietzsche's idea of eternal recurrence. Ultimately, it is about destroying "any" kind of metaphysics. Of the world itself, of the "transcendent core of the world"—as Hausdorff puts it—we know nothing and we can know nothing. We must assume "the world itself" as undetermined and undeterminable, as mere chaos. The world of our experience, our cosmos, is the result of the selections that we have made and will always instinctively make according to our capacity for understanding. Seen from that chaos, all other frameworks, other cosmos, are conceivable. That is to say, from the world of our cosmos, one cannot draw any conclusions about the transcendent world. In 1904, in the magazine The New Rundschau, Hausdorff's play appeared, the one-act play "The doctor in his honor". It is a crude satire on the duel and on the traditional concepts of honor and nobility of the Prussian officer corps, which in the developing bourgeois society were increasingly anachronistic. "The doctor in his honor" was Hausdorff's most popular literary work. In 1914–1918 there were numerous performances in more than thirty cities. Hausdorff later wrote an epilogue to the play, but it was not performed at that time. Only in 2006 did this epilogue have its premier at the annual meeting of the German Mathematical Society in Bonn. Besides the works mentioned above, Hausdorff also wrote numerous essays that appeared in some of the leading literary magazines of the time. He also wrote a book of poems, "Ecstasy" (1900). Some of his poems were set to music by Austrian composer Joseph Marx. Theory of ordered sets. 
Hausdorff's entrance into a thorough study of ordered sets was prompted in part by Cantor's continuum problem: where should the cardinal number formula_0 be placed in the sequence formula_1? In a letter to Hilbert on 29 September 1904, he speaks of this problem, "it has plagued me almost like monomania". Hausdorff saw a new strategy to attack the problem in the set formula_2. Cantor had suspected formula_3, but had only been able to show that formula_4. While formula_5 is the "number" of possible well-orderings of a countable set, formula_6 had now emerged as the "number" of all possible orders of such an amount. It was natural, therefore, to study systems that are more specific than orders, but more general than well-orderings. Hausdorff did just that in his first volume of 1901, with the publication of theoretical studies of "graded sets". However, we know from the results of Kurt Gödel and Paul Cohen that this strategy to solve the continuum problem is just as ineffectual as Cantor's strategy, which was aimed at generalizing the Cantor–Bendixson principle from closed sets to general uncountable sets. In 1904 Hausdorff published the recursion named after him, which states that for each non-limit ordinal formula_7 we have formula_8 This formula was, together with a later notion called cofinality introduced by Hausdorff, the basis for all further results for Aleph exponentiation. Hausdorff's excellent knowledge of recurrence formulas of this kind also empowered him to uncover an error in Julius König's lecture at the International Congress of Mathematicians in 1904 in Heidelberg. There König had argued that the continuum cannot be well-ordered, so its cardinality is not an Aleph at all, and thus caused a great stir. The fact that it was Hausdorff who clarified the mistake carries a special significance, since a false impression of the events in Heidelberg lasted for over 50 years. In the years 1906–1909 Hausdorff did his groundbreaking and fundamental work on ordered sets. Of fundamental importance to the whole theory is the concept of cofinality, which Hausdorff introduced. An ordinal is called regular if it is cofinal with any smaller ordinal; otherwise it is called singular. Hausdorff's question, whether there are regular numbers which index a limit ordinal, was the starting point for the theory of inaccessible cardinals. Hausdorff had already noticed that such numbers, if they exist, must be of "exorbitant size". The following theorem due to Hausdorff is also of fundamental importance: for each unbounded and ordered dense set formula_9 there are two uniquely determined regular initial numbers formula_10 so that formula_9 is cofinal with formula_11 and coinitial with formula_12 (where * denotes the inverse order). This theorem provides, for example, a technique to characterize elements and gaps in ordered sets. If formula_13 is a predetermined set of characters (element and gap characters), the question arises whether there are ordered sets whose character set is exactly formula_13. One can easily find a necessary condition for formula_13, but Hausdorff was also able to show that this condition is sufficient. For this one needs a rich reservoir of ordered sets, which Hausdorff was also able to create with his theory of general products and powers. In this reservoir can be found interesting structures like the Hausdorff formula_14 normal-types, in connection with which Hausdorff first formulated the generalized continuum hypothesis. 
Hausdorff's formula_14-sets formed the starting point for the study of the important model theory of saturated structure. Hausdorff's general products and powers of cardinalities led him to study the concept of partially ordered set. The question of whether any ordered subset of a partially ordered set is contained in a maximal ordered subset was answered in the positive by Hausdorff using the well-ordering theorem. This is the Hausdorff maximal principle, which follows from either the well-ordering theorem or the axiom of choice, and as it turned out, is also equivalent to the axiom of choice. Writing in 1908, Arthur Moritz Schoenflies found in his report on set theory that the newer theory of ordered sets (i.e., that which occurred after Cantor's extensions) was almost exclusively due to Hausdorff. The "Magnum Opus": "Principles of set theory". According to previous notions, set theory included not only the general set theory and the theory of sets of points, but also dimension and measure theory. Hausdorff's textbook was the first to present all of set theory in this broad sense, systematically and with full proofs. Hausdorff was aware of how easily the human mind can err while also seeking for rigor and truth, so in the preface of his work he promises: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;… to be as economical as possible with the human privilege of error. This book went far beyond its masterful portrayal of already-known concepts. It also contained a series of important original contributions by the author. The first few chapters deal with the basic concepts of general set theory. In the beginning Hausdorff provides a detailed set algebra with some pioneering new concepts (differences chain, set rings and set fields, formula_15- and formula_16-systems). The introductory paragraphs on sets and their connections included, for example, the modern set-theoretic notion of functions. Chapters 3 to 5 discussed the classical theory of cardinal numbers, order types and ordinals, and in the sixth chapter "Relations between ordered and well-ordered sets" Hausdorff presents, among other things, the most important results of his own research on ordered sets. In the chapters on "point sets"—the topological chapters—Hausdorff developed for the first time, based on the known neighborhood axioms, a systematic theory of topological spaces, where in addition he added the separation axiom later named after him. This theory emerges from a comprehensive synthesis of earlier approaches of other mathematicians and Hausdorff's own reflections on the problem of space. The concepts and theorems of classical point set theory formula_17 are—as far as possible—transferred to the general case, and thus become part of the newly created general or set-theoretic topology. But Hausdorff not only performed this "translation work", but he also developed basic construction methods of topology such as core formation (open core, self-dense core) and shell formation (closure), and he works through the fundamental importance of the concept of an open set (called "area" by him) and of the concept of compactness introduced by Fréchet. He also founded and developed the theory of the connected set, particularly through the introduction of the terms "component" and "quasi-component". With the first Hausdorff countability axiom, and eventually the second, the considered spaces were gradually further specialized. A large class of spaces satisfying the countable first axiom are metric spaces. 
They were introduced in 1906 by Fréchet under the name "classes (E)". The term "metric space" comes from Hausdorff. In "Principles", he developed the theory of metric spaces and systematically enriched it through a series of new concepts: Hausdorff metric, complete, total boundedness, formula_18-connectivity, reducible sets. Fréchet's work is not particularly famous; only through Hausdorff's "Principles" did metric spaces become common knowledge to mathematicians. The chapter on illustrations and the final chapter of "Principles" on measure and integration theory are enriched by the generality of the material and the originality of presentation. Hausdorff's mention of the importance of measure theory for probability had great historical effect, despite its laconic brevity. One finds in this chapter the first correct proof of the strong law of large numbers of Émile Borel. Finally, the appendix contains the single most spectacular result of the whole book, namely Hausdorff's theorem that one cannot define a volume for all bounded subsets of formula_17 for formula_19. The proof is based on Hausdorff's paradoxical ball decomposition, whose production requires the axiom of choice. During the 20th century, it became the standard to build mathematical theories on axiomatic set theory. The creation of axiomatically founded generalized theories, such as general topology, served among other things to single out the common structural core for various specific cases or regions and then set up an abstract theory, which contained all these parts as special cases. This brought a great success in the form of simplification and harmonization, and ultimately brought with itself an economy of thought. Hausdorff himself highlighted this aspect in the "Principles". In the topological chapter, the basic concepts are methodologically a pioneering effort, and they paved the way for the development of modern mathematics. "Principles of set theory" appeared in April 1914, on the eve of the First World War, which dramatically affected scientific life in Europe. Under these circumstances, the effects Hausdorff's book on mathematical thought would not be seen for five to six years after its appearance. After the war, a new generation of young researchers set forth to expand on the abundant suggestions that were included in this work. Undoubtedly, topology was the primary focus of attention. The journal "Fundamenta Mathematicae", founded in Poland in 1920, played a special role in the reception of Hausdorff's ideas. It was one of the first mathematical journals with special emphasis on set theory, topology, the theory of real functions, measure and integration theory, functional analysis, logic, and foundations of mathematics. Across this spectrum, a special focus was placed on topology. Hausdorff's "Principles" was cited in the very first volume of Fundamenta Mathematicae, and through citation counting its influence continued at a remarkable rate. Of the 558 works (Hausdorff's own three works not included), which appeared in the first twenty volumes of Fundamenta Mathematicae from 1920 to 1933, 88 of them cite "Principles". One must also take into account the fact that, as Hausdorff's ideas became increasingly commonplace, so too were they used in a number of works that did not cite them explicitly. The Russian topological school, founded by Paul Alexandroff and Paul Urysohn, was based heavily on Hausdorff's "Principles". 
This is shown by the surviving correspondence in Hausdorff's Nachlass with Urysohn, and especially Alexandroff and Urysohn's "Mémoire sur les multiplicités Cantoriennes", a work the size of a book, in which Urysohn developed dimension theory and "Principles" is cited no fewer than 60 times. After the Second World War there was a strong demand for Hausdorff's book, and there were three reprints at Chelsea from 1949, 1965 and 1978. Descriptive set theory, measure theory and analysis. In 1916, Alexandroff and Hausdorff independently solved the continuum problem for Borel sets: Every Borel set in a complete separable metric space is either countable or has the cardinality of the continuum. This result generalizes the Cantor–Bendixson theorem that such a statement holds for the closed sets of formula_17. For linear formula_20 sets William Henry Young had proved the result in 1903, for formula_21 sets Hausdorff obtained a corresponding result in 1914 in "Principles". The theorem of Alexandroff and Hausdorff was a strong impetus for further development of descriptive set theory. Among the publications of Hausdorff in his time at Greifswald the work "Dimension and outer measure" from 1919 is particularly outstanding. In this work, the concepts were introduced which are now known as Hausdorff measure and the Hausdorff dimension. It has remained highly topical and in later years has been one of the most cited mathematical works from the decade of 1910 to 1920. The concept of Hausdorff dimension is useful for the characterization and comparison of "highly rugged quantities". The concepts of "Dimension and outer measure" have experienced applications and further developments in many areas such as in the theory of dynamical systems, geometric measure theory, the theory of self-similar sets and fractals, the theory of stochastic processes, harmonic analysis, potential theory, and number theory. Significant analytical work of Hausdorff occurred in his second time at Bonn. In "Summation methods and moment sequences I" in 1921, he developed a whole class of summation methods for divergent series, which today are called Hausdorff methods. In Hardy's classic "Divergent Series", an entire chapter is devoted to the Hausdorff method. The classical methods of Hölder and Cesàro proved to be special cases of the Hausdorff method. Every Hausdorff method is given by a moment sequence; in this context Hausdorff gave an elegant solution of the moment problem for a finite interval, bypassing the theory of continued fractions. In his paper "Moment problems for a finite interval" of 1923 he treated more special moment problems, such as those with certain restrictions for generating density formula_22, for instance formula_23. Criteria for solvability and decidability of moment problems occupied Hausdorff for many years, as hundreds of pages of handwritten notes in his Nachlass attest. A significant contribution to the emerging field of functional analysis in the 1920s was Hausdorff's extension of the Riesz-Fischer theorem to formula_24 spaces in his 1923 work "An extension of Parseval's theorem on Fourier series". He proved the inequalities now named after him and W.H. Young. The Hausdorff–Young inequalities became the starting point of major new developments. Hausdorff's book "Set Theory" appeared in 1927. This was declared as a second Edition of "Principles", but it was actually a completely new book. 
Since the scale was significantly reduced due to its appearance in Goschen's teaching library, large parts of the theory of ordered sets and measures and integration theory were removed. In its preface, Hausdorff writes, "Perhaps even more than these deletions the reader will regret the most that, to further save space in point set theory, I have abandoned the topological point of view through which the first edition has apparently acquired many friends, and focused on the simpler theory of metric spaces". In fact, this was an explicit regret of some reviewers of the work. As a kind of compensation Hausdorff showed for the first time the then-current state of descriptive set theory. This fact assured the book almost as intense a reception as "Principles", especially in Fundamenta Mathematicae. As a textbook it was very popular. In 1935 there was an expanded edition published, and this was reprinted by Dover in 1944. An English translation appeared in 1957 with reprints in 1962 and 1967. There was also a Russian edition (1937), although it was only partially a faithful translation, and partly a reworking by Alexandroff and Kolmogorov. In this translation the topological point of view again moved to the forefront. In 1928 a review of "Set Theory" was written by Hans Hahn, who perhaps had the danger of German antisemitism in his mind as he closed his discussion with the following sentence: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;An exemplary depiction in every respect of a difficult and thorny area, a work on par with those which have carried the fame of German science throughout the world, and such that all German mathematicians may be proud of. His last works. In 1938, Hausdorff's last work "Extension of a continuous map" showed that a continuous function from a closed subset formula_25 of a metric space formula_26 can be extended to all of formula_26 (although the image may need to be extended). As a special case, every homeomorphism from formula_25 can be extended to a homeomorphism from formula_26. This work continued research from earlier years. In 1919, in "About semi-continuous functions and their generalization", Hausdorff had, among other things, given another proof of the Tietze extension theorem. In 1930, in "Extending a homeomorphism", he showed the following: Let formula_26 be a metric space, formula_27 a closed subset. If formula_25 is given a new metric without changing the topology, this metric can be extended to the entire space without changing the topology. The work "Graded spaces" appeared in 1935, where Hausdorff discussed spaces which fulfilled the Kuratowski closure axioms up to the axiom of idempotence. These spaces are often also called closure spaces, and Hausdorff used them to study relationships between the Fréchet limit spaces and topological spaces. Hausdorff as name-giver. The name Hausdorff is found throughout mathematics. Among others, these concepts were named after him: In the universities of Bonn and Greifswald, these things were named in his honor: Besides these, in Bonn there is the Hausdorffstraße (Hausdorff Street), where he first lived. (Haus-Nr. 61). In Greifswald there is a Felix-Hausdorff–Straße, where the Institutes for Biochemistry and Physics are located, among others. Since 2011, there is a "Hausdorffweg" (Hausdorff-Way) in the middle of Leipziger Ortsteil Gohlis. The Asteroid 24947 Hausdorff was named after him. Writings. As Paul Mongré. Only a selection of the essays that appeared in text are shown here. 
As Felix Hausdorff. "Hausdorff on Ordered Sets". Trans. and Ed.: Jacob M. Plotkin, American Mathematical Society 2005. Collected works. The "Hausdorff-Edition", edited by E. Brieskorn (†), F. Hirzebruch (†), W. Purkert (all Bonn), R. Remmert (†) (Münster) and E. Scholz (Wuppertal) with the collaboration of over twenty mathematicians, historians, philosophers and scholars, is an ongoing project of the North Rhine-Westphalian Academy of Sciences, Humanities and the Arts to present the works of Hausdorff, with commentary and much additional material. The volumes have been published by Springer-Verlag, Heidelberg. Nine volumes have been published, with volume I split into volumes IA and IB. See the website of the Hausdorff Edition (German) for further information. The volumes are: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\aleph = 2^{\\aleph_0}" }, { "math_id": 1, "text": "\\{\\aleph_{\\alpha}\\}" }, { "math_id": 2, "text": " \\mathrm{card} (T(\\aleph_0)) = \\aleph" }, { "math_id": 3, "text": "\\aleph = \\aleph_1" }, { "math_id": 4, "text": "\\aleph \\geq \\aleph_1" }, { "math_id": 5, "text": "\\aleph_1" }, { "math_id": 6, "text": "\\aleph" }, { "math_id": 7, "text": "\\mu" }, { "math_id": 8, "text": "\\aleph_{\\mu}^{\\aleph_{\\alpha}} = \\aleph_{\\mu} \\; \\aleph_{\\mu -1}^{\\aleph_{\\alpha}}." }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "\\omega_{\\xi}, \\omega_{\\eta}" }, { "math_id": 11, "text": "\\omega_{\\xi}" }, { "math_id": 12, "text": "\\omega_{\\eta}^*" }, { "math_id": 13, "text": "W" }, { "math_id": 14, "text": "\\eta_{\\alpha}" }, { "math_id": 15, "text": "\\delta" }, { "math_id": 16, "text": "\\sigma" }, { "math_id": 17, "text": "\\mathbb{R}^n" }, { "math_id": 18, "text": "\\rho" }, { "math_id": 19, "text": "n \\geq 3" }, { "math_id": 20, "text": "G_{\\delta}" }, { "math_id": 21, "text": "G_{\\delta\\sigma\\delta}" }, { "math_id": 22, "text": "\\varphi(x)" }, { "math_id": 23, "text": "\\varphi(x) \\in L^p[0,1]" }, { "math_id": 24, "text": "L^p" }, { "math_id": 25, "text": "F" }, { "math_id": 26, "text": "E" }, { "math_id": 27, "text": "F \\subseteq E" } ]
https://en.wikipedia.org/wiki?curid=10989
10989485
Restricted partial quotients
Analytic series In mathematics, and more particularly in the analytic theory of regular continued fractions, an infinite regular continued fraction "x" is said to be "restricted", or composed of restricted partial quotients, if the sequence of denominators of its partial quotients is bounded; that is, formula_0 and there is some positive integer "M" such that all the (integral) partial denominators "ai" are less than or equal to "M". Periodic continued fractions. A regular periodic continued fraction consists of a finite initial block of partial denominators followed by a repeating block; if formula_1 then ζ is a quadratic irrational number, and its representation as a regular continued fraction is periodic. Clearly any regular periodic continued fraction consists of restricted partial quotients, since none of the partial denominators can be greater than the largest of "a"0 through "a""k"+"m". Historically, mathematicians studied periodic continued fractions before considering the more general concept of restricted partial quotients. Restricted CFs and the Cantor set. The Cantor set is a set "C" of measure zero from which a complete interval of real numbers can be constructed by simple addition – that is, any real number from the interval can be expressed as the sum of exactly two elements of the set "C". The usual proof of the existence of the Cantor set is based on the idea of punching a "hole" in the middle of an interval, then punching holes in the remaining sub-intervals, and repeating this process "ad infinitum". The process of adding one more partial quotient to a finite continued fraction is in many ways analogous to this process of "punching a hole" in an interval of real numbers. The size of the "hole" is inversely proportional to the next partial denominator chosen – if the next partial denominator is 1, the gap between successive convergents is maximized. To make the following theorems precise, we will consider CF("M"), the set of restricted continued fractions whose values lie in the open interval (0, 1) and whose partial denominators are bounded by a positive integer "M" – that is, formula_2 By making an argument parallel to the one used to construct the Cantor set, two interesting results can be obtained. formula_3 Zaremba's conjecture. Zaremba has conjectured the existence of an absolute constant "A" such that the rationals with partial quotients restricted by "A" include at least one fraction for every (positive integer) denominator. The choice "A" = 5 is compatible with the numerical evidence. Further conjectures reduce that value, in the case of all sufficiently large denominators. Jean Bourgain and Alex Kontorovich have shown that "A" can be chosen so that the conclusion holds for a set of denominators of density 1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
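As a concrete illustration of the definitions above, the short sketch below (written in Python purely for illustration; the function names are ad hoc and not from any standard library) computes the regular continued fraction of a rational number and checks whether its partial denominators are bounded by a given "M", in the spirit of CF("M") and Zaremba's conjecture.

```python
from fractions import Fraction

def continued_fraction(x):
    """Partial denominators [a0, a1, a2, ...] of a rational number x."""
    x = Fraction(x)
    quotients = []
    while True:
        a = x.numerator // x.denominator      # integer part (floor)
        quotients.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac                          # continue with the reciprocal of the fractional part
    return quotients

def is_restricted(quotients, M):
    """True if every partial denominator a1, a2, ... is at most M."""
    return all(a <= M for a in quotients[1:])

print(continued_fraction(Fraction(355, 113)))                      # [3, 7, 16]
print(is_restricted(continued_fraction(Fraction(355, 113)), 5))    # False, since 7 and 16 exceed 5

# Zaremba-style check for one denominator q: is there a numerator p such that
# every partial denominator of p/q is at most A = 5?  (Expected to succeed for small q,
# in line with the numerical evidence mentioned above.)
q = 113
print(any(is_restricted(continued_fraction(Fraction(p, q)), 5) for p in range(1, q)))
```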
[ { "math_id": 0, "text": "x = [a_0;a_1,a_2,\\dots] = a_0 + \\cfrac{1}{a_1 + \\cfrac{1}{a_2 + \\cfrac{1}{a_3 + \\cfrac{1}{a_4 + \\ddots}}}} = a_0 + \\underset{i=1}{\\overset{\\infty}{K}} \\frac{1}{a_i},\\," }, { "math_id": 1, "text": "\n\\zeta = [a_0;a_1,a_2,\\dots,a_k,\\overline{a_{k+1},a_{k+2},\\dots,a_{k+m}}],\\,\n" }, { "math_id": 2, "text": "\n\\mathrm{CF}(M) = \\{[0;a_1,a_2,a_3,\\dots]: 1 \\leq a_i \\leq M \\}.\\,\n" }, { "math_id": 3, "text": "\n(2\\times[0;\\overline{M,1}], 2\\times[0;\\overline{1,M}]) =\n\\left(\\frac{1}{M} \\left[\\sqrt{M^2 + 4M} - M \\right], \\sqrt{M^2 + 4M} - M \\right).\n" }, { "math_id": 4, "text": "{\\scriptstyle[0;\\overline{1,M}]-[0;\\overline{M,1}]\\ge\\frac{1}{2}}" } ]
https://en.wikipedia.org/wiki?curid=10989485
1099256
Intraocular pressure
Fluid pressure inside the eye Intraocular pressure (IOP) is the fluid pressure inside the eye. Tonometry is the method eye care professionals use to measure it. IOP is an important aspect in the evaluation of patients at risk of glaucoma. Most tonometers are calibrated to measure pressure in millimeters of mercury (mmHg). Physiology. Intraocular pressure is determined by the production of aqueous humour by the ciliary body and its drainage via the trabecular meshwork and uveoscleral outflow. This is because the vitreous humour in the posterior segment has a relatively fixed volume and thus does not affect intraocular pressure regulation. An important quantitative relationship (Goldmann's equation) is as follows: formula_0 where formula_1 is the intraocular pressure, formula_2 is the rate of aqueous humour formation, formula_3 is the uveoscleral outflow, formula_4 is the facility of outflow through the trabecular meshwork, and formula_5 is the episcleral venous pressure. The above factors are those that drive IOP. Measurement. Palpation is one of the oldest, simplest, and least expensive methods for approximate IOP measurement; however, it is very inaccurate unless the pressure is very high. Intraocular pressure is measured with a tonometer as part of a comprehensive eye examination. Contact lens sensors have also been used for continuous intraocular pressure monitoring. Measured values of intraocular pressure are influenced by corneal thickness and rigidity. As a result, some forms of refractive surgery (such as photorefractive keratectomy) can cause traditional intraocular pressure measurements to appear normal when in fact the pressure may be abnormally high. A newer transpalpebral and transscleral tonometry method is not influenced by corneal biomechanics and does not need to be adjusted for corneal irregularities, as measurement is done over the upper eyelid and sclera. Classification. Current consensus among ophthalmologists and optometrists defines normal intraocular pressure as that between 10 mmHg and 20 mmHg. The average value of intraocular pressure is 15.5 mmHg, with fluctuations of about 2.75 mmHg. Ocular hypertension (OHT) is defined by intraocular pressure being higher than normal, in the absence of optic nerve damage or visual field loss. Ocular hypotension, hypotony, or ocular hypotony, is typically defined as intraocular pressure equal to or less than 5 mmHg. Such low intraocular pressure could indicate fluid leakage and deflation of the eyeball. Influencing factors. Daily variation. Intraocular pressure varies throughout the night and day. The diurnal variation for normal eyes is between 3 and 6 mmHg, and the variation may increase in glaucomatous eyes. During the night, intraocular pressure may not decrease despite the slower production of aqueous humour. Glaucoma patients' 24-hour IOP profiles may differ from those of healthy individuals. Fitness and exercise. Research on whether exercise affects IOP is inconclusive, with some studies indicating increases and others decreases. Musical instruments. Playing some musical wind instruments has been linked to increases in intraocular pressure. A 2011 study focusing on brass and woodwind instruments observed "temporary and sometimes dramatic elevations and fluctuations in IOP". Another study found that the magnitude of the increase in intraocular pressure correlates with the intraoral resistance associated with the instrument, and linked intermittent elevation of intraocular pressure from playing high-resistance wind instruments to the incidence of visual field loss.
The range of intraoral pressure involved in various classes of ethnic wind instruments, such as Native American flutes, has been shown to be generally lower than that of Western classical wind instruments. Drugs. Intraocular pressure also varies with a number of other factors such as heart rate, respiration, fluid intake, systemic medication and topical drugs. Alcohol and marijuana consumption leads to a transient decrease in intraocular pressure, and caffeine may increase intraocular pressure. Taken orally, glycerol (often mixed with fruit juice to reduce its sweet taste) can cause a rapid, temporary decrease in intraocular pressure. This can be a useful initial emergency treatment of severely elevated pressure. The depolarising muscle relaxant succinylcholine, which is used in anaesthesia, transiently increases IOP by around 10 mmHg for a few minutes. This is significant, for example, if the patient requires anaesthesia for a trauma and has sustained an eye (globe) perforation. The mechanism is not clear, but it is thought to involve contraction of tonic myofibrils and transient dilation of choroidal blood vessels. Ketamine also increases IOP. Significance. Ocular hypertension is the most important risk factor for glaucoma. Intraocular pressure has been measured as an outcome in a systematic review comparing the effect of neuroprotective agents in slowing the progression of open angle glaucoma. Differences in pressure between the two eyes are often clinically significant, and potentially associated with certain types of glaucoma, as well as iritis or retinal detachment. Intraocular pressure may become elevated due to anatomical problems, inflammation of the eye, genetic factors, or as a side-effect from medication. Intraocular pressure obeys the fundamental laws of physics governing fluid pressure. Any kind of intraocular surgery should take intraocular pressure fluctuation into account. A sudden increase in intraocular pressure can lead to intraocular micro-barotrauma and cause ischemic effects and mechanical stress to the retinal nerve fiber layer. A sudden drop in intraocular pressure can lead to intraocular decompression that generates microbubbles, potentially causing multiple micro-emboli and leading to hypoxia, ischemia and damage to the retinal microstructure. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
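As a rough numerical illustration of Goldmann's equation from the Physiology section, the sketch below plugs representative textbook-order values into the formula; the specific numbers used (aqueous formation of about 2.5 µL/min, uveoscleral outflow of about 0.5 µL/min, outflow facility of about 0.3 µL/min per mmHg, episcleral venous pressure of about 9 mmHg) are illustrative assumptions rather than values taken from this article.

```python
def goldmann_iop(aqueous_formation, uveoscleral_outflow, outflow_facility, episcleral_venous_pressure):
    """Estimate intraocular pressure (mmHg) from Goldmann's equation.

    aqueous_formation          -- rate of aqueous humour formation (microlitres/min)
    uveoscleral_outflow        -- uveoscleral outflow (microlitres/min)
    outflow_facility           -- facility of trabecular outflow (microlitres/min per mmHg)
    episcleral_venous_pressure -- episcleral venous pressure (mmHg)
    """
    return (aqueous_formation - uveoscleral_outflow) / outflow_facility + episcleral_venous_pressure

# Illustrative (assumed) values give an IOP of roughly 15.7 mmHg,
# which falls inside the normal 10-20 mmHg range quoted above.
print(round(goldmann_iop(2.5, 0.5, 0.3, 9.0), 1))
```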
[ { "math_id": 0, "text": "P_o = \\frac{F - U}{C} + P_v" }, { "math_id": 1, "text": "P_o" }, { "math_id": 2, "text": "F" }, { "math_id": 3, "text": "U" }, { "math_id": 4, "text": "C" }, { "math_id": 5, "text": "P_v" } ]
https://en.wikipedia.org/wiki?curid=1099256
10993199
Cebeci–Smith model
The Cebeci–Smith model, developed by Tuncer Cebeci and Apollo M. O. Smith in 1967, is a 0-equation eddy viscosity model used in computational fluid dynamics analysis of turbulence in boundary layer flows. The model gives eddy viscosity, formula_0, as a function of the local boundary layer velocity profile. The model is suitable for high-speed flows with thin attached boundary layers, typically present in aerospace applications. Like the Baldwin-Lomax model, it is not suitable for large regions of flow separation and significant curvature or rotation. Unlike the Baldwin-Lomax model, this model requires the determination of a boundary layer edge. Equations. In a two-layer model, the boundary layer is considered to comprise two layers: inner (close to the surface) and outer. The eddy viscosity is calculated separately for each layer and combined using: formula_1 where formula_2 is the smallest distance from the surface where formula_3 is equal to formula_4. The inner-region eddy viscosity is given by: formula_5 where formula_6 with the von Karman constant formula_7 usually being taken as 0.4, and with formula_8 The eddy viscosity in the outer region is given by: formula_9 where formula_10, formula_11 is the displacement thickness, given by formula_12 and "F""K" is the Klebanoff intermittency function given by formula_13
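A sketch of how the two-layer evaluation above might be organised in code is given below (Python with NumPy, chosen only for illustration). The velocity profile, density, viscosity, friction velocity, pressure gradient and boundary-layer edge values are placeholder inputs supplied by the caller, and the ∂V/∂x contribution in the inner-layer expression is neglected for this one-dimensional profile; the mixing length, damping constant, displacement thickness and Klebanoff function follow the expressions quoted above.

```python
import numpy as np

def cebeci_smith_eddy_viscosity(y, U, rho, mu, u_tau, dPdx, U_e, delta):
    """Eddy viscosity profile for the Cebeci-Smith two-layer model (1-D sketch).

    y       -- wall-normal coordinates, ascending from the wall (array)
    U       -- streamwise velocity at those coordinates (array)
    rho, mu -- density and molecular viscosity (taken constant here)
    u_tau   -- friction velocity; dPdx -- streamwise pressure gradient
    U_e, delta -- boundary-layer edge velocity and thickness
    """
    kappa, alpha = 0.4, 0.0168

    # Inner region: van Driest-damped mixing length with pressure-gradient-corrected A+
    dUdy = np.gradient(U, y)                         # dV/dx is neglected in this sketch
    y_plus = rho * u_tau * y / mu
    A_plus = 26.0 / np.sqrt(1.0 + y * dPdx / (rho * u_tau**2))
    ell = kappa * y * (1.0 - np.exp(-y_plus / A_plus))
    mu_t_inner = rho * ell**2 * np.abs(dUdy)

    # Outer region: displacement thickness times Klebanoff intermittency
    integrand = 1.0 - U / U_e
    delta_v_star = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y))  # trapezoidal rule
    F_K = 1.0 / (1.0 + 5.5 * (y / delta)**6)
    mu_t_outer = alpha * rho * U_e * delta_v_star * F_K

    # Below the first crossover the inner value is the smaller one, so taking the
    # pointwise minimum reproduces the inner/outer switch when the curves cross once.
    return np.minimum(mu_t_inner, mu_t_outer)
```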
[ { "math_id": 0, "text": "\\mu_t" }, { "math_id": 1, "text": "\n\\mu_t =\n\\begin{cases}\n{\\mu_t}_\\text{inner} & \\mbox{if } y \\le y_\\text{crossover} \\\\ \n{\\mu_t}_\\text{outer} & \\mbox{if } y > y_\\text{crossover}\n\\end{cases}\n" }, { "math_id": 2, "text": "y_\\text{crossover}" }, { "math_id": 3, "text": "{\\mu_t}_\\text{inner}" }, { "math_id": 4, "text": "{\\mu_t}_\\text{outer}" }, { "math_id": 5, "text": "\n{\\mu_t}_\\text{inner} = \\rho \\ell^2 \\left[\\left(\n \\frac{\\partial U}{\\partial y}\\right)^2 +\n \\left(\\frac{\\partial V}{\\partial x}\\right)^2\n\\right]^{1/2}\n" }, { "math_id": 6, "text": "\n\\ell = \\kappa y \\left( 1 - e^{-y^+/A^+} \\right)\n" }, { "math_id": 7, "text": "\\kappa" }, { "math_id": 8, "text": "\nA^+ = 26\\left[1+y\\frac{dP/dx}{\\rho u_\\tau^2}\\right]^{-1/2}\n" }, { "math_id": 9, "text": "\n{\\mu_t}_\\text{outer} = \\alpha \\rho U_e \\delta_v^* F_K\n" }, { "math_id": 10, "text": "\\alpha=0.0168" }, { "math_id": 11, "text": "\\delta_v^*" }, { "math_id": 12, "text": "\n\\delta_v^* = \\int_0^\\delta \\left(1 - \\frac{U}{U_e}\\right)\\,dy\n" }, { "math_id": 13, "text": "\nF_K = \\left[1 + 5.5 \\left( \\frac{y}{\\delta} \\right)^6\n \\right]^{-1}\n" } ]
https://en.wikipedia.org/wiki?curid=10993199
1099413
Orbital eccentricity
Amount by which an orbit deviates from a perfect circle In astrodynamics, the orbital eccentricity of an astronomical object is a dimensionless parameter that determines the amount by which its orbit around another body deviates from a perfect circle. A value of 0 is a circular orbit, values between 0 and 1 form an elliptic orbit, 1 is a parabolic escape orbit (or capture orbit), and greater than 1 is a hyperbola. The term derives its name from the parameters of conic sections, as every Kepler orbit is a conic section. It is normally used for the isolated two-body problem, but extensions exist for objects following a rosette orbit through the Galaxy. Definition. In a two-body problem with inverse-square-law force, every orbit is a Kepler orbit. The eccentricity of this Kepler orbit is a non-negative number that defines its shape. The eccentricity may take the following values: zero for a circular orbit, between 0 and 1 for an elliptic orbit, exactly 1 for a parabolic trajectory, and greater than 1 for a hyperbolic trajectory. The eccentricity "e" is given by formula_0 where "E" is the total orbital energy, "L" is the angular momentum, "m"red is the reduced mass, and formula_1 is the coefficient of the inverse-square law central force such as in the theory of gravity or electrostatics in classical physics: formula_2 or in the case of a gravitational force: formula_3 where "ε" is the specific orbital energy (total energy divided by the reduced mass), "μ" the standard gravitational parameter based on the total mass, and "h" the specific relative angular momentum (angular momentum divided by the reduced mass). For values of "e" from 0 to 1 the orbit's shape is an increasingly elongated (or flatter) ellipse; for values of "e" from 1 to infinity the orbit is a hyperbola branch making a total turn of 2 arccsc("e"), decreasing from 180 to 0 degrees. Here, the total turn is analogous to the turning number, but for open curves (an angle covered by the velocity vector). The limit case between an ellipse and a hyperbola, when "e" equals 1, is a parabola. Radial trajectories are classified as elliptic, parabolic, or hyperbolic based on the energy of the orbit, not the eccentricity. Radial orbits have zero angular momentum and hence eccentricity equal to one. Keeping the energy constant and reducing the angular momentum, elliptic, parabolic, and hyperbolic orbits each tend to the corresponding type of radial trajectory while "e" tends to 1 (or in the parabolic case, remains 1). For a repulsive force only the hyperbolic trajectory, including the radial version, is applicable. For elliptical orbits, a simple proof shows that formula_4 yields the projection angle of a perfect circle to an ellipse of eccentricity "e". For example, to view the eccentricity of the planet Mercury ("e" = 0.2056), one must simply calculate the inverse sine to find the projection angle of 11.86 degrees. Then, tilting any circular object by that angle, the apparent ellipse of that object projected to the viewer's eye will be of the same eccentricity. Etymology. The word "eccentricity" comes from Medieval Latin "eccentricus", derived from Greek "ekkentros" "out of the center", from "ek-", "out of" + "kentron" "center". "Eccentric" first appeared in English in 1551, with the definition "...a circle in which the earth, sun, etc. deviates from its center". In 1556, five years later, an adjectival form of the word had developed. Calculation. 
The eccentricity of an orbit can be calculated from the orbital state vectors as the magnitude of the eccentricity vector: formula_5 where e is the eccentricity vector, a vector directed from the focus towards periapsis whose magnitude equals the eccentricity. For elliptical orbits it can also be calculated from the periapsis and apoapsis, since formula_6 and formula_7 where a is the length of the semi-major axis. It follows that formula_8 where r_a is the radius at apoapsis (the farthest distance of the orbit from the centre of mass) and r_p is the radius at periapsis (the closest distance). The semi-major axis, a, is also the path-averaged distance to the centre of mass, while the time-averaged distance is a(1 + e²/2). The eccentricity of an elliptical orbit can be used to obtain the ratio of the apoapsis radius to the periapsis radius: formula_9 For Earth, orbital eccentricity "e" ≈, apoapsis is aphelion and periapsis is perihelion, relative to the Sun. For Earth's annual orbit path, the ratio of longest radius (ra) / shortest radius (rp) is formula_10
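As a small worked example of the apoapsis/periapsis formula above, the sketch below (Python, used here only for illustration) computes an eccentricity from the two extreme radii; the Earth figures used, a perihelion of roughly 147.1 million km and an aphelion of roughly 152.1 million km, are approximate values quoted for illustration only.

```python
def eccentricity_from_apsides(r_apoapsis, r_periapsis):
    """Orbital eccentricity e = (r_a - r_p) / (r_a + r_p)."""
    return (r_apoapsis - r_periapsis) / (r_apoapsis + r_periapsis)

# Approximate apsides of Earth's orbit in millions of km (illustrative values):
print(round(eccentricity_from_apsides(152.1, 147.1), 4))   # about 0.0167, a nearly circular orbit
```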
Comet C/1980 E1 has the largest eccentricity of any known hyperbolic comet of solar origin with an eccentricity of 1.057,&lt;ref name="C/1980E1-jpl"&gt;&lt;/ref&gt; and will eventually leave the Solar System. ʻOumuamua is the first interstellar object to be found passing through the Solar System. Its orbital eccentricity of 1.20 indicates that ʻOumuamua has never been gravitationally bound to the Sun. It was discovered 0.2 AU ( km;  mi) from Earth and is roughly 200 meters in diameter. It has an interstellar speed (velocity at infinity) of 26.33 km/s ( mph). Mean average. The mean eccentricity of an object is the average eccentricity as a result of perturbations over a given time period. Neptune currently has an instant (current epoch) eccentricity of , but from 1800 to 2050 has a mean eccentricity of . Climatic effect. Orbital mechanics require that the duration of the seasons be proportional to the area of Earth's orbit swept between the solstices and equinoxes, so when the orbital eccentricity is extreme, the seasons that occur on the far side of the orbit (aphelion) can be substantially longer in duration. Northern hemisphere autumn and winter occur at closest approach (perihelion), when Earth is moving at its maximum velocity—while the opposite occurs in the southern hemisphere. As a result, in the northern hemisphere, autumn and winter are slightly shorter than spring and summer—but in global terms this is balanced with them being longer below the equator. In 2006, the northern hemisphere summer was 4.66 days longer than winter, and spring was 2.9 days longer than autumn due to orbital eccentricity. Apsidal precession also slowly changes the place in Earth's orbit where the solstices and equinoxes occur. This is a slow change in the orbit of Earth, not the axis of rotation, which is referred to as axial precession. The climatic effects of this change are part of the Milankovitch cycles. Over the next years, the northern hemisphere winters will become gradually longer and summers will become shorter. Any cooling effect in one hemisphere is balanced by warming in the other, and any overall change will be counteracted by the fact that the eccentricity of Earth's orbit will be almost halved. This will reduce the mean orbital radius and raise temperatures in both hemispheres closer to the mid-interglacial peak. Exoplanets. Of the many exoplanets discovered, most have a higher orbital eccentricity than planets in the Solar System. Exoplanets found with low orbital eccentricity (near-circular orbits) are very close to their star and are tidally-locked to the star. All eight planets in the Solar System have near-circular orbits. The exoplanets discovered show that the Solar System, with its unusually-low eccentricity, is rare and unique. One theory attributes this low eccentricity to the high number of planets in the Solar System; another suggests it arose because of its unique asteroid belts. A few other multiplanetary systems have been found, but none resemble the Solar System. The Solar System has unique planetesimal systems, which led the planets to have near-circular orbits. Solar planetesimal systems include the asteroid belt, Hilda family, Kuiper belt, Hills cloud, and the Oort cloud. The exoplanet systems discovered have either no planetesimal systems or a very large one. Low eccentricity is needed for habitability, especially advanced life. High multiplicity planet systems are much more likely to have habitable exoplanets. 
The grand tack hypothesis of the Solar System also helps understand its near-circular orbits and other unique features. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "e = \\sqrt{1 + \\frac{2 E L^2}{m_\\text{red}\\, \\alpha ^2}}" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "F = \\frac{\\alpha}{r^2}" }, { "math_id": 3, "text": "e = \\sqrt{1 + \\frac{2 \\varepsilon h^{2}}{\\mu^2}}" }, { "math_id": 4, "text": "\\arcsin(e)" }, { "math_id": 5, "text": "e = \\left | \\mathbf{e} \\right |" }, { "math_id": 6, "text": "r_\\text{p} = a \\, (1 - e )" }, { "math_id": 7, "text": "r_\\text{a} = a \\, (1 + e )\\,," }, { "math_id": 8, "text": "\n\\begin{align} \ne &= \\frac{ r_\\text{a} - r_\\text{p} }{ r_\\text{a} + r_\\text{p} } \\\\ \\, \\\\\n &= \\frac{ r_\\text{a} / r_\\text{p} - 1 }{ r_\\text{a} / r_\\text{p} + 1 } \\\\ \\, \\\\\n &= 1 - \\frac{2}{\\; \\frac{ r_\\text{a} }{ r_\\text{p} } + 1 \\;}\n\\end{align}\n" }, { "math_id": 9, "text": "\\frac{r_\\text{a} }{ r_\\text{p} } = \\frac{\\,a\\,(1 + e)\\,}{\\,a\\,(1 - e)\\, } = \\frac{1 + e }{ 1 - e } " }, { "math_id": 10, "text": " \\frac{\\, r_\\text{a} \\,}{ r_\\text{p} } = \\frac{\\, 1 + e \\,}{ 1 - e } \\text{ ≈ 1.03399 .}" } ]
https://en.wikipedia.org/wiki?curid=1099413
10995628
Tempered representation
Type of representation of a linear semisimple Lie group In mathematics, a tempered representation of a linear semisimple Lie group is a representation that has a basis whose matrix coefficients lie in the L"p" space "L"2+ε("G") for any ε &gt; 0. Formulation. This condition, as just given, is slightly weaker than the condition that the matrix coefficients are square-integrable, in other words lie in "L"2("G"), which would be the definition of a discrete series representation. If "G" is a linear semisimple Lie group with a maximal compact subgroup "K", an admissible representation ρ of "G" is tempered if the above condition holds for the "K"-finite matrix coefficients of ρ. The definition above is also used for more general groups, such as "p"-adic Lie groups and finite central extensions of semisimple real algebraic groups. The definition of "tempered representation" makes sense for arbitrary unimodular locally compact groups, but on groups with infinite centers such as infinite central extensions of semisimple Lie groups it does not behave well and is usually replaced by a slightly different definition. More precisely, an irreducible representation is called tempered if it is unitary when restricted to the center "Z", and the absolute values of the matrix coefficients are in "L"2+ε("G"/"Z"). Tempered representations on semisimple Lie groups were first defined and studied by Harish-Chandra (using a different but equivalent definition), who showed that they are exactly the representations needed for the Plancherel theorem. They were classified by Knapp and Zuckerman, and used by Langlands in the Langlands classification of irreducible representations of a reductive Lie group "G" in terms of the tempered representations of smaller groups. History. Irreducible tempered representations were identified by Harish-Chandra in his work on harmonic analysis on a semisimple Lie group as those representations that contribute to the Plancherel measure. The original definition of a tempered representation, which has certain technical advantages, is that its Harish-Chandra character should be a "tempered distribution" (see the section about this below). It follows from Harish-Chandra's results that it is equivalent to the more elementary definition given above. Tempered representations also seem to play a fundamental role in the theory of automorphic forms. This connection was probably first realized by Satake (in the context of the Ramanujan-Petersson conjecture) and Robert Langlands and served as a motivation for Langlands to develop his classification scheme for irreducible admissible representations of real and "p"-adic reductive algebraic groups in terms of the tempered representations of smaller groups. The precise conjectures identifying the place of tempered representations in the automorphic spectrum were formulated later by James Arthur and constitute one of the most actively developing parts of the modern theory of automorphic forms. Harmonic analysis. Tempered representations play an important role in the harmonic analysis on semisimple Lie groups. An irreducible unitary representation of a semisimple Lie group "G" is tempered if and only if it is in the support of the Plancherel measure of "G". In other words, tempered representations are precisely the class of representations of "G" appearing in the spectral decomposition of L2 functions on the group (while discrete series representations have a stronger property that an individual representation has a positive spectral measure). 
This stands in contrast with the situation for abelian and more general solvable Lie groups, where a different class of representations is needed to fully account for the spectral decomposition. This can be seen already in the simplest example of the additive group R of the real numbers, for which the matrix elements of the irreducible representations do not fall off to 0 at infinity. In the Langlands program, tempered representations of real Lie groups are those coming from unitary characters of tori by Langlands functoriality. Classification. The irreducible tempered representations of a semisimple Lie group were classified by Knapp and Zuckerman (1976, 1982). In fact they classified a more general class of representations called basic representations. If "P=MAN" is the Langlands decomposition of a cuspidal parabolic subgroup, then a basic representation is defined to be the parabolically induced representation associated to a limit of discrete series representation of "M" and a unitary representation of the abelian group "A". If the limit of discrete series representation is in fact a discrete series representation, then the basic representation is called an induced discrete series representation. Any irreducible tempered representation is a basic representation, and conversely any basic representation is the sum of a finite number of irreducible tempered representations. More precisely, it is a direct sum of 2"r" irreducible tempered representations indexed by the characters of an elementary abelian group "R" of order 2"r" (called the R-group). Any basic representation, and consequently any irreducible tempered representation, is a summand of an induced discrete series representation. However, it is not always possible to represent an irreducible tempered representation as an induced discrete series representation, which is why one considers the more general class of basic representations. So the irreducible tempered representations are just the irreducible basic representations, and can be classified by listing all basic representations and picking out those that are irreducible, in other words those that have trivial R-group. Tempered distributions. Fix a semisimple Lie group "G" with maximal compact subgroup "K". Harish-Chandra defined a distribution on "G" to be tempered if it is defined on the Schwartz space of "G". The Schwartz space is in turn defined to be the space of smooth functions "f" on "G" such that for any real "r" and any function "g" obtained from "f" by acting on the left or right by elements of the universal enveloping algebra of the Lie algebra of "G", the function formula_0 is bounded. Here Ξ is a certain spherical function on "G", invariant under left and right multiplication by "K", and σ is the norm of the log of "p", where an element "g" of "G" is written as "g" = "kp" for "k" in "K" and "p" in "P".
[ { "math_id": 0, "text": "(1+\\sigma)^rg/\\Xi" } ]
https://en.wikipedia.org/wiki?curid=10995628
10995827
Lewin's equation
Heuristic formula to explain determinants of behavior Lewin's equation, "B" = "f"("P", "E"), is a heuristic formula proposed by psychologist Kurt Lewin as an explanation of what determines behavior. Description. The formula states that behavior is a function of the person and their environment: formula_0 where formula_1 is behavior, formula_2 is the person, and formula_3 is the environment. This equation was first presented in Lewin's book, "Principles of Topological Psychology", published in 1936. The equation was proposed as an attempt to unify the different branches of psychology (e.g. child psychology, animal psychology, psychopathology) with a flexible theory applicable to all distinct branches of psychology. This equation is directly related to Lewin's field theory. Field theory is centered on the idea that a person's life space determines their behavior. Thus, the equation was also expressed as "B" = "f"("L"), where "L" is the life space. In Lewin's book, he first presented the equation as "B" = "f"("S"), where behavior is a function of the whole situation ("S"). He then extended this original equation by suggesting that the whole situation could be roughly split into two parts: the person ("P") and the environment ("E"). According to Lewin, "social" behavior, in particular, was the most psychologically interesting and relevant behavior. Lewin held that the variables in the equation (e.g. "P" and "E") could be replaced with the specific, unique situational and personal characteristics of the individual. As a result, he also believed that his formula, while seemingly abstract and theoretical, had distinct concrete applications for psychology. Gestalt influence. Many scholars (and even Lewin himself) have acknowledged the influence of Gestalt psychology on Lewin's work. Lewin's field theory holds that a number of different and competing forces combine to result in the totality of the situation. A single person's behavior may be different in unique situations, as he or she is acting partly in response to these differential forces and factors (e.g. the environment, or "E"): "A physically identical environment can be psychologically different even for the same man in different conditions." Similarly, two different individuals placed in exactly the same situation will not necessarily engage in the same behavior. "Even when from the standpoint of the physicist the environment is identical or nearly identical for a child and for an adult, the psychological situation can be fundamentally different." For this reason, Lewin holds that the person (e.g. "P") must be considered in conjunction with the environment. "P" consists of the entirety of a person (e.g. his or her past, present, future, personality, motivations, desires). All elements within "P" are contained within the life space, and all elements within "P" interact with each other. Lewin emphasizes that the desires and motivations within the person and the situation in its entirety, the sum of all these competing forces, combine to form something larger: the life space. This notion speaks directly to the gestalt idea that the "whole is greater than the sum of its parts." The idea that the parts (e.g. "P" and "E") of the whole (e.g. "S") combine to form an interactive system has been called Lewin's 'dynamic approach,' a term that specifically refers to regarding "the elements of any situation...as parts of a system." Interaction of person and environment. Relative importance of "P" and "E". 
Lewin explicitly stated that either the person or the environment may be more important in particular situations:"Every psychological event depends upon the state of the person and at the same time on the environment, although their relative importance is different in different cases."Thus, Lewin believed he succeeded in creating an applicable theory that was also "flexible enough to do justice to the enormous differences between the various events and organisms." In a sense, he held that it was inappropriate to pick a side on the classic psychological debate of nature versus nurture, as he held that "every scientific psychology must take into account whole situations, "i.e.", the state of both person and environment." Further, Lewin stated that:"The question whether heredity or environment plays the greater part also belongs to this kind of thinking. The transition of the Galilean thinking involved a recognition of the general validity of the thesis: An event is always the result of the interaction of several facts." Specific function linking "P" and "E". Lewin defined an empirical law as "the functional relationship between various facts," where facts are the "different characteristics of an event or situation." In Lewin's original proposal of his equation, he did not specify "how" exactly the person and the environment interact to produce behavior. Some scholars have noted that Lewin's use of the comma in his equation between the "P" and "E" represents Lewin's flexibility and receptiveness to multiple ways that these two may interact. Lewin indeed held that the importance of the person or of the environment may vary on a case-by-case basis. The use of the comma may provide the flexibility to support this assertion. Psychological reality. Lewin differentiates between multiple realities. For example, the psychological reality encompasses everything that an individual perceives and believes to be true. Only what is contained within the psychological reality can affect behavior. In contrast, things that may be outside the psychological reality, such as bits of the physical reality or social reality, has no direct relation to behavior. Lewin states:"The psychological reality...does not depend upon whether or not the content...exists in a physical or social sense...The existence or nonexistence...of a psychological fact are independent of the existence or nonexistence to which its content refers."As a result, the only reality that is contained within the life space is the psychological reality, as this is the reality that has direct consequences for behavior. For example, in "Principles of Topological Psychology", Lewin continually reiterates the sentiment that "the physical reality of the object concerned is not decisive for the degree of psychological reality." Lewin refers to the example of a "child living in a 'magic world.'" Lewin asserts that, for this child, the realities of the 'magic world' are a psychological reality, and thus must be considered as an influence on their subsequent behavior, even though this 'magic world' does not exist within the physical reality. Likewise, scholars familiar with Lewin's work have emphasized that the psychological situation, as defined by Lewin, is strictly composed of those facts which the individual perceives or believes. Principle of contemporaneity. In Lewin's theoretical framework, the whole situation—or the life space, which contains both the person and the environment—is dynamic. 
In order to accurately determine behavior, Lewin's equation holds that one must consider and examine the life space at the exact moment when the behavior occurred. The life space, even moments after such behavior has occurred, is no longer exactly the same as it was when behavior occurred and thus may not accurately represent the whole situation that led to the behavior in the first place. This focus on the present situation represented a departure from many other theories at the time. Most theories tended to focus on looking at an individual's past in order to explain their present behavior, such as Sigmund Freud's psychoanalysis. Lewin's emphasis on the present state of the life space did not preclude the idea that an individual's past may impact the present state of the life space:"[The] influence of the previous history is to be thought of as indirect in dynamic psychology: From the point of view of systematic causation, past events cannot influence present events. Past events can only have a position in the historical causal chains whose interweavings create the present situation."Lewin referred to this concept as the principle of contemporaneity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "B=f(P,E)" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "P" }, { "math_id": 3, "text": "E" } ]
https://en.wikipedia.org/wiki?curid=10995827
10997
Hyphanet
Peer-to-peer Internet platform for censorship-resistant communication Hyphanet (until mid-2023: Freenet) is a peer-to-peer platform for censorship-resistant, anonymous communication. It uses a decentralized distributed data store to keep and deliver information, and has a suite of free software for publishing and communicating on the Web without fear of censorship.&lt;ref name="Peers in a Client/Server World, 2005"&gt;Taylor, Ian J. "From P2P to Web Services and Grids: Peers in a Client/Server World". London: Springer, 2005.&lt;/ref&gt; Both Freenet and some of its associated tools were originally designed by Ian Clarke, who defined Freenet's goal as providing freedom of speech on the Internet with strong anonymity protection. The distributed data store of Freenet is used by many third-party programs and plugins to provide microblogging and media sharing, anonymous and decentralised version tracking, blogging, a generic web of trust for decentralized spam resistance, Shoeshop for using Freenet over sneakernet, and many more. History. The origin of Freenet can be traced to Ian Clarke's student project at the University of Edinburgh, which he completed as a graduation requirement in the summer of 1999. Ian Clarke's resulting unpublished report "A distributed decentralized information storage and retrieval system" (1999) provided foundation for the seminal paper written in collaboration with other researchers, "Freenet: A Distributed Anonymous Information Storage and Retrieval System" (2001). According to CiteSeer, it became one of the most frequently cited computer science articles in 2002. Freenet can provide anonymity on the Internet by storing small encrypted snippets of content distributed on the computers of its users and connecting only through intermediate computers which pass on requests for content and sending them back without knowing the contents of the full file. This is similar to how routers on the Internet route packets without knowing anything about files‍— except Freenet has caching, a layer of strong encryption, and no reliance on centralized structures. This allows users to publish anonymously or retrieve various kinds of information. Release history. Freenet has been under continuous development since 2000. Freenet 0.7, released on 8 May 2008, is a major re-write incorporating a number of fundamental changes. The most fundamental change is support for darknet operation. Version 0.7 offered two modes of operation: a mode in which it connects only to friends, and an opennet-mode in which it connects to any other Freenet user. Both modes can be run simultaneously. When a user switches to pure darknet operation, Freenet becomes very difficult to detect from the outside. The transport layer created for the darknet mode allows communication over restricted routes as commonly found in mesh networks, as long as these connections follow a small-world structure. Other modifications include switching from TCP to UDP, which allows UDP hole punching along with faster transmission of messages between peers in the network. Freenet 0.7.5, released on 12 June 2009, offers a variety of improvements over 0.7. These include reduced memory usage, faster insert and retrieval of content, significant improvements to the FProxy web interface used for browsing freesites, and a large number of smaller bugfixes, performance enhancements, and usability improvements. Version 0.7.5 also shipped with a new version of the Windows installer. 
As of build 1226, released on 30 July 2009, features that have been written include significant security improvements against both attackers acting on the network and physical seizure of the computer running the node. As of build 1468, released on 11 July 2015, the Freenet core stopped using the db4o database and laid the foundation for an efficient interface to the Web of Trust plugin which provides spam resistance. Freenet has always been free software, but until 2011 it required users to install Java. This problem was solved by making Freenet compatible with OpenJDK, a free and open source implementation of the Java Platform. On 11 February 2015, Freenet received the SUMA-Award for "protection against total surveillance". Features and user interface. Freenet served as the model for the Japanese peer to peer file-sharing programs Winny, Share and Perfect Dark, but this model differs from p2p networks such as Bittorrent and emule. Freenet separates the underlying network structure and protocol from how users interact with the network; as a result, there are a variety of ways to access content on the Freenet network. The simplest is via FProxy, which is integrated with the node software and provides a web interface to content on the network. Using FProxy, a user can browse freesites (websites that use normal HTML and related tools, but whose content is stored within Freenet rather than on a traditional web server). The web interface is also used for most configuration and node management tasks. Through the use of separate applications or plugins loaded into the node software, users can interact with the network in other ways, such as forums similar to web forums or Usenet or interfaces more similar to traditional P2P "filesharing" interfaces. While Freenet provides an HTTP interface for browsing freesites, it is not a proxy for the World Wide Web; Freenet can be used to access only the content that has been previously inserted into the Freenet network. In this way, it is more similar to Tor's onion services than to anonymous proxy software like Tor's proxy. Freenet's focus lies on free speech and anonymity. Because of that, Freenet acts differently at certain points that are (directly or indirectly) related to the anonymity part. Freenet attempts to protect the anonymity of both people inserting data into the network (uploading) and those retrieving data from the network (downloading). Unlike file sharing systems, there is no need for the uploader to remain on the network after uploading a file or group of files. Instead, during the upload process, the files are broken into chunks and stored on a variety of other computers on the network. When downloading, those chunks are found and reassembled. Every node on the Freenet network contributes storage space to hold files and bandwidth that it uses to route requests from its peers. As a direct result of the anonymity requirements, the node requesting content does not normally connect directly to the node that has it; instead, the request is routed across several intermediaries, none of which know which node made the request or which one had it. As a result, the total bandwidth required by the network to transfer a file is higher than in other systems, which can result in slower transfers, especially for infrequently accessed content. Since version 0.7, Freenet offers two different levels of security: opennet and darknet. With opennet, users connect to arbitrary other users. 
With darknet, users connect only to "friends" with whom they previously exchanged public keys, called node references. Both modes can be used together. Content. Freenet's founders argue that true freedom of speech comes only with true anonymity and that the beneficial uses of Freenet outweigh its negative uses. Their view is that free speech, in itself, is not in contradiction with any other consideration—the information is not the crime. Freenet attempts to remove the possibility of any group imposing its beliefs or values on any data. Although many states censor communications to different extents, they all share one commonality in that a body must decide what information to censor and what information to allow. What may be acceptable to one group of people may be considered offensive or even dangerous to another. In essence, the purpose of Freenet is to ensure that no one is allowed to decide what is acceptable. Reports of Freenet's use in authoritarian nations are difficult to track due to the very nature of Freenet's goals. One group, "Freenet China", introduced the Freenet software to Chinese users starting in 2001 and distributed it within China through e-mails and on disks after the group's website was blocked by the Chinese authorities on the mainland. It was reported that in 2002 "Freenet China" had several thousand dedicated users. However, Freenet opennet traffic was blocked in China around the 2010s. Technical design. The Freenet file sharing network stores documents and allows them to be retrieved later by an associated key, as is now possible with protocols such as HTTP. The network is designed to be highly survivable. The system has no central servers and is not subject to the control of any one individual or organization, including the designers of Freenet. The codebase size is over 192,000 lines of code. Information stored on Freenet is distributed around the network and stored on several different nodes. Encryption of data and relaying of requests makes it difficult to determine who inserted content into Freenet, who requested that content, or where the content was stored. This protects the anonymity of participants, and also makes it very difficult to censor specific content. Content is stored encrypted, making it difficult for even the operator of a node to determine what is stored on that node. This provides plausible deniability, which, in combination with request relaying, means that safe harbor laws that protect service providers may also protect Freenet node operators. When asked about the topic, Freenet developers defer to the EFF discussion which says that not being able to filter anything is a safe choice. Distributed storage and caching of data. Like Winny, Share and Perfect Dark, Freenet not only transmits data between nodes but actually stores it, working as a huge distributed cache. To achieve this, each node allocates some amount of disk space to store data; this is configurable by the node operator, but is typically several GB (or more). Files on Freenet are typically split into multiple small blocks, with duplicate blocks created to provide redundancy. Each block is handled independently, meaning that a single file may have parts stored on many different nodes. Information flow in Freenet is different from networks like eMule or BitTorrent; in Freenet: Two advantages of this design are high reliability and anonymity. 
Information remains available even if the publisher node goes offline, and is anonymously spread over many hosting nodes as encrypted blocks, not entire files. The key disadvantage of the storage method is that no one node is responsible for any chunk of data. If a piece of data is not retrieved for some time and a node keeps getting new data, it will drop the old data sometime when its allocated disk space is fully used. In this way Freenet tends to 'forget' data which is not retrieved regularly (see also Effect). While users can insert data into the network, there is no way to delete data. Due to Freenet's anonymous nature the original publishing node or owner of any piece of data is unknown. The only way data can be removed is if users don't request it. Network. Typically, a host computer on the network runs the software that acts as a node, and it connects to other hosts running that same software to form a large distributed, variable-size network of peer nodes. Some nodes are end user nodes, from which documents are requested and presented to human users. Other nodes serve only to route data. All nodes communicate with each other identically – there are no dedicated "clients" or "servers". It is not possible for a node to rate another node except by its capacity to insert and fetch data associated with a key. This is unlike most other P2P networks where node administrators can employ a ratio system, where users have to share a certain amount of content before they can download. Freenet may also be considered a small world network. The Freenet protocol is intended to be used on a network of complex topology, such as the Internet (Internet Protocol). Each node knows only about some number of other nodes that it can reach directly (its conceptual "neighbors"), but any node can be a neighbor to any other; no hierarchy or other structure is intended. Each message is routed through the network by passing from neighbor to neighbor until it reaches its destination. As each node passes a message to a neighbor, it does not know whether the neighbor will forward the message to another node, or is the final destination or original source of the message. This is intended to protect the anonymity of users and publishers. Each node maintains a data store containing documents associated with keys, and a routing table associating nodes with records of their performance in retrieving different keys. Protocol. The Freenet protocol uses a key-based routing protocol, similar to distributed hash tables. The routing algorithm changed significantly in version 0.7. Prior to version 0.7, Freenet used a heuristic routing algorithm where each node had no fixed location, and routing was based on which node had served a key closest to the key being fetched (in version 0.3) or which is estimated to serve it faster (in version 0.5). In either case, new connections were sometimes added to downstream nodes (i.e. the node that answered the request) when requests succeeded, and old nodes were discarded in least recently used order (or something close to it). Oskar Sandberg's research (during the development of version 0.7) shows that this "path folding" is critical, and that a very simple routing algorithm will suffice provided there is path folding. The disadvantage of this is that it is very easy for an attacker to find Freenet nodes, and connect to them, because every node is continually attempting to find new connections. 
In version 0.7, Freenet supports both "opennet" (similar to the old algorithms, but simpler), and "darknet" (all node connections are set up manually, so only your friends know your node's IP address). Darknet is less convenient, but much more secure against a distant attacker. This change required major changes in the routing algorithm. Every node has a location, which is a number between 0 and 1. When a key is requested, first the node checks the local data store. If it's not found, the key's hash is turned into another number in the same range, and the request is routed to the node whose location is closest to the key. This goes on until some number of hops is exceeded, there are no more nodes to search, or the data is found. If the data is found, it is cached on each node along the path. So there is no one source node for a key, and attempting to find where it is currently stored will result in it being cached more widely. Essentially the same process is used to insert a document into the network: the data is routed according to the key until it runs out of hops, and if no existing document is found with the same key, it is stored on each node. If older data is found, the older data is propagated and returned to the originator, and the insert "collides". But this works only if the locations are clustered in the right way. Freenet assumes that the darknet (a subset of the global social network) is a small-world network, and nodes constantly attempt to swap locations (using the Metropolis–Hastings algorithm) in order to minimize their distance to their neighbors. If the network actually is a small-world network, Freenet should find data reasonably quickly; ideally on the order of formula_0 hops in Big O notation. However, it does not guarantee that data will be found at all. Eventually, either the document is found or the hop limit is exceeded. The terminal node sends a reply that makes its way back to the originator along the route specified by the intermediate nodes' records of pending requests. The intermediate nodes may choose to cache the document along the way. Besides saving bandwidth, this also makes documents harder to censor as there is no one "source node". Effect. Initially, the locations in darknet are distributed randomly. This means that routing of requests is essentially random. In opennet connections are established by a join request which provides an optimized network structure if the existing network is already optimized. So the data in a newly started Freenet will be distributed somewhat randomly. As location swapping (on darknet) and path folding (on opennet) progress, nodes which are close to one another will increasingly have close locations, and nodes which are far away will have distant locations. Data with similar keys will be stored on the same node. The result is that the network will self-organize into a distributed, clustered structure where nodes tend to hold data items that are close together in key space. There will probably be multiple such clusters throughout the network, any given document being replicated numerous times, depending on how much it is used. This is a kind of "spontaneous symmetry breaking", in which an initially symmetric state (all nodes being the same, with random initial keys for each other) leads to a highly asymmetric situation, with nodes coming to specialize in data that has closely related keys. 
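To make the location-based routing just described more concrete, the sketch below (Python, purely illustrative) greedily forwards a request towards whichever neighbour's location is circularly closest to the key's location on the interval from 0 to 1. It is a toy model of the idea only: the data structures, the hashing of a key to a location, and the absence of backtracking, caching and hops-to-live bookkeeping are simplifications assumed for the example, not Freenet's actual implementation.

```python
import hashlib

def key_location(key_bytes):
    """Map a key to a location in [0, 1) by hashing it (a simplification of Freenet's scheme)."""
    digest = hashlib.sha256(key_bytes).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def circular_distance(a, b):
    """Distance between two locations on the unit circle [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def greedy_route(locations, neighbours, start, key_loc, max_hops=20):
    """Repeatedly step to the neighbour closest to key_loc; stop at a local optimum or the hop limit.

    locations  -- dict mapping node id to its location in [0, 1)
    neighbours -- dict mapping node id to a list of neighbouring node ids
    """
    current, path = start, [start]
    for _ in range(max_hops):
        candidates = neighbours.get(current, [])
        if not candidates:
            break
        best = min(candidates, key=lambda n: circular_distance(locations[n], key_loc))
        if circular_distance(locations[best], key_loc) >= circular_distance(locations[current], key_loc):
            break   # no neighbour is closer to the key than the current node
        current = best
        path.append(current)
    return path
```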
There are forces which tend to cause clustering (shared closeness data spreads throughout the network), and forces that tend to break up clusters (local caching of commonly used data). These forces will be different depending on how often data is used, so that seldom-used data will tend to be on just a few nodes which specialize in providing that data, and frequently used items will be spread widely throughout the network. This automatic mirroring counteracts the times when web traffic becomes overloaded, and due to a mature network's intelligent routing, a network of size "n" should require only log("n") time to retrieve a document on average. Keys. Keys are hashes: there is no notion of semantic closeness when speaking of key closeness. Therefore, there will be no correlation between key closeness and similar popularity of data as there might be if keys did exhibit some semantic meaning, thus avoiding bottlenecks caused by popular subjects. There are two main varieties of keys in use on Freenet, the Content Hash Key (CHK) and the Signed Subspace Key (SSK). A subtype of SSKs is the Updatable Subspace Key (USK) which adds versioning to allow secure updating of content. A CHK is a SHA-256 hash of a document (after encryption, which itself depends on the hash of the plaintext) and thus a node can check that the document returned is correct by hashing it and checking the digest against the key. This key contains the meat of the data on Freenet. It carries all the binary data building blocks for the content to be delivered to the client for reassembly and decryption. The CHK is unique by nature and provides tamperproof content. A hostile node altering the data under a CHK will immediately be detected by the next node or the client. CHKs also reduce the redundancy of data since the same data will have the same CHK and when multiple sites reference the same large files, they can reference to the same CHK. SSKs are based on public-key cryptography. Currently Freenet uses the DSA algorithm. Documents inserted under SSKs are signed by the inserter, and this signature can be verified by every node to ensure that the data is not tampered with. SSKs can be used to establish a verifiable pseudonymous identity on Freenet, and allow for multiple documents to be inserted securely by a single person. Files inserted with an SSK are effectively immutable, since inserting a second file with the same name can cause collisions. USKs resolve this by adding a version number to the keys which is also used for providing update notification for keys registered as bookmarks in the web interface. Another subtype of the SSK is the Keyword Signed Key, or KSK, in which the key pair is generated in a standard way from a simple human-readable string. Inserting a document using a KSK allows the document to be retrieved and decrypted if and only if the requester knows the human-readable string; this allows for more convenient (but less secure) URIs for users to refer to. Scalability. A network is said to be scalable if its performance does not deteriorate even if the network is very large. The scalability of Freenet is being evaluated, but similar architectures have been shown to scale logarithmically. This work indicates that Freenet can find data in formula_1 hops on a small-world network (which includes both opennet and darknet style Freenet networks), when ignoring the caching which could improve the scalability for popular content. However, this scalability is difficult to test without a very large network. 
Furthermore, the security features inherent to Freenet make detailed performance analysis (including things as simple as determining the size of the network) difficult to do accurately. As of now, the scalability of Freenet has yet to be tested. Darknet versus opennet. As of version 0.7, Freenet supports both "darknet" and "opennet" connections. Opennet connections are made automatically by nodes with opennet enabled, while darknet connections are manually established between users that know and trust each other. Freenet developers describe the trust needed as "will not crack their Freenet node". Opennet connections are easy to use, but darknet connections are more secure against attackers on the network, and can make it difficult for an attacker (such as an oppressive government) to even determine that a user is running Freenet in the first place. The core innovation in Freenet 0.7 is to allow a globally scalable darknet, capable (at least in theory) of supporting millions of users. Previous darknets, such as WASTE, have been limited to relatively small disconnected networks. The scalability of Freenet is made possible by the fact that human relationships tend to form small-world networks, a property that can be exploited to find short paths between any two people. The work is based on a speech given at DEF CON 13 by Ian Clarke and Swedish mathematician Oskar Sandberg. Furthermore, the routing algorithm is capable of routing over a mixture of opennet and darknet connections, allowing people who have only a few friends using the network to get the performance from having sufficient connections while still receiving some of the security benefits of darknet connections. This also means that small darknets where some users also have opennet connections are fully integrated into the whole Freenet network, allowing all users access to all content, whether they run opennet, darknet, or a hybrid of the two, except for darknet pockets connected only by a single hybrid node. Tools and applications. Unlike many other P2P applications Freenet does not provide comprehensive functionality itself. Freenet is modular and features an API called Freenet Client Protocol (FCP) for other programs to use to implement services such as message boards, file sharing, or online chat. Communication. Freenet Messaging System (FMS) FMS was designed to address problems with Frost such as denial of service attacks and spam. Users publish trust lists, and each user downloads messages only from identities they trust and identities trusted by identities they trust. FMS is developed anonymously and can be downloaded from "the FMS freesite" within Freenet. It does not have an official site on the normal Internet. It features random post delay, support for many identities, and a distinction between trusting a user's posts and trusting their trust list. It is written in C++ and is a separate application from Freenet which uses the Freenet Client Protocol (FCP) to interface with Freenet. Frost Frost includes support for convenient file sharing, but its design is inherently vulnerable to spam and denial of service attacks. Frost can be downloaded from the Frost home page on SourceForge, or from "the Frost freesite" within Freenet. It is not endorsed by the Freenet developers. Frost is written in Java and is a separate application from Freenet. Sone Sone provides a simpler interface inspired by Facebook with public anonymous discussions and image galleries. 
It provides an API for control from other programs, which is also used to implement a comment system for static websites on the regular internet. Utilities. jSite jSite is a tool to upload websites. It handles keys and manages uploading files. Infocalypse Infocalypse is an extension for the distributed revision control system Mercurial. It uses an optimized structure to minimize the number of requests to retrieve new data, and allows supporting a repository by securely reuploading most parts of the data without requiring the owner's private keys. Libraries. FCPLib FCPLib (Freenet Client Protocol Library) aims to be a cross-platform, natively compiled set of C++-based functions for storing and retrieving information to and from Freenet. FCPLib supports Windows NT/2K/XP, Debian, BSD, Solaris, and macOS. lib-pyFreenet lib-pyFreenet exposes Freenet functionality to Python programs. Infocalypse uses it. Vulnerabilities. Law enforcement agencies have claimed to have successfully infiltrated Freenet opennet in order to deanonymize users, but no technical details have been given to support these allegations. One report stated that "A child-porn investigation focused on ... [the suspect] when the authorities were monitoring the online network, Freenet." A different report indicated that arrests may have been based on the BlackICE project leaks, which have been debunked for relying on bad math: an incorrectly calculated false-positive rate and a flawed model. A court case in the Peel Region of Ontario, Canada ("R. v. Owen", 2017 ONCJ 729 (CanLII)) illustrated that law enforcement does in fact have a presence, after Peel Regional Police located a suspect who had been downloading illegal material on the Freenet network. The court decision indicates that a Canadian law enforcement agency operates nodes running modified Freenet software in the hope of determining who is requesting illegal material. Notability. Freenet has had significant publicity in the mainstream press, including articles in "The New York Times", and coverage on CNN, "60 Minutes II", the BBC, "The Guardian", and elsewhere. Freenet received the SUMA-Award 2014 for "protection against total surveillance". Freesite. A "freesite" is a site hosted on the Freenet network. Because Freenet serves only static content, a freesite cannot contain any active content such as server-side scripts or databases. Freesites are coded in HTML and support as many features as the browser viewing the page allows; however, there are some exceptions where the Freenet software will remove parts of the code that could be used to reveal the identity of the person viewing the page (for example, code that makes the page fetch a resource from the open internet). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "O\\left(\\left[log\\left(n\\right)\\right]^2\\right)" }, { "math_id": 1, "text": "O(\\log^2 n)" } ]
https://en.wikipedia.org/wiki?curid=10997
10997054
Kravchuk polynomials
Kravchuk polynomials or Krawtchouk polynomials (also written using several other transliterations of the Ukrainian surname Кравчук) are discrete orthogonal polynomials associated with the binomial distribution, introduced by Mykhailo Kravchuk (1929). The first few polynomials are (for "q" = 2): formula_0 formula_1 formula_2 formula_3 The Kravchuk polynomials are a special case of the Meixner polynomials of the first kind. Definition. For any prime power "q" and positive integer "n", define the Kravchuk polynomial formula_4 Properties. The Kravchuk polynomial has the following alternative expressions: formula_5 formula_6 Symmetry relations. For integers formula_7, we have that formula_8 Orthogonality relations. For non-negative integers "r", "s", formula_9 Generating function. The generating series of Kravchuk polynomials is given below. Here formula_10 is a formal variable. formula_11 Three term recurrence. The Kravchuk polynomials satisfy the three-term recurrence relation formula_12
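The defining sum is easy to evaluate and test numerically. The following Python sketch (an illustration only; the helper name and parameter choices are arbitrary assumptions) computes formula_4 directly and spot-checks the orthogonality relation for "q" = 2:

```python
from math import comb

def kravchuk(k, x, n, q=2):
    # K_k(x; n, q) = sum_j (-1)^j (q-1)^(k-j) C(x, j) C(n-x, k-j)
    return sum((-1)**j * (q - 1)**(k - j) * comb(x, j) * comb(n - x, k - j)
               for j in range(k + 1))

n, q = 6, 2
# Spot-check the orthogonality relation:
# sum_i C(n,i)(q-1)^i K_r(i) K_s(i) = q^n (q-1)^r C(n,r) delta_{rs}
for r in range(n + 1):
    for s in range(n + 1):
        lhs = sum(comb(n, i) * (q - 1)**i * kravchuk(r, i, n, q) * kravchuk(s, i, n, q)
                  for i in range(n + 1))
        rhs = q**n * (q - 1)**r * comb(n, r) if r == s else 0
        assert lhs == rhs

print(kravchuk(2, 3, 6))   # -3
```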
[ { "math_id": 0, "text": "\\mathcal{K}_0(x; n) = 1," }, { "math_id": 1, "text": "\\mathcal{K}_1(x; n) = -2x + n," }, { "math_id": 2, "text": "\\mathcal{K}_2(x; n) = 2x^2 - 2nx + \\binom{n}{2}," }, { "math_id": 3, "text": "\\mathcal{K}_3(x; n) = -\\frac{4}{3}x^3 + 2nx^2 - (n^2 - n + \\frac{2}{3})x + \\binom{n}{3}." }, { "math_id": 4, "text": "\\mathcal{K}_k(x; n,q) = \\mathcal{K}_k(x) = \\sum_{j=0}^{k}(-1)^j (q-1)^{k-j} \\binom {x}{j} \\binom{n-x}{k-j}, \\quad k=0,1, \\ldots, n." }, { "math_id": 5, "text": "\\mathcal{K}_k(x; n,q) = \\sum_{j=0}^{k}(-q)^j (q-1)^{k-j} \\binom {n-j}{k-j} \\binom{x}{j}. " }, { "math_id": 6, "text": "\\mathcal{K}_k(x; n,q) = \\sum_{j=0}^{k}(-1)^j q^{k-j} \\binom {n-k+j}{j} \\binom{n-x}{k-j}. " }, { "math_id": 7, "text": "i,k \\ge 0" }, { "math_id": 8, "text": "\\begin{align}\n(q-1)^{i} {n \\choose i} \\mathcal{K}_k(i;n,q) = (q-1)^{k}{n \\choose k} \\mathcal{K}_i(k;n,q).\n\\end{align}" }, { "math_id": 9, "text": "\\sum_{i=0}^n\\binom{n}{i}(q-1)^i\\mathcal{K}_r(i; n,q)\\mathcal{K}_s(i; n,q) = q^n(q-1)^r\\binom{n}{r}\\delta_{r,s}. " }, { "math_id": 10, "text": "z" }, { "math_id": 11, "text": "\\begin{align}\n(1+(q-1)z)^{n-x}(1-z)^x &= \\sum_{k=0}^\\infty \\mathcal{K}_k(x;n,q) {z^k}.\n\\end{align}" }, { "math_id": 12, "text": "\\begin{align}\nx \\mathcal{K}_k(x;n,q) = - q(n-k) \\mathcal{K}_{k+1}(x;n,q) + (q(n-k) + k(1-q)) \\mathcal{K}_{k}(x;n,q) - k(1-q)\\mathcal{K}_{k-1}(x;n,q).\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=10997054
1099709
Hilbert's Theorem 90
Result due to Kummer on cyclic extensions of fields that leads to Kummer theory In abstract algebra, Hilbert's Theorem 90 (or Satz 90) is an important result on cyclic extensions of fields (or one of its generalizations) that leads to Kummer theory. In its most basic form, it states that if "L"/"K" is an extension of fields with cyclic Galois group "G" = Gal("L"/"K") generated by an element formula_0 and if formula_1 is an element of "L" of relative norm 1, that is formula_2 then there exists formula_3 in "L" such that formula_4 The theorem takes its name from the fact that it is the 90th theorem in David Hilbert's Zahlbericht (Hilbert 1897, 1998), although it is originally due to Kummer (1855, p.213, 1861). Often a more general theorem due to Emmy Noether (1933) is given the name, stating that if "L"/"K" is a finite Galois extension of fields with arbitrary Galois group "G" = Gal("L"/"K"), then the first cohomology group of "G", with coefficients in the multiplicative group of "L", is trivial: formula_5 Examples. Let formula_6 be the quadratic extension formula_7. The Galois group is cyclic of order 2, its generator formula_8 acting via conjugation: formula_9 An element formula_10 in formula_11 has norm formula_12. An element of norm one thus corresponds to a rational solution of the equation formula_13 or in other words, a point with rational coordinates on the unit circle. Hilbert's Theorem 90 then states that every such element "a" of norm one can be written as formula_14 where formula_15 is as in the conclusion of the theorem, and "c" and "d" are both integers. This may be viewed as a rational parametrization of the rational points on the unit circle. Rational points formula_16 on the unit circle formula_13 correspond to Pythagorean triples, i.e. triples formula_17 of integers satisfying formula_18. Cohomology. The theorem can be stated in terms of group cohomology: if "L"× is the multiplicative group of any (not necessarily finite) Galois extension "L" of a field "K" with corresponding Galois group "G", then formula_5 Specifically, group cohomology is the cohomology of the complex whose "i"-cochains are arbitrary functions from "i"-tuples of group elements to the multiplicative coefficient group, formula_19, with differentials formula_20 defined in dimensions formula_21 by: formula_22 where formula_23 denotes the image of the formula_24-module element formula_25 under the action of the group element formula_26. Note that in the first of these we have identified a 0-cochain formula_27 with its unique image value formula_28. The triviality of the first cohomology group is then equivalent to the 1-cocycles formula_29 being equal to the 1-coboundaries formula_30, viz.: formula_31 For cyclic formula_32, a 1-cocycle is determined by formula_33, with formula_34 and: formula_35 On the other hand, a 1-coboundary is determined by formula_36. Equating these gives the original version of the theorem. A further generalization is to cohomology with non-abelian coefficients: if "H" is either the general or special linear group over "L", including formula_37, then formula_38 Another generalization is to a scheme "X": formula_39 where formula_40 is the group of isomorphism classes of locally free sheaves of formula_41-modules of rank 1 for the Zariski topology, and formula_42 is the sheaf defined by the affine line without the origin considered as a group under multiplication. There is yet another generalization to Milnor K-theory which plays a role in Voevodsky's proof of the Milnor conjecture.
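The quadratic example above can be checked numerically. In the sketch below (illustrative only; the helper function is not from the article), each pair of integers "c", "d" yields the norm-one element formula_14, hence a rational point on the unit circle and a Pythagorean triple:

```python
from fractions import Fraction

def norm_one_from(c: int, d: int):
    """Return the rational point (x, y) = (c - d*i)/(c + d*i) on the unit circle,
    as in the Q(i) example above, together with the Pythagorean triple it yields."""
    n = c * c + d * d
    x = Fraction(c * c - d * d, n)
    y = Fraction(-2 * c * d, n)
    assert x * x + y * y == 1          # relative norm 1, as Theorem 90 predicts
    return (x, y), (abs(c * c - d * d), abs(2 * c * d), n)

print(norm_one_from(2, 1))   # point (3/5, -4/5) and triple (3, 4, 5)
print(norm_one_from(3, 2))   # point (5/13, -12/13) and triple (5, 12, 13)
```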
Proof. Let formula_6 be cyclic of degree formula_43 and formula_8 generate formula_44. Pick any formula_45 of norm formula_46 By clearing denominators, solving formula_47 is the same as showing that formula_48 has formula_49 as an eigenvalue. We extend this to a map of formula_50-vector spaces via formula_51 The primitive element theorem gives formula_52 for some formula_53. Since formula_53 has minimal polynomial formula_54 we can identify formula_55 via formula_56 Here we wrote the second factor as a formula_57-polynomial in formula_53. Under this identification, our map becomes formula_58 That is to say under this map formula_59 formula_60 is an eigenvector with eigenvalue formula_49 iff formula_1 has norm formula_49. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sigma," }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "N(a):=a\\, \\sigma(a)\\, \\sigma^2(a)\\cdots \\sigma^{n-1}(a)=1," }, { "math_id": 3, "text": "b" }, { "math_id": 4, "text": "a=b/\\sigma(b)." }, { "math_id": 5, "text": "H^1(G,L^\\times)=\\{1\\}." }, { "math_id": 6, "text": "L/K" }, { "math_id": 7, "text": "\\Q(i)/\\Q" }, { "math_id": 8, "text": "\\sigma" }, { "math_id": 9, "text": " \\sigma: c + di\\mapsto c - di." }, { "math_id": 10, "text": "a=x+yi" }, { "math_id": 11, "text": "\\Q(i)" }, { "math_id": 12, "text": "a\\sigma(a)=x^2+y^2" }, { "math_id": 13, "text": "x^2+y^2=1" }, { "math_id": 14, "text": " a=\\frac{c-di}{c+di}=\\frac{c^2-d^2}{c^2+d^2} - \\frac{2cd}{c^2+d^2} i," }, { "math_id": 15, "text": "b = c+di" }, { "math_id": 16, "text": "(x,y)=(p/r,q/r)" }, { "math_id": 17, "text": "(p,q,r)" }, { "math_id": 18, "text": "p^2+q^2=r^2" }, { "math_id": 19, "text": "C^i(G,L^\\times) = \\{\\phi:G^i\\to L^\\times\\}" }, { "math_id": 20, "text": "d^i : C^i\\to C^{i+1}" }, { "math_id": 21, "text": "i = 0,1" }, { "math_id": 22, "text": "(d^0(b))(\\sigma) = b / b^\\sigma, \\quad \\text{ and } \n\\quad (d^1(\\phi))(\\sigma,\\tau)\n\\,=\\,\\phi(\\sigma) \\phi(\\tau)^\\sigma / \\phi(\\sigma\\tau) , " }, { "math_id": 23, "text": "x^g" }, { "math_id": 24, "text": "G" }, { "math_id": 25, "text": "x" }, { "math_id": 26, "text": "g \\in G" }, { "math_id": 27, "text": "\\gamma = \\gamma_b : G^0 = id_G \\to L^\\times" }, { "math_id": 28, "text": "b \\in L^\\times" }, { "math_id": 29, "text": "Z^1" }, { "math_id": 30, "text": "B^1" }, { "math_id": 31, "text": "\\begin{array}{rcl}\nZ^1 &=& \\ker d^1 &=& \\{\\phi\\in C^1\\text{ satisfying }\\,\\,\\forall \\sigma , \\tau \\in G \\, \\colon\\,\\, \\phi(\\sigma\\tau) = \\phi(\\sigma)\\,\\phi(\\tau)^\\sigma \\} \\\\\n\\text{ is equal to }\\\\\nB^1 &=& \\text{im } d^0 &=& \\{\\phi\\in C^1\\ \\, \\colon \\,\\, \\exists\\, b\\in L^\\times \\text{ such that } \\phi(\\sigma)=b / b^\\sigma \\ \\ \\forall \\sigma \\in G \\}.\n\\end{array}" }, { "math_id": 32, "text": "G =\\{1,\\sigma,\\ldots,\\sigma^{n-1}\\}" }, { "math_id": 33, "text": "\\phi(\\sigma)=a\\in L^\\times " }, { "math_id": 34, "text": "\\phi(\\sigma^i) = a\\,\\sigma(a)\\cdots\\sigma^{i-1}(a)" }, { "math_id": 35, "text": "1=\\phi(1)=\\phi(\\sigma^n)=a\\,\\sigma(a)\\cdots\\sigma^{n-1}(a)=N(a)." }, { "math_id": 36, "text": "\\phi(\\sigma)=b / b^\\sigma " }, { "math_id": 37, "text": "\\operatorname{GL}_1(L)=L^\\times" }, { "math_id": 38, "text": "H^1(G,H)=\\{1\\}." }, { "math_id": 39, "text": "H^1_{\\text{et}}(X,\\mathbb{G}_m) = H^1(X,\\mathcal{O}_X^\\times) = \\operatorname{Pic}(X)," }, { "math_id": 40, "text": "\\operatorname{Pic}(X)" }, { "math_id": 41, "text": "\\mathcal{O}_X^\\times" }, { "math_id": 42, "text": "\\mathbb{G}_m" }, { "math_id": 43, "text": "n," }, { "math_id": 44, "text": "\\operatorname{Gal}(L/K)" }, { "math_id": 45, "text": "a\\in L" }, { "math_id": 46, "text": "N(a):=a \\sigma(a) \\sigma^2(a)\\cdots \\sigma^{n-1}(a)=1." 
}, { "math_id": 47, "text": "a=x/\\sigma^{-1}(x) \\in L" }, { "math_id": 48, "text": "a\\sigma^{-1}(\\cdot) : L \\to L" }, { "math_id": 49, "text": "1" }, { "math_id": 50, "text": "L" }, { "math_id": 51, "text": "\\begin{cases} 1_L\\otimes a\\sigma^{-1}(\\cdot) : L\\otimes_KL \\to L\\otimes_K L \\\\ \\ell \\otimes\\ell'\\mapsto \\ell\\otimes a\\sigma^{-1}(\\ell').\\end{cases}" }, { "math_id": 52, "text": "L=K(\\alpha)" }, { "math_id": 53, "text": "\\alpha" }, { "math_id": 54, "text": "f(t)=(t-\\alpha)(t-\\sigma(\\alpha))\\cdots \\left (t-\\sigma^{n-1}(\\alpha) \\right ) \\in K[t]," }, { "math_id": 55, "text": "L\\otimes_KL\\stackrel{\\sim}{\\to} L\\otimes_K K[t] /f(t) \\stackrel{\\sim}{\\to} L[t]/f(t) \\stackrel{\\sim}{\\to} L^{n}" }, { "math_id": 56, "text": "\\ell\\otimes p(\\alpha) \\mapsto \\ell \\left (p(\\alpha), p(\\sigma \\alpha ),\\ldots , p(\\sigma^{n-1} \\alpha ) \\right )." }, { "math_id": 57, "text": "K" }, { "math_id": 58, "text": "\\begin{cases} a\\sigma^{-1}(\\cdot) : L^n\\to L^n \\\\ \\ell \\left(p(\\alpha),\\ldots, p(\\sigma^{n-1}\\alpha)) \\mapsto \\ell(ap(\\sigma^{n-1}\\alpha), \\sigma a p(\\alpha), \\ldots, \\sigma^{n-1} a p(\\sigma^{n-2}\\alpha) \\right ). \\end{cases}" }, { "math_id": 59, "text": "(\\ell_1, \\ldots ,\\ell_n)\\mapsto (a \\ell_n, \\sigma a \\ell_1, \\ldots, \\sigma^{n-1} a \\ell_{n-1})." }, { "math_id": 60, "text": " (1, \\sigma a, \\sigma a \\sigma^2 a, \\ldots, \\sigma a \\cdots \\sigma^{n-1}a)" } ]
https://en.wikipedia.org/wiki?curid=1099709
10997586
Binary tetrahedral group
In mathematics, the binary tetrahedral group, denoted 2T or ⟨2,3,3⟩, is a certain nonabelian group of order 24. It is an extension of the tetrahedral group T or (2,3,3) of order 12 by a cyclic group of order 2, and is the preimage of the tetrahedral group under the 2:1 covering homomorphism Spin(3) → SO(3) of the special orthogonal group by the spin group. It follows that the binary tetrahedral group is a discrete subgroup of Spin(3) of order 24. The complex reflection group named 3(24)3 by G.C. Shephard, or 3[3]3 by Coxeter, is isomorphic to the binary tetrahedral group. The binary tetrahedral group is most easily described concretely as a discrete subgroup of the unit quaternions, under the isomorphism Spin(3) ≅ Sp(1), where Sp(1) is the multiplicative group of unit quaternions. (For a description of this homomorphism see the article on quaternions and spatial rotations.) Elements. Explicitly, the binary tetrahedral group is given as the group of units in the ring of Hurwitz integers. There are 24 such units given by formula_0 with all possible sign combinations. All 24 units have absolute value 1 and therefore lie in the unit quaternion group Sp(1). The convex hull of these 24 elements in 4-dimensional space forms a convex regular 4-polytope called the 24-cell. Properties. The binary tetrahedral group, denoted by 2T, fits into the short exact sequence formula_1 This sequence does not split, meaning that 2T is "not" a semidirect product of {±1} by T. In fact, there is no subgroup of 2T isomorphic to T. The binary tetrahedral group is the covering group of the tetrahedral group. Thinking of the tetrahedral group as the alternating group on four letters, T ≅ A4, we thus have the binary tetrahedral group as the covering group, 2T ≅ formula_2. The center of 2T is the subgroup {±1}. The inner automorphism group is isomorphic to A4, and the full automorphism group is isomorphic to S4. The binary tetrahedral group can be written as a semidirect product formula_3 where Q is the quaternion group consisting of the 8 Lipschitz units and C3 is the cyclic group of order 3 generated by "ω" = −(1 + "i" + "j" + "k")/2. The group C3 acts on the normal subgroup Q by conjugation. Conjugation by "ω" is the automorphism of Q that cyclically rotates i, j, and k. One can show that the binary tetrahedral group is isomorphic to the special linear group SL(2,3) – the group of all 2 × 2 matrices over the finite field F3 with unit determinant, with this isomorphism covering the isomorphism of the projective special linear group PSL(2,3) with the alternating group A4. Presentation. The group 2T has a presentation given by formula_4 or equivalently, formula_5 Generators with these relations are given by formula_6 with formula_7. In a Cayley table of the group, with elements ordered as by GAP, there is 1 element of order 1 (the identity), 1 element of order 2 (formula_8), 8 elements of order 3, 6 elements of order 4 (including formula_9), and 8 elements of order 6 (which include formula_10 and formula_11). Subgroups. The quaternion group consisting of the 8 Lipschitz units forms a normal subgroup of 2T of index 3. This group and the center {±1} are the only nontrivial normal subgroups. All other subgroups of 2T are cyclic groups generated by the various elements, with orders 3, 4, and 6. Higher dimensions.
Just as the tetrahedral group generalizes to the rotational symmetry group of the "n"-simplex (as a subgroup of SO("n")), there is a corresponding higher binary group which is a 2-fold cover, coming from the cover Spin("n") → SO("n"). The rotational symmetry group of the "n"-simplex can be considered as the alternating group on "n" + 1 points, A"n"+1, and the corresponding binary group is a 2-fold covering group. For all higher dimensions except A6 and A7 (corresponding to the 5-dimensional and 6-dimensional simplexes), this binary group is the covering group (maximal cover) and is superperfect, but for dimensions 5 and 6 there is an additional exceptional 3-fold cover, and the binary groups are not superperfect. Usage in theoretical physics. The binary tetrahedral group was used in the context of Yang–Mills theory in 1956 by Chen Ning Yang and others. It was first used in flavor physics model building by Paul Frampton and Thomas Kephart in 1994. In 2012 it was shown that a relation between two neutrino mixing angles, derived by using this binary tetrahedral flavor symmetry, agrees with experiment. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
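The description of the elements and generators above is easy to verify by direct computation. The following Python sketch (an illustration; the quaternion-product helper is an assumption, not part of the article) checks that the 24 Hurwitz units are closed under multiplication and that the generators r, s, t satisfy formula_7:

```python
from itertools import product
from fractions import Fraction

def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

half = Fraction(1, 2)
units = set()
for pos in range(4):                                  # +-1, +-i, +-j, +-k
    for sgn in (Fraction(1), Fraction(-1)):
        e = [Fraction(0)] * 4
        e[pos] = sgn
        units.add(tuple(e))
for signs in product((half, -half), repeat=4):        # (+-1 +- i +- j +- k)/2
    units.add(signs)

assert len(units) == 24
assert all(qmul(a, b) in units for a in units for b in units)   # closed: a group of order 24

minus_one = (Fraction(-1), Fraction(0), Fraction(0), Fraction(0))
r = (Fraction(0), Fraction(1), Fraction(0), Fraction(0))        # i
s = (half, half, half, half)                                    # (1 + i + j + k)/2
t = (half, half, half, -half)                                   # (1 + i + j - k)/2
assert qmul(r, r) == minus_one                                  # r^2 = -1
assert qmul(s, qmul(s, s)) == minus_one                         # s^3 = -1
assert qmul(t, qmul(t, t)) == minus_one                         # t^3 = -1
```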
[ { "math_id": 0, "text": "\\{\\pm 1,\\pm i,\\pm j,\\pm k,\\tfrac{1}{2}(\\pm 1 \\pm i \\pm j \\pm k)\\}" }, { "math_id": 1, "text": "1\\to\\{\\pm 1\\}\\to 2\\mathrm{T}\\to \\mathrm{T} \\to 1." }, { "math_id": 2, "text": "\\widehat{\\mathrm{A}_4}" }, { "math_id": 3, "text": "2\\mathrm{T}=\\mathrm{Q}\\rtimes\\mathrm{C}_3" }, { "math_id": 4, "text": "\\langle r,s,t \\mid r^2 = s^3 = t^3 = rst \\rangle" }, { "math_id": 5, "text": "\\langle s,t \\mid (st)^2 = s^3 = t^3 \\rangle." }, { "math_id": 6, "text": "r = i \\qquad s = \\tfrac{1}{2}(1+i+j+k) \\qquad t = \\tfrac{1}{2}(1+i+j-k)," }, { "math_id": 7, "text": "r^2 = s^3 = t^3 = -1" }, { "math_id": 8, "text": "e_5=-1" }, { "math_id": 9, "text": "e_3=r" }, { "math_id": 10, "text": "e_{14}=s" }, { "math_id": 11, "text": "e_{18}=t" } ]
https://en.wikipedia.org/wiki?curid=10997586
10997598
Binary octahedral group
In mathematics, the binary octahedral group, denoted 2O or ⟨2,3,4⟩, is a certain nonabelian group of order 48. It is an extension of the chiral octahedral group "O" or (2,3,4) of order 24 by a cyclic group of order 2, and is the preimage of the octahedral group under the 2:1 covering homomorphism formula_0 of the special orthogonal group by the spin group. It follows that the binary octahedral group is a discrete subgroup of Spin(3) of order 48. The binary octahedral group is most easily described concretely as a discrete subgroup of the unit quaternions, under the isomorphism formula_1 where Sp(1) is the multiplicative group of unit quaternions. (For a description of this homomorphism see the article on quaternions and spatial rotations.) Elements. Explicitly, the binary octahedral group is given as the union of the 24 Hurwitz units formula_2 with all 24 quaternions obtained from formula_3 by a permutation of coordinates and all possible sign combinations. All 48 elements have absolute value 1 and therefore lie in the unit quaternion group Sp(1). Properties. The binary octahedral group, denoted by 2"O", fits into the short exact sequence formula_4 This sequence does not split, meaning that 2"O" is "not" a semidirect product of {±1} by "O". In fact, there is no subgroup of 2"O" isomorphic to "O". The center of 2"O" is the subgroup {±1}, so that the inner automorphism group is isomorphic to "O". The full automorphism group is isomorphic to "O" × Z2. Presentation. The group 2"O" has a presentation given by formula_5 or equivalently, formula_6 Quaternion generators with these relations are given by formula_7 with formula_8 Subgroups. The binary tetrahedral group, 2"T", consisting of the 24 Hurwitz units, forms a normal subgroup of index 2. The quaternion group, "Q"8, consisting of the 8 Lipschitz units, forms a normal subgroup of 2"O" of index 6. The quotient group is isomorphic to "S"3 (the symmetric group on 3 letters). These two groups, together with the center {±1}, are the only nontrivial normal subgroups of 2"O". The generalized quaternion group, "Q"16, also forms a subgroup of 2"O", of index 3. This subgroup is self-normalizing, so its conjugacy class has 3 members. There are also isomorphic copies of the binary dihedral groups "Q"8 and "Q"12 in 2"O". All other subgroups are cyclic groups generated by the various elements (with orders 3, 4, 6, and 8). Higher dimensions. The "binary octahedral group" generalizes to higher dimensions: just as the octahedron generalizes to the orthoplex, the octahedral group in SO(3) generalizes to the hyperoctahedral group in SO("n"), which has a binary cover under the map formula_9 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
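As with the binary tetrahedral group, the 48 elements listed above can be checked by direct computation. The sketch below (illustrative; the floating-point rounding scheme and helper functions are assumptions, not from the article) verifies closure under multiplication and the relations r^2 = s^3 = t^4 = −1 for the quaternion generators:

```python
from itertools import combinations, product
from math import sqrt

def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def key(q):
    # round away floating-point error so elements can be compared and hashed
    return tuple(round(c, 9) + 0.0 for c in q)

group = set()
for pos in range(4):                                   # +-1, +-i, +-j, +-k
    for sgn in (1.0, -1.0):
        e = [0.0, 0.0, 0.0, 0.0]
        e[pos] = sgn
        group.add(key(e))
for signs in product((0.5, -0.5), repeat=4):           # (+-1 +- i +- j +- k)/2
    group.add(key(signs))
h = 1 / sqrt(2)
for i, j in combinations(range(4), 2):                 # e.g. (+-1 +- i)/sqrt(2) and permutations
    for si, sj in product((h, -h), repeat=2):
        e = [0.0, 0.0, 0.0, 0.0]
        e[i], e[j] = si, sj
        group.add(key(e))

assert len(group) == 48
assert all(key(qmul(a, b)) in group for a in group for b in group)   # closed under multiplication

minus_one = key((-1.0, 0.0, 0.0, 0.0))
r = (0.0, h, h, 0.0)        # (i + j)/sqrt(2)
s = (0.5, 0.5, 0.5, 0.5)    # (1 + i + j + k)/2
t = (h, h, 0.0, 0.0)        # (1 + i)/sqrt(2)
assert key(qmul(r, r)) == minus_one                    # r^2 = -1
assert key(qmul(s, qmul(s, s))) == minus_one           # s^3 = -1
assert key(qmul(qmul(t, t), qmul(t, t))) == minus_one  # t^4 = -1
```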
[ { "math_id": 0, "text": "\\operatorname{Spin}(3) \\to \\operatorname{SO}(3)" }, { "math_id": 1, "text": "\\operatorname{Spin}(3) \\cong \\operatorname{Sp}(1)" }, { "math_id": 2, "text": "\\{\\pm 1,\\pm i,\\pm j,\\pm k,\\tfrac{1}{2}(\\pm 1 \\pm i \\pm j \\pm k)\\}" }, { "math_id": 3, "text": "\\tfrac{1}{\\sqrt 2}(\\pm 1 \\pm 1i + 0j + 0k)" }, { "math_id": 4, "text": "1\\to\\{\\pm 1\\}\\to 2O\\to O \\to 1.\\," }, { "math_id": 5, "text": "\\langle r,s,t \\mid r^2 = s^3 = t^4 = rst \\rangle" }, { "math_id": 6, "text": "\\langle s,t \\mid (st)^2 = s^3 = t^4 \\rangle." }, { "math_id": 7, "text": "r = \\tfrac{1}{\\sqrt 2}(i+j) \\qquad s = \\tfrac{1}{2}(1+i+j+k) \\qquad t = \\tfrac{1}{\\sqrt 2}(1+i)," }, { "math_id": 8, "text": " r^2 = s^3 = t^4 = rst = -1." }, { "math_id": 9, "text": "\\operatorname{Spin}(n) \\to SO(n)." } ]
https://en.wikipedia.org/wiki?curid=10997598
10999436
Process performance index
In process improvement efforts, the process performance index is an estimate of the process capability of a process during its initial set-up, "before" it has been brought into a state of statistical control. Formally, if the upper and lower specifications of the process are USL and LSL, the estimated mean of the process is formula_0, and the estimated variability of the process (expressed as a standard deviation) is formula_1, then the process performance index is defined as: formula_2 formula_1 is estimated using the sample standard deviation. Ppk may be negative if the process mean falls outside the specification limits (because the process is producing a large proportion of defective output). Some specifications may only be one-sided (for example, strength). For specifications that only have a lower limit, formula_3; for those that only have an upper limit, formula_4. Practitioners may also encounter formula_5, a metric that does not account for process performance that is not exactly centered between the specification limits, and therefore is interpreted as what the process would be capable of achieving if it could be centered and stabilized. Interpretation. Larger values of Ppk may be interpreted to indicate that a process is more capable of producing output within the specification limits, though this interpretation is controversial. Strictly speaking, from a statistical standpoint, Ppk is meaningless if the process under study is not in control because one cannot reliably estimate the process's underlying probability distribution, let alone parameters like formula_0 and formula_1. Furthermore, using this metric of past process performance to predict future performance is highly suspect. From a management standpoint, when an organization is under pressure to set up a new process quickly and economically, Ppk is a convenient metric to gauge how set-up is progressing (increasing Ppk being interpreted as "the process capability is improving"). The risk is that Ppk is taken to mean a process is ready for production before all the kinks have been worked out of it. Once a process is put into a state of statistical control, process capability is described using process capability indices, which are formulaically identical to Ppk (and Pp). The indices are named differently in order to call attention to whether the process under study is believed to be in control or not. Example. Consider a quality characteristic with a target of 100.00 μm and upper and lower specification limits of 106.00 μm and 94.00 μm, respectively. If, after carefully monitoring the process for a while, it appears that the process is out of control and producing output unpredictably (as would be seen in a run chart of the data), one can't meaningfully estimate its mean and standard deviation. In this example, the process mean appears to drift upward, settle for a while, and then drift downward. If formula_0 and formula_1 are estimated to be 99.61 μm and 1.84 μm, respectively, then Pp ≈ 1.09 and Ppk ≈ 1.02. That the process mean appears to be unstable is reflected in these relatively low values for Pp and Ppk. The process is producing a significant number of defectives, and, until the cause of the unstable process mean is identified and eliminated, one really can't meaningfully quantify how this process will perform.
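The figures in the example can be reproduced directly from the defining formulas. A minimal Python sketch (illustrative only; the function name is arbitrary):

```python
def process_performance(mean, sigma, lsl, usl):
    pp = (usl - lsl) / (6 * sigma)                                  # Pp
    ppk = min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))  # Ppk
    return pp, ppk

# Example from the text: USL = 106.00, LSL = 94.00, estimated mean 99.61, sigma 1.84
pp, ppk = process_performance(99.61, 1.84, 94.00, 106.00)
print(round(pp, 2), round(ppk, 2))   # 1.09 1.02
```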
[ { "math_id": 0, "text": "<MATH>\\hat{\\mu}</MATH>" }, { "math_id": 1, "text": "<MATH>\\hat{\\sigma}</MATH>" }, { "math_id": 2, "text": "<MATH>\\hat{P}_{pk} = \\min \\Bigg[ {USL - \\hat{\\mu} \\over 3 \\hat{\\sigma}}, { \\hat{\\mu} - LSL \\over 3 \\hat{\\sigma}} \\Bigg]</MATH>" }, { "math_id": 3, "text": "<MATH>\\hat{P}_{p,lower} = {\\hat{\\mu} - LSL \\over 3 \\hat{\\sigma}}</MATH>" }, { "math_id": 4, "text": "<MATH>\\hat{P}_{p,upper} = {USL - \\hat{\\mu} \\over 3 \\hat{\\sigma}}</MATH>" }, { "math_id": 5, "text": "<MATH>\\hat{P}_{p} = \\frac{USL - LSL} {6 \\hat{\\sigma}}</MATH>" } ]
https://en.wikipedia.org/wiki?curid=10999436
10999922
Mean shift
Mathematical technique Mean shift is a non-parametric feature-space mathematical analysis technique for locating the maxima of a density function, a so-called mode-seeking algorithm. Application domains include cluster analysis in computer vision and image processing. History. The mean shift procedure is usually credited to work by Fukunaga and Hostetler in 1975. It is, however, reminiscent of earlier work by Schnell in 1964. Overview. Mean shift is a procedure for locating the maxima—the modes—of a density function given discrete data sampled from that function. This is an iterative method, and we start with an initial estimate formula_0. Let a kernel function formula_1 be given. This function determines the weight of nearby points for re-estimation of the mean. Typically a Gaussian kernel on the distance to the current estimate is used, formula_2. The weighted mean of the density in the window determined by formula_3 is formula_4 where formula_5 is the neighborhood of formula_0, a set of points for which formula_6. The difference formula_7 is called "mean shift" in Fukunaga and Hostetler. The "mean-shift algorithm" now sets formula_8, and repeats the estimation until formula_9 converges. Although the mean shift algorithm has been widely used in many applications, a rigorous proof for the convergence of the algorithm using a general kernel in a high dimensional space is still not known. Aliyari Ghassabeh showed the convergence of the mean shift algorithm in one dimension with a differentiable, convex, and strictly decreasing profile function. However, the one-dimensional case has limited real-world applications. Also, the convergence of the algorithm in higher dimensions with a finite number of stationary (or isolated) points has been proved. However, sufficient conditions for a general kernel function to have finitely many stationary (or isolated) points have not been provided. Gaussian Mean-Shift is an Expectation–maximization algorithm. Details. Let data be a finite set formula_10 embedded in the formula_11-dimensional Euclidean space, formula_12. Let formula_13 be a flat kernel that is the characteristic function of the formula_14-ball in formula_12, formula_15 In each iteration of the algorithm, formula_16 is performed for all formula_17 simultaneously. The first question, then, is how to estimate the density function given a sparse set of samples. One of the simplest approaches is to just smooth the data, e.g., by convolving it with a fixed kernel of width formula_18, formula_19 where formula_20 are the input samples and formula_21 is the kernel function (or "Parzen window"). formula_18 is the only parameter in the algorithm and is called the bandwidth. This approach is known as "kernel density estimation" or the Parzen window technique. Once we have computed formula_22 from the equation above, we can find its local maxima using gradient ascent or some other optimization technique. The problem with this "brute force" approach is that, for higher dimensions, it becomes computationally prohibitive to evaluate formula_22 over the complete search space. Instead, mean shift uses a variant of what is known in the optimization literature as "multiple restart gradient descent". Starting at some guess for a local maximum, formula_23, which can be a random input data point formula_24, mean shift computes the gradient of the density estimate formula_22 at formula_23 and takes an uphill step in that direction. Types of kernels.
Kernel definition: Let formula_12 be the formula_11-dimensional Euclidean space, formula_25. The norm of formula_26 is a non-negative number, formula_27. A function formula_28 is said to be a kernel if there exists a "profile", formula_29, such that formula_30 and: "k" is non-negative; "k" is non-increasing, that is, formula_31 if formula_32; and "k" is piecewise continuous with formula_33. The two most frequently used kernel profiles for mean shift are the flat kernel, formula_34, and the Gaussian kernel, formula_35, where the standard deviation parameter formula_36 works as the bandwidth parameter, formula_37. Applications. Clustering. Consider a set of points in two-dimensional space. Assume a circular window centered at formula_38 and having radius formula_39 as the kernel. Mean-shift is a hill climbing algorithm which involves shifting this kernel iteratively to a higher density region until convergence. Every shift is defined by a mean shift vector. The mean shift vector always points toward the direction of the maximum increase in the density. At every iteration the kernel is shifted to the centroid or the mean of the points within it. The method of calculating this mean depends on the choice of the kernel. In this case if a Gaussian kernel is chosen instead of a flat kernel, then every point will first be assigned a weight which will decay exponentially as the distance from the kernel's center increases. At convergence, there will be no direction in which a shift can accommodate more points inside the kernel. Tracking. The mean shift algorithm can be used for visual tracking. The simplest such algorithm would create a confidence map in the new image based on the color histogram of the object in the previous image, and use mean shift to find the peak of the confidence map near the object's old position. The confidence map is a probability density function on the new image, assigning each pixel of the new image a probability, which is the probability of the pixel color occurring in the object in the previous image. A few algorithms, such as kernel-based object tracking, ensemble tracking, and CAMshift, expand on this idea. Smoothing. Let formula_20 and formula_40 be the formula_41-dimensional input and filtered image pixels in the joint spatial-range domain. For each pixel, initialize formula_42 and formula_43, compute formula_44 according to formula_45 until convergence, formula_46, and assign formula_47, where the superscripts "s" and "r" denote the spatial and range components of a vector, respectively; the filtered pixel at the spatial location of formula_20 thus takes the range component formula_48 of the point of convergence. Availability. Variants of the algorithm can be found in machine learning and image processing packages such as OpenCV and scikit-learn. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
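A compact Python sketch of the basic iteration described in the Overview section, using a Gaussian kernel (the bandwidth, tolerance, synthetic data, and starting point are all illustrative assumptions, not prescriptions):

```python
import numpy as np

def mean_shift(points, x0, bandwidth=1.0, tol=1e-6, max_iter=500):
    """Iterate x <- m(x), the kernel-weighted mean of the samples, until convergence.
    Returns the mode estimate reached from the starting point x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d2 = np.sum((points - x) ** 2, axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))        # Gaussian kernel weights
        m = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(m - x) < tol:
            return m
        x = m
    return x

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (100, 2)),       # two clusters -> two modes
                  rng.normal(3, 0.3, (100, 2))])
print(mean_shift(data, x0=[2.5, 2.5], bandwidth=0.5))  # converges near (3, 3)
```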
[ { "math_id": 0, "text": " x " }, { "math_id": 1, "text": " K(x_i - x) " }, { "math_id": 2, "text": " K(x_i - x) = e^{-c||x_i - x||^2} " }, { "math_id": 3, "text": " K " }, { "math_id": 4, "text": " m(x) = \\frac{ \\sum_{x_i \\in N(x)} K(x_i - x) x_i } {\\sum_{x_i \\in N(x)} K(x_i - x)} " }, { "math_id": 5, "text": " N(x) " }, { "math_id": 6, "text": " K(x_i - x) \\neq 0 " }, { "math_id": 7, "text": "m(x) - x" }, { "math_id": 8, "text": " x \\leftarrow m(x) " }, { "math_id": 9, "text": " m(x) " }, { "math_id": 10, "text": "S" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "X" }, { "math_id": 13, "text": "K" }, { "math_id": 14, "text": "\\lambda" }, { "math_id": 15, "text": " \nK(x) =\n\\begin{cases} \n1 & \\text{if}\\ \\|x\\| \\leq \\lambda\\\\\n0 & \\text{if}\\ \\|x\\| > \\lambda\\\\\n\\end{cases} \n" }, { "math_id": 16, "text": "s \\leftarrow m(s)" }, { "math_id": 17, "text": "s \\in S" }, { "math_id": 18, "text": "h" }, { "math_id": 19, "text": " \nf(x) = \\sum_{i}K(x - x_i) = \\sum_{i}k \\left(\\frac{\\|x - x_i\\|^2}{h^2}\\right)\n" }, { "math_id": 20, "text": "x_i" }, { "math_id": 21, "text": "k(r)" }, { "math_id": 22, "text": "f(x)" }, { "math_id": 23, "text": "y_k" }, { "math_id": 24, "text": "x_1" }, { "math_id": 25, "text": " \\mathbb{R}^n " }, { "math_id": 26, "text": "x" }, { "math_id": 27, "text": " \\|x\\|^2=x^{\\top}x \\geq 0 " }, { "math_id": 28, "text": " K: X\\rightarrow \\mathbb{R} " }, { "math_id": 29, "text": " k: [0, \\infty]\\rightarrow \\mathbb{R} " }, { "math_id": 30, "text": "\nK(x) = k(\\|x\\|^2)\n" }, { "math_id": 31, "text": " k(a)\\ge k(b) " }, { "math_id": 32, "text": " a < b " }, { "math_id": 33, "text": " \\int_0^\\infty k(r)\\,dr < \\infty\\ " }, { "math_id": 34, "text": "\nk(x) =\n\\begin{cases} \n1 & \\text{if}\\ x \\le \\lambda\\\\\n0 & \\text{if}\\ x > \\lambda\\\\\n\\end{cases} \n" }, { "math_id": 35, "text": "\nk(x) = e^{-\\frac{x}{2 \\sigma^2}},\n" }, { "math_id": 36, "text": "\\sigma" }, { "math_id": 37, "text": " h " }, { "math_id": 38, "text": "C" }, { "math_id": 39, "text": "r" }, { "math_id": 40, "text": "z_i, i = 1,...,n," }, { "math_id": 41, "text": "d" }, { "math_id": 42, "text": "j = 1" }, { "math_id": 43, "text": "y_{i,1} = x_i" }, { "math_id": 44, "text": "y_{i,j+1}" }, { "math_id": 45, "text": "m(\\cdot)" }, { "math_id": 46, "text": "y = y_{i,c}" }, { "math_id": 47, "text": "z_i =(x_i^s,y_{i,c}^r)" }, { "math_id": 48, "text": "y_{i,c}^r" } ]
https://en.wikipedia.org/wiki?curid=10999922
1100001
Einselection
In quantum mechanics, einselection, short for "environment-induced superselection", is a name coined by Wojciech H. Zurek for a process which is claimed to explain the appearance of wavefunction collapse and the emergence of classical descriptions of reality from quantum descriptions. In this approach, classicality is described as an emergent property induced in open quantum systems by their environments. The vast majority of states in the Hilbert space of an open quantum system become highly unstable due to the entangling interaction with the environment, which in effect monitors selected observables of the system. After a decoherence time, which for macroscopic objects is typically many orders of magnitude shorter than any other dynamical timescale, a generic quantum state decays into an uncertain state which can be expressed as a mixture of simple pointer states. In this way the environment induces effective superselection rules. Thus, einselection precludes the stable existence of pure superpositions of pointer states. These 'pointer states' are stable despite environmental interaction. The einselected states lack coherence, and therefore do not exhibit the quantum behaviours of entanglement and superposition. Advocates of this approach argue that since only quasi-local, essentially classical states survive the decoherence process, einselection can in many ways explain the emergence of a (seemingly) classical reality in a fundamentally quantum universe (at least to local observers). However, the basic program has been criticized as relying on a circular argument (e.g. by Ruth Kastner). So the question of whether the 'einselection' account can really explain the phenomenon of wave function collapse remains unsettled. Definition. Zurek has defined einselection as follows: "Decoherence leads to einselection when the states of the environment formula_0 corresponding to different pointer states become orthogonal: formula_1". Details. Einselected pointer states are distinguished by their ability to persist in spite of the environmental monitoring and therefore are the ones in which quantum open systems are observed. Understanding the nature of these states and the process of their dynamical selection is of fundamental importance. This process was first studied in a measurement situation: When the system is an apparatus whose intrinsic dynamics can be neglected, pointer states turn out to be eigenstates of the interaction Hamiltonian between the apparatus and its environment. In more general situations, when the system's dynamics is relevant, einselection is more complicated. Pointer states result from the interplay between self-evolution and environmental monitoring. To study einselection, an operational definition of pointer states has been introduced. This is the "predictability sieve" criterion, based on an intuitive idea: "Pointer states" can be defined as the ones which become minimally entangled with the environment in the course of their evolution. The predictability sieve criterion is a way to quantify this idea by using the following algorithmic procedure: For every initial pure state formula_2, one measures the entanglement generated dynamically between the system and the environment by computing the entropy: formula_3 or some other measure of predictability from the reduced density matrix of the system formula_4 (which is initially formula_5). The entropy is a function of time and a functional of the initial state formula_6.
Pointer states are obtained by minimizing formula_7 over formula_6 and demanding that the answer be robust when varying the time formula_8. The nature of pointer states has been investigated using the predictability sieve criterion only for a limited number of examples. Apart from the already mentioned case of the measurement situation (where pointer states are simply eigenstates of the interaction Hamiltonian) the most notable example is that of a quantum Brownian particle coupled through its position with a bath of independent harmonic oscillators. In such case pointer states are localized in phase space, even though the interaction Hamiltonian involves the position of the particle. Pointer states are the result of the interplay between self-evolution and interaction with the environment and turn out to be coherent states. There is also a quantum limit of decoherence: When the spacing between energy levels of the system is large compared to the frequencies present in the environment, energy eigenstates are einselected nearly independently of the nature of the system-environment coupling. Collisional decoherence. There has been significant work on correctly identifying the pointer states in the case of a massive particle decohered by collisions with a fluid environment, often known as "collisional decoherence". In particular, Busse and Hornberger have identified certain solitonic wavepackets as being unusually stable in the presence of such decoherence. See also. Mott problem References. &lt;templatestyles src="Reflist/styles.css" /&gt;
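As a toy numerical illustration of the entropy formula_3 used by the predictability sieve (everything below, including the one-qubit 'environment', the particular entangled state, and the parameter theta, is an assumption made purely for illustration), one can compute the von Neumann entropy of a system qubit's reduced density matrix as its environment states become more distinguishable:

```python
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

def reduced_system_state(psi):
    """Partial trace over the environment qubit of a two-qubit pure state psi (length 4)."""
    m = psi.reshape(2, 2)                 # first index: system, second index: environment
    return m @ m.conj().T

theta = np.pi / 3                         # controls how distinguishable the environment states are
# |0>_S |e0>_E + |1>_S |e1>_E with <e0|e1> = cos(theta): partial monitoring by the environment
e0 = np.array([1.0, 0.0])
e1 = np.array([np.cos(theta), np.sin(theta)])
psi = (np.kron([1.0, 0.0], e0) + np.kron([0.0, 1.0], e1)) / np.sqrt(2)

rho_s = reduced_system_state(psi)
print(von_neumann_entropy(rho_s))  # 0 when the environment states coincide, 1 bit when orthogonal
```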
[ { "math_id": 0, "text": "|\\epsilon_i \\rangle" }, { "math_id": 1, "text": "\\langle \\epsilon_i|\\epsilon_j \\rangle = \\delta_{ij}" }, { "math_id": 2, "text": "|\\psi\\rangle" }, { "math_id": 3, "text": " \\mathcal {H}_\\Psi (t)= -\\operatorname{Tr} \\left ( \\rho_\\Psi(t) \\log \\rho_\\Psi(t) \\right ) " }, { "math_id": 4, "text": "\\rho_\\Psi \\left ( t \\right )" }, { "math_id": 5, "text": "\\rho_\\Psi(0)=|\\Psi\\rangle\\langle\\Psi|" }, { "math_id": 6, "text": "\\left | \\Psi \\right \\rangle" }, { "math_id": 7, "text": "\\mathcal {H}_\\Psi\\," }, { "math_id": 8, "text": " t\\ " } ]
https://en.wikipedia.org/wiki?curid=1100001
1100094
Branching fraction
Fraction of total particles in a sample which decay by a given mode In particle physics and nuclear physics, the branching fraction (or branching ratio) for a decay is the fraction of particles which decay by an individual decay mode with respect to the total number of particles which decay. It applies to either the radioactive decay of atoms or the decay of elementary particles. It is equal to the ratio of the partial decay constant to the overall decay constant. Sometimes a partial half-life is given, but this term is misleading; due to competing modes, it is not true that half of the particles will decay through a particular decay mode after its partial half-life. The partial half-life is merely an alternate way to specify the partial decay constant λ, the two being related through: formula_0 For example, for decays of 132Cs, 98.1% are ε (electron capture) or β+ (positron) decays, and 1.9% are β− (electron) decays. The partial decay constants can be calculated from the branching fraction and the half-life of 132Cs (6.479 d); they are 0.10 d−1 (ε + β+) and 0.0020 d−1 (β−). The partial half-lives are 6.60 d (ε + β+) and 341 d (β−). Here the problem with the term partial half-life is evident: after (341+6.60) days almost all the nuclei will have decayed, not only half as one may initially think. Isotopes with significant branching of decay modes include copper-64, arsenic-74, rhodium-102, indium-112, iodine-126 and holmium-164. Branching fractions of atomic states. In the field of atomic, molecular, and optical physics, a branching fraction refers to the probability of decay to a specific lower-lying energy state from some excited state. Suppose we drive a transition in an atomic system to an excited state, which can decay into either the ground state or a long-lived state. If the probability to decay (the branching fraction) into the ground state is "p", then the probability to decay into the long-lived state would be 1 − "p". Further possible decays would split appropriately, with their probabilities summing to 1. In some instances, instead of a branching fraction, a branching ratio is used. In this case, the branching ratio is just the ratio of the branching fractions between two states. To use our example from before, if the branching fraction to the ground state is "p", then the branching ratio comparing the transition rates to the ground state and to the long-lived state would be "p"/(1 − "p"). Measurement. Branching fractions can be measured in a variety of ways, including time-resolved recording of the atom's fluorescence during a series of population transfers in the relevant states. A sample measuring procedure for a three-state Λ-system that includes a ground state, an excited state, and a long-lived state is as follows: First, prepare all atoms in the ground state. The pump laser, which drives the transition between the ground state and the excited state, is then turned on, and a photomultiplier (PMT) is used to count the "blue" photons emitted during the transition. Record the number of counted blue photons as N. Every time an atom is driven to the excited state, it has probability 1 − "p" of decaying to the long-lived state. Therefore, while the pump laser is on, more and more atoms end up in the long-lived state, where they cannot be addressed by the cooling laser. After all the atoms are in the long-lived state, apply the repump laser, which drives the transition between the long-lived state and the excited state. During this process each atom emits one blue photon. 
Denote the number of blue photons emitted during this process as n. Then the branching fraction for decaying to the ground state is formula_1 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
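The 132Cs numbers quoted above, and the photon-counting estimate just described, follow directly from the definitions. A short illustrative Python script (variable names are arbitrary):

```python
import math

half_life = 6.479                               # days, 132Cs
total_lambda = math.log(2) / half_life          # ~0.107 per day

branching = {"EC + beta+": 0.981, "beta-": 0.019}
for mode, fraction in branching.items():
    partial_lambda = fraction * total_lambda    # partial decay constant
    partial_half_life = math.log(2) / partial_lambda
    print(mode, round(partial_lambda, 4), "per day, partial half-life",
          round(partial_half_life, 1), "days")
# EC + beta+: ~0.105 per day, ~6.6 days;  beta-: ~0.0020 per day, ~341 days

def branching_from_photon_counts(N, n):
    # Shelving-style measurement described above: N photons during pumping, n during repumping.
    return N / (N + n)
```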
[ { "math_id": 0, "text": "t_{1/2} = \\frac{\\ln 2}{\\lambda}." }, { "math_id": 1, "text": "p = \\frac{N}{N+n}." } ]
https://en.wikipedia.org/wiki?curid=1100094
11001950
Łukasiewicz logic
In mathematics and philosophy, Łukasiewicz logic is a non-classical, many-valued logic. It was originally defined in the early 20th century by Jan Łukasiewicz as a three-valued modal logic; it was later generalized to "n"-valued (for all finite "n") as well as infinitely-many-valued (ℵ0-valued) variants, both propositional and first order. The ℵ0-valued version was published in 1930 by Łukasiewicz and Alfred Tarski; consequently it is sometimes called the Łukasiewicz–Tarski logic. It belongs to the classes of t-norm fuzzy logics and substructural logics. Łukasiewicz logic was motivated by Aristotle's suggestion that bivalent logic was not applicable to future contingents, e.g. the statement "There will be a sea battle tomorrow". In other words, statements about the future were neither true nor false, but an intermediate value could be assigned to them, to represent their possibility of becoming true in the future. This article presents the Łukasiewicz(–Tarski) logic in its full generality, i.e. as an infinite-valued logic. For an elementary introduction to the three-valued instantiation Ł3, see three-valued logic. Language. The propositional connectives of Łukasiewicz logic are formula_0 ("implication"), and the constant formula_1 ("false"). Additional connectives can be defined in terms of these: formula_2 The formula_3 and formula_4 connectives are called "weak" disjunction and conjunction, because they are non-classical, as the law of excluded middle does not hold for them. In the context of substructural logics, they are called "additive" connectives. They also correspond to lattice min/max connectives. In terms of substructural logics, there are also "strong" or "multiplicative" disjunction and conjunction connectives, although these are not part of Łukasiewicz's original presentation: formula_5 Modal operators can also be defined, using the "Tarskian Möglichkeit": formula_6 Axioms. The original system of axioms for propositional infinite-valued Łukasiewicz logic used implication and negation as the primitive connectives, along with modus ponens: formula_7 Propositional infinite-valued Łukasiewicz logic can also be axiomatized by adding the following axioms to the axiomatic system of monoidal t-norm logic: divisibility, formula_8, and double negation, formula_9. That is, infinite-valued Łukasiewicz logic arises by adding the axiom of double negation to basic fuzzy logic (BL), or by adding the axiom of divisibility to the logic IMTL. Finite-valued Łukasiewicz logics require additional axioms. Proof Theory. A hypersequent calculus for three-valued Łukasiewicz logic was introduced by Arnon Avron in 1991. Sequent calculi for finite and infinite-valued Łukasiewicz logics as an extension of linear logic were introduced by A. Prijatelj in 1994. However, these are not cut-free systems. Hypersequent calculi for Łukasiewicz logics were introduced by A. Ciabattoni et al. in 1999. However, these are not cut-free for formula_10 finite-valued logics. A labelled tableaux system was introduced by Nicola Olivetti in 2003. Real-valued semantics. Infinite-valued Łukasiewicz logic is a real-valued logic in which sentences from sentential calculus may be assigned a truth value of not only 0 or 1 but also any real number in between (e.g. 0.25). Valuations have a recursive definition where: formula_11 for a binary connective formula_12, formula_13 formula_14 and formula_15 and where the definitions of the operations hold as follows: implication formula_16, equivalence formula_17, negation formula_18, weak conjunction formula_19, weak disjunction formula_20, strong conjunction formula_21, strong disjunction formula_22, and the modal operators formula_23. The truth function formula_24 of strong conjunction is the Łukasiewicz t-norm and the truth function formula_25 of strong disjunction is its dual t-conorm. 
Obviously, formula_26 and formula_27, so if formula_28, then formula_29, while the respective classically equivalent propositions have formula_30. The truth function formula_31 is the residuum of the Łukasiewicz t-norm. All truth functions of the basic connectives are continuous. By definition, a formula is a tautology of infinite-valued Łukasiewicz logic if it evaluates to 1 under each valuation of propositional variables by real numbers in the interval [0, 1]. Finite-valued and countable-valued semantics. Using exactly the same valuation formulas as for real-valued semantics, Łukasiewicz (1922) also defined (up to isomorphism) semantics over any finite set of cardinality "n" ≥ 2, by choosing the domain as { 0, 1/("n" − 1), 2/("n" − 1), ..., 1 }, and over any countable set, by choosing the domain as the set of rationals in the interval [0, 1]. General algebraic semantics. The standard real-valued semantics determined by the Łukasiewicz t-norm is not the only possible semantics of Łukasiewicz logic. General algebraic semantics of propositional infinite-valued Łukasiewicz logic is formed by the class of all MV-algebras. The standard real-valued semantics is a special MV-algebra, called the "standard MV-algebra". Like other t-norm fuzzy logics, propositional infinite-valued Łukasiewicz logic enjoys completeness with respect to the class of all algebras for which the logic is sound (that is, MV-algebras) as well as with respect to only linear ones. This is expressed by the general, linear, and standard completeness theorems: The following conditions are equivalent: * formula_32 is provable in propositional infinite-valued Łukasiewicz logic * formula_32 is valid in all MV-algebras ("general completeness") * formula_32 is valid in all linearly ordered MV-algebras ("linear completeness") * formula_32 is valid in the standard MV-algebra ("standard completeness"). Here "valid" means "necessarily evaluates to 1". Font, Rodriguez and Torrens introduced in 1984 the Wajsberg algebra as an alternative model for the infinite-valued Łukasiewicz logic. A 1940s attempt by Grigore Moisil to provide algebraic semantics for the "n"-valued Łukasiewicz logic by means of his Łukasiewicz–Moisil (LM) algebra (which Moisil called "Łukasiewicz algebras") turned out to be an incorrect model for "n" ≥ 5. This issue was made public by Alan Rose in 1956. C. C. Chang's MV-algebra, which is a model for the ℵ0-valued (infinitely-many-valued) Łukasiewicz–Tarski logic, was published in 1958. For the axiomatically more complicated (finite) "n"-valued Łukasiewicz logics, suitable algebras were published in 1977 by Revaz Grigolia and called MV"n"-algebras. MV"n"-algebras are a subclass of LM"n"-algebras, and the inclusion is strict for "n" ≥ 5. In 1982 Roberto Cignoli published some additional constraints that, when added to LM"n"-algebras, produce proper models for "n"-valued Łukasiewicz logic; Cignoli called his discovery "proper Łukasiewicz algebras". Complexity. Łukasiewicz logics are co-NP-complete. Modal Logic. Łukasiewicz logics can be seen as modal logics, a type of logic that addresses possibility, using the defined operators, formula_33 A third "doubtful" operator has been proposed, formula_34. From these we can prove the following theorems, which are common axioms in many modal logics: formula_35 We can also prove distribution theorems on the strong connectives: formula_36 However, the following distribution theorems also hold: formula_37 In other words, if formula_38, then formula_39, which is counter-intuitive. However, these controversial theorems have been defended as a modal logic about future contingents by A. N. Prior. Notably, formula_40. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
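The real-valued truth functions are simple enough to experiment with directly. The Python sketch below (an illustration; the function names are arbitrary) evaluates the connectives on sample truth values, reproducing the behaviour of a half-true proposition under the strong connectives and showing how the Łukasiewicz implication degrades gradually:

```python
# Lukasiewicz truth functions on the real unit interval [0, 1]
def imp(x, y):   return min(1.0, 1.0 - x + y)      # implication
def neg(x):      return 1.0 - x                    # negation
def conj(x, y):  return max(0.0, x + y - 1.0)      # strong conjunction (Lukasiewicz t-norm)
def disj(x, y):  return min(1.0, x + y)            # strong disjunction (dual t-conorm)
def wconj(x, y): return min(x, y)                  # weak conjunction
def wdisj(x, y): return max(x, y)                  # weak disjunction

p = 0.5
print(conj(p, p), disj(p, p))               # 0.0 1.0 for a half-true proposition
print(wdisj(p, neg(p)), disj(p, neg(p)))    # 0.5 1.0: weak excluded middle fails, strong holds
print(imp(0.7, 0.4), imp(0.4, 0.7))         # ~0.7 and 1.0: truth degrades gradually
```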
[ { "math_id": 0, "text": "\\rightarrow" }, { "math_id": 1, "text": "\\bot" }, { "math_id": 2, "text": "\n\\begin{align}\n\\neg A & =_{def} A \\rightarrow \\bot \\\\\nA \\vee B & =_{def} (A \\rightarrow B) \\rightarrow B \\\\\nA \\wedge B & =_{def} \\neg( \\neg A \\vee \\neg B) \\\\\nA \\leftrightarrow B &=_{def} (A \\rightarrow B) \\wedge (B \\rightarrow A) \\\\\n\\top & =_{def} \\bot \\rightarrow \\bot\n\\end{align}\n" }, { "math_id": 3, "text": "\\vee" }, { "math_id": 4, "text": "\\wedge" }, { "math_id": 5, "text": "\\begin{align}\nA \\oplus B &=_{def} \\neg A \\rightarrow B \\\\\nA \\otimes B &=_{def} \\neg (A \\rightarrow \\neg B)\n\\end{align}\n" }, { "math_id": 6, "text": "\\begin{align}\n\\Diamond A &=_{def} \\neg A \\rightarrow A \\\\\n\\Box A &=_{def} \\neg \\Diamond \\neg A\n\\end{align}\n" }, { "math_id": 7, "text": "\\begin{align}\n A &\\rightarrow (B \\rightarrow A) \\\\\n (A \\rightarrow B) &\\rightarrow ((B \\rightarrow C) \\rightarrow (A \\rightarrow C)) \\\\\n ((A \\rightarrow B) \\rightarrow B) &\\rightarrow ((B \\rightarrow A) \\rightarrow A) \\\\\n (\\neg B \\rightarrow \\neg A) &\\rightarrow (A \\rightarrow B).\n\\end{align}" }, { "math_id": 8, "text": "(A \\wedge B) \\rightarrow (A \\otimes (A \\rightarrow B))" }, { "math_id": 9, "text": "\\neg\\neg A \\rightarrow A." }, { "math_id": 10, "text": "n > 3" }, { "math_id": 11, "text": "w(\\theta \\circ \\phi) = F_\\circ(w(\\theta), w(\\phi))" }, { "math_id": 12, "text": "\\circ," }, { "math_id": 13, "text": "w(\\neg\\theta) = F_\\neg(w(\\theta))," }, { "math_id": 14, "text": "w\\left(\\overline{0}\\right) = 0" }, { "math_id": 15, "text": "w\\left(\\overline{1}\\right) = 1," }, { "math_id": 16, "text": "F_\\rightarrow(x,y) = \\min\\{1, 1-x+y\\}" }, { "math_id": 17, "text": "F_\\leftrightarrow(x, y) = 1-|x-y|" }, { "math_id": 18, "text": "F_\\neg(x) = 1-x" }, { "math_id": 19, "text": "F_\\wedge(x, y) = \\min\\{x, y\\}" }, { "math_id": 20, "text": "F_\\vee(x, y) = \\max\\{x, y\\}" }, { "math_id": 21, "text": "F_\\otimes(x, y) = \\max\\{0, x+y-1\\}" }, { "math_id": 22, "text": "F_\\oplus(x, y) = \\min\\{1, x+y\\}." 
}, { "math_id": 23, "text": "F_\\Diamond(x) = \\min\\{1,2x\\}, F_\\Box(x) = \\max\\{0, 2x-1\\}" }, { "math_id": 24, "text": "F_\\otimes" }, { "math_id": 25, "text": "F_\\oplus" }, { "math_id": 26, "text": "F_\\otimes(.5,.5) = 0" }, { "math_id": 27, "text": "F_\\oplus(.5,.5)=1" }, { "math_id": 28, "text": "T(p)=.5" }, { "math_id": 29, "text": "T(p\\wedge p)=T(\\neg p \\wedge \\neg p) = 0" }, { "math_id": 30, "text": "T(p\\vee p)= T(\\neg p\\vee \\neg p) = 1" }, { "math_id": 31, "text": "F_\\rightarrow" }, { "math_id": 32, "text": "A" }, { "math_id": 33, "text": "\\begin{align}\n\n\\Diamond A &=_{def} \\neg A \\rightarrow A \\\\\n\\Box A &=_{def} \\neg \\Diamond \\neg A \\\\\n\\end{align}\n" }, { "math_id": 34, "text": "\\odot A =_{def} A \\leftrightarrow \\neg A " }, { "math_id": 35, "text": "\\begin{align}\nA & \\rightarrow \\Diamond A \\\\\n\\Box A & \\rightarrow A \\\\\nA & \\rightarrow (A \\rightarrow \\Box A) \\\\\n\\Box (A \\rightarrow B) & \\rightarrow (\\Box A \\rightarrow \\Box B) \\\\\n\\Box (A \\rightarrow B) & \\rightarrow (\\Diamond A \\rightarrow \\Diamond B) \\\\\n\\end{align}\n" }, { "math_id": 36, "text": "\\begin{align}\n\\Box (A \\otimes B) & \\leftrightarrow \\Box A \\otimes \\Box B \\\\\n\\Diamond (A \\oplus B) & \\leftrightarrow \\Diamond A \\oplus \\Diamond B \\\\\n\\Diamond (A \\otimes B) & \\rightarrow \\Diamond A \\otimes \\Diamond B \\\\\n\\Box A \\oplus \\Box B & \\rightarrow \\Box (A \\oplus B)\n\\end{align}\n" }, { "math_id": 37, "text": "\\begin{align}\n\\Box A \\vee \\Box B & \\leftrightarrow \\Box (A \\vee B) \\\\\n\\Box A \\wedge \\Box B & \\leftrightarrow \\Box (A \\wedge B) \\\\\n\\Diamond A \\vee \\Diamond B & \\leftrightarrow \\Diamond (A \\vee B) \\\\\n\\Diamond A \\wedge \\Diamond B & \\leftrightarrow \\Diamond (A \\wedge B) \n\\end{align}\n" }, { "math_id": 38, "text": "\\Diamond A \\wedge \\Diamond \\neg A" }, { "math_id": 39, "text": "\\Diamond (A \\wedge \\neg A)" }, { "math_id": 40, "text": "\\Diamond A \\wedge \\Diamond \\neg A \\leftrightarrow \\odot A" } ]
https://en.wikipedia.org/wiki?curid=11001950
11002752
Krogmann's salt
<templatestyles src="Chembox/styles.css"/> Chemical compound Krogmann's salt is a linear chain compound consisting of stacks of tetracyanoplatinate. Sometimes described as molecular wires, Krogmann's salt exhibits highly anisotropic electrical conductivity. For this reason, Krogmann's salt and related materials are of some interest in nanotechnology. History and nomenclature. Krogmann's salt was first synthesized by Klaus Krogmann in the late 1960s. Krogmann's salt most commonly refers to a platinum metal complex of the formula K2[Pt(CN)4X0.3] where X is usually bromine (or sometimes chlorine). Many other non-stoichiometric metal salts containing the anionic complex [Pt(CN)4]n− can also be characterized. Structure and physical properties. Krogmann's salt is a series of partially oxidized tetracyanoplatinate complexes linked by the platinum-platinum bonds on the top and bottom faces of the planar [Pt(CN)4]n− anions. This salt forms infinite stacks in the solid state based on the overlap of the dz2 orbitals. Krogmann's salt has a tetragonal crystal structure with a Pt-Pt distance of 2.880 angstroms, which is much shorter than the metal-metal bond distances in other planar platinum complexes such as Ca[Pt(CN)4]·5H2O (3.36 angstroms), Sr[Pt(CN)4]·5H2O (3.58 angstroms), and Mg[Pt(CN)4]·7H2O (3.16 angstroms). The Pt-Pt distance in Krogmann's salt is only 0.1 angstroms longer than in platinum metal. Each unit cell contains a site for Cl−, corresponding to 0.5 Cl− per Pt. However, this site is only filled 64% of the time, giving 0.32 Cl− per Pt in the actual compound. Because of this, the oxidation number of Pt does not rise above +2.32. Krogmann's salt has no recognizable phase range and is characterized by broad and intense intervalence bands in its electronic spectra. Chemical properties. One of the most widely researched properties of Krogmann's salt is its unusual electrical conductivity. Because of its linear chain structure and overlap of the platinum formula_0 orbitals, Krogmann's salt is an excellent conductor of electricity. This property makes it an attractive material for nanotechnology. Preparation. The usual preparation of Krogmann's salt involves the evaporation of a 5:1 molar ratio mixture of the salts K2[Pt(CN)4] and K2[Pt(CN)4Br2] in water to give copper-colored needles of K2[Pt(CN)4]Br0.32·2.6 H2O. 5K2[Pt(CN)4] + K2[Pt(CN)4Br2] + 15.6 H2O → 6K2[Pt(CN)4]Br0.32·2.6 H2O Because excess PtII or PtIV complex crystallizes out with the product when the reactant ratio is changed, the product is well defined, although non-stoichiometric. Uses. Neither Krogmann's salt nor any related material has found any commercial applications.
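As a small bookkeeping check, the occupancy figures quoted above can be combined in a few lines of Python; this is only an illustration (the variable names are invented for the example, and no crystallographic software is involved) of how the partial halide occupancy fixes the average platinum oxidation state.

```python
# Arithmetic check of the occupancy figures quoted above (illustrative only).
halide_sites_per_pt = 0.5      # one anion site per unit cell corresponds to 0.5 per Pt
occupancy = 0.64               # the site is filled about 64% of the time

halide_per_pt = halide_sites_per_pt * occupancy   # 0.32 Br/Cl per Pt
avg_pt_oxidation_state = 2 + halide_per_pt        # charge balance in K2[Pt(CN)4]X0.32

print(halide_per_pt)             # 0.32
print(avg_pt_oxidation_state)    # 2.32
```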
[ { "math_id": 0, "text": "d_{z^2}" } ]
https://en.wikipedia.org/wiki?curid=11002752
11004
Fundamental group
Mathematical group of the homotopy classes of loops in a topological space In the mathematical field of algebraic topology, the fundamental group of a topological space is the group of the equivalence classes under homotopy of the loops contained in the space. It records information about the basic shape, or holes, of the topological space. The fundamental group is the first and simplest homotopy group. The fundamental group is a homotopy invariant—topological spaces that are homotopy equivalent (or the stronger case of homeomorphic) have isomorphic fundamental groups. The fundamental group of a topological space formula_0 is denoted by formula_1. Intuition. Start with a space (for example, a surface), and some point in it, and all the loops both starting and ending at this point—paths that start at this point, wander around and eventually return to the starting point. Two loops can be combined in an obvious way: travel along the first loop, then along the second. Two loops are considered equivalent if one can be deformed into the other without breaking. The set of all such loops with this method of combining and this equivalence between them is the fundamental group for that particular space. History. Henri Poincaré defined the fundamental group in 1895 in his paper "Analysis situs". The concept emerged in the theory of Riemann surfaces, in the work of Bernhard Riemann, Poincaré, and Felix Klein. It describes the monodromy properties of complex-valued functions, as well as providing a complete topological classification of closed surfaces. Definition. Throughout this article, "X" is a topological space. A typical example is a surface such as the one depicted at the right. Moreover, formula_2 is a point in "X" called the "base-point". (As is explained below, its role is rather auxiliary.) The idea of the definition of the homotopy group is to measure how many (broadly speaking) curves on "X" can be deformed into each other. The precise definition depends on the notion of the homotopy of loops, which is explained first. Homotopy of loops. Given a topological space "X", a "loop based at formula_2" is defined to be a continuous function (also known as a continuous map) formula_3 such that the starting point formula_4 and the end point formula_5 are both equal to formula_2. A "homotopy" is a continuous interpolation between two loops. More precisely, a homotopy between two loops formula_6 (based at the same point formula_2) is a continuous map formula_7 such that If such a homotopy "h" exists, formula_13 and formula_14 are said to be "homotopic". The relation "formula_13 is homotopic to formula_14" is an equivalence relation so that the set of equivalence classes can be considered: formula_15. This set (with the group structure described below) is called the "fundamental group" of the topological space "X" at the base point formula_2. The purpose of considering the equivalence classes of loops up to homotopy, as opposed to the set of all loops (the so-called loop space of "X") is that the latter, while being useful for various purposes, is a rather big and unwieldy object. By contrast the above quotient is, in many cases, more manageable and computable. Group structure. By the above definition, formula_16 is just a set. It becomes a group (and therefore deserves the name fundamental "group") using the concatenation of loops. 
More precisely, given two loops formula_17, their product is defined as the loop formula_18 formula_19 Thus the loop formula_20 first follows the loop formula_21 with "twice the speed" and then follows formula_22 with "twice the speed". The product of two homotopy classes of loops formula_23 and formula_24 is then defined as formula_25. It can be shown that this product does not depend on the choice of representatives and therefore gives a well-defined operation on the set formula_16. This operation turns formula_16 into a group. Its neutral element is the constant loop, which stays at formula_2 for all times "t". The inverse of a (homotopy class of a) loop is the same loop, but traversed in the opposite direction. More formally, formula_26. Given three based loops formula_27 the product formula_28 is the concatenation of these loops, traversing formula_21 and then formula_22 with quadruple speed, and then formula_29 with double speed. By comparison, formula_30 traverses the same paths (in the same order), but formula_21 with double speed, and formula_31 with quadruple speed. Thus, because of the differing speeds, the two paths are not identical. The associativity axiom formula_32 therefore crucially depends on the fact that paths are considered up to homotopy. Indeed, both above composites are homotopic, for example, to the loop that traverses all three loops formula_33 with triple speed. The set of based loops up to homotopy, equipped with the above operation therefore does turn formula_16 into a group. Dependence of the base point. Although the fundamental group in general depends on the choice of base point, it turns out that, up to isomorphism (actually, even up to "inner" isomorphism), this choice makes no difference as long as the space "X" is path-connected. For path-connected spaces, therefore, many authors write formula_1 instead of formula_34 Concrete examples. This section lists some basic examples of fundamental groups. To begin with, in Euclidean space (formula_35) or any convex subset of formula_36 there is only one homotopy class of loops, and the fundamental group is therefore the trivial group with one element. More generally, any star domain – and yet more generally, any contractible space – has a trivial fundamental group. Thus, the fundamental group does not distinguish between such spaces. The 2-sphere. A path-connected space whose fundamental group is trivial is called simply connected. For example, the 2-sphere formula_37 depicted on the right, and also all the higher-dimensional spheres, are simply-connected. The figure illustrates a homotopy contracting one particular loop to the constant loop. This idea can be adapted to all loops formula_13 such that there is a point formula_38 that is not in the image of formula_39 However, since there are loops such that formula_40 (constructed from the Peano curve, for example), a complete proof requires more careful analysis with tools from algebraic topology, such as the Seifert–van Kampen theorem or the cellular approximation theorem. The circle. The circle (also known as the 1-sphere) formula_41 is not simply connected. Instead, each homotopy class consists of all loops that wind around the circle a given number of times (which can be positive or negative, depending on the direction of winding). The product of a loop that winds around "m" times and another that winds around "n" times is a loop that winds around "m" + "n" times. 
Therefore, the fundamental group of the circle is isomorphic to formula_42 the additive group of integers. This fact can be used to give proofs of the Brouwer fixed point theorem and the Borsuk–Ulam theorem in dimension 2. The figure eight. The fundamental group of the figure eight is the free group on two letters. The idea to prove this is as follows: choosing the base point to be the point where the two circles meet (dotted in black in the picture at the right), any loop formula_13 can be decomposed as formula_43 where "a" and "b" are the two loops winding around each half of the figure as depicted, and the exponents formula_44 are integers. Unlike formula_45 the fundamental group of the figure eight is "not" abelian: the two ways of composing "a" and "b" are not homotopic to each other: formula_46 More generally, the fundamental group of a bouquet of "r" circles is the free group on "r" letters. The fundamental group of a wedge sum of two path connected spaces "X" and "Y" can be computed as the free product of the individual fundamental groups: formula_47 This generalizes the above observations since the figure eight is the wedge sum of two circles. The fundamental group of the plane punctured at "n" points is also the free group with "n" generators. The "i"-th generator is the class of the loop that goes around the "i"-th puncture without going around any other punctures. Graphs. The fundamental group can be defined for discrete structures too. In particular, consider a connected graph "G" ("V", "E"), with a designated vertex "v"0 in "V". The loops in "G" are the cycles that start and end at "v"0. Let "T" be a spanning tree of "G". Every simple loop in "G" contains exactly one edge in "E" \ "T"; every loop in "G" is a concatenation of such simple loops. Therefore, the fundamental group of a graph is a free group, in which the number of generators is exactly the number of edges in "E" \ "T". This number equals |"E"| − |"V"| + 1. For example, suppose "G" has 16 vertices arranged in 4 rows of 4 vertices each, with edges connecting vertices that are adjacent horizontally or vertically. Then "G" has 24 edges overall, and the number of edges in each spanning tree is 16 − 1 15, so the fundamental group of "G" is the free group with 9 generators. Note that "G" has 9 "holes", similarly to a bouquet of 9 circles, which has the same fundamental group. Knot groups. "Knot groups" are by definition the fundamental group of the complement of a knot formula_48 embedded in formula_49 For example, the knot group of the trefoil knot is known to be the braid group formula_50 which gives another example of a non-abelian fundamental group. The Wirtinger presentation explicitly describes knot groups in terms of generators and relations based on a diagram of the knot. Therefore, knot groups have some usage in knot theory to distinguish between knots: if formula_51 is not isomorphic to some other knot group formula_52 of another knot formula_53, then formula_48 can not be transformed into formula_53. Thus the trefoil knot can not be continuously transformed into the circle (also known as the unknot), since the latter has knot group formula_54. There are, however, knots that can not be deformed into each other, but have isomorphic knot groups. Oriented surfaces. The fundamental group of a genus-"n" orientable surface can be computed in terms of generators and relations as formula_55 This includes the torus, being the case of genus 1, whose fundamental group is formula_56 Topological groups. 
The fundamental group of a topological group "X" (with respect to the base point being the neutral element) is always commutative. In particular, the fundamental group of a Lie group is commutative. In fact, the group structure on "X" endows formula_1 with another group structure: given two loops formula_13 and formula_14 in "X", another loop formula_57 can defined by using the group multiplication in "X": formula_58 This binary operation formula_59 on the set of all loops is "a priori" independent from the one described above. However, the Eckmann–Hilton argument shows that it does in fact agree with the above concatenation of loops, and moreover that the resulting group structure is abelian. An inspection of the proof shows that, more generally, formula_1 is abelian for any H-space "X", i.e., the multiplication need not have an inverse, nor does it have to be associative. For example, this shows that the fundamental group of a loop space of another topological space "Y", formula_60 is abelian. Related ideas lead to Heinz Hopf's computation of the cohomology of a Lie group. Functoriality. If formula_61 is a continuous map, formula_62 and formula_63 with formula_64 then every loop in formula_0 with base point formula_2 can be composed with formula_65 to yield a loop in formula_66 with base point formula_67 This operation is compatible with the homotopy equivalence relation and with composition of loops. The resulting group homomorphism, called the induced homomorphism, is written as formula_68 or, more commonly, formula_69 This mapping from continuous maps to group homomorphisms is compatible with composition of maps and identity morphisms. In the parlance of category theory, the formation of associating to a topological space its fundamental group is therefore a functor formula_70 from the category of topological spaces together with a base point to the category of groups. It turns out that this functor does not distinguish maps that are homotopic relative to the base point: if formula_71 are continuous maps with formula_72, and "f" and "g" are homotopic relative to {"x"0}, then "f"∗ = "g"∗. As a consequence, two homotopy equivalent path-connected spaces have isomorphic fundamental groups: formula_73 For example, the inclusion of the circle in the punctured plane formula_74 is a homotopy equivalence and therefore yields an isomorphism of their fundamental groups. The fundamental group functor takes products to products and coproducts to coproducts. That is, if "X" and "Y" are path connected, then formula_75 and if they are also locally contractible, then formula_76 (In the latter formula, formula_77 denotes the wedge sum of pointed topological spaces, and formula_78 the free product of groups.) The latter formula is a special case of the Seifert–van Kampen theorem, which states that the fundamental group functor takes pushouts along inclusions to pushouts. Abstract results. As was mentioned above, computing the fundamental group of even relatively simple topological spaces tends to be not entirely trivial, but requires some methods of algebraic topology. Relationship to first homology group. The abelianization of the fundamental group can be identified with the first homology group of the space. A special case of the Hurewicz theorem asserts that the first singular homology group formula_79 is, colloquially speaking, the closest approximation to the fundamental group by means of an abelian group. 
In more detail, mapping the homotopy class of each loop to the homology class of the loop gives a group homomorphism formula_80 from the fundamental group of a topological space "X" to its first singular homology group formula_81 This homomorphism is not in general an isomorphism since the fundamental group may be non-abelian, but the homology group is, by definition, always abelian. This difference is, however, the only one: if "X" is path-connected, this homomorphism is surjective and its kernel is the commutator subgroup of the fundamental group, so that formula_79 is isomorphic to the abelianization of the fundamental group. Gluing topological spaces. Generalizing the statement above, for a family of path connected spaces formula_82 the fundamental group formula_83 is the free product of the fundamental groups of the formula_84 This fact is a special case of the Seifert–van Kampen theorem, which allows to compute, more generally, fundamental groups of spaces that are glued together from other spaces. For example, the 2-sphere formula_85 can be obtained by gluing two copies of slightly overlapping half-spheres along a neighborhood of the equator. In this case the theorem yields formula_86 is trivial, since the two half-spheres are contractible and therefore have trivial fundamental group. The fundamental groups of surfaces, as mentioned above, can also be computed using this theorem. In the parlance of category theory, the theorem can be concisely stated by saying that the fundamental group functor takes pushouts (in the category of topological spaces) along inclusions to pushouts (in the category of groups). Coverings. Given a topological space "B", a continuous map formula_87 is called a "covering" or "E" is called a "covering space" of "B" if every point "b" in "B" admits an open neighborhood "U" such that there is a homeomorphism between the preimage of "U" and a disjoint union of copies of "U" (indexed by some set "I"), formula_88 in such a way that formula_89 is the standard projection map formula_90 Universal covering. A covering is called a universal covering if "E" is, in addition to the preceding condition, simply connected. It is universal in the sense that all other coverings can be constructed by suitably identifying points in "E". Knowing a universal covering formula_91 of a topological space "X" is helpful in understanding its fundamental group in several ways: first, formula_1 identifies with the group of deck transformations, i.e., the group of homeomorphisms formula_92 that commute with the map to "X", i.e., formula_93 Another relation to the fundamental group is that formula_94 can be identified with the fiber formula_95 For example, the map formula_96 (or, equivalently, formula_97) is a universal covering. The deck transformations are the maps formula_98 for formula_99 This is in line with the identification formula_100 in particular this proves the above claim formula_101 Any path connected, locally path connected and locally simply connected topological space "X" admits a universal covering. An abstract construction proceeds analogously to the fundamental group by taking pairs ("x", γ), where "x" is a point in "X" and γ is a homotopy class of paths from "x"0 to "x". The passage from a topological space to its universal covering can be used in understanding the geometry of "X". For example, the uniformization theorem shows that any simply connected Riemann surface is (isomorphic to) either formula_102 formula_103 or the upper half plane. 
General Riemann surfaces then arise as quotients of group actions on these three surfaces. The quotient of a free action of a discrete group "G" on a simply connected space "Y" has fundamental group formula_104 As an example, the "n"-dimensional real projective space formula_105 is obtained as the quotient of the "n"-dimensional unit sphere formula_106 by the antipodal action of the group formula_107 sending formula_108 to formula_109 As formula_106 is simply connected for "n" ≥ 2, it is a universal cover of formula_105 in these cases, which implies formula_110 for "n" ≥ 2. Lie groups. Let "G" be a connected, simply connected compact Lie group, for example, the special unitary group SU("n"), and let Γ be a finite subgroup of "G". Then the homogeneous space "X" = "G"/Γ has fundamental group Γ, which acts by right multiplication on the universal covering space "G". Among the many variants of this construction, one of the most important is given by locally symmetric spaces "X" = Γ&amp;hairsp;\"G"/"K", where "G" is a connected, non-compact, semisimple Lie group, "K" is a maximal compact subgroup of "G", and Γ is a discrete, torsion-free subgroup of "G". In this case the fundamental group is Γ and the universal covering space "G"/"K" is actually contractible (by the Cartan decomposition for Lie groups). As an example take "G" = SL(2, R), "K" = SO(2) and Γ any torsion-free congruence subgroup of the modular group SL(2, Z). From the explicit realization, it also follows that the universal covering space of a path connected topological group "H" is again a path connected topological group "G". Moreover, the covering map is a continuous open homomorphism of "G" onto "H" with kernel Γ, a closed discrete normal subgroup of "G": formula_111 Since "G" is a connected group with a continuous action by conjugation on a discrete group Γ, it must act trivially, so that Γ has to be a subgroup of the center of "G". In particular π1("H") = Γ is an abelian group; this can also easily be seen directly without using covering spaces. The group "G" is called the "universal covering group" of "H". As the universal covering group suggests, there is an analogy between the fundamental group of a topological group and the center of a group; this is elaborated at Lattice of covering groups. Fibrations. "Fibrations" provide a very powerful means to compute homotopy groups. A fibration "f" from the so-called "total space" "E" to the base space "B" has, in particular, the property that all its fibers formula_112 are homotopy equivalent and therefore can not be distinguished using fundamental groups (and higher homotopy groups), provided that "B" is path-connected. Therefore, the space "E" can be regarded as a "twisted product" of the base space "B" and the fiber formula_113 The great importance of fibrations to the computation of homotopy groups stems from a long exact sequence formula_114 provided that "B" is path-connected. The term formula_115 is the second homotopy group of "B", which is defined to be the set of homotopy classes of maps from formula_85 to "B", in direct analogy with the definition of formula_116 If "E" happens to be path-connected and simply connected, this sequence reduces to an isomorphism formula_117 which generalizes the above fact about the universal covering (which amounts to the case where the fiber "F" is also discrete). If instead "F" happens to be connected and simply connected, it reduces to an isomorphism formula_118 What is more, the sequence can be continued at the left with the higher homotopy groups formula_119 of the three spaces, which gives some access to computing such groups in the same vein. Classical Lie groups.
Such fiber sequences can be used to inductively compute fundamental groups of compact classical Lie groups such as the special unitary group formula_120 with formula_121 This group acts transitively on the unit sphere formula_122 inside formula_123 The stabilizer of a point in the sphere is isomorphic to formula_124 It then can be shown that this yields a fiber sequence formula_125 Since formula_126 the sphere formula_122 has dimension at least 3, which implies formula_127 The long exact sequence then shows an isomorphism formula_128 Since formula_129 is a single point, so that formula_130 is trivial, this shows that formula_131 is simply connected for all formula_132 The fundamental group of noncompact Lie groups can be reduced to the compact case, since such a group is homotopic to its maximal compact subgroup. These methods give the following results: A second method of computing fundamental groups applies to all connected compact Lie groups and uses the machinery of the maximal torus and the associated root system. Specifically, let formula_134 be a maximal torus in a connected compact Lie group formula_135 and let formula_136 be the Lie algebra of formula_137 The exponential map formula_138 is a fibration and therefore its kernel formula_139 identifies with formula_140 The map formula_141 can be shown to be surjective with kernel given by the set "I" of integer linear combination of coroots. This leads to the computation formula_142 This method shows, for example, that any connected compact Lie group for which the associated root system is of type formula_143 is simply connected. Thus, there is (up to isomorphism) only one connected compact Lie group having Lie algebra of type formula_143; this group is simply connected and has trivial center. Edge-path group of a simplicial complex. When the topological space is homeomorphic to a simplicial complex, its fundamental group can be described explicitly in terms of generators and relations. If "X" is a connected simplicial complex, an "edge-path" in "X" is defined to be a chain of vertices connected by edges in "X". Two edge-paths are said to be "edge-equivalent" if one can be obtained from the other by successively switching between an edge and the two opposite edges of a triangle in "X". If "v" is a fixed vertex in "X", an "edge-loop" at "v" is an edge-path starting and ending at "v". The edge-path group "E"("X", "v") is defined to be the set of edge-equivalence classes of edge-loops at "v", with product and inverse defined by concatenation and reversal of edge-loops. The edge-path group is naturally isomorphic to π1(|"X"&amp;hairsp;|, "v"), the fundamental group of the geometric realisation |"X"&amp;hairsp;| of "X". Since it depends only on the 2-skeleton "X" 2 of "X" (that is, the vertices, edges, and triangles of "X"), the groups π1(|"X"&amp;hairsp;|,"v") and π1(|"X" 2|, "v") are isomorphic. The edge-path group can be described explicitly in terms of generators and relations. If "T" is a maximal spanning tree in the 1-skeleton of "X", then "E"("X", "v") is canonically isomorphic to the group with generators (the oriented edge-paths of "X" not occurring in "T") and relations (the edge-equivalences corresponding to triangles in "X"). A similar result holds if "T" is replaced by any simply connected—in particular contractible—subcomplex of "X". This often gives a practical way of computing fundamental groups and can be used to show that every finitely presented group arises as the fundamental group of a finite simplicial complex. 
It is also one of the classical methods used for topological surfaces, which are classified by their fundamental groups. The "universal covering space" of a finite connected simplicial complex "X" can also be described directly as a simplicial complex using edge-paths. Its vertices are pairs ("w",γ) where "w" is a vertex of "X" and γ is an edge-equivalence class of paths from "v" to "w". The "k"-simplices containing ("w",γ) correspond naturally to the "k"-simplices containing "w". Each new vertex "u" of the "k"-simplex gives an edge "wu" and hence, by concatenation, a new path γ"u" from "v" to "u". The points ("w",γ) and ("u", γ"u") are the vertices of the "transported" simplex in the universal covering space. The edge-path group acts naturally by concatenation, preserving the simplicial structure, and the quotient space is just "X". It is well known that this method can also be used to compute the fundamental group of an arbitrary topological space. This was doubtless known to Eduard Čech and Jean Leray and explicitly appeared as a remark in a paper by André Weil; various other authors such as Lorenzo Calabi, Wu Wen-tsün, and Nodar Berikashvili have also published proofs. In the simplest case of a compact space "X" with a finite open covering in which all non-empty finite intersections of open sets in the covering are contractible, the fundamental group can be identified with the edge-path group of the simplicial complex corresponding to the nerve of the covering. Related concepts. Higher homotopy groups. Roughly speaking, the fundamental group detects the 1-dimensional hole structure of a space, but not higher-dimensional holes such as for the 2-sphere. Such "higher-dimensional holes" can be detected using the higher homotopy groups formula_144, which are defined to consist of homotopy classes of (basepoint-preserving) maps from formula_106 to "X". For example, the Hurewicz theorem implies that for all formula_145 the "n"-th homotopy group of the "n"-sphere is formula_146 As was mentioned in the above computation of formula_133 of classical Lie groups, higher homotopy groups can be relevant even for computing fundamental groups. Loop space. The set of based loops (as is, i.e. not taken up to homotopy) in a pointed space "X", endowed with the compact open topology, is known as the loop space, denoted formula_147 The fundamental group of "X" is in bijection with the set of path components of its loop space: formula_148 Fundamental groupoid. The "fundamental groupoid" is a variant of the fundamental group that is useful in situations where the choice of a base point formula_62 is undesirable. It is defined by first considering the category of paths in formula_149 i.e., continuous functions formula_150, where "r" is an arbitrary non-negative real number. Since the length "r" is variable in this approach, such paths can be concatenated as is (i.e., not up to homotopy) and therefore yield a category. Two such paths formula_151 with the same endpoints and length "r", resp. "r"' are considered equivalent if there exist real numbers formula_152 such that formula_153 and formula_154 are homotopic relative to their end points, where formula_155 The category of paths up to this equivalence relation is denoted formula_156 Each morphism in formula_157 is an isomorphism, with inverse given by the same path traversed in the opposite direction. Such a category is called a groupoid. It reproduces the fundamental group since formula_158. 
More generally, one can consider the fundamental groupoid on a set "A" of base points, chosen according to the geometry of the situation; for example, in the case of the circle, which can be represented as the union of two connected open sets whose intersection has two components, one can choose one base point in each component. The van Kampen theorem admits a version for fundamental groupoids which gives, for example, another way to compute the fundamental group(oid) of formula_159 Local systems. Generally speaking, representations may serve to exhibit features of a group by its actions on other mathematical objects, often vector spaces. Representations of the fundamental group have a very geometric significance: any "local system" (i.e., a sheaf formula_160 on "X" with the property that locally in a sufficiently small neighborhood "U" of any point on "X", the restriction of "F" is a constant sheaf of the form formula_161) gives rise to the so-called monodromy representation, a representation of the fundamental group on an "n"-dimensional formula_162-vector space. Conversely, any such representation on a path-connected space "X" arises in this manner. This equivalence of categories between representations of formula_1 and local systems is used, for example, in the study of differential equations, such as the Knizhnik–Zamolodchikov equations. Étale fundamental group. In algebraic geometry, the so-called étale fundamental group is used as a replacement for the fundamental group. Since the Zariski topology on an algebraic variety or scheme "X" is much coarser than, say, the topology of open subsets in formula_36 it is no longer meaningful to consider continuous maps from an interval to "X". Instead, the approach developed by Grothendieck consists in constructing formula_163 by considering all finite étale covers of "X". These serve as an algebro-geometric analogue of coverings with finite fibers. This yields a theory applicable in situation where no great generality classical topological intuition whatsoever is available, for example for varieties defined over a finite field. Also, the étale fundamental group of a field is its (absolute) Galois group. On the other hand, for smooth varieties "X" over the complex numbers, the étale fundamental group retains much of the information inherent in the classical fundamental group: the former is the profinite completion of the latter. Fundamental group of algebraic groups. The fundamental group of a root system is defined, in analogy to the computation for Lie groups. This allows to define and use the fundamental group of a semisimple linear algebraic group "G", which is a useful basic tool in the classification of linear algebraic groups. Fundamental group of simplicial sets. The homotopy relation between 1-simplices of a simplicial set "X" is an equivalence relation if "X" is a Kan complex but not necessarily so in general. Thus, formula_133 of a Kan complex can be defined as the set of homotopy classes of 1-simplices. The fundamental group of an arbitrary simplicial set "X" are defined to be the homotopy group of its topological realization, formula_164 i.e., the topological space obtained by gluing topological simplices as prescribed by the simplicial set structure of "X". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
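As a concrete illustration of the "Graphs" section above, the rank of the free fundamental group of a connected graph can be computed mechanically by growing a spanning tree and counting the leftover edges, which gives |"E"| − |"V"| + 1. The following Python sketch is illustrative only (the union-find helper and all names are ad hoc, not taken from any library); it reproduces the 4×4 grid example, whose fundamental group is free on 9 generators.

```python
# Rank of the (free) fundamental group of a connected graph: edges outside a spanning tree.

def fundamental_group_rank(vertices, edges):
    """Number of generators of pi_1 of a connected graph = |E| - |V| + 1."""
    parent = {v: v for v in vertices}

    def find(v):                        # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree_edges = 0
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:                    # edge joins two components -> spanning-tree edge
            parent[ra] = rb
            tree_edges += 1
    return len(edges) - tree_edges      # equals |E| - |V| + 1 for a connected graph

# 4x4 grid graph from the text: 16 vertices, 24 edges, hence rank 24 - 16 + 1 = 9
verts = [(i, j) for i in range(4) for j in range(4)]
edges = [((i, j), (i + 1, j)) for i in range(3) for j in range(4)] + \
        [((i, j), (i, j + 1)) for i in range(4) for j in range(3)]
print(len(verts), len(edges), fundamental_group_rank(verts, edges))   # 16 24 9
```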
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\pi_1(X)" }, { "math_id": 2, "text": "x_0" }, { "math_id": 3, "text": "\\gamma \\colon [0, 1] \\to X" }, { "math_id": 4, "text": "\\gamma(0)" }, { "math_id": 5, "text": "\\gamma(1)" }, { "math_id": 6, "text": "\\gamma, \\gamma' \\colon [0, 1] \\to X" }, { "math_id": 7, "text": "h \\colon [0, 1] \\times [0, 1] \\to X," }, { "math_id": 8, "text": "h(0, t) = x_0" }, { "math_id": 9, "text": "t \\in [0, 1]," }, { "math_id": 10, "text": "h(1, t) = x_0" }, { "math_id": 11, "text": "h(r, 0) = \\gamma(r),\\, h(r, 1) = \\gamma'(r)" }, { "math_id": 12, "text": "r \\in [0, 1]" }, { "math_id": 13, "text": "\\gamma" }, { "math_id": 14, "text": "\\gamma'" }, { "math_id": 15, "text": "\\pi_1(X, x_0) := \\{ \\text{all loops }\\gamma \\text{ based at }x_0 \\} / \\text{homotopy}" }, { "math_id": 16, "text": "\\pi_1(X, x_0)" }, { "math_id": 17, "text": "\\gamma_0, \\gamma_1" }, { "math_id": 18, "text": "\\gamma_0 \\cdot \\gamma_1 \\colon [0, 1] \\to X" }, { "math_id": 19, "text": "(\\gamma_0 \\cdot \\gamma_1)(t) = \\begin{cases}\n \\gamma_0(2t) & 0 \\leq t \\leq \\tfrac{1}{2} \\\\\n \\gamma_1(2t - 1) & \\tfrac{1}{2} \\leq t \\leq 1.\n \\end{cases}" }, { "math_id": 20, "text": "\\gamma_0 \\cdot \\gamma_1" }, { "math_id": 21, "text": "\\gamma_0" }, { "math_id": 22, "text": "\\gamma_1" }, { "math_id": 23, "text": "[\\gamma_0]" }, { "math_id": 24, "text": "[\\gamma_1]" }, { "math_id": 25, "text": "[\\gamma_0 \\cdot \\gamma_1]" }, { "math_id": 26, "text": "\\gamma^{-1}(t) := \\gamma(1-t)" }, { "math_id": 27, "text": "\\gamma_0, \\gamma_1, \\gamma_2," }, { "math_id": 28, "text": "(\\gamma_0 \\cdot \\gamma_1) \\cdot \\gamma_2" }, { "math_id": 29, "text": "\\gamma_2" }, { "math_id": 30, "text": "\\gamma_0 \\cdot (\\gamma_1 \\cdot \\gamma_2)" }, { "math_id": 31, "text": "\\gamma_1, \\gamma_2" }, { "math_id": 32, "text": "[\\gamma_0] \\cdot \\left([\\gamma_1] \\cdot [\\gamma_2]\\right) = \\left([\\gamma_0] \\cdot [\\gamma_1]\\right) \\cdot [\\gamma_2]" }, { "math_id": 33, "text": "\\gamma_0, \\gamma_1, \\gamma_2" }, { "math_id": 34, "text": "\\pi_1(X, x_0)." }, { "math_id": 35, "text": "\\R^n" }, { "math_id": 36, "text": "\\R^n," }, { "math_id": 37, "text": "S^2 = \\left\\{(x, y, z) \\in \\R^3 \\mid x^2 + y^2 + z^2 = 1\\right\\}" }, { "math_id": 38, "text": "(x, y, z) \\in S^2" }, { "math_id": 39, "text": "\\gamma." }, { "math_id": 40, "text": "\\gamma([0, 1]) = S^2" }, { "math_id": 41, "text": "S^1 = \\left\\{(x, y) \\in \\R^2 \\mid x^2 + y^2 = 1\\right\\}" }, { "math_id": 42, "text": "(\\Z, +)," }, { "math_id": 43, "text": "\\gamma = a^{n_1} b^{m_1} \\cdots a^{n_k} b^{m_k}" }, { "math_id": 44, "text": "n_1, \\dots, n_k, m_1, \\dots, m_k" }, { "math_id": 45, "text": "\\pi_1(S^1)," }, { "math_id": 46, "text": "[a] \\cdot [b] \\ne [b] \\cdot [a]." }, { "math_id": 47, "text": "\\pi_1(X \\vee Y) \\cong \\pi_1(X) * \\pi_1(Y)." }, { "math_id": 48, "text": "K" }, { "math_id": 49, "text": "\\R^3." }, { "math_id": 50, "text": "B_3," }, { "math_id": 51, "text": "\\pi_1(\\R^3 \\setminus K)" }, { "math_id": 52, "text": "\\pi_1(\\R^3 \\setminus K')" }, { "math_id": 53, "text": "K'" }, { "math_id": 54, "text": "\\Z" }, { "math_id": 55, "text": "\\left\\langle A_1, B_1, \\ldots, A_n, B_n \\left| A_1 B_1 A_1^{-1} B_1^{-1} \\cdots A_n B_n A_n^{-1} B_n^{-1} \\right. \\right\\rangle." }, { "math_id": 56, "text": "\\left\\langle A_1, B_1 \\left| A_1 B_1 A_1^{-1} B_1^{-1} \\right. \\right\\rangle \\cong \\Z^2." 
}, { "math_id": 57, "text": "\\gamma \\star \\gamma'" }, { "math_id": 58, "text": "(\\gamma \\star \\gamma')(x) = \\gamma(x) \\cdot \\gamma'(x)." }, { "math_id": 59, "text": "\\star" }, { "math_id": 60, "text": "X = \\Omega(Y)," }, { "math_id": 61, "text": "f\\colon X \\to Y" }, { "math_id": 62, "text": "x_0 \\in X" }, { "math_id": 63, "text": "y_0 \\in Y" }, { "math_id": 64, "text": "f(x_0) = y_0," }, { "math_id": 65, "text": "f" }, { "math_id": 66, "text": "Y" }, { "math_id": 67, "text": "y_0." }, { "math_id": 68, "text": "\\pi(f)" }, { "math_id": 69, "text": "f_* \\colon \\pi_1(X, x_0) \\to \\pi_1(Y, y_0)." }, { "math_id": 70, "text": "\\begin{align}\n \\pi_1 \\colon \\mathbf{Top}_* &\\to \\mathbf{Grp} \\\\\n (X, x_0) &\\mapsto \\pi_1(X, x_0)\n \\end{align}" }, { "math_id": 71, "text": "f,g:X\\to Y" }, { "math_id": 72, "text": "f(x_0) = g(x_0) = y_0" }, { "math_id": 73, "text": "X \\simeq Y \\implies \\pi_1(X, x_0) \\cong \\pi_1(Y, y_0)." }, { "math_id": 74, "text": "S^1 \\subset \\mathbb{R}^2 \\setminus \\{0\\}" }, { "math_id": 75, "text": "\\pi_1 (X \\times Y, (x_0, y_0)) \\cong \\pi_1(X, x_0) \\times \\pi_1(Y, y_0)" }, { "math_id": 76, "text": "\\pi_1(X \\vee Y) \\cong \\pi_1(X)*\\pi_1(Y)." }, { "math_id": 77, "text": "\\vee" }, { "math_id": 78, "text": "*" }, { "math_id": 79, "text": "H_1(X)" }, { "math_id": 80, "text": "\\pi_1(X) \\to H_1(X)" }, { "math_id": 81, "text": "H_1(X)." }, { "math_id": 82, "text": "X_i," }, { "math_id": 83, "text": "\\pi_1 \\left(\\bigvee_{i \\in I} X_i\\right)" }, { "math_id": 84, "text": "X_i." }, { "math_id": 85, "text": "S^2" }, { "math_id": 86, "text": "\\pi_1(S^2)" }, { "math_id": 87, "text": "f: E \\to B" }, { "math_id": 88, "text": "\\varphi: \\bigsqcup_{i \\in I} U \\to f^{-1}(U)" }, { "math_id": 89, "text": "\\pi \\circ \\varphi" }, { "math_id": 90, "text": "\\bigsqcup_{i \\in I} U \\to U." }, { "math_id": 91, "text": "p: \\widetilde{X} \\to X" }, { "math_id": 92, "text": "\\varphi : \\widetilde{X} \\to \\widetilde{X}" }, { "math_id": 93, "text": "p \\circ \\varphi = p." }, { "math_id": 94, "text": "\\pi_1(X, x)" }, { "math_id": 95, "text": "p^{-1}(x)." }, { "math_id": 96, "text": "p: \\mathbb{R} \\to S^1,\\, t \\mapsto \\exp(2 \\pi i t)" }, { "math_id": 97, "text": "\\pi: \\mathbb{R} \\to \\mathbb{R} / \\mathbb{Z},\\ t \\mapsto [t]" }, { "math_id": 98, "text": "t \\mapsto t + n" }, { "math_id": 99, "text": "n \\in \\mathbb{Z}." }, { "math_id": 100, "text": "p^{-1}(1) = \\mathbb{Z}," }, { "math_id": 101, "text": "\\pi_1(S^1) \\cong \\mathbb{Z}." }, { "math_id": 102, "text": "S^2," }, { "math_id": 103, "text": "\\mathbb{C}," }, { "math_id": 104, "text": "\\pi_1(Y/G) \\cong G." }, { "math_id": 105, "text": "\\mathbb{R}\\mathrm{P}^n" }, { "math_id": 106, "text": "S^n" }, { "math_id": 107, "text": "\\mathbb{Z}/2" }, { "math_id": 108, "text": "x \\in S^n" }, { "math_id": 109, "text": "-x." }, { "math_id": 110, "text": "\\pi_1(\\mathbb{R}\\mathrm{P}^n) \\cong \\mathbb{Z}/2" }, { "math_id": 111, "text": "1 \\to \\Gamma \\to G \\to H \\to 1." }, { "math_id": 112, "text": "f^{-1}(b)" }, { "math_id": 113, "text": "F = f^{-1}(b)." }, { "math_id": 114, "text": "\\dots \\to \\pi_2(B) \\to \\pi_1(F) \\to \\pi_1(E) \\to \\pi_1(B) \\to \\pi_0(F) \\to \\pi_0(E)" }, { "math_id": 115, "text": "\\pi_2(B)" }, { "math_id": 116, "text": "\\pi_1." }, { "math_id": 117, "text": "\\pi_1(B) \\cong \\pi_0(F)" }, { "math_id": 118, "text": "\\pi_1(E) \\cong \\pi_1(B)." 
}, { "math_id": 119, "text": "\\pi_n" }, { "math_id": 120, "text": "\\mathrm{SU}(n)," }, { "math_id": 121, "text": "n \\geq 2." }, { "math_id": 122, "text": "S^{2n-1}" }, { "math_id": 123, "text": "\\mathbb C^n = \\mathbb R^{2n}." }, { "math_id": 124, "text": "\\mathrm{SU}(n-1)." }, { "math_id": 125, "text": "\\mathrm{SU}(n-1) \\to \\mathrm{SU}(n) \\to S^{2n-1}." }, { "math_id": 126, "text": "n \\geq 2," }, { "math_id": 127, "text": "\\pi_1(S^{2n-1}) \\cong \\pi_2(S^{2n-1}) = 1." }, { "math_id": 128, "text": "\\pi_1(\\mathrm{SU}(n)) \\cong \\pi_1(\\mathrm{SU}(n - 1))." }, { "math_id": 129, "text": "\\mathrm{SU}(1)" }, { "math_id": 130, "text": "\\pi_1(\\mathrm{SU}(1))" }, { "math_id": 131, "text": "\\mathrm{SU}(n)" }, { "math_id": 132, "text": "n." }, { "math_id": 133, "text": "\\pi_1" }, { "math_id": 134, "text": "T" }, { "math_id": 135, "text": "K," }, { "math_id": 136, "text": "\\mathfrak t" }, { "math_id": 137, "text": "T." }, { "math_id": 138, "text": "\\exp : \\mathfrak t \\to T" }, { "math_id": 139, "text": "\\Gamma \\subset \\mathfrak t" }, { "math_id": 140, "text": "\\pi_1(T)." }, { "math_id": 141, "text": "\\pi_1(T) \\to \\pi_1(K)" }, { "math_id": 142, "text": "\\pi_1(K) \\cong \\Gamma / I." }, { "math_id": 143, "text": "G_2" }, { "math_id": 144, "text": "\\pi_n(X)" }, { "math_id": 145, "text": "n \\ge 1" }, { "math_id": 146, "text": "\\pi_n(S^n) = \\Z." }, { "math_id": 147, "text": "\\Omega X." }, { "math_id": 148, "text": "\\pi_1(X) \\cong \\pi_0(\\Omega X)." }, { "math_id": 149, "text": "X," }, { "math_id": 150, "text": "\\gamma \\colon [0, r] \\to X" }, { "math_id": 151, "text": "\\gamma, \\gamma'" }, { "math_id": 152, "text": "u,v \\geqslant 0" }, { "math_id": 153, "text": "r + u = r' + v" }, { "math_id": 154, "text": " \\gamma_u, \\gamma'_v \\colon [0, r + u] \\to X" }, { "math_id": 155, "text": " \\gamma_u (t) = \\begin{cases} \\gamma(t), & t \\in [0, r] \\\\ \\gamma(r), & t \\in [r, r + u]. \\end{cases} " }, { "math_id": 156, "text": "\\Pi (X)." }, { "math_id": 157, "text": "\\Pi (X)" }, { "math_id": 158, "text": "\\pi_1(X, x_0) = \\mathrm{Hom}_{\\Pi(X)}(x_0, x_0)" }, { "math_id": 159, "text": "S^1." }, { "math_id": 160, "text": "\\mathcal F" }, { "math_id": 161, "text": "\\mathcal F|_U = \\Q^n" }, { "math_id": 162, "text": "\\Q" }, { "math_id": 163, "text": "\\pi_1^\\text{et}" }, { "math_id": 164, "text": "|X|," } ]
https://en.wikipedia.org/wiki?curid=11004
1100422
Strong pseudoprime
A strong pseudoprime is a composite number that passes the Miller–Rabin primality test. All prime numbers pass this test, but a small fraction of composites also pass, making them "pseudoprimes". Unlike the Fermat pseudoprimes, for which there exist numbers that are pseudoprimes to all coprime bases (the Carmichael numbers), there are no composites that are strong pseudoprimes to all bases. Motivation and first examples. Let us say we want to investigate if "n" = 31697 is a probable prime (PRP). We pick base "a" = 3 and, inspired by Fermat's little theorem, calculate: formula_0 This shows 31697 is a Fermat PRP (base 3), so we may suspect it is a prime. We now repeatedly halve the exponent: formula_1 formula_2 formula_3 The first couple of times do not yield anything interesting (the result was still 1 modulo 31697), but at exponent 3962 we see a result that is neither 1 nor minus 1 (i.e. 31696) modulo 31697. This proves 31697 is in fact composite (it equals 29×1093). Modulo a prime, the residue 1 can have no other square roots than 1 and minus 1. This shows that 31697 is not a strong pseudoprime to base 3. For another example, pick "n" = 47197 and calculate in the same manner: formula_4 formula_5 formula_6 In this case, the result continues to be 1 (mod 47197) until we reach an odd exponent. In this situation, we say that 47197 is a strong probable prime to base 3. Because it turns out this PRP is in fact composite (as can be seen by picking bases other than 3), we have that 47197 is a strong pseudoprime to base 3. Finally, consider "n" = 74593 where we get: formula_7 formula_8 formula_9 Here, we reach minus 1 modulo 74593, a situation that is perfectly possible with a prime. When this occurs, we stop the calculation (even though the exponent is not odd yet) and say that 74593 is a strong probable prime (and, as it turns out, a strong pseudoprime) to base 3. Formal definition. An odd composite number "n" = "d" · 2^"s" + 1 where "d" is odd is called a strong (Fermat) pseudoprime to base "a" if: formula_10 or formula_11 The definition is trivially met if a ≡ ±1 (mod n) so these trivial bases are often excluded. Guy mistakenly gives a definition with only the first condition, which is not satisfied by all primes. Properties of strong pseudoprimes. A strong pseudoprime to base "a" is always an Euler–Jacobi pseudoprime, an Euler pseudoprime and a Fermat pseudoprime to that base, but not all Euler and Fermat pseudoprimes are strong pseudoprimes. Carmichael numbers may be strong pseudoprimes to some bases—for example, 561 is a strong pseudoprime to base 50—but not to all bases. A composite number "n" is a strong pseudoprime to at most one quarter of all bases below "n"; thus, there are no "strong Carmichael numbers", numbers that are strong pseudoprimes to all bases. Thus given a random base, the probability that a number is a strong pseudoprime to that base is less than 1/4, forming the basis of the widely used Miller–Rabin primality test. The true probability of a failure is generally vastly smaller. Paul Erdős and Carl Pomerance showed in 1986 that if a random integer n passes the Miller–Rabin primality test to a random base b, then n is almost surely a prime. For example, of the first 25,000,000,000 positive integers, there are 1,091,987,405 integers that are probable primes to base 2, but only 21,853 of them are pseudoprimes, and even fewer of them are strong pseudoprimes, as the latter is a subset of the former.
However, Arnault gives a 397-digit Carmichael number that is a strong pseudoprime to every base less than 307. One way to reduce the chance that such a number is wrongfully declared probably prime is to combine a strong probable prime test with a Lucas probable prime test, as in the Baillie–PSW primality test. There are infinitely many strong pseudoprimes to any base. Examples. The first strong pseudoprimes to base 2 are 2047, 3277, 4033, 4681, 8321, 15841, 29341, 42799, 49141, 52633, 65281, 74665, 80581, 85489, 88357, 90751, ... (sequence in the OEIS). The first to base 3 are 121, 703, 1891, 3281, 8401, 8911, 10585, 12403, 16531, 18721, 19345, 23521, 31621, 44287, 47197, 55969, 63139, 74593, 79003, 82513, 87913, 88573, 97567, ... (sequence in the OEIS). The first to base 5 are 781, 1541, 5461, 5611, 7813, 13021, 14981, 15751, 24211, 25351, 29539, 38081, 40501, 44801, 53971, 79381, ... (sequence in the OEIS). For base 4, see OEIS: , and for base 6 to 100, see OEIS:  to OEIS: . By testing the above conditions to several bases, one gets somewhat more powerful primality tests than by using one base alone. For example, there are only 13 numbers less than 25·109 that are strong pseudoprimes to bases 2, 3, and 5 simultaneously. They are listed in Table 7 of. The smallest such number is 25326001. This means that, if "n" is less than 25326001 and "n" is a strong probable prime to bases 2, 3, and 5, then "n" is prime. Carrying this further, 3825123056546413051 is the smallest number that is a strong pseudoprime to the 9 bases 2, 3, 5, 7, 11, 13, 17, 19, and 23. So, if "n" is less than 3825123056546413051 and "n" is a strong probable prime to these 9 bases, then "n" is prime. By judicious choice of bases that are not necessarily prime, even better tests can be constructed. For example, there is no composite formula_12 that is a strong pseudoprime to all of the seven bases 2, 325, 9375, 28178, 450775, 9780504, and 1795265022. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
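The formal definition above translates directly into a short test. The following Python sketch is illustrative only (the function name is an ad hoc choice and none of the optimizations of production Miller–Rabin code are attempted): it writes "n" − 1 = "d" · 2^"s" with "d" odd, checks the two conditions, and reproduces the worked examples from the text.

```python
# Strong probable prime test to a single base, following the formal definition above.

def is_strong_probable_prime(n, a):
    """True iff odd n > 2 is a strong probable prime to base a."""
    d, s = n - 1, 0
    while d % 2 == 0:                  # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)                   # a^d mod n
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):             # a^(d*2^r) mod n for r = 1, ..., s - 1
        x = (x * x) % n
        if x == n - 1:
            return True
    return False

# Worked examples from the text:
print(is_strong_probable_prime(31697, 3))    # False: composite, caught by base 3
print(is_strong_probable_prime(47197, 3))    # True:  strong pseudoprime to base 3
print(is_strong_probable_prime(74593, 3))    # True:  strong pseudoprime to base 3
# Smallest strong pseudoprime to bases 2, 3 and 5 simultaneously:
print(all(is_strong_probable_prime(25326001, a) for a in (2, 3, 5)))   # True
```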
[ { "math_id": 0, "text": "3^{31696} \\equiv 1 \\pmod {31697}" }, { "math_id": 1, "text": "3^{15848} \\equiv 1 \\pmod {31697}" }, { "math_id": 2, "text": "3^{7924} \\equiv 1 \\pmod {31697}" }, { "math_id": 3, "text": "3^{3962} \\equiv 28419 \\pmod {31697}" }, { "math_id": 4, "text": "3^{47196} \\equiv 1 \\pmod {47197}" }, { "math_id": 5, "text": "3^{23598} \\equiv 1 \\pmod {47197}" }, { "math_id": 6, "text": "3^{11799} \\equiv 1 \\pmod {47197}" }, { "math_id": 7, "text": "3^{74592} \\equiv 1 \\pmod {74593}" }, { "math_id": 8, "text": "3^{37296} \\equiv 1 \\pmod {74593}" }, { "math_id": 9, "text": "3^{18648} \\equiv 74592 \\pmod {74593}" }, { "math_id": 10, "text": "a^d\\equiv 1\\pmod n" }, { "math_id": 11, "text": "a^{d\\cdot 2^r}\\equiv -1\\pmod n\\quad\\mbox{ for some }0 \\leq r < s ." }, { "math_id": 12, "text": "< 2^{64}" } ]
https://en.wikipedia.org/wiki?curid=1100422
11005104
HPN
HPN may refer to: Topics referred to by the same term <templatestyles src="Dmbox/styles.css" /> This disambiguation page lists articles associated with the title HPN.
[ { "math_id": 0, "text": "Hp_n" }, { "math_id": 1, "text": "\\mathbb{H}\\mathrm{P}^n" } ]
https://en.wikipedia.org/wiki?curid=11005104
1100516
Model predictive control
Advanced method of process control Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized, while keeping future timeslots in account. This is achieved by optimizing a finite time-horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). Also MPC has the ability to anticipate future events and can take control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry. Generalized predictive control (GPC) and dynamic matrix control (DMC) are classical examples of MPC. Overview. The models used in MPC are generally intended to represent the behavior of complex and simple dynamical systems. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics. MPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control element (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are used as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints. MPC uses the current plant measurements, the current dynamic state of the process, the MPC models, and the process variable targets and limits to calculate future changes in the dependent variables. These changes are calculated to hold the dependent variables close to target while honoring constraints on both independent and dependent variables. The MPC typically sends out only the first change in each independent variable to be implemented, and repeats the calculation when the next change is required. While many real processes are not linear, they can often be considered to be approximately linear over a small operating range. Linear MPC approaches are used in the majority of applications with the feedback mechanism of the MPC compensating for prediction errors due to structural mismatch between the model and the process. In model predictive controllers that consist only of linear models, the superposition principle of linear algebra enables the effect of changes in multiple independent variables to be added together to predict the response of the dependent variables. This simplifies the control problem to a series of direct matrix algebra calculations that are fast and robust. 
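To make the receding-horizon idea concrete, the following minimal Python/NumPy sketch sets up an unconstrained linear MPC for an illustrative plant, a discrete-time double integrator; the model, horizon and weights are assumptions chosen for the example rather than values from any commercial package. Without low/high limits the quadratic cost reduces to a least-squares problem, so plain linear algebra suffices, whereas an industrial controller would solve a constrained quadratic program at every step. Only the first computed move is applied before the optimization is repeated at the next sample, exactly as described above.

```python
# Minimal receding-horizon sketch of *unconstrained* linear MPC (illustrative assumptions).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])       # x = [position, velocity], sample time 0.1
B = np.array([[0.005], [0.1]])
N = 20                                        # prediction horizon (time slots)
Q, R = 1.0, 0.01                              # tracking-error and input weights
r = np.array([1.0, 0.0])                      # setpoint: position 1, velocity 0

n, m = A.shape[0], B.shape[1]
# Stacked prediction over the horizon: X = F x0 + G U
F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        G[i * n:(i + 1) * n, j * m:(j + 1) * m] = np.linalg.matrix_power(A, i - j) @ B

def mpc_step(x0):
    """Minimize sum Q|x_k - r|^2 + R|u_k|^2 over the horizon; return only the first move."""
    rhs = np.tile(r, N) - F @ x0
    M = np.vstack([np.sqrt(Q) * G, np.sqrt(R) * np.eye(N * m)])
    b = np.concatenate([np.sqrt(Q) * rhs, np.zeros(N * m)])
    U = np.linalg.lstsq(M, b, rcond=None)[0]
    return U[:m]                              # receding horizon: apply u_0, then re-solve

x = np.array([0.0, 0.0])
for k in range(50):                           # closed loop: re-optimize at every sample
    x = A @ x + B @ mpc_step(x)
print(np.round(x, 3))                         # the position approaches the setpoint
```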
When linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used. In some cases, the process variables can be transformed before and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear model may be linearized to derive a Kalman filter or specify a model for linear MPC. An algorithmic study by El-Gherwi, Budman, and El Kamel shows that utilizing a dual-mode approach can provide significant reduction in online computations while maintaining comparative performance to a non-altered implementation. The proposed algorithm solves N convex optimization problems in parallel based on exchange of information among controllers. Theory behind MPC. MPC is based on iterative, finite-horizon optimization of a plant model. At time formula_0 the current plant state is sampled and a cost minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future: formula_1. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of Euler–Lagrange equations) a cost-minimizing control strategy until time formula_2. Only the first step of the control strategy is implemented, then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and new predicted state path. The prediction horizon keeps being shifted forward and for this reason MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method. Principles of MPC. Model predictive control is a multivariable control algorithm that uses: An example of a quadratic cost function for optimization is given by: formula_3 without violating constraints (low/high limits) with formula_4: formula_5th controlled variable (e.g. measured temperature) formula_6: formula_5th reference variable (e.g. required temperature) formula_7: formula_5th manipulated variable (e.g. control valve) formula_8: weighting coefficient reflecting the relative importance of formula_4 formula_9: weighting coefficient penalizing relative big changes in formula_7 etc. Nonlinear MPC. Nonlinear model predictive control, or NMPC, is a variant of model predictive control that is characterized by the use of nonlinear system models in the prediction. As in linear MPC, NMPC requires the iterative solution of optimal control problems on a finite prediction horizon. While these problems are convex in linear MPC, in nonlinear MPC they are not necessarily convex anymore. This poses challenges for both NMPC stability theory and numerical solution. The numerical solution of the NMPC optimal control problems is typically based on direct optimal control methods using Newton-type optimization schemes, in one of the variants: direct single shooting, direct multiple shooting methods, or direct collocation. 
NMPC algorithms typically exploit the fact that consecutive optimal control problems are similar to each other. This allows to initialize the Newton-type solution procedure efficiently by a suitably shifted guess from the previously computed optimal solution, saving considerable amounts of computation time. The similarity of subsequent problems is even further exploited by path following algorithms (or "real-time iterations") that never attempt to iterate any optimization problem to convergence, but instead only take a few iterations towards the solution of the most current NMPC problem, before proceeding to the next one, which is suitably initialized; see, e.g... Another promising candidate for the nonlinear optimization problem is to use a randomized optimization method. Optimum solutions are found by generating random samples that satisfy the constraints in the solution space and finding the optimum one based on cost function. While NMPC applications have in the past been mostly used in the process and chemical industries with comparatively slow sampling rates, NMPC is being increasingly applied, with advancements in controller hardware and computational algorithms, e.g., preconditioning, to applications with high sampling rates, e.g., in the automotive industry, or even when the states are distributed in space (Distributed parameter systems). As an application in aerospace, recently, NMPC has been used to track optimal terrain-following/avoidance trajectories in real-time. Explicit MPC. Explicit MPC (eMPC) allows fast evaluation of the control law for some systems, in stark contrast to the online MPC. Explicit MPC is based on the parametric programming technique, where the solution to the MPC control problem formulated as optimization problem is pre-computed offline. This offline solution, i.e., the control law, is often in the form of a piecewise affine function (PWA), hence the eMPC controller stores the coefficients of the PWA for each a subset (control region) of the state space, where the PWA is constant, as well as coefficients of some parametric representations of all the regions. Every region turns out to geometrically be a convex polytope for linear MPC, commonly parameterized by coefficients for its faces, requiring quantization accuracy analysis. Obtaining the optimal control action is then reduced to first determining the region containing the current state and second a mere evaluation of PWA using the PWA coefficients stored for all regions. If the total number of the regions is small, the implementation of the eMPC does not require significant computational resources (compared to the online MPC) and is uniquely suited to control systems with fast dynamics. A serious drawback of eMPC is exponential growth of the total number of the control regions with respect to some key parameters of the controlled system, e.g., the number of states, thus dramatically increasing controller memory requirements and making the first step of PWA evaluation, i.e. searching for the current control region, computationally expensive. Robust MPC. Robust variants of model predictive control are able to account for set bounded disturbance while still ensuring state constraints are met. Some of the main approaches to robust MPC are given below. Commercially available MPC software. Commercial MPC packages are available and typically contain tools for model identification and analysis, controller design and tuning, as well as controller performance evaluation. 
A survey of commercially available packages has been provided by S.J. Qin and T.A. Badgwell in "Control Engineering Practice" 11 (2003) 733–764. MPC vs. LQR. Model predictive control and linear-quadratic regulators are both expressions of optimal control, with different schemes of setting up optimisation costs. While a model predictive controller often looks at fixed-length, often gradually weighted sets of error functions, the linear-quadratic regulator looks at all linear system inputs and provides the transfer function that will reduce the total error across the frequency spectrum, trading off state error against input frequency. Due to these fundamental differences, LQR has better global stability properties, but MPC often has more locally optimal and complex performance. The main differences between MPC and LQR are that LQR optimizes across the entire time window (horizon) whereas MPC optimizes in a receding time window, and that with MPC a new solution is computed often whereas LQR uses the same single (optimal) solution for the whole time horizon. Therefore, MPC typically solves the optimization problem in a smaller time window than the whole horizon and hence may obtain a suboptimal solution. However, because MPC makes no assumptions about linearity, it can handle hard constraints as well as migration of a nonlinear system away from its linearized operating point, both of which are major drawbacks to LQR. This means that LQR can become weak when operating away from stable fixed points. MPC can chart a path between these fixed points, but convergence of a solution is not guaranteed, especially if little thought has been given to the convexity and complexity of the problem space. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "t" }, { "math_id": 1, "text": "[t,t+T]" }, { "math_id": 2, "text": "t+T" }, { "math_id": 3, "text": "J=\\sum_{i=1}^N w_{x_i} (r_i-x_i)^2 + \\sum_{i=1}^M w_{u_i} {\\Delta u_i}^2" }, { "math_id": 4, "text": "x_i" }, { "math_id": 5, "text": "i" }, { "math_id": 6, "text": "r_i" }, { "math_id": 7, "text": "u_i" }, { "math_id": 8, "text": "w_{x_i}" }, { "math_id": 9, "text": "w_{u_i}" } ]
https://en.wikipedia.org/wiki?curid=1100516
11006582
Strong partition cardinal
In Zermelo–Fraenkel set theory without the axiom of choice, a strong partition cardinal is an uncountable well-ordered cardinal formula_0 such that every partition of the set formula_1 of size-formula_0 subsets of formula_0 into fewer than formula_0 pieces has a homogeneous set of size formula_0. The existence of strong partition cardinals contradicts the axiom of choice. The Axiom of determinacy implies that ℵ1 is a strong partition cardinal. References. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "[k]^k" } ]
https://en.wikipedia.org/wiki?curid=11006582
11009703
Minimal volume
In mathematics, in particular in differential geometry, the minimal volume is a number that describes one aspect of a smooth manifold's topology. This diffeomorphism invariant was introduced by Mikhael Gromov. Given a smooth Riemannian manifold ("M", "g"), one may consider its volume vol("M", "g") and sectional curvature "K""g". The minimal volume of a smooth manifold M is defined to be: formula_0 Any closed manifold can be given an arbitrarily small volume by scaling any choice of a Riemannian metric. The minimal volume removes the possibility of such scaling by the constraint on sectional curvatures. So, if the minimal volume of M is zero, then a certain kind of nontrivial collapsing phenomenon can be exhibited by Riemannian metrics on M. A trivial example, the only one in which the possibility of scaling is present, is a closed flat manifold. The Berger spheres show that the minimal volume of the three-dimensional sphere is also zero. Gromov has conjectured that every closed simply connected odd-dimensional manifold has zero minimal volume. By contrast, a positive lower bound for the minimal volume of M amounts to some (usually nontrivial) geometric inequality for the volume of an arbitrary complete Riemannian metric on M in terms of the size of its curvature. According to the Gauss-Bonnet theorem, if M is a closed and connected two-dimensional manifold, then MinVol("M") = 2π|χ("M")|. The infimum in the definition of minimal volume is realized by the metrics appearing from the uniformization theorem. More generally, according to the Chern-Gauss-Bonnet formula, if M is a closed and connected manifold then: formula_1 Gromov, in 1982, showed that the volume of a complete Riemannian metric on a smooth manifold can always be estimated by the size of its curvature and by the simplicial volume of the manifold, via the inequality: formula_2
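As a concrete instance of the two-dimensional case just described (the genus values here are illustrative choices):

```latex
% closed orientable surface \Sigma_g of genus g, so that \chi(\Sigma_g) = 2 - 2g:
\operatorname{MinVol}(\Sigma_g) \;=\; 2\pi\,\lvert\chi(\Sigma_g)\rvert \;=\; 4\pi\,\lvert g-1\rvert .
```

Thus the torus ("g" = 1) has minimal volume 0, consistent with its flat metrics, while a genus-two surface has minimal volume 4π, attained by the hyperbolic (constant curvature −1) metrics supplied by uniformization.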
[ { "math_id": 0, "text": "\\operatorname{MinVol}(M):=\\inf\\{\\operatorname{vol}(M,g) :g\\text{ a complete Riemannian metric with }|K_{g}|\\leq 1\\}." }, { "math_id": 1, "text": "\\operatorname{MinVol}(M)\\geq c(n)\\big|\\chi(M)\\big|." }, { "math_id": 2, "text": "\\operatorname{MinVol}(M)\\geq\\frac{\\|M\\|}{(n-1)^nn!}." } ]
https://en.wikipedia.org/wiki?curid=11009703
11009758
Realizability
In mathematical logic, realizability is a collection of methods in proof theory used to study constructive proofs and extract additional information from them. Formulas from a formal theory are "realized" by objects, known as "realizers", in a way that knowledge of the realizer gives knowledge about the truth of the formula. There are many variations of realizability; exactly which class of formulas is studied and which objects are realizers differ from one variation to another. Realizability can be seen as a formalization of the Brouwer–Heyting–Kolmogorov (BHK) interpretation of intuitionistic logic. In realizability the notion of "proof" (which is left undefined in the BHK interpretation) is replaced with a formal notion of "realizer". Most variants of realizability begin with a theorem that any statement that is provable in the formal system being studied is realizable. The realizer, however, usually gives more information about the formula than a formal proof would directly provide. Beyond giving insight into intuitionistic provability, realizability can be applied to prove the disjunction and existence properties for intuitionistic theories and to extract programs from proofs, as in proof mining. It is also related to topos theory via realizability topoi. Example: Kleene's 1945-realizability. Kleene's original version of realizability uses natural numbers as realizers for formulas in Heyting arithmetic. A few pieces of notation are required: first, an ordered pair ("n","m") is treated as a single number using a fixed primitive recursive pairing function; second, for each natural number "n", φ"n" is the computable function with index "n". The following clauses are used to define a relation ""n" realizes "A"" between natural numbers "n" and formulas "A" in the language of Heyting arithmetic, known as Kleene's 1945-realizability relation: With this definition, the following theorem is obtained: Let "A" be a sentence of Heyting arithmetic (HA). If HA proves "A" then there is an "n" such that "n" realizes "A". On the other hand, there are classical theorems (even propositional formula schemas) that are realized but which are not provable in HA, a fact first established by Rose. So realizability does not exactly mirror intuitionistic reasoning. Further analysis of the method can be used to prove that HA has the "disjunction and existence properties": More such properties are obtained involving Harrop formulas. Later developments. Kreisel introduced modified realizability, which uses typed lambda calculus as the language of realizers. Modified realizability is one way to show that Markov's principle is not derivable in intuitionistic logic. On the contrary, it allows to constructively justify the principle of independence of premise: formula_0. Relative realizability is an intuitionist analysis of computable or computably enumerable elements of data structures that are not necessarily computable, such as computable operations on all real numbers when reals can be only approximated on digital computer systems. Classical realizability was introduced by Krivine and extends realizability to classical logic. It furthermore realizes axioms of Zermelo–Fraenkel set theory. Understood as a generalization of Cohen’s forcing, it was used to provide new models of set theory. Linear realizability extends realizability techniques to linear logic. The term was coined by Seiller to encompass several constructions, such as geometry of interaction models, ludics, interaction graphs models. Applications. 
Realizability is one of the methods used in proof mining to extract concrete "programs" from seemingly non-constructive mathematical proofs. Program extraction using realizability is implemented in some proof assistants such as Coq.
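To convey the flavour of such extraction, here is a deliberately informal sketch in Python; it represents realizers directly as data and functions, in the BHK spirit that realizability formalizes, rather than using Kleene's actual coding of realizers as natural numbers.

```python
# Illustrative only -- realizers as ordinary Python values, not Kleene's number coding.
#   realizer of (A and B)        : a pair (realizer of A, realizer of B)
#   realizer of (A -> B)         : a function mapping realizers of A to realizers of B
#   realizer of (exists x. P(x)) : a pair (witness, realizer of P(witness))

def realize_forall_exists_gt(n):
    """A realizer for 'for every n there exists m with m > n': produce the witness."""
    return (n + 1, ())        # witness m = n + 1; '()' stands in for trivial evidence

def swap(p):
    """A realizer for '(A and B) -> (B and A)': swap the component realizers."""
    a, b = p
    return (b, a)

print(realize_forall_exists_gt(41))                  # (42, ())
print(swap(("evidence for A", "evidence for B")))    # components exchanged
```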
[ { "math_id": 0, "text": "(A \\rightarrow \\exists x\\;P(x)) \\rightarrow \\exists x\\;(A \\rightarrow P(x))" } ]
https://en.wikipedia.org/wiki?curid=11009758
1101069
List of unsolved problems in computer science
List of unsolved computational problems This article is a list of notable unsolved problems in computer science. A problem in computer science is considered unsolved when no solution is known, or when experts in the field disagree about proposed solutions. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "2/\\sqrt 3" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "(n-1)^2" } ]
https://en.wikipedia.org/wiki?curid=1101069
1101364
Gaussian units
Variant of the centimetre–gram–second unit system Gaussian units constitute a metric system of physical units. This system is the most common of the several electromagnetic unit systems based on cgs (centimetre–gram–second) units. It is also called the Gaussian unit system, Gaussian-cgs units, or often just cgs units. The term "cgs units" is ambiguous and therefore to be avoided if possible: there are several variants of cgs with conflicting definitions of electromagnetic quantities and units. SI units predominate in most fields, and continue to increase in popularity at the expense of Gaussian units. Alternative unit systems also exist. Conversions between quantities in Gaussian and SI units are not direct unit conversions, because the quantities themselves are defined differently in each system. This means that the equations expressing physical laws of electromagnetism—such as Maxwell's equations—will change depending on the system of units employed. As an example, quantities that are dimensionless in one system may have dimension in the other. Alternative unit systems. The Gaussian unit system is just one of several electromagnetic unit systems within CGS. Others include "electrostatic units", "electromagnetic units", and Heaviside–Lorentz units. Some other unit systems are called "natural units", a category that includes atomic units, Planck units, and others. The International System of Units (SI), with the associated International System of Quantities (ISQ), is by far the most common system of units today. In engineering and practical areas, SI is nearly universal and has been for decades. In technical, scientific literature (such as theoretical physics and astronomy), Gaussian units were predominant until recent decades, but are now getting progressively less so. The 8th SI Brochure acknowledges that the CGS-Gaussian unit system has advantages in classical and relativistic electrodynamics, but the 9th SI Brochure makes no mention of CGS systems. Natural units may be used in more theoretical and abstract fields of physics, particularly particle physics and string theory. Major differences between Gaussian and SI systems. "Rationalized" unit systems. One difference between Gaussian and SI units is in the factors of 4"π" in various formulas. With SI electromagnetic units, called "rationalized", Maxwell's equations have no explicit factors of 4"π" in the formulae, whereas the inverse-square force laws – Coulomb's law and the Biot–Savart law – do have a factor of 4"π" attached to the "r"2. With Gaussian units, called "unrationalized" (and unlike Heaviside–Lorentz units), the situation is reversed: two of Maxwell's equations have factors of 4"π" in the formulas, while both of the inverse-square force laws, Coulomb's law and the Biot–Savart law, have no factor of 4"π" attached to "r"2 in the denominator. Unit of charge. A major difference between the Gaussian system and the ISQ is in the respective definitions of the quantity charge. In the ISQ, a separate base dimension, electric current, with the associated SI unit, the ampere, is associated with electromagnetic phenomena, with the consequence that a unit of electrical charge (1 coulomb = 1 ampere × 1 second) is a physical quantity that cannot be expressed purely in terms of the mechanical units (kilogram, metre, second). 
On the other hand, in the Gaussian system, the unit of electric charge (the statcoulomb, statC) can be written entirely as a dimensional combination of the non-electrical base units (gram, centimetre, second), as: &lt;templatestyles src="Block indent/styles.css"/&gt;= . For example, Coulomb's law in Gaussian units has no constant: formula_0 where F is the repulsive force between two electrical charges, "Q" and "Q" are the two charges in question, and r is the distance separating them. If "Q" and "Q" are expressed in statC and r in centimetres, then the unit of F that is coherent with these units is the dyne. The same law in the ISQ is: formula_1 where "ε"0 is the vacuum permittivity, a quantity that is not dimensionless: it has dimension (charge)2 (time)2 (mass)−1 (length)−3. Without "ε"0, the equation would be dimensionally inconsistent with the quantities as defined in the ISQ, whereas the quantity "ε"0 does not appear in Gaussian equations. This is an example of how some dimensional physical constants can be eliminated from the expressions of physical law by the choice of definition of quantities. In the ISQ, formula_2 converts or scales flux density, D, to the corresponding electric field, E (the latter has dimension of force per charge), while in the Gaussian system, electric flux density is the same quantity as electric field strength in free space aside from a dimensionless constant factor. In the Gaussian system, the speed of light c appears directly in electromagnetic formulas like Maxwell's equations (see below), whereas in the ISQ it appears via the product formula_3. Units for magnetism. In the Gaussian system, unlike the ISQ, the electric field E and the magnetic field B have the same dimension. This amounts to a factor of c between how B is defined in the two unit systems, on top of the other differences. (The same factor applies to other magnetic quantities such as the magnetic field, H, and magnetization, M.) For example, in a planar light wave in vacuum, |E(r, "t")| = |B(r, "t")| in Gaussian units, while |E(r, "t")| = "c" |B(r, "t")| in the ISQ. Polarization, magnetization. There are further differences between Gaussian system and the ISQ in how quantities related to polarization and magnetization are defined. For one thing, in the Gaussian system, "all" of the following quantities have the same dimension: E, D, P, B, H, and M. A further point is that the electric and magnetic susceptibility of a material is dimensionless in both Gaussian system and the ISQ, but a given material will have a different numerical susceptibility in the two systems. (Equation is given below.) List of equations. This section has a list of the basic formulae of electromagnetism, given in both the Gaussian system and the International System of Quantities (ISQ). Most symbol names are not given; for complete explanations and definitions, please click to the appropriate dedicated article for each equation. A simple conversion scheme for use when tables are not available may be found in Garg (2012). All formulas except otherwise noted are from Ref. Maxwell's equations. Here are Maxwell's equations, both in macroscopic and microscopic forms. Only the "differential form" of the equations is given, not the "integral form"; to get the integral forms apply the divergence theorem or Kelvin–Stokes theorem. Dielectric and magnetic materials. Below are the expressions for the various fields in a dielectric medium. 
It is assumed here for simplicity that the medium is homogeneous, linear, isotropic, and nondispersive, so that the permittivity is a simple constant. where The quantities formula_7 and formula_8 are both dimensionless, and they have the same numeric value. By contrast, the electric susceptibility formula_9 and formula_10 are both unitless, but have different numeric values for the same material: formula_11 Next, here are the expressions for the various fields in a magnetic medium. Again, it is assumed that the medium is homogeneous, linear, isotropic, and nondispersive, so that the permeability is a simple constant. where The quantities formula_15 and formula_16 are both dimensionless, and they have the same numeric value. By contrast, the magnetic susceptibility formula_17 and formula_18 are both unitless, but has different numeric values in the two systems for the same material: formula_19 Vector and scalar potentials. The electric and magnetic fields can be written in terms of a vector potential A and a scalar potential ϕ: Electrical circuit. where Electromagnetic unit names. &lt;templatestyles src="Template:Table alignment/tables.css" /&gt; Note: The SI quantities formula_5 and formula_13 satisfy formula_20 The conversion factors are written both symbolically and numerically. The numerical conversion factors can be derived from the symbolic conversion factors by dimensional analysis. For example, the top row says formula_21, a relation which can be verified with dimensional analysis, by expanding formula_5 and coulombs (C) in SI base units, and expanding statcoulombs (or franklins, Fr) in Gaussian base units. It is surprising to think of measuring capacitance in centimetres. One useful example is that a centimetre of capacitance is the capacitance between a sphere of radius 1 cm in vacuum and infinity. Another surprising unit is measuring resistivity in units of seconds. A physical example is: Take a parallel-plate capacitor, which has a "leaky" dielectric with permittivity 1 but a finite resistivity. After charging it up, the capacitor will discharge itself over time, due to current leaking through the dielectric. If the resistivity of the dielectric is t seconds, the half-life of the discharge is ~0.05 "t" seconds. This result is independent of the size, shape, and charge of the capacitor, and therefore this example illuminates the fundamental connection between resistivity and time units. Dimensionally equivalent units. A number of the units defined by the table have different names but are in fact dimensionally equivalent – i.e., they have the same expression in terms of the base units cm, g, s. (This is analogous to the distinction in SI between becquerel and Hz, or between newton-metre and joule.) The different names help avoid ambiguities and misunderstandings as to what physical quantity is being measured. In particular, all of the following quantities are dimensionally equivalent in Gaussian units, but they are nevertheless given different unit names as follows: General rules to translate a formula. Any formula can be converted between Gaussian and SI units by using the symbolic conversion factors from Table 1 above. For example, the electric field of a stationary point charge has the ISQ formula formula_22 where r is distance, and the "" superscript indicates that the electric field and charge are defined as in the ISQ. 
If we want the formula to instead use the Gaussian definitions of electric field and charge, we look up how these are related using Table 1, which says: formula_23 Therefore, after substituting and simplifying, we get the Gaussian-system formula: formula_24 which is the correct Gaussian-system formula, as mentioned in a previous section. For convenience, the table below has a compilation of the symbolic conversion factors from Table 1. To convert any formula from the Gaussian system to the ISQ using this table, replace each symbol in the Gaussian column by the corresponding expression in the SI column (vice versa to convert the other way). This will reproduce any of the specific formulas given in the list above, such as Maxwell's equations, as well as any other formula not listed. Once all occurrences of the product formula_25 have been replaced by formula_26, there should be no remaining quantities in the equation that have an ISQ electromagnetic dimension (or, equivalently, that have an SI electromagnetic unit). Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
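As a numerical sanity check of the Coulomb's-law conversion worked through above, the short sketch below evaluates the force between two charges in both systems; the charge and distance values are arbitrary choices, and the coulomb-to-statcoulomb factor is the ≈ 2.998 × 10^9 value quoted in the table discussion earlier.

```python
import math

# Two 1 C charges, 1 m apart, computed with the ISQ formula and then with the
# Gaussian formula after converting the inputs.
eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
q_SI, r_SI = 1.0, 1.0            # coulombs, metres
F_SI = q_SI**2 / (4 * math.pi * eps0 * r_SI**2)      # newtons

C_TO_STATC = 2.998e9             # 1 coulomb in statcoulombs (factor quoted above)
q_G = q_SI * C_TO_STATC          # statC
r_G = r_SI * 100.0               # centimetres
F_G = q_G**2 / r_G**2            # dynes; the Gaussian formula has no constant

print(F_SI)                      # ~8.99e9 N
print(F_G * 1e-5)                # dynes -> newtons, agrees at ~8.99e9 N
```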
[ { "math_id": 0, "text": "F = \\frac{Q^\\mathrm{G}_1 Q^\\mathrm{G}_2}{r^2} ," }, { "math_id": 1, "text": "F = \\frac{1}{4\\pi\\varepsilon_0} \\frac{Q^\\mathrm{I}_1 Q^\\mathrm{I}_2}{r^2}" }, { "math_id": 2, "text": "1/\\varepsilon_0" }, { "math_id": 3, "text": "\\mu_0 \\varepsilon_0=1/c^2" }, { "math_id": 4, "text": "\\varepsilon" }, { "math_id": 5, "text": "\\varepsilon_0" }, { "math_id": 6, "text": "\\chi_\\mathrm{e}" }, { "math_id": 7, "text": "\\varepsilon^\\mathrm{G}" }, { "math_id": 8, "text": "\\varepsilon^\\mathrm{I}/\\varepsilon_0" }, { "math_id": 9, "text": "\\chi_\\mathrm{e}^\\mathrm{G}" }, { "math_id": 10, "text": "\\chi_\\mathrm{e}^\\mathrm{I}" }, { "math_id": 11, "text": "4\\pi \\chi_\\mathrm{e}^\\mathrm{G} = \\chi_\\mathrm{e}^\\mathrm{I}\\,." }, { "math_id": 12, "text": "\\mu" }, { "math_id": 13, "text": "\\mu_0" }, { "math_id": 14, "text": "\\chi_\\mathrm{m}" }, { "math_id": 15, "text": "\\mu^\\mathrm{G}" }, { "math_id": 16, "text": "\\mu^\\mathrm{I}/\\mu_0" }, { "math_id": 17, "text": "\\chi_\\mathrm{m}^\\mathrm{G}" }, { "math_id": 18, "text": "\\chi_\\mathrm{m}^\\mathrm{I}" }, { "math_id": 19, "text": "4\\pi \\chi_\\mathrm{m}^\\mathrm{G} = \\chi_\\mathrm{m}^\\mathrm{I}" }, { "math_id": 20, "text": "\\varepsilon_0\\mu_0 = 1/c^2" }, { "math_id": 21, "text": "{1} \\,/\\, {\\sqrt{4\\pi\\varepsilon_0}} \\approx {2.998 \\times 10^9 \\,\\mathrm{Fr}} \\,/\\, {1\\,\\mathrm{C}}" }, { "math_id": 22, "text": "\\mathbf{E}^{\\mathrm{I}} = \\frac{q^{\\mathrm{I}}}{4\\pi \\varepsilon_0 r^2} \\hat{\\mathbf{r}} ," }, { "math_id": 23, "text": "\\begin{align}\n\\frac{\\mathbf{E}^{\\mathrm{G}}}{\\mathbf{E}^{\\mathrm{I}}} &= \\sqrt{4\\pi\\varepsilon_0}\\,, \\\\\n\\frac{q^{\\mathrm{G}}}{q^\\mathrm{I}} &= \\frac{1}{\\sqrt{4\\pi\\varepsilon_0}}\\,.\n\\end{align}" }, { "math_id": 24, "text": "\\mathbf{E}^{\\mathrm{G}} = \\frac{q^{\\mathrm{G}}}{r^2}\\hat{\\mathbf{r}}\\,," }, { "math_id": 25, "text": "\\varepsilon_0 \\mu_0" }, { "math_id": 26, "text": "1/c^2" } ]
https://en.wikipedia.org/wiki?curid=1101364
11014228
Hausdorff density
In measure theory, a field of mathematics, the Hausdorff density measures how concentrated a Radon measure is at some point. Definition. Let formula_0 be a Radon measure and formula_1 some point in Euclidean space. The "s"-dimensional upper and lower Hausdorff densities are defined to be, respectively, formula_2 and formula_3 where formula_4 is the ball of radius "r" &gt; 0 centered at "a". Clearly, formula_5 for all formula_1. In the event that the two are equal, we call their common value the s-density of formula_0 at "a" and denote it formula_6. Marstrand's theorem. The following theorem states that the times when the "s"-density exists are rather seldom. Marstrand's theorem: Let formula_0 be a Radon measure on formula_7. Suppose that the "s"-density formula_6 exists and is positive and finite for "a" in a set of positive formula_0 measure. Then "s" is an integer. Preiss' theorem. In 1987 David Preiss proved a stronger version of Marstrand's theorem. One consequence is that sets with positive and finite density are rectifiable sets. Preiss' theorem: Let formula_0 be a Radon measure on formula_7. Suppose that "m"formula_8 is an integer and the "m"-density formula_9 exists and is positive and finite for formula_0 almost every "a" in the support of formula_0. Then formula_0 is "m"-rectifiable, i.e. formula_10 (formula_0 is absolutely continuous with respect to Hausdorff measure formula_11) and the support of formula_0 is an "m"-rectifiable set.
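A small numerical illustration of the definition (the choice of measure and of evaluation points is arbitrary): let μ be length measure on the unit segment [0, 1] × {0} in the plane. For an interior point, the ball of radius r captures an interval of length 2r, so the ratio in the definition tends to 2 for s = 1, while at an endpoint it tends to 1.

```python
# Length (1-dimensional Hausdorff) measure restricted to the segment [0, 1] x {0};
# a point on the segment is identified with its x-coordinate.
def mu_ball(a, r):
    """mu(B_r((a, 0))): the length of the segment caught inside the ball."""
    lo, hi = max(0.0, a - r), min(1.0, a + r)
    return max(0.0, hi - lo)

for r in [0.1, 0.01, 0.001]:
    print(r, mu_ball(0.5, r) / r**1)      # interior point: ratio -> 2 (the 1-density)
print(mu_ball(0.0, 0.001) / 0.001)        # endpoint: the 1-density is 1, not 2
```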
[ { "math_id": 0, "text": "\\mu" }, { "math_id": 1, "text": "a\\in\\mathbb{R}^{n}" }, { "math_id": 2, "text": " \\Theta^{*s}(\\mu,a)=\\limsup_{r\\rightarrow 0}\\frac{\\mu(B_{r}(a))}{r^{s}}" }, { "math_id": 3, "text": " \\Theta_{*}^{s}(\\mu,a)=\\liminf_{r\\rightarrow 0}\\frac{\\mu(B_{r}(a))}{r^{s}}" }, { "math_id": 4, "text": " B_{r}(a)" }, { "math_id": 5, "text": "\\Theta_{*}^{s}(\\mu,a)\\leq \\Theta^{*s}(\\mu,a)" }, { "math_id": 6, "text": "\\Theta^{s}(\\mu,a)" }, { "math_id": 7, "text": "\\mathbb{R}^{d}" }, { "math_id": 8, "text": "\\geq 1" }, { "math_id": 9, "text": "\\Theta^{m}(\\mu,a)" }, { "math_id": 10, "text": "\\mu\\ll H^{m}" }, { "math_id": 11, "text": "H^m" } ]
https://en.wikipedia.org/wiki?curid=11014228
11015552
Discrete exterior calculus
In mathematics, the discrete exterior calculus (DEC) is the extension of the exterior calculus to discrete spaces including graphs, finite element meshes, and lately also general polygonal meshes (non-flat and non-convex). DEC methods have proved to be very powerful in improving and analyzing finite element methods: for instance, DEC-based methods allow the use of highly non-uniform meshes to obtain accurate results. Non-uniform meshes are advantageous because they allow the use of large elements where the process to be simulated is relatively simple, as opposed to a fine resolution where the process may be complicated (e.g., near an obstruction to a fluid flow), while using less computational power than if a uniformly fine mesh were used. The discrete exterior derivative. Stokes' theorem relates the integral of a differential ("n" − 1)-form "ω" over the boundary ∂"M" of an "n"-dimensional manifold "M" to the integral of d"ω" (the exterior derivative of "ω", and a differential "n"-form on "M") over "M" itself: formula_0 One could think of differential "k"-forms as linear operators that act on "k"-dimensional "bits" of space, in which case one might prefer to use the bracket notation for a dual pairing. In this notation, Stokes' theorem reads as formula_1 In finite element analysis, the first stage is often the approximation of the domain of interest by a triangulation, "T". For example, a curve would be approximated as a union of straight line segments; a surface would be approximated by a union of triangles, whose edges are straight line segments, which themselves terminate in points. Topologists would refer to such a construction as a simplicial complex. The boundary operator on this triangulation/simplicial complex "T" is defined in the usual way: for example, if "L" is a directed line segment from one point, "a", to another, "b", then the boundary ∂"L" of "L" is the formal difference "b" − "a". A "k"-form on "T" is a linear operator acting on "k"-dimensional subcomplexes of "T"; e.g., a 0-form assigns values to points, and extends linearly to linear combinations of points; a 1-form assigns values to line segments in a similarly linear way. If "ω" is a "k"-form on "T", then the discrete exterior derivative d"ω" of "ω" is the unique ("k" + 1)-form defined so that Stokes' theorem holds: formula_2 For every ("k" + 1)-dimensional subcomplex of "T", "S". Other operators and operations such as the discrete wedge product, Hodge star, or Lie derivative can also be defined.
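The construction above is easy to make concrete: in matrix form the discrete exterior derivative amounts to transposing the boundary operator of the triangulation. The sketch below (the vertex labels and orientation conventions are arbitrary choices) builds the boundary operators of a two-triangle complex and checks both the defining Stokes relation and the fact that applying d twice gives zero.

```python
import numpy as np

# Two triangles glued along an edge: vertices 0-3, edges e0..e4, faces T0, T1.
# Orientation convention: the edge (a, b) runs from vertex a to vertex b.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (0, 3)]
tris = [(0, 1, 2), (0, 2, 3)]

# Boundary of an edge (a, b) is the formal difference b - a.
B1 = np.zeros((4, len(edges)))
for j, (a, b) in enumerate(edges):
    B1[a, j], B1[b, j] = -1, 1

# Boundary of a triangle, written in terms of the oriented edges above.
B2 = np.zeros((len(edges), len(tris)))
for j, (a, b, c) in enumerate(tris):
    for p, q in [(a, b), (b, c), (c, a)]:
        if (p, q) in edges:
            B2[edges.index((p, q)), j] = 1
        else:                              # edge stored with the opposite orientation
            B2[edges.index((q, p)), j] = -1

# Discrete exterior derivatives are the transposed boundary operators.
d0, d1 = B1.T, B2.T

f = np.array([0.0, 1.0, 4.0, 9.0])        # a 0-form: one value per vertex
c = np.array([1.0, 1.0, 0.0, 0.0, 0.0])   # a 1-chain: the path 0 -> 1 -> 2

assert np.isclose((d0 @ f) @ c, f @ (B1 @ c))   # discrete Stokes: <d0 f, c> = <f, boundary of c>
assert np.allclose(d1 @ d0, 0)                  # d applied twice vanishes
```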
[ { "math_id": 0, "text": "\\int_{M} \\mathrm{d} \\omega = \\int_{\\partial M} \\omega." }, { "math_id": 1, "text": "\\langle \\mathrm{d} \\omega \\mid M \\rangle = \\langle \\omega \\mid \\partial M \\rangle." }, { "math_id": 2, "text": "\\langle \\mathrm{d} \\omega \\mid S \\rangle = \\langle \\omega \\mid \\partial S \\rangle." } ]
https://en.wikipedia.org/wiki?curid=11015552
11016148
Martin measure
In descriptive set theory, the Martin measure is a filter on the set of Turing degrees of sets of natural numbers, named after Donald A. Martin. Under the axiom of determinacy it can be shown to be an ultrafilter. Definition. Let formula_0 be the set of Turing degrees of sets of natural numbers. Given some equivalence class formula_1, we may define the "cone" (or "upward cone") of formula_2 as the set of all Turing degrees formula_3 such that formula_4; that is, the set of Turing degrees that are "at least as complex" as formula_5 under Turing reduction. In order-theoretic terms, the cone of formula_2 is the upper set of formula_2. Assuming the axiom of determinacy, the cone lemma states that if "A" is a set of Turing degrees, either "A" includes a cone or the complement of "A" contains a cone. It is similar to Wadge's lemma for Wadge degrees, and is important for the following result. We say that a set formula_6 of Turing degrees has measure 1 under the Martin measure exactly when formula_6 contains some cone. Since it is possible, for any formula_6, to construct a game in which player I has a winning strategy exactly when formula_6 contains a cone and in which player II has a winning strategy exactly when the complement of formula_6 contains a cone, the axiom of determinacy implies that the measure-1 sets of Turing degrees form an ultrafilter. Consequences. It is easy to show that a countable intersection of cones is itself a cone; the Martin measure is therefore a countably complete filter. This fact, combined with the fact that the Martin measure may be transferred to formula_7 by a simple mapping, tells us that formula_7 is measurable under the axiom of determinacy. This result shows part of the important connection between determinacy and large cardinals. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "D" }, { "math_id": 1, "text": "[X]\\in D" }, { "math_id": 2, "text": "[X]" }, { "math_id": 3, "text": "[Y]" }, { "math_id": 4, "text": "X\\le_T Y" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "A" }, { "math_id": 7, "text": "\\omega_1" } ]
https://en.wikipedia.org/wiki?curid=11016148
11016935
Hypertranscendental number
A complex number is said to be hypertranscendental if it is not the value at an algebraic point of a function which is the solution of an algebraic differential equation with coefficients in formula_0 and with algebraic initial conditions. The term was introduced by D. D. Morduhai-Boltovskoi in "Hypertranscendental numbers and hypertranscendental functions" (1949). The term is related to transcendental numbers, which are numbers which are not a solution of a non-zero polynomial equation with rational coefficients. The number formula_1 is transcendental but not hypertranscendental, as it can be generated from the solution to the differential equation formula_2. Any hypertranscendental number is also a transcendental number. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbb{Z}[r]" }, { "math_id": 1, "text": "e" }, { "math_id": 2, "text": "y' = y" } ]
https://en.wikipedia.org/wiki?curid=11016935
11017352
Hypertranscendental function
Mathematics analytic function A hypertranscendental function or transcendentally transcendental function is a transcendental analytic function which is not the solution of an algebraic differential equation with coefficients in formula_0 (the integers) and with algebraic initial conditions. History. The term 'transcendentally transcendental' was introduced by E. H. Moore in 1896; the term 'hypertranscendental' was introduced by D. D. Morduhai-Boltovskoi in 1914. Definition. One standard definition (there are slight variants) defines solutions of differential equations of the form formula_1, where formula_2 is a polynomial with constant coefficients, as "algebraically transcendental" or "differentially algebraic". Transcendental functions which are not "algebraically transcendental" are "transcendentally transcendental". Hölder's theorem shows that the gamma function is in this category. Hypertranscendental functions usually arise as the solutions to functional equations, for example the gamma function. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbb{Z}" }, { "math_id": 1, "text": "F\\left(x, y, y', \\cdots, y^{(n)} \\right) = 0" }, { "math_id": 2, "text": "F" } ]
https://en.wikipedia.org/wiki?curid=11017352
11017807
Morrie's law
For angles in degrees, cos(20)*cos(40)*cos(80) equals 1/8 Morrie's law is a special trigonometric identity. Its name is due to the physicist Richard Feynman, who used to refer to the identity under that name. Feynman picked that name because he learned it during his childhood from a boy with the name Morrie Jacobs and afterwards remembered it for all of his life. formula_0 Identity and generalisation. It is a special case of the more general identity formula_1 with "n" = 3 and α = 20° and the fact that formula_2 since formula_3 Similar identities. A similar identity for the sine function also holds: formula_4 Moreover, dividing the second identity by the first, the following identity is evident: formula_5 Proof. Geometric proof of Morrie's law. Consider a regular nonagon formula_6 with side length formula_7 and let formula_8 be the midpoint of formula_9, formula_10 the midpoint of formula_11 and formula_12 the midpoint of formula_13. The inner angles of the nonagon equal formula_14 and furthermore formula_15, formula_16 and formula_17 (see graphic). Applying the cosine definition in the right-angled triangles formula_18, formula_19 and formula_20 then yields the proof for Morrie's law: formula_21 Algebraic proof of the generalised identity. Recall the double angle formula for the sine function formula_22 Solve for formula_23 formula_24 It follows that: formula_25 Multiplying all of these expressions together yields: formula_26 The intermediate numerators and denominators cancel, leaving only the first denominator, a power of 2 and the final numerator. Note that there are "n" terms on both sides of the expression. Thus, formula_27 which is equivalent to the generalization of Morrie's law.
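Both the special case and the general identity are easy to check numerically; in the sketch below the test angle and the value of n are arbitrary choices.

```python
import math

deg = math.radians

# Morrie's law itself: cos 20 * cos 40 * cos 80 = 1/8 (angles in degrees).
print(math.cos(deg(20)) * math.cos(deg(40)) * math.cos(deg(80)), 1 / 8)

def general_identity_lhs(alpha, n):
    """2^n times the product of cos(2^k * alpha) for k = 0 .. n-1."""
    prod = 1.0
    for k in range(n):
        prod *= math.cos(2**k * alpha)
    return 2**n * prod

a, n = 0.3, 5                       # any angle with sin(a) != 0, any n >= 1
print(general_identity_lhs(a, n), math.sin(2**n * a) / math.sin(a))   # equal
```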
[ { "math_id": 0, "text": " \\cos(20^\\circ) \\cdot \\cos(40^\\circ) \\cdot \\cos(80^\\circ) = \\frac{1}{8}." }, { "math_id": 1, "text": " 2^n \\cdot \\prod_{k=0}^{n-1} \\cos(2^k \\alpha) = \\frac{\\sin(2^n \\alpha)}{\\sin(\\alpha)}" }, { "math_id": 2, "text": " \\frac{\\sin(160^\\circ)}{\\sin(20^\\circ)} = \\frac{\\sin(180^\\circ-20^\\circ)}{\\sin(20^\\circ)} = 1," }, { "math_id": 3, "text": " \\sin(180^\\circ-x) = \\sin(x)." }, { "math_id": 4, "text": " \\sin(20^\\circ) \\cdot \\sin(40^\\circ) \\cdot \\sin(80^\\circ) = \\frac{\\sqrt 3}{8}." }, { "math_id": 5, "text": " \\tan(20^\\circ) \\cdot \\tan(40^\\circ) \\cdot \\tan(80^\\circ) = \\sqrt 3 = \\tan(60^\\circ)." }, { "math_id": 6, "text": "ABCDEFGHI" }, { "math_id": 7, "text": "1" }, { "math_id": 8, "text": "M" }, { "math_id": 9, "text": "AB" }, { "math_id": 10, "text": "L" }, { "math_id": 11, "text": "BF" }, { "math_id": 12, "text": "J" }, { "math_id": 13, "text": "BD" }, { "math_id": 14, "text": "140^\\circ" }, { "math_id": 15, "text": "\\gamma=\\angle FBM=80^\\circ" }, { "math_id": 16, "text": "\\beta=\\angle DBF=40^\\circ" }, { "math_id": 17, "text": "\\alpha=\\angle CBD=20^\\circ" }, { "math_id": 18, "text": "\\triangle BFM" }, { "math_id": 19, "text": "\\triangle BDL" }, { "math_id": 20, "text": "\\triangle BCJ" }, { "math_id": 21, "text": "\\begin{align}\n1&=|AB|\\\\ \n&=2\\cdot|MB|\\\\\n&=2\\cdot|BF|\\cdot\\cos(\\gamma)\\\\\n&=2^2|BL|\\cos(\\gamma)\\\\\n&=2^2\\cdot|BD|\\cdot\\cos(\\gamma)\\cdot\\cos(\\beta)\\\\\n&=2^3\\cdot|BJ|\\cdot\\cos(\\gamma)\\cdot\\cos(\\beta) \\\\\n&=2^3\\cdot|BC|\\cdot\\cos(\\gamma)\\cdot\\cos(\\beta)\\cdot\\cos(\\alpha) \\\\\n&=2^3\\cdot 1 \\cdot\\cos(\\gamma)\\cdot\\cos(\\beta)\\cdot\\cos(\\alpha) \\\\\n&=8\\cdot\\cos(80^\\circ)\\cdot\\cos(40^\\circ)\\cdot\\cos(20^\\circ)\n\\end{align}" }, { "math_id": 22, "text": " \\sin(2 \\alpha) = 2 \\sin(\\alpha) \\cos(\\alpha). " }, { "math_id": 23, "text": " \\cos(\\alpha) " }, { "math_id": 24, "text": " \\cos(\\alpha)=\\frac{\\sin(2 \\alpha)}{2 \\sin(\\alpha)}. " }, { "math_id": 25, "text": "\n\\begin{align}\n \\cos(2 \\alpha) & = \\frac{\\sin(4 \\alpha)}{2 \\sin(2 \\alpha)} \\\\[6pt]\n \\cos(4 \\alpha) & = \\frac{\\sin(8 \\alpha)}{2 \\sin(4 \\alpha)} \\\\\n & \\,\\,\\,\\vdots \\\\\n \\cos\\left(2^{n-1} \\alpha\\right)\n & = \\frac{\\sin\\left(2^n \\alpha\\right)}{2 \\sin\\left(2^{n-1} \\alpha\\right)}.\n\\end{align}\n" }, { "math_id": 26, "text": "\n \\cos(\\alpha) \\cos(2 \\alpha) \\cos(4 \\alpha) \\cdots \\cos\\left(2^{n-1} \\alpha\\right) =\n \\frac{\\sin(2 \\alpha)}{2 \\sin(\\alpha)} \\cdot\n \\frac{\\sin(4 \\alpha)}{2 \\sin(2 \\alpha)} \\cdot\n \\frac{\\sin(8 \\alpha)}{2 \\sin(4 \\alpha)} \\cdots\n \\frac{\\sin\\left(2^n \\alpha\\right)}{2 \\sin\\left(2^{n-1} \\alpha\\right)}.\n" }, { "math_id": 27, "text": " \\prod_{k=0}^{n-1} \\cos\\left(2^k \\alpha\\right) = \\frac{\\sin\\left(2^n \\alpha\\right)}{2^n \\sin(\\alpha)}, " }, { "math_id": 28, "text": "\\alpha=2^{-n} x" } ]
https://en.wikipedia.org/wiki?curid=11017807
11017946
Matrix coefficient
Functions on special groups related to their matrix representations In mathematics, a matrix coefficient (or matrix element) is a function on a group of a special form, which depends on a linear representation of the group and additional data. Precisely, it is a function on a compact topological group "G" obtained by composing a representation of "G" on a vector space "V" with a linear map from the endomorphisms of "V" into "V"'s underlying field. It is also called a representative function. They arise naturally from finite-dimensional representations of "G" as the matrix-entry functions of the corresponding matrix representations. The Peter–Weyl theorem says that the matrix coefficients on "G" are dense in the Hilbert space of square-integrable functions on "G". Matrix coefficients of representations of Lie groups turned out to be intimately related with the theory of special functions, providing a unifying approach to large parts of this theory. Growth properties of matrix coefficients play a key role in the classification of irreducible representations of locally compact groups, in particular, reductive real and "p"-adic groups. The formalism of matrix coefficients leads to a generalization of the notion of a modular form. In a different direction, mixing properties of certain dynamical systems are controlled by the properties of suitable matrix coefficients. Definition. A matrix coefficient (or matrix element) of a linear representation ρ of a group G on a vector space V is a function fv,η on the group, of the type formula_0 where v is a vector in V, η is a continuous linear functional on V, and g is an element of G. This function takes scalar values on G. If V is a Hilbert space, then by the Riesz representation theorem, all matrix coefficients have the form formula_1 for some vectors v and w in V. For V of finite dimension, and v and w taken from a standard basis, this is actually the function given by the matrix entry in a fixed place. Applications. Finite groups. Matrix coefficients of irreducible representations of finite groups play a prominent role in representation theory of these groups, as developed by Burnside, Frobenius and Schur. They satisfy Schur orthogonality relations. The character of a representation ρ is a sum of the matrix coefficients "f""v"i,ηi, where {"v"i} form a basis in the representation space of ρ, and {ηi} form the dual basis. Finite-dimensional Lie groups and special functions. Matrix coefficients of representations of Lie groups were first considered by Élie Cartan. Israel Gelfand realized that many classical special functions and orthogonal polynomials are expressible as the matrix coefficients of representation of Lie groups "G". This description provides a uniform framework for proving many hitherto disparate properties of special functions, such as addition formulas, certain recurrence relations, orthogonality relations, integral representations, and eigenvalue properties with respect to differential operators. Special functions of mathematical physics, such as the trigonometric functions, the hypergeometric function and its generalizations, Legendre and Jacobi orthogonal polynomials and Bessel functions all arise as matrix coefficients of representations of Lie groups. Theta functions and real analytic Eisenstein series, important in algebraic geometry and number theory, also admit such realizations. Automorphic forms. 
A powerful approach to the theory of classical modular forms, initiated by Gelfand, Graev, and Piatetski-Shapiro, views them as matrix coefficients of certain infinite-dimensional unitary representations, automorphic representations of adelic groups. This approach was further developed by Langlands, for general reductive algebraic groups over global fields.
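As a toy numerical illustration of the link with special functions described above: for the standard two-dimensional rotation representation of the circle group SO(2), the matrix coefficients are exactly cos θ and sin θ. The sketch below (the choice of vector and functional is arbitrary) evaluates f_{v,η}(g) = η(ρ(g)v) directly.

```python
import numpy as np

def rho(theta):
    """The two-dimensional rotation representation of SO(2)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])      # vector in the representation space
eta = np.array([1.0, 0.0])    # linear functional, written as a row vector

thetas = np.linspace(0.0, 2 * np.pi, 7)
f = [eta @ rho(t) @ v for t in thetas]       # f_{v,eta}(g) = eta(rho(g) v)
print(np.allclose(f, np.cos(thetas)))        # this matrix coefficient is cos(theta)
```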
[ { "math_id": 0, "text": "f_{v,\\eta}(g) = \\eta(\\rho(g)v)" }, { "math_id": 1, "text": "f_{v,w}(g) = \\langle \\rho(g)v, w \\rangle" } ]
https://en.wikipedia.org/wiki?curid=11017946
11018121
Quaternion-Kähler symmetric space
In differential geometry, a quaternion-Kähler symmetric space or Wolf space is a quaternion-Kähler manifold which, as a Riemannian manifold, is a Riemannian symmetric space. Any quaternion-Kähler symmetric space with positive Ricci curvature is compact and simply connected, and is a Riemannian product of quaternion-Kähler symmetric spaces associated to compact simple Lie groups. For any compact simple Lie group "G", there is a unique "G"/"H" obtained as a quotient of "G" by a subgroup formula_0 Here, Sp(1) is the compact form of the SL(2)-triple associated with the highest root of "G", and "K" its centralizer in "G". These are classified as follows. The twistor spaces of quaternion-Kähler symmetric spaces are the homogeneous holomorphic contact manifolds, classified by Boothby: they are the adjoint varieties of the complex semisimple Lie groups. These spaces can be obtained by taking a projectivization of a minimal nilpotent orbit of the respective complex Lie group. The holomorphic contact structure is apparent, because the nilpotent orbits of semisimple Lie groups are equipped with the Kirillov-Kostant holomorphic symplectic form. This argument also explains how one can associate a unique Wolf space to each of the simple complex Lie groups.
[ { "math_id": 0, "text": " H = K \\cdot \\mathrm{Sp}(1).\\, " } ]
https://en.wikipedia.org/wiki?curid=11018121
1101849
Cyclic voltammetry
Method of analyzing electrochemical reactions In electrochemistry, cyclic voltammetry (CV) is a type of potentiodynamic measurement. In a cyclic voltammetry experiment, the working electrode potential is ramped linearly versus time. Unlike in linear sweep voltammetry, after the set potential is reached in a CV experiment, the working electrode's potential is ramped in the opposite direction to return to the initial potential. These cycles of ramps in potential may be repeated as many times as needed. The current at the working electrode is plotted versus the applied voltage (that is, the working electrode's potential) to give the cyclic voltammogram trace. Cyclic voltammetry is generally used to study the electrochemical properties of an analyte in solution or of a molecule that is adsorbed onto the electrode. Experimental method. In cyclic voltammetry (CV), the electrode potential ramps linearly versus time in cyclical phases (blue trace in Figure 2). The rate of voltage change over time during each of these phases is known as the experiment's scan rate (V/s). The potential is measured between the working electrode and the reference electrode, while the current is measured between the working electrode and the counter electrode. These data are plotted as current density ("j") versus applied potential ("E", often referred to as just 'potential'). In Figure 2, during the initial forward scan (from t0 to t1) an increasingly oxidizing potential is applied; thus the anodic current will, at least initially, increase over this time period, assuming that there are oxidizable analytes in the system. At some point after the oxidation potential of the analyte is reached, the anodic current will decrease as the concentration of oxidizable analyte is depleted. If the redox couple is reversible, then during the reverse scan (from t1 to t2), the oxidized analyte will start to be re-reduced, giving rise to a current of reverse polarity (cathodic current) compared to before. The more reversible the redox couple is, the more similar the oxidation peak will be in shape to the reduction peak. Hence, CV data can provide information about redox potentials and electrochemical reaction rates. For instance, if the electron transfer at the working electrode surface is fast and the current is limited by the diffusion of analyte species to the electrode surface, then the peak current will be proportional to the square root of the scan rate. This relationship is described by the Randles–Sevcik equation. In this situation, the CV experiment only samples a small portion of the solution, i.e., the diffusion layer at the electrode surface. Characterization. The utility of cyclic voltammetry is highly dependent on the analyte being studied. The analyte has to be redox active within the potential window to be scanned. The analyte is in solution. Reversible couples. Often the analyte displays a reversible CV wave (such as that depicted in Figure 1), which is observed when all of the initial analyte can be recovered after a forward and reverse scan cycle. Although such reversible couples are simpler to analyze, they contain less information than more complex waveforms. The waveform of even reversible couples is complex owing to the combined effects of polarization and diffusion. The difference between the two peak potentials (Ep), ΔEp, is of particular interest. Δ"E"p = "E"pa - "E"pc > 0 This difference mainly results from the effects of analyte diffusion rates.
In the ideal case of a reversible 1e- couple, Δ"E"p is 57 mV and the full-width half-max of the forward scan peak is 59 mV. Typical values observed experimentally are greater, often approaching 70 or 80 mV. The waveform is also affected by the rate of electron transfer, usually discussed as the activation barrier for electron transfer. A theoretical description of polarization overpotential is in part described by the Butler–Volmer equation and Cottrell equation. In an ideal system the relationship reduces to formula_0 for an "n" electron process. Focusing on current, reversible couples are characterized by "i"pa/"i"pc = 1. When a reversible peak is observed, thermodynamic information in the form of a half cell potential E01/2 can be determined. When waves are semi-reversible ("i"pa/"i"pc is close but not equal to 1), it may be possible to determine even more specific information (see electrochemical reaction mechanism). The current maxima for oxidation and reduction themselves depend on the scan rate (see the figure). To study the nature of the electrochemical reaction mechanism it is useful to perform a power fit according to formula_1 A fit with formula_2 in the figure shows the proportionality of the peak currents to the square root of the scan rate when additionally formula_3 is fulfilled. This leads to the so-called Randles–Sevcik equation, and the rate-determining step of this electrochemical redox reaction can be assigned to diffusion. Nonreversible couples. Many redox processes observed by CV are quasi-reversible or non-reversible. In such cases the thermodynamic potential E01/2 is often deduced by simulation. The irreversibility is indicated by ipa/ipc ≠ 1. Deviations from unity are attributable to a subsequent chemical reaction that is triggered by the electron transfer. Such EC processes can be complex, involving isomerization, dissociation, association, etc. The analyte is adsorbed onto the electrode surface. Adsorbed species give simple voltammetric responses: ideally, at slow scan rates, there is no peak separation, the peak width is 90 mV for a one-electron redox couple, and the peak current and peak area are proportional to scan rate (observing that the peak current is proportional to scan rate proves that the redox species that gives the peak is actually immobilised). The effect of increasing the scan rate can be used to measure the rate of interfacial electron transfer and/or the rates of reactions that are coupled to electron transfer. This technique has been useful to study redox proteins, some of which readily adsorb on various electrode materials, but the theory for biological and non-biological redox molecules is the same (see the page about protein film voltammetry). Experimental setup. CV experiments are conducted on a solution in a cell fitted with electrodes. The solution consists of the solvent, in which are dissolved the electrolyte and the species to be studied. The cell. A standard CV experiment employs a cell fitted with three electrodes: reference electrode, working electrode, and counter electrode. This combination is sometimes referred to as a three-electrode setup. Electrolyte is usually added to the sample solution to ensure sufficient conductivity. The solvent, electrolyte, and material composition of the working electrode will determine the potential range that can be accessed during the experiment. The electrodes are immobile and sit in unstirred solutions during cyclic voltammetry.
This "still" solution method gives rise to cyclic voltammetry's characteristic diffusion-controlled peaks. This method also allows a portion of the analyte to remain after reduction or oxidation so that it may display further redox activity. Stirring the solution between cyclic voltammetry traces is important in order to supply the electrode surface with fresh analyte for each new experiment. The solubility of an analyte can change drastically with its overall charge; as such it is common for reduced or oxidized analyte species to precipitate out onto the electrode. This layering of analyte can insulate the electrode surface, display its own redox activity in subsequent scans, or otherwise alter the electrode surface in a way that affects the CV measurements. For this reason it is often necessary to clean the electrodes between scans. Common materials for the working electrode include glassy carbon, platinum, and gold. These electrodes are generally encased in a rod of inert insulator with a disk exposed at one end. A regular working electrode has a radius within an order of magnitude of 1 mm. Having a controlled surface area with a well-defined shape is necessary for being able to interpret cyclic voltammetry results. To run cyclic voltammetry experiments at very high scan rates a regular working electrode is insufficient. High scan rates create peaks with large currents and increased resistances, which result in distortions. Ultramicroelectrodes can be used to minimize the current and resistance. The counter electrode, also known as the auxiliary or second electrode, can be any material that conducts current easily, will not react with the bulk solution, and has a surface area much larger than the working electrode. Common choices are platinum and graphite. Reactions occurring at the counter electrode surface are unimportant as long as it continues to conduct current well. To maintain the observed current the counter electrode will often oxidize or reduce the solvent or bulk electrolyte. Solvents. CV can be conducted using a variety of solutions. Solvent choice for cyclic voltammetry takes into account several requirements. The solvent must dissolve the analyte and high concentrations of the supporting electrolyte. It must also be stable in the potential window of the experiment with respect to the working electrode. It must not react with either the analyte or the supporting electrolyte. It must be pure to prevent interference. Electrolyte. The electrolyte ensures good electrical conductivity and minimizes "iR" drop such that the recorded potentials correspond to actual potentials. For aqueous solutions, many electrolytes are available, but typical ones are alkali metal salts of perchlorate and nitrate. In nonaqueous solvents, the range of electrolytes is more limited, and a popular choice is tetrabutylammonium hexafluorophosphate. Related potentiometric techniques. Potentiodynamic techniques also exist that add low-amplitude AC perturbations to a potential ramp and measure variable response in a single frequency (AC voltammetry) or in many frequencies simultaneously (potentiodynamic electrochemical impedance spectroscopy). The response in alternating current is two-dimensional, characterized by both amplitude and phase. These data can be analyzed to determine information about different chemical processes (charge transfer, diffusion, double layer charging, etc.). 
Frequency response analysis enables simultaneous monitoring of the various processes that contribute to the potentiodynamic AC response of an electrochemical system. Although cyclic voltammetry itself is not a hydrodynamic technique, other useful electrochemical methods are. In such cases, flow is achieved at the electrode surface by stirring the solution, pumping the solution, or rotating the electrode, as is the case with rotating disk electrodes and rotating ring-disk electrodes. Such techniques target steady-state conditions and produce waveforms that appear the same when scanned in either the positive or negative direction, thus limiting them to linear sweep voltammetry. Applications. Cyclic voltammetry (CV) has become an important and widely used electroanalytical technique in many areas of chemistry. It is often used to study a variety of redox processes, to determine the stability of reaction products, the presence of intermediates in redox reactions, electron transfer kinetics, and the reversibility of a reaction. It can be used for electrochemical deposition of thin films or for determining a suitable reduction potential range of the ions present in the electrolyte for electrochemical deposition. CV can also be used to determine the electron stoichiometry of a system, the diffusion coefficient of an analyte, and the formal reduction potential of an analyte, which can be used as an identification tool. In addition, because concentration is proportional to current in a reversible, Nernstian system, the concentration of an unknown solution can be determined by generating a calibration curve of current vs. concentration. In cellular biology it is used to measure the concentrations of electroactive species in living organisms. In organometallic chemistry, it is used to evaluate redox mechanisms. Measuring antioxidant capacity. Cyclic voltammetry can be used to determine the antioxidant capacity in food and even skin. Low molecular weight antioxidants, molecules that prevent other molecules from being oxidized by acting as reducing agents, are important in living cells because they inhibit cell damage or death caused by oxidation reactions that produce radicals. Examples of antioxidants include flavonoids, whose antioxidant activity is greatly increased with more hydroxyl groups. Because traditional methods to determine antioxidant capacity involve tedious steps, techniques to increase the rate of the experiment are continually being researched. One such technique involves cyclic voltammetry because it can measure the antioxidant capacity by quickly measuring the redox behavior over a complex system without the need to measure each component's antioxidant capacity. Furthermore, antioxidants are quickly oxidized at inert electrodes, so the half-wave potential can be utilized to determine antioxidant capacity. Whenever cyclic voltammetry is utilized in this way, the results are usually compared to spectrophotometry or high-performance liquid chromatography (HPLC). Applications of the technique extend to food chemistry, where it is used to determine the antioxidant activity of red wine, chocolate, and hops. It also has uses in medicine, as it can determine antioxidants in the skin. Evaluation of a technique. The technique being evaluated uses voltammetric sensors combined in an electronic tongue (ET) to observe the antioxidant capacity in red wines. These electronic tongues (ETs) consist of multiple sensing units, such as voltammetric sensors, which have unique responses to certain compounds. 
This approach is well suited to such samples because mixtures of high complexity can be analyzed with high cross-selectivity. Thus, the sensors can be sensitive to pH and antioxidants. As usual, the voltage in the cell was monitored using a working electrode and a reference electrode (a silver/silver chloride electrode). Furthermore, a platinum counter electrode allows the current to continue to flow during the experiment. The carbon paste electrode (CPE) sensor and the graphite-epoxy composite (GEC) electrode are tested in a saline solution before the scanning of the wine so that a reference signal can be obtained. The wines are then ready to be scanned, once with the CPE and once with the GEC. While cyclic voltammetry was successfully used to generate currents from the wine samples, the signals were complex and needed an additional extraction stage. It was found that the ET method could successfully analyze wine's antioxidant capacity, as it agreed with traditional methods like the TEAC, Folin-Ciocalteu, and I280 indexes. Additionally, the analysis time was reduced, the sample did not have to be pretreated, and other reagents were unnecessary, all of which make the traditional methods less attractive. Thus, cyclic voltammetry successfully determines the antioxidant capacity and even improves on previous results. Antioxidant capacity of chocolate and hops. The phenolic antioxidants for cocoa powder, dark chocolate, and milk chocolate can also be determined via cyclic voltammetry. In order to achieve this, the anodic peaks are calculated and analyzed with the knowledge that the first and third anodic peaks can be assigned to the first and second oxidation of flavonoids, while the second anodic peak represents phenolic acids. Using the graph produced by cyclic voltammetry, the total phenolic and flavonoid content can be deduced in each of the three samples. It was observed that cocoa powder and dark chocolate had the highest antioxidant capacity since they had high total phenolic and flavonoid content. Milk chocolate had the lowest capacity as it had the lowest phenolic and flavonoid content. While the antioxidant content was given using the cyclic voltammetry anodic peaks, HPLC must then be used to determine the purity of catechins and procyanidin in cocoa powder, dark chocolate, and milk chocolate. Hops, the flowers used in making beer, contain antioxidant properties due to the presence of flavonoids and other polyphenolic compounds. In this cyclic voltammetry experiment, the working electrode voltage was determined against a ferricinium/ferrocene reference electrode. By comparing different hop extract samples, it was observed that the sample containing polyphenols that were oxidized at less positive potentials proved to have better antioxidant capacity. References. <templatestyles src="Reflist/styles.css" />
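The scan-rate analysis discussed above (a power fit whose exponent is about 0.5 for a diffusion-controlled couple, as in the Randles–Sevcik treatment) can be illustrated with a short numerical sketch. The Python fragment below uses made-up peak currents and scan rates purely for illustration; it is not drawn from any of the studies cited in this article.

    import numpy as np

    # Hypothetical peak currents (in microamps) at several scan rates (in mV/s).
    scan_rates = np.array([10.0, 20.0, 50.0, 100.0, 200.0])
    peak_currents = 0.31 * np.sqrt(scan_rates)   # synthetic, diffusion-controlled data

    # Power-law fit i_p = A * v**x, done as a straight-line fit in log-log space.
    slope, intercept = np.polyfit(np.log(scan_rates), np.log(peak_currents), 1)
    print(f"fitted exponent x = {slope:.2f}")
    # An exponent near 0.5 means the peak current scales with the square root of
    # the scan rate, the signature of a diffusion-limited electrode process.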
[ { "math_id": 0, "text": "E_{pa}-E_{pc}=\\frac{56.5\\text{ mV}}{n}" }, { "math_id": 1, "text": " j_{max}^{\\;ox,read} = j_{0} + A \\cdot \\bigg(\\frac{scan\\,rate}{mV/s}\\bigg)^{x} " }, { "math_id": 2, "text": " x= 0.5 " }, { "math_id": 3, "text": "\\mathrm{ j_{0} < A } " } ]
https://en.wikipedia.org/wiki?curid=1101849
11022628
List of space groups
There are 230 space groups in three dimensions, given by a number index, a full name in Hermann–Mauguin notation, and a short name (international short symbol). The long names are given with spaces for readability. The groups each have a point group of the unit cell. Symbols. In Hermann–Mauguin notation, space groups are named by a symbol combining the point group identifier with the uppercase letters describing the lattice type. Translations within the lattice in the form of screw axes and glide planes are also noted, giving a complete crystallographic space group. These are the Bravais lattices in three dimensions: A reflection plane m within the point groups can be replaced by a glide plane, labeled as a, b, or c depending on which axis the glide is along. There is also the n glide, which is a glide along half of a face diagonal, and the d glide, which is along a quarter of either a face or space diagonal of the unit cell. The d glide is often called the diamond glide plane as it features in the diamond structure. A gyration point can be replaced by a screw axis denoted by a number, "n", where the angle of rotation is formula_6. The degree of translation is then added as a subscript showing how far along the axis the translation is, as a portion of the parallel lattice vector. For example, 21 is a 180° (twofold) rotation followed by a translation of 1/2 of the lattice vector. 31 is a 120° (threefold) rotation followed by a translation of 1/3 of the lattice vector. The possible screw axes are: 21, 31, 32, 41, 42, 43, 61, 62, 63, 64, and 65. Wherever there is both a rotation or screw axis "n" and a mirror or glide plane "m" along the same crystallographic direction, they are represented as a fraction formula_7 or "n/m". For example, 41/a means that the crystallographic axis in question contains both a 41 screw axis as well as a glide plane along a. In Schoenflies notation, the symbol of a space group is represented by the symbol of the corresponding point group with an additional superscript. The superscript doesn't give any additional information about symmetry elements of the space group, but is instead related to the order in which Schoenflies derived the space groups. This is sometimes supplemented with a symbol of the form formula_8 which specifies the Bravais lattice. Here formula_9 is the lattice system, and formula_10 is the centering type. In the Fedorov symbol, the type of space group is denoted as "s" ("symmorphic"), "h" ("hemisymmorphic"), or "a" ("asymmorphic"). The number is related to the order in which Fedorov derived space groups. There are 73 symmorphic, 54 hemisymmorphic, and 103 asymmorphic space groups. Symmorphic. The 73 symmorphic space groups can be obtained as combinations of Bravais lattices with corresponding point groups. These groups contain the same symmetry elements as the corresponding point groups, for example, the space groups P4/mmm (formula_11, "36s") and I4/mmm (formula_12, "37s"). Hemisymmorphic. The 54 hemisymmorphic space groups contain only axial combinations of symmetry elements from the corresponding point groups. The hemisymmorphic space groups containing the axial combination 422 are P4/mcc (formula_13, "35h"), P4/nbm (formula_14, "36h"), P4/nnc (formula_15, "37h"), and I4/mcm (formula_16, "38h"). Asymmorphic. The remaining 103 space groups are asymmorphic, for example, those derived from the point group 4/mmm (formula_17). Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
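As a small illustration of the screw-axis convention described above, the following Python sketch (the function name and output format are arbitrary choices for this example) converts a symbol such as 21 or 63 into its rotation angle and fractional translation.

    # A screw axis "n_m" is a rotation by 360/n degrees followed by a translation
    # of m/n of the lattice vector parallel to the axis.
    def screw_axis(n, m):
        return {"rotation_deg": 360.0 / n, "translation_fraction": m / n}

    for n, m in [(2, 1), (3, 1), (6, 3)]:
        info = screw_axis(n, m)
        print(f"{n}{m}: {info['rotation_deg']:.0f} degree rotation, "
              f"translation {info['translation_fraction']:.3f} of the lattice vector")
    # 21: 180 degree rotation, translation 0.500 of the lattice vector
    # 31: 120 degree rotation, translation 0.333 of the lattice vector
    # 63: 60 degree rotation, translation 0.500 of the lattice vector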
[ { "math_id": 0, "text": "a" }, { "math_id": 1, "text": "b" }, { "math_id": 2, "text": "c" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "d" }, { "math_id": 5, "text": "e" }, { "math_id": 6, "text": "\\color{Black}\\tfrac{360^\\circ}{n}" }, { "math_id": 7, "text": "\\frac{n}{m}" }, { "math_id": 8, "text": "\\Gamma_x^y" }, { "math_id": 9, "text": "x \\in \\{t, m, o, q, rh, h, c\\}" }, { "math_id": 10, "text": "y \\in \\{\\empty, b, v, f\\}" }, { "math_id": 11, "text": "P\\tfrac{4}{m}\\tfrac{2}{m}\\tfrac{2}{m}" }, { "math_id": 12, "text": "I\\tfrac{4}{m}\\tfrac{2}{m}\\tfrac{2}{m}" }, { "math_id": 13, "text": "P\\tfrac{4}{m}\\tfrac{2}{c}\\tfrac{2}{c}" }, { "math_id": 14, "text": "P\\tfrac{4}{n}\\tfrac{2}{b}\\tfrac{2}{m}" }, { "math_id": 15, "text": "P\\tfrac{4}{n}\\tfrac{2}{n}\\tfrac{2}{c}" }, { "math_id": 16, "text": "I\\tfrac{4}{m}\\tfrac{2}{c}\\tfrac{2}{m}" }, { "math_id": 17, "text": "\\tfrac{4}{m}\\tfrac{2}{m}\\tfrac{2}{m}" } ]
https://en.wikipedia.org/wiki?curid=11022628
11022873
List of character tables for chemically important 3D point groups
This lists the character tables for the more common molecular point groups used in the study of molecular symmetry. These tables are based on the group-theoretical treatment of the symmetry operations present in common molecules, and are useful in molecular spectroscopy and quantum chemistry. Information regarding the use of the tables, as well as more extensive lists of them, can be found in the references. Notation. For each non-linear group, the tables give the most standard notation of the finite group isomorphic to the point group, followed by the order of the group (number of invariant symmetry operations). The finite group notation used is: Zn: cyclic group of order "n", Dn: dihedral group isomorphic to the symmetry group of an "n"–sided regular polygon, Sn: symmetric group on "n" letters, and An: alternating group on "n" letters. The character tables then follow for all groups. The rows of the character tables correspond to the irreducible representations of the group, with their conventional names, known as Mulliken symbols, in the left margin. The naming conventions are as follows: All but the two rightmost columns correspond to the symmetry operations which are invariant in the group. In the case of sets of similar operations with the same characters for all representations, they are presented as one column, with the number of such similar operations noted in the heading. The body of the tables contains the characters in the respective irreducible representations for each respective symmetry operation, or set of symmetry operations. The symbol "i" used in the body of the table denotes the imaginary unit: "i" 2 = −1. Used in a column heading, it denotes the operation of inversion. A superscripted uppercase "C" denotes complex conjugation. The two rightmost columns indicate which irreducible representations describe the symmetry transformations of the three Cartesian coordinates ("x", "y" and "z"), rotations about those three coordinates ("Rx", "Ry" and "Rz"), and functions of the quadratic terms of the coordinates ("x"2, "y"2, "z"2, "xy", "xz", and "yz"). A further column is included in some tables, such as those of Salthouse and Ware; this last column relates to cubic functions which may be used in applications regarding "f" orbitals in atoms. Character tables. Nonaxial symmetries. These groups are characterized by a lack of a proper rotation axis, noting that a formula_0 rotation is considered the identity operation. These groups have involutional symmetry: the only nonidentity operation, if any, is its own inverse. In the group formula_0, all functions of the Cartesian coordinates and rotations about them transform as the formula_1 irreducible representation. Cyclic symmetries. The families of groups with these symmetries have only one rotation axis. Cyclic groups ("C"n). The cyclic groups are denoted by "C"n. These groups are characterized by an "n"-fold proper rotation axis "C"n. The "C"1 group is covered in the nonaxial groups section. Reflection groups ("C"nh). The reflection groups are denoted by "C"nh. These groups are characterized by i) an "n"-fold proper rotation axis "C"n; ii) a mirror plane "σh" normal to "C"n. The "C"1"h" group is the same as the "C"s group in the nonaxial groups section. Pyramidal groups ("C"nv). The pyramidal groups are denoted by "C"nv. These groups are characterized by i) an "n"-fold proper rotation axis "C"n; ii) "n" mirror planes "σv" which contain "C"n. The "C"1"v" group is the same as the "C"s group in the nonaxial groups section. 
Improper rotation groups ("S"n). The improper rotation groups are denoted by "Sn". These groups are characterized by an "n"-fold improper rotation axis "Sn", where "n" is necessarily even. The "S"2 group is the same as the "C"i group in the nonaxial groups section. "Sn" groups with an odd value of "n" are identical to C"n"h groups of same "n" and are therefore not considered here (in particular, S1 is identical to Cs). The S8 table reflects the 2007 discovery of errors in older references. Specifically, ("Rx", "Ry") transform not as E1 but rather as E3. Dihedral symmetries. The families of groups with these symmetries are characterized by 2-fold proper rotation axes normal to a principal rotation axis. Dihedral groups ("D"n). The dihedral groups are denoted by "D"n. These groups are characterized by i) an "n"-fold proper rotation axis "C"n; ii) "n" 2-fold proper rotation axes "C"2 normal to "C"n. The "D"1 group is the same as the "C"2 group in the cyclic groups section. Prismatic groups ("D"nh). The prismatic groups are denoted by "D"nh. These groups are characterized by i) an "n"-fold proper rotation axis "C"n; ii) "n" 2-fold proper rotation axes "C"2 normal to "C"n; iii) a mirror plane "σh" normal to "C"n and containing the "C"2s. The "D"1"h" group is the same as the "C"2"v" group in the pyramidal groups section. The D8"h" table reflects the 2007 discovery of errors in older references. Specifically, symmetry operation column headers 2S8 and 2S83 were reversed in the older references. Antiprismatic groups ("D"nd). The antiprismatic groups are denoted by "D"nd. These groups are characterized by i) an "n"-fold proper rotation axis "C"n; ii) "n" 2-fold proper rotation axes "C"2 normal to "C"n; iii) "n" mirror planes "σd" which contain "C"n. The "D"1"d" group is the same as the "C"2"h" group in the reflection groups section. Polyhedral symmetries. These symmetries are characterized by having more than one proper rotation axis of order greater than 2. Cubic groups. These polyhedral groups are characterized by not having a "C"5 proper rotation axis. Icosahedral groups. These polyhedral groups are characterized by having a "C"5 proper rotation axis. Linear (cylindrical) groups. These groups are characterized by having a proper rotation axis "C"∞ around which the symmetry is invariant to "any" rotation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
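The structure of such a table can be checked computationally. The short Python sketch below encodes the familiar "C"2v character table (operations E, C2, σv(xz), σv'(yz)) and verifies the row-orthogonality property of the irreducible representations; the variable names are choices made for this example.

    import numpy as np

    # Characters of the four irreducible representations of C2v for the
    # operations E, C2, sigma_v(xz) and sigma_v'(yz).
    c2v = {
        "A1": [1,  1,  1,  1],
        "A2": [1,  1, -1, -1],
        "B1": [1, -1,  1, -1],
        "B2": [1, -1, -1,  1],
    }

    chars = np.array(list(c2v.values()))
    # Rows of a character table are orthogonal: the dot product of two different
    # rows is zero, and each row dotted with itself gives the group order (4).
    print(chars @ chars.T)   # 4 on the diagonal, 0 elsewhere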
[ { "math_id": 0, "text": "C_1" }, { "math_id": 1, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=11022873
1102297
Difference of two squares
Mathematical identity of polynomials In mathematics, the difference of two squares is a squared (multiplied by itself) number subtracted from another squared number. Every difference of squares may be factored according to the identity formula_0 in elementary algebra. Proof. Algebraic proof. The proof of the factorization identity is straightforward. Starting from the right-hand side, apply the distributive law to get formula_1 By the commutative law, the middle two terms cancel: formula_2 leaving formula_3 The resulting identity is one of the most commonly used in mathematics. Among many uses, it gives a simple proof of the AM–GM inequality in two variables. The proof holds in any commutative ring. Conversely, if this identity holds in a ring "R" for all pairs of elements "a" and "b", then "R" is commutative. To see this, apply the distributive law to the right-hand side of the equation and get formula_4. For this to be equal to formula_5, we must have formula_2 for all pairs "a", "b", so "R" is commutative. Geometric proof. The difference of two squares can also be illustrated geometrically as the difference of two square areas in a plane. In the diagram, the shaded part represents the difference between the areas of the two squares, i.e. formula_5. The area of the shaded part can be found by adding the areas of the two rectangles; formula_6, which can be factorized to formula_7. Therefore, formula_8. Another geometric proof proceeds as follows: We start with the figure shown in the first diagram below, a large square with a smaller square removed from it. The side of the entire square is a, and the side of the small removed square is b. The area of the shaded region is formula_9. A cut is made, splitting the region into two rectangular pieces, as shown in the second diagram. The larger piece, at the top, has width a and height a-b. The smaller piece, at the bottom, has width a-b and height b. Now the smaller piece can be detached, rotated, and placed to the right of the larger piece. In this new arrangement, shown in the last diagram below, the two pieces together form a rectangle, whose width is formula_10 and whose height is formula_11. This rectangle's area is formula_7. Since this rectangle came from rearranging the original figure, it must have the same area as the original figure. Therefore, formula_0. Usage. Factorization of polynomials and simplification of expressions. The formula for the difference of two squares can be used for factoring polynomials that contain the square of a first quantity minus the square of a second quantity. For example, the polynomial formula_12 can be factored as follows: formula_13 As a second example, the first two terms of formula_14 can be factored as formula_15, so we have: formula_16 Moreover, this formula can also be used for simplifying expressions: formula_17 Complex number case: sum of two squares. The difference of two squares is used to find the linear factors of the "sum" of two squares, using complex number coefficients. For example, the complex roots of formula_18 can be found using difference of two squares: formula_18 formula_19 (since formula_20) formula_21 formula_22 Therefore, the linear factors are formula_23 and formula_24. Since the two factors found by this method are complex conjugates, we can use this in reverse as a method of multiplying a complex number to get a real number. This is used to get real denominators in complex fractions. Rationalising denominators. 
The difference of two squares can also be used in the rationalising of irrational denominators. This is a method for removing surds from expressions (or at least moving them), applying to division by some combinations involving square roots. For example: The denominator of formula_25 can be rationalised as follows: formula_25 formula_26 formula_27 formula_28 formula_29 formula_30 Here, the irrational denominator formula_31 has been rationalised to formula_32. Mental arithmetic. The difference of two squares can also be used as an arithmetical short cut. If two numbers (whose average is a number which is easily squared) are multiplied, the difference of two squares can be used to give the product of the original two numbers. For example: formula_33 Using the difference of two squares, formula_34 can be restated as formula_5 which is formula_35. Difference of two consecutive perfect squares. The difference of two consecutive perfect squares is the sum of the two bases "n" and "n"+1. This can be seen as follows: formula_36 Therefore, the difference of two consecutive perfect squares is an odd number. Similarly, the difference of two arbitrary perfect squares is calculated as follows: formula_37 Therefore, the difference of two even perfect squares is a multiple of 4 and the difference of two odd perfect squares is a multiple of 8. Galileo's law of odd numbers. A ramification of the difference of consecutive squares, Galileo's law of odd numbers states that the distance covered by an object falling without resistance in uniform gravity in successive equal time intervals is linearly proportional to the odd numbers. That is, if a body falling from rest covers a certain distance during an arbitrary time interval, it will cover 3, 5, 7, etc. times that distance in the subsequent time intervals of the same length. From the equation for uniform linear acceleration, the distance covered formula_38 for initial speed formula_39 constant acceleration formula_40 (acceleration due to gravity without air resistance), and time elapsed formula_41 it follows that the distance formula_42 is proportional to formula_43 (in symbols, formula_44); thus the distances from the starting point are consecutive squares for integer values of time elapsed. Factorization of integers. Several algorithms in number theory and cryptography use differences of squares to find factors of integers and detect composite numbers. A simple example is the Fermat factorization method, which considers the sequence of numbers formula_45, for formula_46. If one of the formula_47 equals a perfect square formula_48, then formula_49 is a (potentially non-trivial) factorization of formula_50. This trick can be generalized as follows. If formula_51 mod formula_50 and formula_52 mod formula_50, then formula_50 is composite with non-trivial factors formula_53 and formula_54. This forms the basis of several factorization algorithms (such as the quadratic sieve) and can be combined with the Fermat primality test to give the stronger Miller–Rabin primality test. Generalizations. The identity also holds in inner product spaces over the field of real numbers, such as for the dot product of Euclidean vectors: formula_55 The proof is identical. For the special case that a and b have equal norms (which means that their dot squares are equal), this demonstrates analytically the fact that two diagonals of a rhombus are perpendicular. 
This follows from the left side of the equation being equal to zero, requiring the right side to equal zero as well, and so the vector sum of a + b (the long diagonal of the rhombus) dotted with the vector difference a - b (the short diagonal of the rhombus) must equal zero, which indicates the diagonals are perpendicular. Difference of two nth powers. If "a" and "b" are two elements of a commutative ring "R", then formula_56 History. Historically, the Babylonians used the difference of two squares to calculate multiplications. For example: 93 × 87 = 90² − 3² = 8091 64 × 56 = 60² − 4² = 3584
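The Fermat factorization method mentioned above takes only a few lines to implement. The Python sketch below is an illustration rather than an efficient factoring routine; the function name and the test value are chosen for this example, and an odd composite input is assumed.

    from math import isqrt

    def fermat_factor(n):
        # Search a = ceil(sqrt(n)), ceil(sqrt(n)) + 1, ... until a*a - n is a
        # perfect square b*b; then n = a*a - b*b = (a - b) * (a + b).
        a = isqrt(n)
        if a * a < n:
            a += 1
        while True:
            b2 = a * a - n
            b = isqrt(b2)
            if b * b == b2:
                return a - b, a + b
            a += 1

    print(fermat_factor(5959))   # (59, 101), found from 5959 = 80**2 - 21**2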
[ { "math_id": 0, "text": "a^2-b^2 = (a+b)(a-b)" }, { "math_id": 1, "text": "(a+b)(a-b) = a^2+ba-ab-b^2" }, { "math_id": 2, "text": "ba - ab = 0" }, { "math_id": 3, "text": "(a+b)(a-b) = a^2-b^2" }, { "math_id": 4, "text": "a^2 + ba - ab - b^2" }, { "math_id": 5, "text": "a^2 - b^2" }, { "math_id": 6, "text": "a(a-b) + b(a-b)" }, { "math_id": 7, "text": "(a+b)(a-b)" }, { "math_id": 8, "text": "a^2 - b^2 = (a+b)(a-b)" }, { "math_id": 9, "text": "a^2-b^2" }, { "math_id": 10, "text": "a+b" }, { "math_id": 11, "text": "a-b" }, { "math_id": 12, "text": "x^4 - 1" }, { "math_id": 13, "text": "x^4 - 1 = (x^2 + 1)(x^2 - 1) = (x^2 + 1)(x + 1)(x - 1)" }, { "math_id": 14, "text": "x^2 - y^2 + x - y" }, { "math_id": 15, "text": "(x + y)(x - y)" }, { "math_id": 16, "text": "x^2 - y^2 + x - y = (x + y)(x - y) + x - y = (x - y)(x + y + 1)" }, { "math_id": 17, "text": "(a+b)^2-(a-b)^2=(a+b+a-b)(a+b-a+b)=(2a)(2b)=4ab" }, { "math_id": 18, "text": "z^2 + 4" }, { "math_id": 19, "text": " = z^2 - 4i^2" }, { "math_id": 20, "text": "i^2 = -1" }, { "math_id": 21, "text": " = z^2 - (2 i)^2" }, { "math_id": 22, "text": " = (z + 2 i)(z - 2 i)" }, { "math_id": 23, "text": "(z + 2 i)" }, { "math_id": 24, "text": "(z - 2 i)" }, { "math_id": 25, "text": "\\dfrac{5}{\\sqrt{3} + 4}" }, { "math_id": 26, "text": " = \\dfrac{5}{\\sqrt{3} + 4} \\times \\dfrac{\\sqrt{3} - 4}{\\sqrt{3} - 4}" }, { "math_id": 27, "text": " = \\dfrac{5(\\sqrt{3} - 4)}{(\\sqrt{3} + 4)(\\sqrt{3} - 4)}" }, { "math_id": 28, "text": " = \\dfrac{5(\\sqrt{3} - 4)}{\\sqrt{3}^2 - 4^2}" }, { "math_id": 29, "text": " = \\dfrac{5(\\sqrt{3} - 4)}{3 - 16}" }, { "math_id": 30, "text": " = -\\dfrac{5(\\sqrt{3} - 4)}{13}." }, { "math_id": 31, "text": "\\sqrt{3} + 4" }, { "math_id": 32, "text": "13" }, { "math_id": 33, "text": " 27 \\times 33 = (30 - 3)(30 + 3)" }, { "math_id": 34, "text": "27 \\times 33" }, { "math_id": 35, "text": "30^2 - 3^2 = 891" }, { "math_id": 36, "text": "\n\\begin{array}{lcl}\n (n+1)^2 - n^2 & = & ((n+1)+n)((n+1)-n) \\\\\n & = & 2n+1\n\\end{array}\n" }, { "math_id": 37, "text": "\n\\begin{array}{lcl}\n (n+k)^2 - n^2 & = & ((n+k)+n)((n+k)-n) \\\\\n & = & k(2n+k)\n\\end{array}\n" }, { "math_id": 38, "text": "s = u t + \\tfrac{1}{2} a t^2" }, { "math_id": 39, "text": "u = 0," }, { "math_id": 40, "text": "a" }, { "math_id": 41, "text": "t," }, { "math_id": 42, "text": "s" }, { "math_id": 43, "text": "t^2" }, { "math_id": 44, "text": "s \\propto t^2" }, { "math_id": 45, "text": "x_i:=a_i^2-N" }, { "math_id": 46, "text": "a_i:=\\left\\lceil \\sqrt{N}\\right\\rceil+i" }, { "math_id": 47, "text": "x_i" }, { "math_id": 48, "text": "b^2" }, { "math_id": 49, "text": "N=a_i^2-b^2=(a_i+b)(a_i-b)" }, { "math_id": 50, "text": "N" }, { "math_id": 51, "text": "a^2\\equiv b^2" }, { "math_id": 52, "text": "a\\not\\equiv \\pm b" }, { "math_id": 53, "text": "\\gcd(a-b,N)" }, { "math_id": 54, "text": "\\gcd(a+b,N)" }, { "math_id": 55, "text": "{\\mathbf a}\\cdot{\\mathbf a} - {\\mathbf b}\\cdot{\\mathbf b} = ({\\mathbf a}+{\\mathbf b})\\cdot({\\mathbf a}-{\\mathbf b})" }, { "math_id": 56, "text": "a^n-b^n=(a-b)\\biggl(\\sum_{k=0}^{n-1} a^{n-1-k}b^k\\biggr)." } ]
https://en.wikipedia.org/wiki?curid=1102297
11023142
Generalised Hough transform
The generalized Hough transform (GHT), introduced by Dana H. Ballard in 1981, is the modification of the Hough transform using the principle of template matching. The Hough transform was initially developed to detect analytically defined shapes (e.g., line, circle, ellipse etc.). In these cases, we have knowledge of the shape and aim to find out its location and orientation in the image. This modification enables the Hough transform to be used to detect an arbitrary object described with its model. The problem of finding the object (described with a model) in an image can be solved by finding the model's position in the image. With the generalized Hough transform, the problem of finding the model's position is transformed to a problem of finding the transformation's parameter that maps the model into the image. Given the value of the transformation's parameter, the position of the model in the image can be determined. The original implementation of the GHT used edge information to define a mapping from orientation of an edge point to a reference point of the shape. In the case of a binary image where pixels can be either black or white, every black pixel of the image can be a black pixel of the desired pattern thus creating a locus of reference points in the Hough space. Every pixel of the image votes for its corresponding reference points. The maximum points of the Hough space indicate possible reference points of the pattern in the image. This maximum can be found by scanning the Hough space or by solving a relaxed set of equations, each of them corresponding to a black pixel. History. Merlin and Farber showed how to use a Hough algorithm when the desired curves could not be described analytically. It was a precursor to Ballard's algorithm that was restricted to translation and did not account for rotation and scale changes. The Merlin-Farber algorithm is impractical for real image data as in an image with many edge pixels, it finds many false positives due to repetitive pixel arrangements. Theory of generalized Hough transform. To generalize the Hough algorithm to non-analytic curves, Ballard defines the following parameters for a generalized shape: "a={y,s,θ}" where "y" is a reference origin for the shape, "θ" is its orientation, and "s = (sx, sy)" describes two orthogonal scale factors. An algorithm can compute the best set of parameters for a given shape from edge pixel data. These parameters do not have equal status. The reference origin location, "y", is described in terms of a template table called the R table of possible edge pixel orientations. The computation of the additional parameters "s" and "θ" is then accomplished by straightforward transformations to this table. The key generalization to arbitrary shapes is the use of directional information. Given any shape and a fixed reference point on it, instead of a parametric curve, the information provided by the boundary pixels is stored in the form of the R-table in the transform stage. For every edge point on the test image, the properties of the point are looked up on the R-table and reference point is retrieved and the appropriate cell in a matrix called the Accumulator matrix is incremented. The cell with maximum 'votes' in the Accumulator matrix can be a possible point of existence of fixed reference of the object in the test image. Building the R-table. Choose a reference point "y" for the shape (typically chosen inside the shape). 
For each boundary point "x", compute "ɸ(x)", the gradient direction and "r = y – x" as shown in the image. Store "r" as a function of "ɸ". Notice that each index of "ɸ" may have many values of "r". One can either store the co-ordinate differences between the fixed reference and the edge point "((xc – xij), (yc – yij))" or as the radial distance and the angle between them "(rij, αij)". Having done this for each point, the R-table will fully represent the template object. Also, since the generation phase is invertible, we may use it to localise object occurrences at other places in the image. Object localization. For each edge pixel "x" in the image, find the gradient "ɸ" and increment all the corresponding points "x+r" in the accumulator array "A" (initialized to a maximum size of the image) where r is a table entry indexed by "ɸ", i.e., "r(ɸ)". These entry points give us each possible position for the reference point. Although some bogus points may be calculated, given that the object exists in the image, a maximum will occur at the reference point. Maxima in "A" correspond to possible instances of the shape. Generalization of scale and orientation. For a fixed orientation of shape, the accumulator array was two-dimensional in the reference point co-ordinates. To search for shapes of arbitrary orientation "θ" and scale "s", these two parameters are added to the shape description. The accumulator array now consists of four dimensions corresponding to the parameters "(y, s, θ)". The R-table can also be used to increment this larger dimensional space since different orientations and scales correspond to easily computed transformations of the table. Denote a particular R-table for a shape "S" by "R(ɸ)". Simple transformations to this table will allow it to detect scaled or rotated instances of the same shape. For example, if the shape is scaled by s and this transformation is denoted by "Ts". then "Ts[R(ɸ)] = sR(ɸ)" i.e., all the vectors are scaled by "s". Also, if the object is rotated by "θ" and this transformation is denoted by "Tθ", then "Tθ[R(ɸ)] = Rot{R[(ɸ-θ)mod2π],θ}" i.e., all the indices are incremented by – "θ" modulo 2π, the appropriate vectors "r" are found, and then they are rotated by "θ". Another property which will be useful in describing the composition of generalized Hough transforms is the change of reference point. If we want to choose a new reference point "ỹ" such that "y-ỹ = z" then the modification to the R-table is given by "R(ɸ)+ z", i.e. "z" is added to each vector in the table. Alternate way using pairs of edges. A pair of edge pixels can be used to reduce the parameter space. Using the R-table and the properties as described above, each edge pixel defines a surface in the four-dimensional accumulator space of "a = (y, s, θ)". Two edge pixels at different orientations describe the same surface rotated by the same amount with respect to "θ". If these two surfaces intersect, points where they intersect will correspond to possible parameters "a" for the shape. Thus it is theoretically possible to use the two points in image space to reduce the locus in parameter space to a single point. However, the difficulties of finding the intersection points of the two surfaces in parameter space will make this approach unfeasible for most cases. Composite shapes. If the shape S has a composite structure consisting of subparts "S1", "S2", .. "SN" and the reference points for the shapes "S", "S1", "S2", .. "SN" are "y", "y1", "y2", .. 
"yn", respectively, then for a scaling factor "s" and orientation "θ", the generalized Hough transform "Rs(ɸ)" is given by formula_0. The concern with this transform is that the choice of reference can greatly affect the accuracy. To overcome this, Ballard has suggested smoothing the resultant accumulator with a composite smoothing template. The composite smoothing template "H(y)" is given as a composite convolution of individual smoothing templates of the sub-shapes. formula_1. Then the improved Accumulator is given by "As = A*H" and the maxima in "As" corresponds to possible instances of the shape. Spatial decomposition. Observing that the global Hough transform can be obtained by the summation of local Hough transforms of disjoint sub-region, Heather and Yang proposed a method which involves the recursive subdivision of the image into sub-images, each with their own parameter space, and organized in a quadtree structure. It results in improved efficiency in finding endpoints of line segments and improved robustness and reliability in extracting lines in noisy situations, at a slightly increased cost of memory. Implementation. The implementation uses the following equations: formula_2 formula_3 formula_4 formula_5 Combining the above equations we have: formula_6 formula_7 Constructing the R-table (0) Convert the sample shape image into an edge image using any edge detecting algorithm like Canny edge detector (1) Pick a reference point (e.g., "(xc, yc)") (2) Draw a line from the reference point to the boundary (3) Compute "ɸ" (4) Store the reference point "(xc, yc)" as a function of "ɸ" in "R(ɸ)" table. Detection: (0) Convert the sample shape image into an edge image using any edge detecting algorithm like Canny edge detector. (1) Initialize the Accumulator table: "A[xcmin . . . xcmax][ycmin . . . ycmax]" (2) For each edge point "(x, y)" (2.1) Using the gradient angle "ɸ", retrieve from the R-table all the "(α, r)" values indexed under "ɸ". (2.2) For each "(α,r)", compute the candidate reference points: "xc = x + r cos(α)" "yc = y + r sin(α)" (2.3) Increase counters (voting): "++A(xc[yc])" (3) Possible locations of the object contour are given by local maxima in "A[xc][yc]". If "A[xc][yc] &gt; T", then the object contour is located at "(xc, yc)" General case: Suppose the object has undergone some rotation "Θ" and uniform scaling "s": "(x′, y′) → (x″, y″)" "x″ = (x′ cos(Θ) – y′ sin(Θ))s" "y″ = (x′ sin(Θ) + y′ cos(Θ))s" "Replacing x′ by x″ and y′ by y″: " "xc = x – x″ or xc = x - (x′ cos(Θ) – y′ sin(Θ))s" "yc = y – y″ or yc = y - (x′ sin(Θ) + y′ cos(Θ))s" (1) Initialize the Accumulator table: "A[xcmin . . . xcmax][ycmin . . . ycmax][qmin . . . qmax][smin . . . smax]" (2) For each edge point "(x, y)" (2.1) Using its gradient angle "ɸ", retrieve all the "(α, r)" values from the R-table (2.2) For each "(α, r)", compute the candidate reference points: "x′ = r cos(α)" "y′ = r sin(α)" for("Θ = Θmin; Θ ≤ Θmax; Θ++") for("s = smin; s ≤ smax; s++") "xc = x - (x′ cos(Θ) – y′ sin(Θ))s" "yc = y - (x′ sin(Θ) + y′ cos(Θ))s" "++(A[xc][yc][Θ][s])" (3) Possible locations of the object contour are given by local maxima in "A[xc][yc][Θ][s]" If "A[xc][yc][Θ][s] &gt; T", then the object contour is located at "(xc, yc)", has undergone a rotation "Θ", and has been scaled by "s". Related work. Ballard suggested using orientation information of the edge decreasing the cost of the computation. Many efficient GHT techniques have been suggested such as the SC-GHT (Using slope and curvature as local properties). 
Davis and Yam also suggested an extension of Merlin's work for orientation and scale invariant matching, which complements Ballard's work but does not include Ballard's utilization of edge-slope information and composite structures. References. <templatestyles src="Reflist/styles.css" />
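For the fixed-orientation, fixed-scale case, the R-table construction and voting steps outlined in the Implementation section can be sketched in Python as follows. The function names, the gradient-angle binning, and the dictionary-based accumulator are illustrative choices for this sketch, not part of Ballard's original formulation.

    import math
    from collections import defaultdict

    def build_r_table(boundary_points, gradient_angles, reference_point, n_bins=64):
        # boundary_points: list of (x, y); gradient_angles: matching angles in radians.
        r_table = defaultdict(list)
        xc, yc = reference_point
        for (x, y), phi in zip(boundary_points, gradient_angles):
            b = int(n_bins * (phi % (2 * math.pi)) / (2 * math.pi))
            r_table[b].append((xc - x, yc - y))   # displacement r = y_ref - x
        return r_table

    def vote(edge_points, gradient_angles, r_table, n_bins=64):
        # Each edge pixel votes for its candidate reference points x + r.
        accumulator = defaultdict(int)
        for (x, y), phi in zip(edge_points, gradient_angles):
            b = int(n_bins * (phi % (2 * math.pi)) / (2 * math.pi))
            for dx, dy in r_table.get(b, []):
                accumulator[(x + dx, y + dy)] += 1
        return accumulator   # local maxima mark likely reference-point locations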
[ { "math_id": 0, "text": "R_{\\phi} = T_{s} \\left\\{ T_{\\theta} \\left[ \\bigcup_{k=1}^{N} R_{s_{k}}(\\phi)\\right] \\right\\}" }, { "math_id": 1, "text": "H(y) = \\sum_{i=1}^{N} h_{i}(y-y_{i})" }, { "math_id": 2, "text": "x = x_{c}+x' \\ \\ or \\ \\ x_{c} = x-x'" }, { "math_id": 3, "text": "y = y_{c}+y' \\ \\ or \\ \\ y_{c} = y-y'" }, { "math_id": 4, "text": "\\cos(\\pi-\\alpha) = x'/r \\ \\ \\text{or} \\ \\ x' = r \\cos(\\pi-\\alpha) = -r\\cos(\\alpha)" }, { "math_id": 5, "text": "\\sin(\\pi-\\alpha) = y'/r \\ \\ \\text{or} \\ \\ y' = r \\sin(\\pi-\\alpha) = r\\sin(\\alpha)" }, { "math_id": 6, "text": "x_{c} = x + r \\cos(\\alpha)" }, { "math_id": 7, "text": "y_{c} = y + r \\sin(\\alpha)" } ]
https://en.wikipedia.org/wiki?curid=11023142
1102649
Laccolith
Mass of igneous rock formed from magma A laccolith is a body of intrusive rock with a dome-shaped upper surface and a level base, fed by a conduit from below. A laccolith forms when magma (molten rock) rising through the Earth's crust begins to spread out horizontally, prying apart the host rock strata. The pressure of the magma is high enough that the overlying strata are forced upward, giving the laccolith its dome-like form. Over time, erosion can expose the solidified laccolith, which is typically more resistant to weathering than the host rock. The exposed laccolith then forms a hill or mountain. The Henry Mountains of Utah, US, are an example of a mountain range composed of exposed laccoliths. It was here that geologist Grove Karl Gilbert carried out pioneering field work on this type of intrusion. Laccolith mountains have since been identified in many other parts of the world. Description. A laccolith is a type of igneous intrusion, formed when magma forces its way upwards through the Earth's crust but cools and solidifies before reaching the surface. Laccoliths are distinguished from other igneous intrusions by their dome-shaped upper surface and level base. They are assumed to be fed by a conduit from below, though this is rarely exposed. When the host rock is volcanic, the laccolith is referred to as a cryptodome. Laccoliths form only at relatively shallow depth in the crust, usually from intermediate composition magma, though laccoliths of all compositions from silica-poor basalt to silica-rich rhyolite are known. A laccolith forms after an initial sheet-like intrusion has been injected between layers of sedimentary rock. If the intrusion remains limited in size, it forms a sill, in which the strata above and below the intrusion remain parallel to each other and the intrusion remains sheetlike. The intrusion begins to lift and dome the overlying strata only if the radius of the intrusion exceeds a critical radius, which is roughly: formula_0 where formula_1 is the pressure of the magma, formula_2 is the lithostatic pressure (weight of the overlying rock), formula_3 is the thickness of the overlying rocks, and formula_4 is the shear strength of the overlying rock. For example, in the Henry Mountains of Utah, US, the geologist Grove Karl Gilbert found in 1877 that sills were always smaller in area than laccoliths, which were always greater than 1 square kilometer in area. From this, Gilbert concluded that sills were forerunners of laccoliths. Laccoliths formed from sills only when they became large enough for the pressure of the magma to force the overlying strata to dome upwards. Gilbert also determined that larger laccoliths formed at greater depth. Both laccoliths and sills are classified as "concordant" intrusions, since the bulk of the intrusion does not cut across host rock strata, but intrudes between strata. More recent study of laccoliths has confirmed Gilbert's basic conclusions, while refining the details. Both sills and laccoliths have blunt rather than wedgelike edges, and in the Henry Mountains the laccoliths are much thicker than the sills. The periphery of a laccolith may be smooth, but it may also have fingerlike projections consistent with Rayleigh-Taylor instability of the magma pushing along the strata. An example of a fingered laccolith is the Shonkin Sag laccolith in Montana, US. 
The critical radius for the sill to laccolith transition is now thought to be affected by the viscosity of the magma (being greater for less viscous magma) as well as the strength of the host rock. A modern formula for the shape of a laccolith is: formula_5 where formula_6 is the height of the laccolith roof, formula_7 is the acceleration of gravity, formula_8 is the elastic modulus of the host rock, formula_9 is the horizontal distance from the center of the laccolith, and formula_10 is the outer radius of the laccolith. Because of their greater thickness, which slows the cooling rate, the rock of laccoliths is usually coarser-grained than the rock of sills. The growth of laccoliths can take as little as a few months when associated with a single magma injection event, or up to hundreds or thousands of years by multiple magmatic pulses stacking sills on top of each other and deforming the host rock incrementally. Over time, erosion can form small hills and even mountains around a central peak since the intrusive rock is usually more resistant to weathering than the host rock. Because the emplacement of the laccolith domes up the overlying beds, local topographic relief is increased and erosion is accelerated, so that the overlying beds are eroded away to expose the intrusive cores. Etymology. The term was first applied as "laccolite" by Gilbert after his study of intrusions of diorite in the Henry Mountains of Utah in about 1875. The word "laccolith" was derived in 1875–1880, from Greek "lákko(s)" 'pond' plus "-lith" 'stone'. Where laccoliths form. Laccoliths tend to form at relatively shallow depths and in some cases are formed by relatively viscous magmas, such as those that crystallize to diorite, granodiorite, and granite. In those cases cooling underground may take place slowly, giving time for larger crystals to form in the cooling magma. In other cases less viscous magma such as shonkinite may form phenocrysts of augite at depth, then inject through a vertical feeder dike that ends in a laccolith. Sheet intrusions tend to form perpendicular to the direction of least stress in the country rock they intrude. Thus laccoliths are characteristic of regions where the crust is being compressed and the direction of least stress is vertical, while areas where the crust is in tension are more likely to form dikes, since the direction of least stress is then horizontal. For example, the laccoliths of the Ortiz porphyry belt in New Mexico likely formed during Laramide compression of the region 33 to 36 million years ago. When Laramide compression was later replaced by extension, emplacement of sills and laccoliths was replaced by emplacement of dikes. Dating of the intrusions has helped determine the point in geologic time when compression was replaced with extension. Examples. In addition to the Henry Mountains, laccolith mountains are found on the nearby Colorado Plateau in the La Sal Mountains and Abajo Mountains. The filled and solidified magma chamber of Torres del Paine (Patagonia) is one of the best exposed laccoliths, built up incrementally by horizontal granitic and mafic magma intrusions over 162 ± 11 thousand years. Horizontal sheeted intrusions were fed by vertical intrusions. The small Barber Hill syenite-stock laccolith in Charlotte, Vermont, has several volcanic trachyte dikes associated with it. Molybdenite is also visible in outcrops on this exposed laccolith. In Big Bend Ranch State Park, at the southwesternmost visible extent of the Ouachita orogeny, lies the Solitario. 
It consists of the eroded remains of a laccolith, presumably named for the sense of solitude that observers within the structure might have, due to the partial illusion of endless expanse in all directions. One of the largest laccoliths in the United States is Pine Valley Mountain in the Pine Valley Mountain Wilderness area near St. George, Utah. A system of laccoliths is exposed on the Italian island of Elba, which form a "Christmas tree" laccolith system in which a single igneous plumbing system has produced multiple laccoliths at different levels in the crust. Problems reconstructing shapes of intrusions. The original shape of intrusions can be difficult to reconstruct. For instance, Devils Tower in Wyoming and Needle Rock in Colorado were both thought to be volcanic necks, but further study has suggested they are eroded laccoliths. At Devils Tower, the intrusion would have had to cool very slowly so as to form the slender pencil-shaped columns of phonolite porphyry seen today. However, erosion has stripped away the overlying and surrounding rock, so it is impossible to reconstruct the original shape of the igneous intrusion, which may or may not be the remnant of a laccolith. At other localities, such as in the Henry Mountains and other isolated mountain ranges of the Colorado Plateau, some intrusions demonstrably have the classic shapes of laccoliths. Extraterrestrial laccoliths. There are many examples of possible laccoliths on the surface of the Moon. Some are centered in impact craters and may form as part of the post-impact evolution of the crater. Others are located along possible faults or fissures. Laccoliths on the Moon are much wider but less thick than those on Earth, due to the Moon's lower gravity and more fluid magmatism. Possible laccoliths have also been identified on Mars, in western Arcadia Planitia. References. <templatestyles src="Reflist/styles.css" />
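The critical-radius criterion quoted in the Description section is easy to evaluate numerically. The values in the Python sketch below are round, purely illustrative numbers (they are not measurements from the Henry Mountains or any other intrusion); the point is only to show the order of magnitude the formula produces.

    # Illustrative inputs: 1 km of overburden, 5 MPa host-rock shear strength,
    # and 10 MPa of magma overpressure (P_m - P_l).
    overburden_thickness_m = 1000.0
    shear_strength_pa = 5.0e6
    overpressure_pa = 10.0e6

    # r >= 2 * T * tau / (P_m - P_l)
    critical_radius_m = 2 * overburden_thickness_m * shear_strength_pa / overpressure_pa
    print(f"critical radius ~ {critical_radius_m / 1000:.1f} km")
    # ~1 km here: a sheet intrusion smaller than this stays sill-like, while a
    # larger one can dome the overlying strata and grow into a laccolith.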
[ { "math_id": 0, "text": "r \\ge \\frac{2T\\tau}{P_m-P_l}" }, { "math_id": 1, "text": "P_m" }, { "math_id": 2, "text": "P_l" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "\\tau" }, { "math_id": 5, "text": "z = \\frac{3(P_m-\\rho_cgT)}{16BT^3}(r_0^2-r^2)^2" }, { "math_id": 6, "text": "z" }, { "math_id": 7, "text": "g" }, { "math_id": 8, "text": "B" }, { "math_id": 9, "text": "r" }, { "math_id": 10, "text": "r_0" } ]
https://en.wikipedia.org/wiki?curid=1102649
11027904
Epsilon calculus
Extension of a formal language by the epsilon operator In logic, Hilbert's epsilon calculus is an extension of a formal language by the epsilon operator, where the epsilon operator substitutes for quantifiers in that language as a method leading to a proof of consistency for the extended formal language. The "epsilon operator" and "epsilon substitution method" are typically applied to a first-order predicate calculus, followed by a demonstration of consistency. The epsilon-extended calculus is further extended and generalized to cover those mathematical objects, classes, and categories for which there is a desire to show consistency, building on previously-shown consistency at earlier levels. Epsilon operator. Hilbert notation. For any formal language "L", extend "L" by adding the epsilon operator to redefine quantification: The intended interpretation of ϵ"x" "A" is "some x" that satisfies "A", if it exists. In other words, ϵ"x" "A" returns some term "t" such that "A"("t") is true, otherwise it returns some default or arbitrary term. If more than one term can satisfy "A", then any one of these terms (which make "A" true) can be chosen, non-deterministically. Equality is required to be defined under "L", and the only rules required for "L" extended by the epsilon operator are modus ponens and the substitution of "A"("t") to replace "A"("x") for any term "t". Bourbaki notation. In tau-square notation from N. Bourbaki's "Theory of Sets", the quantifiers are defined as follows: where "A" is a relation in "L", "x" is a variable, and formula_4 juxtaposes a formula_5 at the front of "A", replaces all instances of "x" with formula_6, and links them back to formula_5. Then, letting "Y" be an assembly, "(Y|x)A" denotes the replacement of all variables "x" in "A" with "Y". This notation is equivalent to the Hilbert notation and is read the same. It is used by Bourbaki to define cardinal assignment since they do not use the axiom of replacement. Defining quantifiers in this way leads to great inefficiencies. For instance, the expansion of Bourbaki's original definition of the number one, using this notation, has length approximately 4.5 × 10^12, and for a later edition of Bourbaki that combined this notation with the Kuratowski definition of ordered pairs, this number grows to approximately 2.4 × 10^54. Modern approaches. Hilbert's program for mathematics was to justify those formal systems as consistent in relation to constructive or semi-constructive systems. While Gödel's results on incompleteness mooted Hilbert's Program to a great extent, modern researchers find the epsilon calculus to provide alternatives for approaching proofs of systemic consistency as described in the epsilon substitution method. Epsilon substitution method. A theory to be checked for consistency is first embedded in an appropriate epsilon calculus. Second, a process is developed for re-writing quantified theorems to be expressed in terms of epsilon operations via the epsilon substitution method. Finally, the process must be shown to normalize the re-writing process, so that the re-written theorems satisfy the axioms of the theory. Notes. <templatestyles src="Reflist/styles.css" />
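The defining equivalences above can be made concrete with a toy, finite model. In the Python sketch below the epsilon operator is modelled as a search over a small finite domain that returns a witness if one exists and an arbitrary default term otherwise; the domain, the default and the function names are arbitrary choices for this illustration, and the sketch is not part of the formal calculus, where ε is a primitive symbol rather than a search procedure.

    DOMAIN = range(10)

    def epsilon(predicate, domain=DOMAIN, default=0):
        # Return some term satisfying the predicate if one exists,
        # otherwise an arbitrary default term.
        for t in domain:
            if predicate(t):
                return t
        return default

    def exists(predicate):
        return predicate(epsilon(predicate))                    # (∃x)A(x) ≡ A(εx A)

    def forall(predicate):
        return predicate(epsilon(lambda t: not predicate(t)))   # (∀x)A(x) ≡ A(εx (¬A))

    print(exists(lambda x: x > 7))    # True
    print(forall(lambda x: x >= 0))   # True
    print(forall(lambda x: x > 7))    # False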
[ { "math_id": 0, "text": " (\\exists x) A(x)\\ \\equiv \\ A(\\epsilon x\\ A) " }, { "math_id": 1, "text": " (\\forall x) A(x)\\ \\equiv \\ \\neg \\exists x \\neg A(x) \\iff \\neg \\big(\\neg A(\\epsilon x\\ \\neg A)\\big) \\iff A(\\epsilon x\\ (\\neg A)) " }, { "math_id": 2, "text": " (\\exists x) A(x)\\ \\equiv \\ (\\tau_x(A)|x)A " }, { "math_id": 3, "text": " (\\forall x) A(x)\\ \\equiv \\ \\neg (\\tau_x(\\neg A)|x)\\neg A\\ \\equiv \\ (\\tau_x(\\neg A)|x)A" }, { "math_id": 4, "text": "\\tau_x(A)" }, { "math_id": 5, "text": "\\tau" }, { "math_id": 6, "text": "\\square" } ]
https://en.wikipedia.org/wiki?curid=11027904
11028411
Jordan and Einstein frames
The Lagrangian in scalar-tensor theory can be expressed in the Jordan frame or in the Einstein frame, which are field variables that stress different aspects of the gravitational field equations and the evolution equations of the matter fields. In the Jordan frame the scalar field or some function of it multiplies the Ricci scalar in the Lagrangian and the matter is typically coupled minimally to the metric, whereas in the Einstein frame the Ricci scalar is not multiplied by the scalar field and the matter is coupled non-minimally. As a result, in the Einstein frame the field equations for the space-time metric resemble the Einstein equations but test particles do not move on geodesics of the metric. On the other hand, in the Jordan frame test particles move on geodesics, but the field equations are very different from the Einstein equations. The causal structure in both frames is always equivalent and the frames can be transformed into each other as convenient for the given application. Christopher Hill and Graham Ross have shown that there exist "gravitational contact terms" in the Jordan frame, whereby the action is modified by graviton exchange. This modification leads back to the Einstein frame as the effective theory. Contact interactions arise in Feynman diagrams when a vertex contains a power of the exchanged momentum, formula_0, which then cancels against the Feynman propagator, formula_1, leading to a point-like interaction. This must be included as part of the effective action of the theory. When the contact term is included, results for amplitudes in the Jordan frame will be equivalent to those in the Einstein frame, and results of physical calculations in the Jordan frame that omit the contact terms will generally be incorrect. This implies that the Jordan frame action is misleading, and the Einstein frame is uniquely correct for fully representing the physics. Equations and physical interpretation. If we perform the Weyl rescaling formula_2, then the Riemann and Ricci tensors are modified as follows. formula_3 formula_4 As an example consider the transformation of a simple scalar-tensor action with an arbitrary set of matter fields formula_5 coupled minimally to the curved background formula_6 The tilde fields then correspond to quantities in the Jordan frame and the fields without the tilde correspond to fields in the Einstein frame. Note that the matter action formula_7 changes only in the rescaling of the metric. The Jordan and Einstein frames are constructed to render certain parts of physical equations simpler which also gives the frames and the fields appearing in them particular physical interpretations. For instance, in the Einstein frame, the equations for the gravitational field will be of the form formula_8 I.e., they can be interpreted as the usual Einstein equations with particular sources on the right-hand side. Similarly, in the Newtonian limit one would recover the Poisson equation for the Newtonian potential with separate source terms. However, by transforming to the Einstein frame the matter fields are now coupled not only to the background but also to the field formula_9 which now acts as an effective potential. Specifically, an isolated test particle will experience a universal four-acceleration formula_10 where formula_11 is the particle four-velocity. I.e., no particle will be in free-fall in the Einstein frame. 
On the other hand, in the Jordan frame, all the matter fields formula_5 are coupled minimally to formula_12 and isolated test particles will move on geodesics with respect to the metric formula_12. This means that if we were to reconstruct the Riemann curvature tensor by measurements of geodesic deviation, we would in fact obtain the curvature tensor in the Jordan frame. When, on the other hand, we deduce the presence of matter sources from gravitational lensing using the usual relativistic theory, we obtain the distribution of the matter sources in the sense of the Einstein frame. Models. Jordan frame gravity can be used to calculate type IV singular bouncing cosmological evolution, to derive the type IV singularity. References. <templatestyles src="Reflist/styles.css" />
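The first transformation rule quoted above, the rescaling of the metric determinant under the Weyl transformation, can be verified with a short numerical check. The metric entries, the value of the scalar field and the dimension in the Python sketch below are arbitrary choices; the snippet is a sanity check of the determinant identity only and does not attempt the full curvature transformation.

    import numpy as np

    # Check sqrt(-det(g~)) = Phi**(-d/(d-2)) * sqrt(-det(g)) for the rescaling
    # g~_{mu nu} = Phi**(-2/(d-2)) * g_{mu nu}, using a diagonal Lorentzian metric.
    d = 4
    phi = 2.7
    g = np.diag([-1.0, 1.3, 0.8, 2.1])

    g_tilde = phi ** (-2.0 / (d - 2)) * g

    lhs = np.sqrt(-np.linalg.det(g_tilde))
    rhs = phi ** (-d / (d - 2)) * np.sqrt(-np.linalg.det(g))
    print(np.isclose(lhs, rhs))   # True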
[ { "math_id": 0, "text": "q^2" }, { "math_id": 1, "text": "1/q^2" }, { "math_id": 2, "text": "\\tilde{g}_{\\mu\\nu}=\\Phi^{-2/(d-2)} g_{\\mu\\nu}" }, { "math_id": 3, "text": "\\sqrt{-\\tilde{g}}=\\Phi^{-d/(d-2)}\\sqrt{-g}" }, { "math_id": 4, "text": "\\tilde{R}=\\Phi^{2/(d-2)}\\left[ R + \\frac{2(d-1)}{d-2}\\frac{\\Box \\Phi}{\\Phi} -\\frac{3(d-1)}{(d-2)}\\left(\\frac{\\nabla\\Phi}{\\Phi}\\right)^2 \\right]" }, { "math_id": 5, "text": "\\psi_\\mathrm{m}" }, { "math_id": 6, "text": "S = \\int d^dx \\sqrt{-\\tilde{g}} \\Phi \\tilde{R} + S_\\mathrm{m}[\\tilde{g}_{\\mu \\nu},\\psi_\\mathrm{m}] =\\int d^dx \\sqrt{-g} \\left[ R + \\frac{2(d-1)}{d-2}\\frac{\\Box \\Phi}{\\Phi} - \\frac{3(d-1)}{(d-2)}\\left( \\nabla\\left(\\ln \\Phi \\right) \\right)^2\\right] + S_\\mathrm{m}[\\Phi^{-2/(d-2)} g_{\\mu\\nu},\\psi_\\mathrm{m}]" }, { "math_id": 7, "text": "S_\\mathrm{m}" }, { "math_id": 8, "text": "R_{\\mu \\nu} - \\frac{1}{2} R g_{\\mu \\nu}= \\mathrm{other \\; fields}\\,." }, { "math_id": 9, "text": "\\Phi" }, { "math_id": 10, "text": "a^\\mu= \\frac{-1}{d-2} \\frac{\\Phi_{,\\nu}}{\\Phi}(g^{\\mu \\nu} + u^\\mu u^\\nu)," }, { "math_id": 11, "text": "u^\\mu" }, { "math_id": 12, "text": "\\tilde{g}_{\\mu \\nu}" } ]
https://en.wikipedia.org/wiki?curid=11028411
11028723
Hepatic fructokinase
Class of enzymes Hepatic fructokinase (or ketohexokinase) is an enzyme that catalyzes the phosphorylation of fructose to produce fructose-1-phosphate. ATP + D-fructose formula_0 ADP + D-fructose-1-phosphate Pathology. A deficiency is associated with essential fructosuria.
[ { "math_id": 0, "text": "\\longrightarrow" } ]
https://en.wikipedia.org/wiki?curid=11028723
11030945
Limit comparison test
Method of testing for the convergence of an infinite series In mathematics, the limit comparison test (LCT) (in contrast with the related direct comparison test) is a method of testing for the convergence of an infinite series. Statement. Suppose that we have two series formula_0 and formula_1 with formula_2 for all formula_3. Then if formula_4 with formula_5, then either both series converge or both series diverge. Proof. Because formula_4 we know that for every formula_6 there is a positive integer formula_7 such that for all formula_8 we have that formula_9, or equivalently formula_10 formula_11 formula_12 As formula_13 we can choose formula_14 to be sufficiently small such that formula_15 is positive. So formula_16 and by the direct comparison test, if formula_17 converges then so does formula_18. Similarly formula_19, so if formula_20 diverges, again by the direct comparison test, so does formula_18. That is, both series converge or both series diverge. Example. We want to determine if the series formula_21 converges. For this we compare it with the convergent series formula_22 As formula_23 we have that the original series also converges. One-sided version. One can state a one-sided comparison test by using limit superior. Let formula_24 for all formula_3. Then if formula_25 with formula_26 and formula_1 converges, necessarily formula_0 converges. Example. Let formula_27 and formula_28 for all natural numbers formula_29. Now formula_30 does not exist, so we cannot apply the standard comparison test. However, formula_31 and since formula_32 converges, the one-sided comparison test implies that formula_33 converges. Converse of the one-sided comparison test. Let formula_24 for all formula_3. If formula_34 diverges and formula_35 converges, then necessarily formula_36, that is, formula_37. The essential content here is that in some sense the numbers formula_38 are larger than the numbers formula_39. Example. Let formula_40 be analytic in the unit disc formula_41 and have image of finite area. By Parseval's formula the area of the image of formula_42 is proportional to formula_43. Moreover, formula_44 diverges. Therefore, by the converse of the comparison test, we have formula_45, that is, formula_46. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
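A quick numerical illustration of the example above (Python; purely illustrative): the term ratio tends to the limit c = 1, the comparison series tends to π²/6, and the partial sums of the original series settle toward 3/4 (its exact value by telescoping), consistent with the test's conclusion that both series converge.

import math

a = lambda n: 1.0 / (n * n + 2 * n)   # terms of the series under test
b = lambda n: 1.0 / (n * n)           # terms of the known convergent series

for n in (10, 100, 1000, 10000):
    print(n, a(n) / b(n))             # ratio approaches the limit c = 1

N = 100000
print(sum(a(n) for n in range(1, N + 1)))                  # approaches 3/4 (telescoping)
print(sum(b(n) for n in range(1, N + 1)), math.pi**2 / 6)  # approaches pi^2/6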
[ { "math_id": 0, "text": " \\Sigma_n a_n " }, { "math_id": 1, "text": "\\Sigma_n b_n" }, { "math_id": 2, "text": " a_n\\geq 0, b_n > 0 " }, { "math_id": 3, "text": " n" }, { "math_id": 4, "text": " \\lim_{n \\to \\infty} \\frac{a_n}{b_n} = c" }, { "math_id": 5, "text": " 0 < c < \\infty " }, { "math_id": 6, "text": " \\varepsilon > 0 " }, { "math_id": 7, "text": "n_0" }, { "math_id": 8, "text": "n \\geq n_0 " }, { "math_id": 9, "text": " \\left| \\frac{a_n}{b_n} - c \\right| < \\varepsilon " }, { "math_id": 10, "text": " - \\varepsilon < \\frac{a_n}{b_n} - c < \\varepsilon " }, { "math_id": 11, "text": " c - \\varepsilon < \\frac{a_n}{b_n} < c + \\varepsilon " }, { "math_id": 12, "text": " (c - \\varepsilon)b_n < a_n < (c + \\varepsilon)b_n " }, { "math_id": 13, "text": " c > 0 " }, { "math_id": 14, "text": " \\varepsilon " }, { "math_id": 15, "text": " c-\\varepsilon " }, { "math_id": 16, "text": " b_n < \\frac{1}{c-\\varepsilon} a_n " }, { "math_id": 17, "text": "\\sum_n a_n" }, { "math_id": 18, "text": "\\sum_n b_n " }, { "math_id": 19, "text": " a_n < (c + \\varepsilon)b_n " }, { "math_id": 20, "text": " \\sum_n a_n " }, { "math_id": 21, "text": " \\sum_{n=1}^{\\infty} \\frac{1}{n^2 + 2n} " }, { "math_id": 22, "text": " \\sum_{n=1}^{\\infty} \\frac{1}{n^2} = \\frac{\\pi^2}{6} " }, { "math_id": 23, "text": " \\lim_{n \\to \\infty} \\frac{1}{n^2 + 2n} \\frac{n^2}{1} = 1 > 0 " }, { "math_id": 24, "text": " a_n, b_n \\geq 0 " }, { "math_id": 25, "text": " \\limsup_{n \\to \\infty} \\frac{a_n}{b_n} = c" }, { "math_id": 26, "text": " 0 \\leq c < \\infty " }, { "math_id": 27, "text": " a_n = \\frac{1-(-1)^n}{n^2} " }, { "math_id": 28, "text": " b_n = \\frac{1}{n^2} " }, { "math_id": 29, "text": " n " }, { "math_id": 30, "text": " \\lim_{n\\to\\infty} \\frac{a_n}{b_n} = \\lim_{n\\to\\infty}(1-(-1)^n) " }, { "math_id": 31, "text": " \\limsup_{n\\to\\infty} \\frac{a_n}{b_n} = \\limsup_{n\\to\\infty}(1-(-1)^n) =2\\in [0,\\infty) " }, { "math_id": 32, "text": "\\sum_{n=1}^{\\infty} \\frac{1}{n^2}" }, { "math_id": 33, "text": "\\sum_{n=1}^{\\infty}\\frac{1-(-1)^n}{n^2}" }, { "math_id": 34, "text": "\\Sigma_n a_n " }, { "math_id": 35, "text": "\\Sigma_n b_n " }, { "math_id": 36, "text": " \\limsup_{n\\to\\infty} \\frac{a_n}{b_n}=\\infty " }, { "math_id": 37, "text": " \\liminf_{n\\to\\infty} \\frac{b_n}{a_n}= 0 " }, { "math_id": 38, "text": " a_n " }, { "math_id": 39, "text": " b_n " }, { "math_id": 40, "text": " f(z)=\\sum_{n=0}^{\\infty}a_nz^n " }, { "math_id": 41, "text": "D = \\{ z\\in\\mathbb{C} : |z|<1\\}" }, { "math_id": 42, "text": " f " }, { "math_id": 43, "text": " \\sum_{n=1}^{\\infty} n|a_n|^2" }, { "math_id": 44, "text": " \\sum_{n=1}^{\\infty} 1/n" }, { "math_id": 45, "text": " \\liminf_{n\\to\\infty} \\frac{n|a_n|^2}{1/n}= \\liminf_{n\\to\\infty} (n|a_n|)^2 = 0 " }, { "math_id": 46, "text": " \\liminf_{n\\to\\infty} n|a_n| = 0 " } ]
https://en.wikipedia.org/wiki?curid=11030945
11031482
Topological derivative
The topological derivative is, conceptually, a derivative of a shape functional with respect to infinitesimal changes in its topology, such as adding an infinitesimal hole or crack. When used in higher dimensions than one, the term topological gradient is also used to name the first-order term of the topological asymptotic expansion, dealing only with infinitesimal singular domain perturbations. It has applications in shape optimization, topology optimization, image processing and mechanical modeling. Definition. Let formula_0 be an open bounded domain of formula_1, with formula_2, which is subject to a nonsmooth perturbation confined in a small region formula_3 of size formula_4 with formula_5 an arbitrary point of formula_0 and formula_6 a fixed domain of formula_1. Let formula_7 be a characteristic function associated to the unperturbed domain and formula_8 be a characteristic function associated to the perforated domain formula_9. A given shape functional formula_10 associated to the topologically perturbed domain admits the following topological asymptotic expansion: formula_11 where formula_12 is the shape functional associated to the reference domain, formula_13 is a positive first order correction function of formula_12 and formula_14 is the remainder. The function formula_15 is called the topological derivative of formula_16 at formula_5. Applications. Structural mechanics. The topological derivative can be applied to shape optimization problems in structural mechanics. The topological derivative can be considered as the singular limit of the shape derivative. It is a generalization of this classical tool in shape optimization. Shape optimization concerns itself with finding an optimal shape. That is, find formula_17 to minimize some scalar-valued objective function, formula_18. The topological derivative technique can be coupled with the level-set method. In 2005, the topological asymptotic expansion for the Laplace equation with respect to the insertion of a short crack inside a plane domain was found. It makes it possible to detect and locate cracks for a simple model problem: the steady-state heat equation with the heat flux imposed and the temperature measured on the boundary. The topological derivative has been fully developed for a wide range of second-order differential operators, and in 2011 it was applied to the Kirchhoff plate bending problem with a fourth-order operator. Image processing. In the field of image processing, in 2006, the topological derivative was used to perform edge detection and image restoration. The impact of an insulating crack in the domain is studied. The topological sensitivity gives information on the image edges. The resulting algorithm is non-iterative and, thanks to the use of spectral methods, has a short computing time. Only formula_19 operations are needed to detect edges, where formula_20 is the number of pixels. During the following years, other problems were considered: classification, segmentation, inpainting and super-resolution. This approach can be applied to gray-level or color images. Until 2010, isotropic diffusion was used for image reconstructions. The topological gradient is also able to provide edge orientation and this information can be used to perform anisotropic diffusion. In 2012, a general framework was presented to reconstruct an image formula_21 given some noisy observations formula_22 in a Hilbert space formula_23 where formula_0 is the domain where the image formula_24 is defined. 
The observation space formula_23 depends on the specific application as well as the linear observation operator formula_25. The norm on the space formula_23 is formula_26. The original image is recovered by minimizing the following functional for formula_27: formula_28 where formula_29 is a positive definite tensor. The first term of the equation ensures that the recovered image formula_24 is regular, and the second term measures the discrepancy with the data. In this general framework, different types of image reconstruction can be performed, such as denoising (formula_30 and formula_31), deblurring (formula_32, where formula_33 is the blur kernel) and inpainting (formula_34 and formula_35, where formula_36 is the region to be reconstructed). In this framework, the asymptotic expansion of the cost function formula_37 in the case of a crack provides the same topological derivative formula_38 where formula_39 is the normal to the crack and formula_40 is a constant diffusion coefficient. The functions formula_41 and formula_42 are solutions of the following direct and adjoint problems: formula_43 in formula_17 with formula_44 on formula_45, and formula_46 in formula_17 with formula_47 on formula_45. Thanks to the topological gradient, it is possible to detect the edges and their orientation and to define an appropriate formula_29 for the image reconstruction process. In image processing, the topological derivative has also been studied in the case of multiplicative noise following a gamma law, and in the presence of Poissonian statistics. Inverse problems. In 2009, the topological gradient method was applied to tomographic reconstruction. The coupling between the topological derivative and the level set has also been investigated in this application. In 2023, the topological derivative was used to optimize shapes for inverse rendering. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Books. A. A. Novotny and J. Sokolowski, "Topological derivatives in shape optimization", Springer, 2013.
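As a rough sketch of how the crack-type topological gradient above can be turned into an edge indicator (Python/NumPy; the function name is ours, and the gradients of the direct and adjoint solutions formula_41 and formula_42 are assumed to have been computed elsewhere, for instance with the spectral or finite-difference solvers cited above): for each pixel the most negative value of the topological gradient over a set of sampled crack normals is retained, and strongly negative values flag likely edges.

import numpy as np

def topological_gradient_edge_map(grad_u0, grad_p0, c=1.0, n_angles=32):
    # grad_u0, grad_p0: arrays of shape (2, H, W) holding the x- and y-derivatives
    # of the direct and adjoint solutions (assumed precomputed).
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    best = np.full(grad_u0.shape[1:], np.inf)
    for t in thetas:
        nx, ny = np.cos(t), np.sin(t)               # candidate crack normal n
        du = nx * grad_u0[0] + ny * grad_u0[1]      # (grad u0 . n)
        dp = nx * grad_p0[0] + ny * grad_p0[1]      # (grad p0 . n)
        g = -np.pi * c * du * dp - np.pi * du**2    # topological gradient g(x, n)
        best = np.minimum(best, g)                  # keep the most negative value per pixel
    return best

# Pixels where the returned map is strongly negative are candidate edge points;
# thresholding it gives the non-iterative detection step described above, and the
# minimizing orientation could likewise be stored to drive anisotropic diffusion.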
[ { "math_id": 0, "text": " \\Omega " }, { "math_id": 1, "text": " \\mathbb{R}^d " }, { "math_id": 2, "text": " d \\geq 2 " }, { "math_id": 3, "text": " \\omega_\\varepsilon(\\tilde{x}) = \\tilde{x} + \\varepsilon \\omega " }, { "math_id": 4, "text": " \\varepsilon " }, { "math_id": 5, "text": " \\tilde{x} " }, { "math_id": 6, "text": " \\omega " }, { "math_id": 7, "text": " \\Psi " }, { "math_id": 8, "text": " \\Psi_\\varepsilon " }, { "math_id": 9, "text": " \\Omega_\\varepsilon = \\Omega \\backslash \\overline{\\omega_\\varepsilon} " }, { "math_id": 10, "text": " \\Phi(\\Psi_\\varepsilon(\\tilde{x})) " }, { "math_id": 11, "text": " \\Phi(\\Psi_\\varepsilon(\\tilde{x})) = \\Phi(\\Psi) + f(\\varepsilon) g(\\tilde{x}) + o(f(\\varepsilon)) " }, { "math_id": 12, "text": " \\Phi(\\Psi) " }, { "math_id": 13, "text": " f(\\varepsilon) " }, { "math_id": 14, "text": " o(f(\\varepsilon)) " }, { "math_id": 15, "text": " g(\\tilde{x}) " }, { "math_id": 16, "text": " \\Phi " }, { "math_id": 17, "text": "\\Omega" }, { "math_id": 18, "text": "J(\\Omega)" }, { "math_id": 19, "text": "O(Nlog(N))" }, { "math_id": 20, "text": "N" }, { "math_id": 21, "text": " u \\in L^2(\\Omega) " }, { "math_id": 22, "text": " Lu+n " }, { "math_id": 23, "text": " E " }, { "math_id": 24, "text": " u " }, { "math_id": 25, "text": " L : L^2(\\Omega) \\rightarrow E " }, { "math_id": 26, "text": " \\|.\\|_E " }, { "math_id": 27, "text": " u \\in H^1(\\Omega) " }, { "math_id": 28, "text": " \\| C^{1/2} \\nabla u \\|_{L^2(\\Omega)}^2 + \\|Lu-v\\|_E^2" }, { "math_id": 29, "text": " C " }, { "math_id": 30, "text": " E=L^2(\\Omega) " }, { "math_id": 31, "text": " Lu=u " }, { "math_id": 32, "text": " Lu=\\phi \\ast u " }, { "math_id": 33, "text": " \\phi " }, { "math_id": 34, "text": " E=L^2(\\Omega\\backslash\\omega) " }, { "math_id": 35, "text": " Lu=u|_{\\Omega\\backslash\\omega} " }, { "math_id": 36, "text": " \\omega \\subset\\Omega " }, { "math_id": 37, "text": " J_\\Omega(u_\\Omega) = \\frac{1}{2} \\int_\\Omega u_\\Omega^2 " }, { "math_id": 38, "text": "g(x,n) = - \\pi c (\\nabla u_0.n) (\\nabla p_0.n) - \\pi(\\nabla u_0.n)^2" }, { "math_id": 39, "text": " n " }, { "math_id": 40, "text": " c " }, { "math_id": 41, "text": " u_0 " }, { "math_id": 42, "text": " p_0 " }, { "math_id": 43, "text": " -\\nabla ( c \\nabla u_0 ) + L^* L u_0 = L^* v" }, { "math_id": 44, "text": " \\partial_n u_0 = 0" }, { "math_id": 45, "text": "\\partial \\Omega" }, { "math_id": 46, "text": " -\\nabla ( c \\nabla p_0 ) + L^* L p_0 = \\Delta u_0" }, { "math_id": 47, "text": " \\partial_n p_0 = 0" } ]
https://en.wikipedia.org/wiki?curid=11031482
1103195
UA2 experiment
Particle physics experiment at CERN The Underground Area 2 (UA2) experiment was a high-energy physics experiment at the Proton-Antiproton Collider (SppS) — a modification of the Super Proton Synchrotron (SPS) — at CERN. The experiment ran from 1981 until 1990, and its main objective was to discover the W and Z bosons. UA2, together with the UA1 experiment, succeeded in discovering these particles in 1983, leading to the 1984 Nobel Prize in Physics being awarded to Carlo Rubbia and Simon van der Meer. The UA2 experiment also observed the first evidence for jet production in hadron collisions in 1981, and was involved in the searches for the top quark and for supersymmetric particles. Pierre Darriulat was the spokesperson of UA2 from 1981 to 1986, followed by Luigi Di Lella from 1986 to 1990. Background. Around 1968 Sheldon Glashow, Steven Weinberg, and Abdus Salam came up with the electroweak theory, which unified electromagnetism and weak interactions, and for which they shared the 1979 Nobel Prize in Physics. The theory postulated the existence of W and Z bosons, and the pressure on the research community to prove the existence of these particles experimentally was substantial. During the 70s it was established that the masses of the W and Z bosons were in the range of 60 to 80 GeV (W boson) and 75 to 92 GeV (Z boson) — energies too large to be accessible by any accelerator in operation at that time. In 1976, Carlo Rubbia, Peter McIntyre and David Cline proposed to modify a proton accelerator — at that time a proton accelerator was already running at Fermilab and one was under construction at CERN (SPS) — into a proton–antiproton collider, able to reach energies large enough to produce W and Z bosons. The proposal was adopted at CERN in 1978, and the Super Proton Synchrotron (SPS) was modified to occasionally operate as a proton-antiproton collider (SppS). History. On 29 June 1978 the UA1 experiment was approved. Two proposals for a second detector, with the same purpose as UA1, were made the same year. On 14 December 1978, the proposal of Pierre Darriulat, Luigi Di Lella and collaborators was approved. Like UA1, UA2 was a moveable detector, custom built around the beam pipe of the collider, which searched proton–antiproton collisions for signatures of the W and Z particles. The UA2 experiment began operating in December 1981. The initial UA2 collaboration consisted of about 60 physicists from Bern, CERN, Copenhagen, Orsay, Pavia and Saclay. From 1981 to 1985, the UA1 and UA2 experiments collected data corresponding to an integrated luminosity of approximately . From 1985 to 1987 the SppS was upgraded, and the luminosity of the machine increased by a factor of 10 compared to the previous performance. The UA2 sub-detectors were also upgraded, making the detector hermetic, which increased its ability to measure missing transverse energy. The second experimental phase ran from 1987 to 1990. Groups from Cambridge, Heidelberg, Milano, Perugia and Pisa joined the collaboration, which grew to about 100 physicists. During this phase, UA2 accumulated data corresponding to an integrated luminosity of in three major running periods. After nearly ten years of operation, the UA2 experimental program stopped running at the end of 1990. Components and operation. The UA1 and UA2 experiments recorded data during proton–antiproton collision operation and moved back after periods of data taking, so that the SPS could revert to fixed-target operation. 
UA2 was moved on air cushions when removed from the beam pipe of the SppS. Construction. The UA2 experiment was located some 50 meters underground, in the ring of the SPS/SppS accelerator, and was housed in a big cavern. The cavern was large enough to house the detector and to provide room for a "garage position" in which it could be assembled without shutting down the accelerator, and to which it was moved back after periods of data taking. The accelerator could therefore revert to fixed-target operation, after periods of operating as a collider. Detectors. The UA1 and the UA2 experiments had many things in common; they were both operating on the same accelerator and both had the same objective (to discover the W and Z bosons). The main difference was the detector design; UA1 was a multipurpose detector, while UA2 had a more limited scope. UA2 was optimized for the detection of electrons from W and Z decays. The emphasis was on a highly granular calorimeter – a detector measuring how much energy particles deposit – with spherical projective geometry, which also was well adapted to the detection of hadronic jets. Charged particle tracking was performed in the central detector utilising a combination of multi-wire proportional chambers, drift chambers and hodoscopes. Energy measurements were performed in the calorimeters. Unlike UA1, UA2 had no muon detector. The calorimeter had 24 slices, each weighing 4 tons. These slices were arranged around the collision point like segments of an orange. Particles ejected from the collision produced showers of secondary particles in the layers of heavy material. These showers passed through layers of plastic scintillators, generating light which was read out with photomultipliers by the data collection electronics. The amount of light was proportional to the energy of the original particle. Accurate calibration of the central calorimeter allowed the W and Z masses to be measured with a precision of about 1%. Upgrades of the detector. The 1985-1987 upgrade of the detector was aimed at two aspects: full calorimeter coverage and better electron identification at lower transverse momenta. The first aspect was addressed by replacing the end-caps with new calorimeters that covered the regions 6°-40° with respect to the beam direction, thereby hermetically sealing the detector. The end-cap calorimeters consisted of lead/scintillator samplings for the electromagnetic part, and iron/scintillator for the hadronic part. The performance and granularity of the new calorimeters were set to match the central calorimeter, which was of importance for the triggering system. The electron identification was improved by the use of a completely new central tracking detector assembly, partly consisting of a pioneering silicon-pad detector. In 1989, the collaboration pushed this concept even further by developing a Silicon Pad Detector (SPD) with finer pad segmentation to be placed directly around the collision region beam pipe. This detector was built as a cylinder, closely surrounding the beam pipe. The detector had to fit into the available space of less than 1 cm. It was therefore necessary to miniaturize the components of the detector. This was achieved with two brand new technologies: the silicon sensor and the Application Specific Integrated Circuit (ASIC). Existing electronics were too bulky, and therefore a novel ASIC had to be developed. This was the first silicon tracker adapted to a collider experiment, a precursor of present-day silicon detectors. 
Hadronic jets at high transverse momentum. The very first result of the UA2 collaboration, published on 2 December 1982, was the first unambiguous observation of hadronic jet production at high transverse momentum from hadronic collisions. Observations of hadronic jets confirmed that the theory of quantum chromodynamics could describe the gross features of the strong parton interaction. Discovery of the W and Z bosons. The UA2 and UA1 collaboration chose to search for the W boson by identifying its leptonic decay, because the hadronic decays, although more frequent, have a larger background. By the end 1982, the SppS had reached high enough luminosity to permit the observation of formula_0 and formula_1 decays. On 22 January 1983, the UA2 collaboration announced that the UA2 detector had recorded four events that were candidates for a W boson. This brought the combined number of candidate events seen by UA1 and UA2 up to 10. Three days later, CERN made a public announcement that the W boson was found. The next step was to track down the Z boson. However, the theory said that the Z boson would be ten times rarer than the W boson. The experiments therefore needed to collect several times the data collected in the 1982 run that showed the existence of the W boson. With improved techniques and methods, the luminosity was increased substantially. These efforts were successful, and on 1 June 1983, the formal announcement of the discovery of the Z boson was made at CERN. Search for the top quark. Throughout the runs with the upgraded detector, the UA2 collaboration was in competition with experiments at Fermilab in the US in the search for the top quark. Physicists had anticipated its existence since 1977, when its partner — the bottom quark — was discovered. It was felt that the discovery of the top quark was imminent. During the 1987-1990 run UA2 collected 2065 formula_0 decays, and 251 Z decays to electron pairs, from which the ratio of the mass of the W boson and the mass of the Z boson could be measured with a precision of 0.5%. By 1991 a precise measurement for the mass of the Z boson from LEP had become available. Using the ratio of the W mass to Z mass, a first precise measurement of the W mass could be made. These mass values could be used to predict the top quark from its virtual effect on the W mass. The result of this study gave a top quark mass value in the range of 110 GeV to 220 GeV, beyond the reach for direct detection by UA2 at the SppS. The top quark was ultimately discovered in 1995 by physicists at Fermilab with a mass near 175 GeV. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " W \\rightarrow e \\nu " }, { "math_id": 1, "text": " W \\rightarrow \\mu \\nu " } ]
https://en.wikipedia.org/wiki?curid=1103195
1103352
Divide-and-conquer eigenvalue algorithm
Algorithm on Hermitian matrices Divide-and-conquer eigenvalue algorithms are a class of eigenvalue algorithms for Hermitian or real symmetric matrices that have recently (circa 1990s) become competitive in terms of stability and efficiency with more traditional algorithms such as the QR algorithm. The basic concept behind these algorithms is the divide-and-conquer approach from computer science. An eigenvalue problem is divided into two problems of roughly half the size, each of which is solved recursively, and the eigenvalues of the original problem are computed from the results of these smaller problems. This article covers the basic idea of the algorithm as originally proposed by Cuppen in 1981, which is not numerically stable without additional refinements. Background. As with most eigenvalue algorithms for Hermitian matrices, divide-and-conquer begins with a reduction to tridiagonal form. For an formula_0 matrix, the standard method for this, via Householder reflections, takes formula_1 floating point operations, or formula_2 if eigenvectors are needed as well. There are other algorithms, such as the Arnoldi iteration, which may do better for certain classes of matrices; we will not consider this further here. In certain cases, it is possible to "deflate" an eigenvalue problem into smaller problems. Consider a block diagonal matrix formula_3 The eigenvalues and eigenvectors of formula_4 are simply those of formula_5 and formula_6, and it will almost always be faster to solve these two smaller problems than to solve the original problem all at once. This technique can be used to improve the efficiency of many eigenvalue algorithms, but it has special significance to divide-and-conquer. For the rest of this article, we will assume the input to the divide-and-conquer algorithm is an formula_0 real symmetric tridiagonal matrix formula_4. The algorithm can be modified for Hermitian matrices. Divide. The "divide" part of the divide-and-conquer algorithm comes from the realization that a tridiagonal matrix is "almost" block diagonal. The size of the submatrix formula_5 will be called formula_7, and then formula_6 is formula_8. formula_4 is almost block diagonal regardless of how formula_9 is chosen. For efficiency we typically choose formula_10. We write formula_4 as a block diagonal matrix plus a rank-1 correction: formula_4 equals the block diagonal matrix with blocks formula_11 and formula_14 plus a correction of the form βvv^T, where β is the off-diagonal entry of formula_4 that couples the two blocks (the entry in position (n, n + 1)) and v is the column vector whose only nonzero entries are ones in positions n and n + 1. The only difference between formula_5 and formula_11 is that the lower right entry formula_12 in formula_11 has been replaced with formula_13, and similarly, in formula_14 the top left entry formula_15 has been replaced with formula_16. The remainder of the divide step is to solve for the eigenvalues (and if desired the eigenvectors) of formula_11 and formula_14, that is, to find the diagonalizations formula_17 and formula_18. This can be accomplished with recursive calls to the divide-and-conquer algorithm, although practical implementations often switch to the QR algorithm for small enough submatrices. Conquer. The "conquer" part of the algorithm is the unintuitive part. Given the diagonalizations of the submatrices, calculated above, how do we find the diagonalization of the original matrix? First, define formula_19, where formula_20 is the last row of formula_21 and formula_22 is the first row of formula_23. It is now elementary to show that formula_24 The remaining task has been reduced to finding the eigenvalues of a diagonal matrix plus a rank-one correction. Before showing how to do this, let us simplify the notation. 
We are looking for the eigenvalues of the matrix formula_25, where formula_26 is diagonal with distinct entries and formula_27 is any vector with nonzero entries. In this case formula_28. The case of a zero entry is simple, since if wi is zero, (formula_29,di) is an eigenpair (formula_29 is in the standard basis) of formula_25 since formula_30. If formula_31 is an eigenvalue, we have: formula_32 where formula_33 is the corresponding eigenvector. Now formula_34 formula_35 formula_36 Keep in mind that formula_37 is a nonzero scalar: neither formula_27 nor formula_33 is zero. If formula_37 were zero, formula_33 would be an eigenvector of formula_26 by formula_32. If that were the case, formula_33 would contain only one nonzero entry, since formula_26 is diagonal with distinct entries; but then, because every entry of formula_27 is nonzero, the inner product formula_37 could not be zero after all. Therefore, we have: formula_38 or written as a scalar equation, formula_39 This equation is known as the "secular equation". The problem has therefore been reduced to finding the roots of the rational function defined by the left-hand side of this equation. All general eigenvalue algorithms must be iterative, and the divide-and-conquer algorithm is no different. Solving the nonlinear secular equation requires an iterative technique, such as the Newton–Raphson method. However, each root can be found in O(1) iterations, each of which requires formula_40 flops (for an formula_41-degree rational function), making the cost of the iterative part of this algorithm formula_42. Analysis. We will use the master theorem for divide-and-conquer recurrences to analyze the running time. Remember that above we stated that we choose formula_10. We can write the recurrence relation: formula_43 In the notation of the Master theorem, formula_44 and thus formula_45. Clearly, formula_46, so we have formula_47 Above, we pointed out that reducing a Hermitian matrix to tridiagonal form takes formula_1 flops. This dwarfs the running time of the divide-and-conquer part, and at this point it is not clear what advantage the divide-and-conquer algorithm offers over the QR algorithm (which also takes formula_42 flops for tridiagonal matrices). The advantage of divide-and-conquer comes when eigenvectors are needed as well. If this is the case, reduction to tridiagonal form takes formula_2, but the second part of the algorithm takes formula_48 as well. For the QR algorithm with a reasonable target precision, this is formula_49, whereas for divide-and-conquer it is formula_50. The reason for this improvement is that in divide-and-conquer, the formula_48 part of the algorithm (multiplying formula_51 matrices) is separate from the iteration, whereas in QR, this must occur in every iterative step. Adding the formula_2 flops for the reduction, the total improvement is from formula_52 to formula_53 flops. Practical use of the divide-and-conquer algorithm has shown that in most realistic eigenvalue problems, the algorithm actually does better than this. The reason is that very often the matrices formula_51 and the vectors formula_54 tend to be "numerically sparse", meaning that they have many entries with values smaller than the floating point precision, allowing for "numerical deflation", i.e. breaking the problem into uncoupled subproblems. Variants and implementation. The algorithm presented here is the simplest version. In many practical implementations, more complicated rank-1 corrections are used to guarantee stability; some variants even use rank-2 corrections. 
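The secular-equation solve at the heart of the conquer step can be sketched with off-the-shelf tools (Python with NumPy/SciPy; the function name is ours, and a generic bracketing root finder stands in for the Newton-type iterations discussed above, with the specialized solvers mentioned in the next paragraph being what production codes actually use). Because the secular function increases monotonically between consecutive poles, one root is bracketed in each interval between sorted diagonal entries and one more above the largest entry. The sketch assumes well-separated diagonal entries and non-negligible entries of the update vector; as noted, practical implementations deflate the remaining cases.

import numpy as np
from scipy.optimize import brentq

def secular_eigenvalues(d, w):
    # Eigenvalues of diag(d) + w w^T via the secular equation
    #   1 + sum_j w_j^2 / (d_j - lam) = 0.
    d = np.asarray(d, dtype=float)
    w = np.asarray(w, dtype=float)
    order = np.argsort(d)                   # sort jointly: a permutation similarity
    d, w = d[order], w[order]
    f = lambda lam: 1.0 + np.sum(w**2 / (d - lam))
    eps = 1e-10 * (d[-1] - d[0] + 1.0)
    roots = [brentq(f, d[j] + eps, d[j + 1] - eps) for j in range(len(d) - 1)]
    roots.append(brentq(f, d[-1] + eps, d[-1] + w @ w + 1.0))   # root above max(d)
    return np.array(roots)

rng = np.random.default_rng(1)
d = rng.normal(size=6)
w = rng.normal(size=6)
lam = secular_eigenvalues(d, w)
ref = np.linalg.eigvalsh(np.diag(d) + np.outer(w, w))
print(np.max(np.abs(np.sort(lam) - ref)))   # should be near machine precision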
There exist specialized root-finding techniques for rational functions that may do better than the Newton-Raphson method in terms of both performance and stability. These can be used to improve the iterative part of the divide-and-conquer algorithm. The divide-and-conquer algorithm is readily parallelized, and linear algebra computing packages such as LAPACK contain high-quality parallel implementations.
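Two short illustrations in Python with NumPy/SciPy (function and variable names are ours, not from LAPACK): the first re-checks the rank-1 splitting used in the divide step on a random symmetric tridiagonal matrix, and the second requests LAPACK's divide-and-conquer driver through SciPy (assuming a SciPy version, roughly 1.5 or later, that exposes the driver keyword) and compares it with the classical QR-based driver.

import numpy as np

def split_tridiagonal(diag, off):
    # diag: main diagonal (length m); off: superdiagonal (length m - 1)
    m = len(diag)
    n = m // 2
    beta = off[n - 1]                        # coupling entry t_{n, n+1}
    d1 = diag[:n].copy(); d1[-1] -= beta     # hat T_1: last diagonal entry shifted by -beta
    d2 = diag[n:].copy(); d2[0] -= beta      # hat T_2: first diagonal entry shifted by -beta
    v = np.zeros(m); v[n - 1] = v[n] = 1.0   # ones in positions n and n + 1 (1-based)
    return d1, off[:n - 1], d2, off[n:], beta, v

rng = np.random.default_rng(0)
m = 8
diag = rng.normal(size=m)
off = rng.normal(size=m - 1)
T = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
d1, o1, d2, o2, beta, v = split_tridiagonal(diag, off)
B = np.zeros((m, m))
B[:len(d1), :len(d1)] = np.diag(d1) + np.diag(o1, 1) + np.diag(o1, -1)
B[len(d1):, len(d1):] = np.diag(d2) + np.diag(o2, 1) + np.diag(o2, -1)
assert np.allclose(T, B + beta * np.outer(v, v))   # T = block diagonal + rank-1 correction

from scipy.linalg import eigh

A = rng.normal(size=(500, 500))
A = (A + A.T) / 2                       # random real symmetric test matrix
w_dc, V_dc = eigh(A, driver='evd')      # LAPACK divide-and-conquer driver (?syevd)
w_qr, V_qr = eigh(A, driver='ev')       # classical QR-iteration driver (?syev)
print(np.max(np.abs(w_dc - w_qr)))      # eigenvalues agree to roundoff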
[ { "math_id": 0, "text": "m \\times m" }, { "math_id": 1, "text": "\\frac{4}{3}m^{3}" }, { "math_id": 2, "text": "\\frac{8}{3}m^{3}" }, { "math_id": 3, "text": "T = \\begin{bmatrix} T_{1} & 0 \\\\ 0 & T_{2}\\end{bmatrix}." }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "T_{1}" }, { "math_id": 6, "text": "T_{2}" }, { "math_id": 7, "text": "n \\times n" }, { "math_id": 8, "text": "(m - n) \\times (m - n)" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "n \\approx m/2" }, { "math_id": 11, "text": "\\hat{T}_{1}" }, { "math_id": 12, "text": "t_{nn}" }, { "math_id": 13, "text": "t_{nn} - \\beta" }, { "math_id": 14, "text": "\\hat{T}_{2}" }, { "math_id": 15, "text": "t_{n+1,n+1}" }, { "math_id": 16, "text": "t_{n+1,n+1} - \\beta" }, { "math_id": 17, "text": "\\hat{T}_{1} = Q_{1} D_{1} Q_{1}^{T}" }, { "math_id": 18, "text": "\\hat{T}_{2} = Q_{2} D_{2} Q_{2}^{T}" }, { "math_id": 19, "text": "z^{T} = (q_{1}^{T},q_{2}^{T})" }, { "math_id": 20, "text": "q_{1}^{T}" }, { "math_id": 21, "text": "Q_{1}" }, { "math_id": 22, "text": "q_{2}^{T}" }, { "math_id": 23, "text": "Q_{2}" }, { "math_id": 24, "text": "T = \\begin{bmatrix} Q_{1} & \\\\ & Q_{2} \\end{bmatrix} \\left( \\begin{bmatrix} D_{1} & \\\\ & D_{2} \\end{bmatrix} + \\beta z z^{T} \\right) \\begin{bmatrix} Q_{1}^{T} & \\\\ & Q_{2}^{T} \\end{bmatrix}" }, { "math_id": 25, "text": "D + w w^{T}" }, { "math_id": 26, "text": "D" }, { "math_id": 27, "text": "w" }, { "math_id": 28, "text": "w = \\sqrt{|\\beta|}\\cdot z" }, { "math_id": 29, "text": "e_i" }, { "math_id": 30, "text": "(D + w w^{T})e_i = De_i = d_i e_i" }, { "math_id": 31, "text": "\\lambda" }, { "math_id": 32, "text": "(D + w w^{T})q = \\lambda q" }, { "math_id": 33, "text": "q" }, { "math_id": 34, "text": "(D - \\lambda I)q + w(w^{T}q) = 0" }, { "math_id": 35, "text": "q + (D - \\lambda I)^{-1} w(w^{T}q) = 0" }, { "math_id": 36, "text": "w^{T}q + w^{T}(D - \\lambda I)^{-1} w(w^{T}q) = 0" }, { "math_id": 37, "text": "w^{T}q" }, { "math_id": 38, "text": "1 + w^{T}(D - \\lambda I)^{-1} w = 0" }, { "math_id": 39, "text": "1 + \\sum_{j=1}^{m} \\frac{w_{j}^{2}}{d_{j} - \\lambda} = 0." }, { "math_id": 40, "text": "\\Theta(m)" }, { "math_id": 41, "text": "m" }, { "math_id": 42, "text": "\\Theta(m^{2})" }, { "math_id": 43, "text": "T(m) = 2 \\times T\\left(\\frac{m}{2}\\right) + \\Theta(m^{2})" }, { "math_id": 44, "text": "a = b = 2" }, { "math_id": 45, "text": "\\log_{b} a = 1" }, { "math_id": 46, "text": "\\Theta(m^{2}) = \\Omega(m^{1})" }, { "math_id": 47, "text": "T(m) = \\Theta(m^{2})" }, { "math_id": 48, "text": "\\Theta(m^{3})" }, { "math_id": 49, "text": "\\approx 6 m^{3}" }, { "math_id": 50, "text": "\\approx \\frac{4}{3}m^{3}" }, { "math_id": 51, "text": "Q" }, { "math_id": 52, "text": "\\approx 9 m^{3}" }, { "math_id": 53, "text": "\\approx 4 m^{3}" }, { "math_id": 54, "text": "z" } ]
https://en.wikipedia.org/wiki?curid=1103352
11033535
Less-than sign
Mathematical symbol for "less than" The less-than sign is a mathematical symbol that denotes an inequality between two values. The widely adopted form of two equal-length strokes connecting in an acute angle at the left, <, has been found in documents dated as far back as the 1560s. In mathematical writing, the less-than sign is typically placed between two values being compared and signifies that the first number is less than the second number. An example of typical usage is −2 < 0. Since the development of computer programming languages, the less-than sign and the greater-than sign have been repurposed for a range of uses and operations. Computing. The less-than sign, <, is an original ASCII character (hex 3C, decimal 60). Programming. In BASIC, Lisp-family languages, and C-family languages (including Java and C++), the comparison operator codice_0 means "less than". In ColdFusion, the operator codice_1 means "less than". In Fortran, the operator codice_2 means "less than"; later versions allow codice_0. Shell scripts. In Bourne shell (and many other shells), the operator codice_4 means "less than". The less-than sign is used to redirect input from a file. The less-than sign plus an ampersand (<&) is used to redirect from a file descriptor. Double less-than sign. The double less-than sign, <<, may be used for an approximation of the "much-less-than sign" (≪) or of the opening guillemet («). ASCII does not encode either of these signs, though they are both included in Unicode. In Bash, Perl, and Ruby, the operator <<EOF (where "EOF" is an arbitrary string, but commonly "EOF" denoting "end of file") is used to denote the beginning of a here document. In C and C++, the operator << represents a binary left shift. In the C++ Standard Library, the operator <<, when applied on an output stream, acts as "insertion operator" and performs an output operation on the stream. In Ruby, the operator << acts as "append operator" when used between an array and the value to be appended. In XPath the << operator returns true if the left operand precedes the right operand in document order; otherwise it returns false. Triple less-than sign. In PHP, the operator <<< is used to denote the beginning of a heredoc statement (where codice_5 is an arbitrary named variable). In Bash, <<< is used as a "here string", where the word that follows is expanded and supplied to the command on its standard input, similar to a heredoc. Less-than sign with equals sign. The less-than sign with the equals sign, <=, may be used for an approximation of the less-than-or-equal-to sign, ≤. ASCII does not have a less-than-or-equal-to sign, but Unicode defines it at code point U+2264. In BASIC, Lisp-family languages, and C-family languages (including Java and C++), the operator <= means "less than or equal to". In Sinclair BASIC it is encoded as a single-byte code point token. In Prolog, =< means "less than or equal to" (as distinct from the arrow <=). In Fortran, the operators .LE. and <= both mean "less than or equal to". In Bourne shell and Windows PowerShell, the operator -le means "less than or equal to". Less-than sign with hyphen-minus. In the R programming language, the less-than sign is used in conjunction with a hyphen-minus to create an arrow (<-), which can be used as the left assignment operator. Spaceship operator. The less-than sign is used in the spaceship operator, <=>. HTML. In HTML (and SGML and XML), the less-than sign is used at the beginning of tags. The less-than sign may be included with codice_6. The less-than-or-equal-to sign, ≤, may be included with codice_7. Unicode. 
Unicode provides various less-than symbols. The less-than sign may be used as an approximation of the opening angle bracket, ⟨. True angle bracket characters, as required in linguistics notation, are expected in formal texts. Mathematics. In an inequality, the less-than sign and greater-than sign always "point" to the smaller number. Put another way, the "jaws" (the wider section of the symbol) are always directed toward the larger number. The less-than sign is sometimes used to represent a total order, partial order or preorder. However, the symbol formula_0 is often used when it would be confusing or not convenient to use <. In mathematical writing using LaTeX, the TeX command is \prec. The Unicode code point is U+227A. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
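For concreteness, several of the roles listed above can be seen side by side in a single language; the snippet below uses Python purely as an illustration (the small class is hypothetical) and adds nothing to the language-specific statements above.

print(2 < 3, 3 <= 3)        # '<' and '<=' as comparison operators: True True
print(1 << 4)               # '<<' as a binary left shift, as in C and C++: 16

from dataclasses import dataclass

@dataclass(order=True)
class Version:              # a user-defined type that gives '<' its own ordering
    major: int
    minor: int

print(Version(1, 4) < Version(2, 0))    # True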
[ { "math_id": 0, "text": "\\prec " } ]
https://en.wikipedia.org/wiki?curid=11033535
1103376
Planimeter
Tool for measuring area A planimeter, also known as a platometer, is a measuring instrument used to determine the area of an arbitrary two-dimensional shape. Construction. There are several kinds of planimeters, but all operate in a similar way. The precise way in which they are constructed varies, with the main types of mechanical planimeter being polar, linear, and Prytz or "hatchet" planimeters. The Swiss mathematician Jakob Amsler-Laffon built the first modern planimeter in 1854, the concept having been pioneered by Johann Martin Hermann in 1818. Many developments followed Amsler's famous planimeter, including electronic versions. The Amsler (polar) type consists of a two-bar linkage. At the end of one link is a pointer, used to trace around the boundary of the shape to be measured. The other end of the linkage pivots freely on a weight that keeps it from moving. Near the junction of the two links is a measuring wheel of calibrated diameter, with a scale to show fine rotation, and worm gearing for an auxiliary turns counter scale. As the area outline is traced, this wheel rolls on the surface of the drawing. The operator sets the wheel, turns the counter to zero, and then traces the pointer around the perimeter of the shape. When the tracing is complete, the scales at the measuring wheel show the shape's area. When the planimeter's measuring wheel moves perpendicular to its axis, it rolls, and this movement is recorded. When the measuring wheel moves parallel to its axis, the wheel skids without rolling, so this movement is ignored. That means the planimeter measures the distance that its measuring wheel travels, projected perpendicularly to the measuring wheel's axis of rotation. The area of the shape is proportional to the number of turns through which the measuring wheel rotates. The polar planimeter is restricted by design to measuring areas within limits determined by its size and geometry. However, the linear type has no restriction in one dimension, because it can roll. Its wheels must not slip, because the movement must be constrained to a straight line. Developments of the planimeter can establish the position of the first moment of area (center of mass), and even the second moment of area. The images show the principles of a linear and a polar planimeter. The pointer M at one end of the planimeter follows the contour C of the surface S to be measured. For the linear planimeter the movement of the "elbow" E is restricted to the "y"-axis. For the polar planimeter the "elbow" is connected to an arm with its other endpoint O at a fixed position. Connected to the arm ME is the measuring wheel with its axis of rotation parallel to ME. A movement of the arm ME can be decomposed into a movement perpendicular to ME, causing the wheel to rotate, and a movement parallel to ME, causing the wheel to skid, with no contribution to its reading. Principle. The working of the linear planimeter may be explained by measuring the area of a rectangle ABCD (see image). Moving with the pointer from A to B the arm EM moves through the yellow parallelogram, with area equal to PQ×EM. This area is also equal to the area of the parallelogram A"ABB". The measuring wheel measures the distance PQ (perpendicular to EM). Moving from C to D the arm EM moves through the green parallelogram, with area equal to the area of the rectangle D"DCC". The measuring wheel now moves in the opposite direction, subtracting this reading from the former. 
The movements along BC and DA are the same but opposite, so they cancel each other with no net effect on the reading of the wheel. The net result is the measuring of the difference of the yellow and green areas, which is the area of ABCD. Mathematical derivation. The operation of a linear planimeter can be justified by applying Green's theorem onto the components of the vector field N, given by: formula_0 where "b" is the "y"-coordinate of the elbow E. This vector field is perpendicular to the measuring arm EM: formula_1 and has a constant size, equal to the length "m" of the measuring arm: formula_2 Then: formula_3 because: formula_4 The left hand side of the above equation, which is equal to the area "A" enclosed by the contour, is proportional to the distance measured by the measuring wheel, with proportionality factor "m", the length of the measuring arm. The justification for the above derivation lies in noting that the linear planimeter only records movement perpendicular to its measuring arm, or when formula_5 is non-zero. When this quantity is integrated over the closed curve C, Green's theorem and the area follow. Polar coordinates. The connection with Green's theorem can be understood in terms of integration in polar coordinates: in polar coordinates, area is computed by the integral formula_6 where the form being integrated is "quadratic" in "r," meaning that the rate at which area changes with respect to change in angle varies quadratically with the radius. For a parametric equation in polar coordinates, where both "r" and "θ" vary as a function of time, this becomes formula_7 For a polar planimeter the total rotation of the wheel is proportional to formula_8 as the rotation is proportional to the distance traveled, which at any point in time is proportional to radius and to change in angle, as in the circumference of a circle (formula_9). This last integrand formula_10 can be recognized as the derivative of the earlier integrand formula_11 (with respect to "r"), and shows that a polar planimeter computes the area integral in terms of the "derivative", which is reflected in Green's theorem, which equates a line integral of a function on a (1-dimensional) contour to the (2-dimensional) integral of the derivative. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
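The identity derived above, that the closed line integral of N equals the enclosed area, is easy to check numerically. The sketch below (Python/NumPy; all values are illustrative) traces an ellipse with a simulated linear planimeter whose elbow slides on the y-axis, so that N_x = -sqrt(m^2 - x^2) and N_y = x, and recovers the ellipse area both directly and as the arm length times the accumulated wheel roll.

import numpy as np

m_arm = 5.0                       # length of the measuring arm EM
sa, sb = 2.0, 1.0                 # ellipse semi-axes (need max |x| < m_arm)

t = np.linspace(0.0, 2.0 * np.pi, 200001)
x = sa * np.cos(t)                # counterclockwise traversal of the contour
y = sb * np.sin(t) + 3.0          # vertical offset: the result should not depend on it
dxdt = -sa * np.sin(t)
dydt = sb * np.cos(t)

Nx = -np.sqrt(m_arm**2 - x**2)    # N_x = b - y, with b the elbow's y-coordinate
Ny = x                            # N_y = x ;  note |N| = m_arm and N is perpendicular to EM

dt = t[1] - t[0]
area_est = np.sum((Nx * dxdt + Ny * dydt)[:-1]) * dt   # closed line integral of N . dr
print(area_est, np.pi * sa * sb)                       # both ~ 6.2832
print(area_est / m_arm)            # the wheel roll; the area is m_arm times this roll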
[ { "math_id": 0, "text": "\\!\\,N(x,y)=(b-y,x)," }, { "math_id": 1, "text": "\\overrightarrow{EM}\\cdot N = xN_x+(y-b)N_y=0" }, { "math_id": 2, "text": "\\!\\,\\|N\\| =\\sqrt{(b-y)^2+x^2}=m" }, { "math_id": 3, "text": "\n\\begin{align}\n& \\oint_C(N_x \\, dx + N_y \\, dy) = \\iint_S\\left(\\frac{\\partial N_y}{\\partial x}-\\frac{\\partial N_x}{\\partial y}\\right) \\, dx \\, dy \\\\[8pt]\n= {} & \\iint_S\\left(\\frac{\\partial x}{\\partial x}-\\frac{\\partial (b-y)}{\\partial y}\\right) \\, dx \\, dy = \\iint_S \\, dx \\, dy = A,\n\\end{align}\n" }, { "math_id": 4, "text": "\\frac{\\partial}{\\partial y}(y-b) = \\frac{\\partial}{\\partial y}\\sqrt{m^2-x^2} = 0," }, { "math_id": 5, "text": "N\\cdot(dx,dy)=N_xdx+N_ydy" }, { "math_id": 6, "text": " \\int_\\theta \\tfrac{1}{2} (r(\\theta))^2\\,d\\theta," }, { "math_id": 7, "text": "\\int_t \\tfrac{1}{2} (r(t))^2 \\, d(\\theta(t)) = \\int_t \\tfrac{1}{2} (r(t))^2\\, \\dot \\theta(t)\\,dt." }, { "math_id": 8, "text": " \\int_t r(t)\\, \\dot \\theta(t)\\,dt," }, { "math_id": 9, "text": " \\int r\\,d\\theta = 2\\pi r" }, { "math_id": 10, "text": " r(t) \\,\\dot \\theta(t)" }, { "math_id": 11, "text": " \\tfrac{1}{2} (r(t))^2 \\dot \\theta(t)" } ]
https://en.wikipedia.org/wiki?curid=1103376
11033818
Danskin's theorem
In convex analysis, Danskin's theorem is a theorem which provides information about the derivatives of a function of the form formula_0 The theorem has applications in optimization, where it is sometimes used to solve minimax problems. The original theorem given by J. M. Danskin in his 1967 monograph provides a formula for the directional derivative of the maximum of a (not necessarily convex) directionally differentiable function. An extension to more general conditions was proven in 1971 by Dimitri Bertsekas. Statement. The following version is proven in "Nonlinear programming" (1991). Suppose formula_1 is a continuous function of two arguments, formula_2 where formula_3 is a compact set. Under these conditions, Danskin's theorem provides conclusions regarding the convexity and differentiability of the function formula_0 To state these results, we define the set of maximizing points formula_4 as formula_5 Danskin's theorem then provides the following results. formula_6 is convex if formula_1 is convex in formula_7 for every formula_8. The semi-differential of formula_6 in the direction formula_9, denoted formula_10 is given by formula_11 where formula_12 is the directional derivative of the function formula_13 at formula_7 in the direction formula_14 formula_6 is differentiable at formula_7 if formula_4 consists of a single element formula_15. In this case, the derivative of formula_6 (or the gradient of formula_6 if formula_7 is a vector) is given by formula_16 Example of no directional derivative. In the statement of Danskin, it is important that the conclusion is semi-differentiability of formula_17 and not directional differentiability, as this simple example shows. Setting formula_18, we get formula_19, which is semi-differentiable with formula_20 but does not have a directional derivative at formula_21. If formula_1 is differentiable with respect to formula_7 for all formula_22 and if formula_23 is continuous with respect to formula_24 for all formula_7, then the subdifferential of formula_6 is given by formula_25 where formula_26 indicates the convex hull operation. Extension. The 1971 Ph.D. Thesis by Dimitri P. Bertsekas (Proposition A.22) proves a more general result, which does not require that formula_13 is differentiable. Instead it assumes that formula_13 is an extended real-valued closed proper convex function for each formula_24 in the compact set formula_27 that formula_28 the interior of the effective domain of formula_29 is nonempty, and that formula_30 is continuous on the set formula_31 Then for all formula_7 in formula_28 the subdifferential of formula_32 at formula_7 is given by formula_33 where formula_34 is the subdifferential of formula_13 at formula_7 for any formula_24 in formula_35 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
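A small numerical check of the semi-differential formula (Python/NumPy; the particular φ and the finite choice of Z are illustrative assumptions, and a finite Z is trivially compact, so the hypotheses hold): the one-sided derivatives of f predicted by the theorem are compared with one-sided difference quotients.

import numpy as np

Z = np.linspace(-1.0, 1.0, 201)                       # compact (finite) set Z
phi = lambda x, z: np.sin(x * z) - 0.5 * (x - z)**2   # phi(x, z), smooth in x
dphi = lambda x, z: z * np.cos(x * z) - (x - z)       # d phi / d x

def f(x):
    return np.max(phi(x, Z))

def semi_derivative(x, y, tol=1e-9):
    vals = phi(x, Z)
    Z0 = Z[vals >= vals.max() - tol]                  # maximizing set Z_0(x)
    return np.max(dphi(x, Z0) * y)                    # max over Z_0 of phi'(x, z; y)

x0, h = 0.3, 1e-6
for y in (1.0, -1.0):
    fd = (f(x0 + h * y) - f(x0)) / h                  # one-sided difference quotient
    print(y, semi_derivative(x0, y), fd)              # the two values should agree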
[ { "math_id": 0, "text": "f(x) = \\max_{z \\in Z} \\phi(x,z)." }, { "math_id": 1, "text": "\\phi(x,z)" }, { "math_id": 2, "text": "\\phi : \\R^n \\times Z \\to \\R" }, { "math_id": 3, "text": "Z \\subset \\R^m" }, { "math_id": 4, "text": "Z_0(x)" }, { "math_id": 5, "text": "Z_0(x) = \\left\\{\\overline{z} : \\phi(x,\\overline{z}) = \\max_{z \\in Z} \\phi(x,z)\\right\\}." }, { "math_id": 6, "text": "f(x)" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "z \\in Z" }, { "math_id": 9, "text": "y" }, { "math_id": 10, "text": "\\partial_y\\ f(x)," }, { "math_id": 11, "text": "\\partial_y f(x) = \\max_{z \\in Z_0(x)} \\phi'(x,z;y)," }, { "math_id": 12, "text": "\\phi'(x,z;y)" }, { "math_id": 13, "text": "\\phi(\\cdot,z)" }, { "math_id": 14, "text": "y." }, { "math_id": 15, "text": "\\overline{z}" }, { "math_id": 16, "text": "\\frac{\\partial f}{\\partial x} = \\frac{\\partial \\phi(x,\\overline{z})}{\\partial x}." }, { "math_id": 17, "text": " f " }, { "math_id": 18, "text": " Z=\\{-1,+1\\},\\ \\phi(x,z)= zx" }, { "math_id": 19, "text": " f(x)=|x| " }, { "math_id": 20, "text": " \\partial_-f(0)=-1, \\partial_+f(0)=+1 " }, { "math_id": 21, "text": " x=0 " }, { "math_id": 22, "text": "z \\in Z," }, { "math_id": 23, "text": "\\partial \\phi/\\partial x" }, { "math_id": 24, "text": "z" }, { "math_id": 25, "text": "\\partial f(x) = \\mathrm{conv} \\left\\{\\frac{\\partial \\phi(x,z)}{\\partial x} : z \\in Z_0(x)\\right\\}" }, { "math_id": 26, "text": "\\mathrm{conv}" }, { "math_id": 27, "text": "Z," }, { "math_id": 28, "text": "\\operatorname{int}(\\operatorname{dom}(f))," }, { "math_id": 29, "text": "f," }, { "math_id": 30, "text": "\\phi" }, { "math_id": 31, "text": "\\operatorname{int}(\\operatorname{dom}(f)) \\times Z." }, { "math_id": 32, "text": "f" }, { "math_id": 33, "text": "\\partial f(x) = \\operatorname{conv} \\left\\{\\partial \\phi(x,z) : z \\in Z_0(x)\\right\\}" }, { "math_id": 34, "text": "\\partial \\phi(x,z)" }, { "math_id": 35, "text": "Z." } ]
https://en.wikipedia.org/wiki?curid=11033818
11034
Fluid dynamics
Aspects of fluid mechanics involving flow In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids — liquids and gases. It has several subdisciplines, including "aerodynamics" (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space and modelling fission weapon detonation. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time. Before the twentieth century, "hydrodynamics" was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases. Equations. The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum, and conservation of energy (also known as the First Law of Thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. At small scale, all fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption assumes that fluids are continuous, rather than discrete. Consequently, it is assumed that properties such as density, pressure, temperature, and flow velocity are well-defined at infinitesimally small points in space and vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored. For fluids that are sufficiently dense to be a continuum, do not contain ionized species, and have flow velocities that are small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier–Stokes equations—which is a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on flow velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in several ways, all of which make them easier to solve. Some of the simplifications allow some simple fluid dynamics problems to be solved in closed form. In addition to the mass, momentum, and energy conservation equations, a thermodynamic equation of state that gives the pressure as a function of other thermodynamic variables is required to completely describe the problem. An example of this would be the perfect gas equation of state: formula_0 where p is pressure, ρ is density, and T is the absolute temperature, while Ru is the gas constant and M is molar mass for a particular gas. A constitutive relation may also be useful. Conservation laws. 
Three conservation laws are used to solve fluid dynamics problems, and may be written in integral or differential form. The conservation laws may be applied to a region of the flow called a "control volume". A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Differential formulations of the conservation laws apply Stokes' theorem to yield an expression that may be interpreted as the integral form of the law applied to an infinitesimally small volume (at a point) within the flow. &lt;templatestyles src="Glossary/styles.css" /&gt; Classifications. Compressible versus incompressible flow. All fluids are compressible to an extent; that is, changes in pressure or temperature cause changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modelled as an incompressible flow. Otherwise the more general compressible flow equations must be used. Mathematically, incompressibility is expressed by saying that the density ρ of a fluid parcel does not change as it moves in the flow field, that is, formula_1 where D/Dt is the material derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density. For flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow is evaluated. As a rough guide, compressible effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate. Newtonian versus non-Newtonian fluids. All fluids, except superfluids, are viscous, meaning that they exert some resistance to deformation: neighbouring parcels of fluid moving at different velocities exert viscous forces on each other. The velocity gradient is referred to as a strain rate; it has dimensions "T"−1. Isaac Newton showed that for many familiar fluids such as water and air, the stress due to these viscous forces is linearly related to the strain rate. Such fluids are called Newtonian fluids. The coefficient of proportionality is called the fluid's viscosity; for Newtonian fluids, it is a fluid property that is independent of the strain rate. Non-Newtonian fluids have a more complicated, non-linear stress-strain behaviour. The sub-discipline of rheology describes the stress-strain behaviours of such fluids, which include emulsions and slurries, some viscoelastic materials such as blood and some polymers, and "sticky liquids" such as latex, honey and lubricants. Inviscid versus viscous versus Stokes flow. The dynamics of fluid parcels are described with the help of Newton's second law. An accelerating parcel of fluid is subject to inertial effects. The Reynolds number is a dimensionless quantity which characterises the magnitude of inertial effects compared to the magnitude of viscous effects. 
A low Reynolds number ("Re" ≪ 1) indicates that viscous forces are very strong compared to inertial forces. In such cases, inertial forces are sometimes neglected; this flow regime is called Stokes or creeping flow. In contrast, high Reynolds numbers ("Re" ≫ 1) indicate that the inertial effects have more effect on the velocity field than the viscous (friction) effects. In high Reynolds number flows, the flow is often modeled as an inviscid flow, an approximation in which viscosity is completely neglected. Eliminating viscosity allows the Navier–Stokes equations to be simplified into the Euler equations. The integration of the Euler equations along a streamline in an inviscid flow yields Bernoulli's equation. When, in addition to being inviscid, the flow is irrotational everywhere, Bernoulli's equation can completely describe the flow everywhere. Such flows are called potential flows, because the velocity field may be expressed as the gradient of a potential energy expression. This idea can work fairly well when the Reynolds number is high. However, problems such as those involving solid boundaries may require that the viscosity be included. Viscosity cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate, the boundary layer, in which viscosity effects dominate and which thus generates vorticity. Therefore, to calculate net forces on bodies (such as wings), viscous flow equations must be used: inviscid flow theory fails to predict drag forces, a limitation known as the d'Alembert's paradox. A commonly used model, especially in computational fluid dynamics, is to use two flow models: the Euler equations away from the body, and boundary layer equations in a region close to the body. The two solutions can then be matched with each other, using the method of matched asymptotic expansions. Steady versus unsteady flow. A flow that is not a function of time is called steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Time dependent flow is known as unsteady (also called transient). Whether a particular flow is steady or unsteady, can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady. Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. The random velocity field "U"("x", "t") is statistically stationary if all statistics are invariant under a shift in time. This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow. Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension fewer (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field. Laminar versus turbulent flow. Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. The presence of eddies or recirculation alone does not necessarily indicate turbulent flow—these phenomena may be present in laminar flow as well. 
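The rough classification criteria quoted above (a Mach number below about 0.3 for neglecting compressibility in a gas flow, and the Reynolds number separating creeping flow from inertia-dominated flow) can be illustrated with a short calculation. The following Python sketch is illustrative only: the thresholds are the rules of thumb stated in the text rather than sharp physical boundaries, and the fluid properties are approximate sea-level values for air.

```python
def reynolds_number(density, velocity, length, dynamic_viscosity):
    """Re = rho * U * L / mu, the ratio of inertial to viscous effects."""
    return density * velocity * length / dynamic_viscosity

def mach_number(velocity, speed_of_sound):
    """Ma = U / a."""
    return velocity / speed_of_sound

def classify(re, ma):
    """Apply the rough rules of thumb quoted in the text."""
    compressibility = "incompressible (Ma < 0.3)" if ma < 0.3 else "compressible"
    regime = "Stokes (creeping) flow" if re < 1 else "inertia-dominated flow (Re >> 1)"
    return compressibility, regime

# Air at sea level (approximate properties) flowing past a 3 m chord at 20 m/s.
rho, mu, a = 1.225, 1.81e-5, 340.0      # kg/m^3, Pa*s, m/s
U, L = 20.0, 3.0                        # m/s, m
re, ma = reynolds_number(rho, U, L, mu), mach_number(U, a)
print(f"Re = {re:.3g}, Ma = {ma:.3g}")  # Re of order 4e6, Ma of order 0.06
print(classify(re, ma))
```

With these illustrative numbers the Reynolds number comes out near 4 million, the same order as the flight-vehicle example in the turbulence discussion that follows, while the Mach number of about 0.06 is well inside the incompressible regime.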
Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component and a perturbation component. It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations. Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS have been found to agree well with experimental data for some flows. Most flows of interest have Reynolds numbers much too high for DNS to be a viable option, given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human (L &gt; 3 m), moving faster than about 20 m/s, is well beyond the limit of DNS simulation (Re = 4 million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord dimension). Solving these real-life flow problems requires turbulence models for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modelling provides a model of the effects of the turbulent flow. Such modelling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the form of detached eddy simulation (DES) — a combination of LES and RANS turbulence modelling. Other approximations. There are a large number of other possible approximations to fluid dynamic problems. Some of the more commonly used are listed below. Multidisciplinary types. Flows according to Mach regimes. While many flows (such as flow of water through a pipe) occur at low Mach numbers (subsonic flows), many flows of practical interest in aerodynamics or in turbomachines occur at high fractions of "M" = 1 (transonic flows) or in excess of it (supersonic or even hypersonic flows). New phenomena occur at these regimes such as instabilities in transonic flow, shock waves for supersonic flow, or non-equilibrium chemical behaviour due to ionization in hypersonic flows. In practice, each of those flow regimes is treated separately. Reactive versus non-reactive flows. Reactive flows are flows that are chemically reactive, which find applications in many areas, including combustion (IC engine), propulsion devices (rockets, jet engines, and so on), detonations, fire and safety hazards, and astrophysics. In addition to conservation of mass, momentum and energy, conservation of individual species (for example, mass fraction of methane in methane combustion) needs to be derived, where the production/depletion rate of any species is obtained by simultaneously solving the equations of chemical kinetics. Magnetohydrodynamics. Magnetohydrodynamics is the multidisciplinary study of the flow of electrically conducting fluids in electromagnetic fields. Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism. Relativistic fluid dynamics. Relativistic fluid dynamics studies the macroscopic and microscopic fluid motion at large velocities comparable to the velocity of light.
This branch of fluid dynamics accounts for the relativistic effects both from the special theory of relativity and the general theory of relativity. The governing equations are derived in Riemannian geometry for Minkowski spacetime. Fluctuating hydrodynamics. This branch of fluid dynamics augments the standard hydrodynamic equations with stochastic fluxes that model thermal fluctuations. As formulated by Landau and Lifshitz, a white noise contribution obtained from the fluctuation-dissipation theorem of statistical mechanics is added to the viscous stress tensor and heat flux. Terminology. The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods. Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics. Terminology in incompressible fluid dynamics. The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense—they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field. A point in a fluid flow where the flow has come to rest (that is to say, speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name—a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name—stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field. Terminology in compressible fluid dynamics. In a compressible fluid, it is convenient to define the total conditions (also called stagnation conditions) for all thermodynamic state properties (such as total temperature, total enthalpy, total speed of sound). These total flow conditions are a function of the fluid velocity and have different values in frames of reference with different motion. To avoid potential ambiguity when referring to the properties of the fluid associated with the state of the fluid rather than its motion, the prefix "static" is commonly used (such as static temperature and static enthalpy). Where there is no prefix, the fluid property is the static condition (so "density" and "static density" mean the same thing). The static conditions are independent of the frame of reference. Because the total flow conditions are defined by isentropically bringing the fluid to rest, there is no need to distinguish between total entropy and static entropy as they are always equal by definition. As such, entropy is most commonly referred to as simply "entropy". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
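As a small illustration of the incompressible-flow terminology above, Bernoulli's equation relates static, dynamic, and total (stagnation) pressure. The sketch below assumes a steady incompressible flow of uniform density; the relation q = (1/2)ρu² for the dynamic pressure is standard, but the sample numbers are arbitrary.

```python
def dynamic_pressure(density, speed):
    """q = 0.5 * rho * u**2 for incompressible flow."""
    return 0.5 * density * speed ** 2

def stagnation_pressure(static_pressure, density, speed):
    """Total (stagnation) pressure = static + dynamic pressure, i.e. the pressure
    attained when the fluid is brought to rest without losses."""
    return static_pressure + dynamic_pressure(density, speed)

# Water (rho ~ 1000 kg/m^3) moving at 2 m/s with a static pressure of 101.3 kPa.
p_static, rho, u = 101_300.0, 1000.0, 2.0
print(f"dynamic pressure    = {dynamic_pressure(rho, u):.1f} Pa")      # 2000.0 Pa
print(f"stagnation pressure = {stagnation_pressure(p_static, rho, u):.1f} Pa")
```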
[ { "math_id": 0, "text": "p= \\frac{\\rho R_u T}{M}" }, { "math_id": 1, "text": "\\frac{\\mathrm{D} \\rho}{\\mathrm{D}t} = 0 \\, ," } ]
https://en.wikipedia.org/wiki?curid=11034
11034989
Trigonometric moment problem
In mathematics, the trigonometric moment problem is formulated as follows: given a finite sequence formula_0, does there exist a distribution function formula_1 on the interval formula_2 such that: formula_3 In other words, an affirmative answer to the problem means that formula_0 are the first "n" + 1 Fourier coefficients of some measure formula_1 on formula_2. Characterization. The trigonometric moment problem is solvable, that is, formula_4 is a sequence of Fourier coefficients, if and only if the ("n" + 1) × ("n" + 1) Hermitian Toeplitz matrix formula_5 with formula_6 for formula_7, is positive semi-definite. The "only if" part of the claim can be verified by a direct calculation. We sketch an argument for the converse. The positive semidefinite matrix formula_8 defines a sesquilinear product on formula_9, resulting in a Hilbert space formula_10 of dimension at most "n" + 1. The Toeplitz structure of formula_8 means that a "truncated" shift is a partial isometry on formula_11. More specifically, let formula_12 be the standard basis of formula_9. Let formula_13 and formula_14 be the subspaces generated by the equivalence classes formula_15 and formula_16, respectively. Define an operator formula_17 by formula_18 Since formula_19 formula_20 can be extended to a partial isometry acting on all of formula_11. Take a minimal unitary extension formula_21 of formula_20, on a possibly larger space (this always exists). According to the spectral theorem, there exists a Borel measure formula_22 on the unit circle formula_23 such that for every integer "k" formula_24 For formula_25, the left hand side is formula_26 So formula_27 which is equivalent to formula_28 for some suitable measure formula_1. Parametrization of solutions. The above discussion shows that the trigonometric moment problem has infinitely many solutions if the Toeplitz matrix formula_8 is invertible. In that case, the solutions to the problem are in bijective correspondence with minimal unitary extensions of the partial isometry formula_20. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
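The solvability criterion above, positive semi-definiteness of the Hermitian Toeplitz matrix built from the moments, is straightforward to test numerically. The sketch below assumes NumPy is available; the example sequence (1, 1/2, 0) consists of the first moments of the measure (1/2π)(1 + cos θ) dθ and therefore passes the test, while the second sequence cannot come from any measure because |c_1| exceeds c_0.

```python
import numpy as np

def toeplitz_from_moments(c):
    """Build the (n+1) x (n+1) Hermitian Toeplitz matrix with entries T[j, k] = c_{k-j},
    using c_{-k} = conj(c_k), from the list c = [c_0, c_1, ..., c_n]."""
    n = len(c) - 1
    T = np.empty((n + 1, n + 1), dtype=complex)
    for j in range(n + 1):
        for k in range(n + 1):
            d = k - j
            T[j, k] = c[d] if d >= 0 else np.conj(c[-d])
    return T

def is_solvable(c, tol=1e-12):
    """The moment problem for c_0, ..., c_n is solvable iff T is positive semi-definite."""
    eigenvalues = np.linalg.eigvalsh(toeplitz_from_moments(c))
    return bool(np.all(eigenvalues >= -tol))

print(is_solvable([1.0, 0.5, 0.0]))   # True:  moments of (1/2pi)(1 + cos(theta)) d(theta)
print(is_solvable([1.0, 2.0, 0.0]))   # False: no measure has |c_1| > c_0
```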
[ { "math_id": 0, "text": "\\{c_0,\\dotsc,c_n\\}" }, { "math_id": 1, "text": "\\mu" }, { "math_id": 2, "text": "[0,2\\pi]" }, { "math_id": 3, "text": "c_k = \\frac{1}{2 \\pi}\\int_0 ^{2 \\pi} e^{-ik\\theta}\\,d \\mu(\\theta)." }, { "math_id": 4, "text": "\\{c_k\\}_{k=0}^{n}" }, { "math_id": 5, "text": "\nT =\n\\left(\\begin{matrix}\nc_0 & c_1 & \\cdots & c_n \\\\\nc_{-1} & c_0 & \\cdots & c_{n-1} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\nc_{-n} & c_{-n+1} & \\cdots & c_0 \\\\\n\\end{matrix}\\right)" }, { "math_id": 6, "text": "c_{-k}=\\overline{c_{k}}" }, { "math_id": 7, "text": "k \\geq 1" }, { "math_id": 8, "text": "T" }, { "math_id": 9, "text": "\\mathbb{C}^{n+1}" }, { "math_id": 10, "text": "(\\mathcal{H}, \\langle \\;,\\; \\rangle)" }, { "math_id": 11, "text": "\\mathcal{H}" }, { "math_id": 12, "text": "\\{e_0,\\dotsc,e_n\\}" }, { "math_id": 13, "text": "\\mathcal{E}" }, { "math_id": 14, "text": "\\mathcal{F}" }, { "math_id": 15, "text": "\\{[e_0],\\dotsc,[e_{n-1}]\\}" }, { "math_id": 16, "text": "\\{[e_1],\\dotsc,[e_{n}]\\}" }, { "math_id": 17, "text": "V: \\mathcal{E} \\rightarrow \\mathcal{F}" }, { "math_id": 18, "text": "V[e_k] = [e_{k+1}] \\quad \\mbox{for} \\quad k = 0 \\ldots n-1." }, { "math_id": 19, "text": "\\langle V[e_j], V[e_k] \\rangle = \\langle [e_{j+1}], [e_{k+1}] \\rangle = T_{j+1, k+1} = T_{j, k} = \\langle [e_{j}], [e_{k}] \\rangle," }, { "math_id": 20, "text": "V" }, { "math_id": 21, "text": "U" }, { "math_id": 22, "text": "m" }, { "math_id": 23, "text": "\\mathbb{T}" }, { "math_id": 24, "text": "\\langle (U^*)^k [ e_ {n+1} ], [ e_ {n+1} ] \\rangle = \\int_{\\mathbb{T}} z^{k} dm ." }, { "math_id": 25, "text": "k = 0,\\dotsc,n" }, { "math_id": 26, "text": "\n\\langle (U^*)^k [ e_ {n+1} ], [ e_ {n+1} ] \\rangle\n= \\langle (V^*)^k [ e_ {n+1} ], [ e_{n+1} ] \\rangle\n= \\langle [e_{n+1-k}], [ e_{n+1} ] \\rangle\n= T_{n+1, n+1-k}\n= c_{-k}=\\overline{c_k}.\n" }, { "math_id": 27, "text": "\nc_k = \\int_{\\mathbb{T}} z^{-k} dm\n= \\int_{\\mathbb{T}} \\bar{z}^k dm\n" }, { "math_id": 28, "text": " c_k = \\frac{1}{2 \\pi} \\int_0 ^{2 \\pi} e^{-ik\\theta} d\\mu(\\theta) " } ]
https://en.wikipedia.org/wiki?curid=11034989
11035198
Discrete phase-type distribution
The discrete phase-type distribution is a probability distribution that results from a system of one or more inter-related geometric distributions occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of an absorbing Markov chain with one absorbing state. Each of the states of the Markov chain represents one of the phases. It has a continuous-time equivalent, the phase-type distribution. Definition. A terminating Markov chain is a Markov chain where all states are transient, except one which is absorbing. Reordering the states, the transition probability matrix of a terminating Markov chain with formula_0 transient states is formula_1 where formula_2 is a formula_3 matrix, formula_4 and formula_5 are column vectors with formula_0 entries, and formula_6. The transition matrix is characterized entirely by its upper-left block formula_2. Definition. A distribution on formula_7 is a discrete phase-type distribution if it is the distribution of the first passage time to the absorbing state of a terminating Markov chain with finitely many states. Characterization. Fix a terminating Markov chain. Denote by formula_2 the upper-left block of its transition matrix and by formula_8 the initial distribution. The distribution of the first time to the absorbing state is denoted formula_9 or formula_10. Its cumulative distribution function is formula_11 for formula_12, and its density function is formula_13 for formula_14. It is assumed that the probability of the process starting in the absorbing state is zero. The factorial moments of the distribution function are given by formula_15 where formula_16 is the identity matrix of the appropriate dimension. Special cases. Just as the continuous-time distribution is a generalisation of the exponential distribution, the discrete-time distribution is a generalisation of the geometric distribution; for example, a chain with a single transient state yields the geometric distribution itself.
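The density and distribution function above are direct matrix expressions in the pair (τ, T), so they are easy to evaluate numerically. The following sketch assumes NumPy; the two-phase chain and its transition probabilities are invented purely for illustration.

```python
import numpy as np

def dph_pmf(k, tau, T):
    """f(k) = tau @ T^(k-1) @ T0 for k = 1, 2, ..., where T0 = (I - T) @ 1
    collects the one-step absorption probabilities."""
    T, tau = np.asarray(T, float), np.asarray(tau, float)
    t0 = (np.eye(T.shape[0]) - T) @ np.ones(T.shape[0])
    return float(tau @ np.linalg.matrix_power(T, k - 1) @ t0)

def dph_cdf(k, tau, T):
    """F(k) = 1 - tau @ T^k @ 1."""
    T, tau = np.asarray(T, float), np.asarray(tau, float)
    return float(1.0 - tau @ np.linalg.matrix_power(T, k) @ np.ones(T.shape[0]))

tau = [1.0, 0.0]          # start in phase 1
T = [[0.4, 0.5],          # hypothetical transient block
     [0.0, 0.8]]          # phase 2 is absorbed with probability 0.2 per step
print([round(dph_pmf(k, tau, T), 4) for k in range(1, 6)])
print(round(dph_cdf(20, tau, T), 4))                      # close to 1 for large k
```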
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": "\n{P}=\\left[\\begin{matrix}{T}&\\mathbf{T}^0\\\\\\mathbf{0}^\\mathsf{T}&1\\end{matrix}\\right],\n" }, { "math_id": 2, "text": "{T}" }, { "math_id": 3, "text": "m\\times m" }, { "math_id": 4, "text": "\\mathbf{T}^0" }, { "math_id": 5, "text": "\\mathbf{0}" }, { "math_id": 6, "text": "\\mathbf{T}^0+{T}\\mathbf{1}=\\mathbf{1}" }, { "math_id": 7, "text": "\\{0,1,2,...\\}" }, { "math_id": 8, "text": "\\tau" }, { "math_id": 9, "text": "\\mathrm{PH}_{d}(\\boldsymbol{\\tau},{T})" }, { "math_id": 10, "text": "\\mathrm{DPH}(\\boldsymbol{\\tau},{T})" }, { "math_id": 11, "text": "\nF(k)=1-\\boldsymbol{\\tau}{T}^{k}\\mathbf{1},\n" }, { "math_id": 12, "text": "k=1,2,... " }, { "math_id": 13, "text": "\nf(k)=\\boldsymbol{\\tau}{T}^{k-1}\\mathbf{T^{0}},\n" }, { "math_id": 14, "text": " k=1,2,... " }, { "math_id": 15, "text": "\nE[K(K-1)...(K-n+1)]=n!\\boldsymbol{\\tau}(I-{T})^{-n}{T}^{n-1}\\mathbf{1},\n" }, { "math_id": 16, "text": "I " } ]
https://en.wikipedia.org/wiki?curid=11035198
1103773
Fractional Fourier transform
N-th power Fourier transform In mathematics, in the area of harmonic analysis, the fractional Fourier transform (FRFT) is a family of linear transformations generalizing the Fourier transform. It can be thought of as the Fourier transform to the "n"-th power, where "n" need not be an integer — thus, it can transform a function to any "intermediate" domain between time and frequency. Its applications range from filter design and signal analysis to phase retrieval and pattern recognition. The FRFT can be used to define fractional convolution, correlation, and other operations, and can also be further generalized into the linear canonical transformation (LCT). An early definition of the FRFT was introduced by Condon, by solving for the Green's function for phase-space rotations, and also by Namias, generalizing work of Wiener on Hermite polynomials. However, it was not widely recognized in signal processing until it was independently reintroduced around 1993 by several groups. Since then, there has been a surge of interest in extending Shannon's sampling theorem for signals which are band-limited in the Fractional Fourier domain. A completely different meaning for "fractional Fourier transform" was introduced by Bailey and Swartztrauber as essentially another name for a z-transform, and in particular for the case that corresponds to a discrete Fourier transform shifted by a fractional amount in frequency space (multiplying the input by a linear chirp) and evaluating at a fractional set of frequency points (e.g. considering only a small portion of the spectrum). (Such transforms can be evaluated efficiently by Bluestein's FFT algorithm.) This terminology has fallen out of use in most of the technical literature, however, in preference to the FRFT. The remainder of this article describes the FRFT. Introduction. The continuous Fourier transform formula_0 of a function formula_1 is a unitary operator of formula_2 space that maps the function formula_3 to its frequential version formula_4 (all expressions are taken in the formula_2 sense, rather than pointwise): formula_5 and formula_3 is determined by formula_4 via the inverse transform formula_6 formula_7 Let us study its "n"-th iterated formula_8 defined by formula_9 and formula_10 when "n" is a non-negative integer, and formula_11. Their sequence is finite since formula_0 is a 4-periodic automorphism: for every function formula_3, formula_12. More precisely, let us introduce the parity operator formula_13 that inverts formula_14, formula_15. Then the following properties hold: formula_16 formula_17 The FRFT provides a family of linear transforms that further extends this definition to handle non-integer powers formula_18 of the FT. Definition. Note: some authors write the transform in terms of the "order a" instead of the "angle α", in which case the α is usually a times "π"/2. Although these two forms are equivalent, one must be careful about which definition the author uses. For any real α, the α-angle fractional Fourier transform of a function ƒ is denoted by formula_19 and defined by formula_20 Formally, this formula is only valid when the input function is in a sufficiently nice space (such as L1 or Schwartz space), and is defined via a density argument, in a way similar to that of the ordinary Fourier transform (see article), in the general case. If α is an integer multiple of π, then the cotangent and cosecant functions above diverge. However, this can be handled by taking the limit, and leads to a Dirac delta function in the integrand. 
More directly, since formula_21 must be simply "f"("t") or "f"(−"t") for α an even or odd multiple of π respectively. For "α" = "π"/2, this becomes precisely the definition of the continuous Fourier transform, and for "α" = −"π"/2 it is the definition of the inverse continuous Fourier transform. The FRFT argument u is neither a spatial one x nor a frequency ξ. We will see why it can be interpreted as a linear combination of both coordinates ("x","ξ"). When we want to distinguish the α-angular fractional domain, we will let formula_22 denote the argument of formula_23. Remark: with the angular frequency ω convention instead of the frequency one, the FRFT formula is the Mehler kernel, formula_24 Properties. The "α"-th order fractional Fourier transform operator, formula_23, has the properties: Additivity. For any real angles "α, β", formula_25 Linearity. formula_26 Integer Orders. If "α" is an integer multiple of formula_27, then: formula_28 Moreover, it has the following relation formula_29 Inverse. formula_30 Commutativity. formula_31 Associativity. formula_32 Unitarity. formula_33 Time Reversal. formula_34 formula_35 Transform of a shifted function. Define the shift and the phase shift operators as follows: formula_36 Then formula_37 that is, formula_38 Transform of a scaled function. Define the scaling and chirp multiplication operators as follows: formula_39 Then, formula_40 Notice that the fractional Fourier transform of formula_41 cannot be expressed as a scaled version of formula_42. Rather, the fractional Fourier transform of formula_41 turns out to be a scaled and chirp modulated version of formula_43 where formula_44 is a different order. Fractional kernel. The FRFT is an integral transform formula_45 where the α-angle kernel is formula_46 Here again the special cases are consistent with the limit behavior when α approaches a multiple of π. The FRFT has the same properties as its kernels: symmetry formula_47, inversion formula_48, and additivity formula_49. Related transforms. There also exist related fractional generalizations of similar transforms such as the discrete Fourier transform. Generalizations. The Fourier transform is essentially bosonic; it works because it is consistent with the superposition principle and related interference patterns. There is also a fermionic Fourier transform. These have been generalized into a supersymmetric FRFT, and a supersymmetric Radon transform. There is also a fractional Radon transform, a symplectic FRFT, and a symplectic wavelet transform. Because quantum circuits are based on unitary operations, they are useful for computing integral transforms as the latter are unitary operators on a function space. A quantum circuit has been designed which implements the FRFT. Interpretation. The usual interpretation of the Fourier transform is as a transformation of a time domain signal into a frequency domain signal. On the other hand, the interpretation of the inverse Fourier transform is as a transformation of a frequency domain signal into a time domain signal. Fractional Fourier transforms transform a signal (either in the time domain or frequency domain) into the domain between time and frequency: it is a rotation in the time–frequency domain. This perspective is generalized by the linear canonical transformation, which generalizes the fractional Fourier transform and allows linear transforms of the time–frequency domain other than rotation. Take the figure below as an example. If the signal in the time domain is rectangular (as below), it becomes a sinc function in the frequency domain.
But if one applies the fractional Fourier transform to the rectangular signal, the transformation output will be in the domain between time and frequency. The fractional Fourier transform is a rotation operation on a time–frequency distribution. From the definition above, for "α" = 0, there will be no change after applying the fractional Fourier transform, while for "α" = "π"/2, the fractional Fourier transform becomes a plain Fourier transform, which rotates the time–frequency distribution with "π"/2. For other value of "α", the fractional Fourier transform rotates the time–frequency distribution according to α. The following figure shows the results of the fractional Fourier transform with different values of "α". Application. Fractional Fourier transform can be used in time frequency analysis and DSP. It is useful to filter noise, but with the condition that it does not overlap with the desired signal in the time–frequency domain. Consider the following example. We cannot apply a filter directly to eliminate the noise, but with the help of the fractional Fourier transform, we can rotate the signal (including the desired signal and noise) first. We then apply a specific filter, which will allow only the desired signal to pass. Thus the noise will be removed completely. Then we use the fractional Fourier transform again to rotate the signal back and we can get the desired signal. Thus, using just truncation in the time domain, or equivalently low-pass filters in the frequency domain, one can cut out any convex set in time–frequency space. In contrast, using time domain or frequency domain tools without a fractional Fourier transform would only allow cutting out rectangles parallel to the axes. Fractional Fourier transforms also have applications in quantum physics. For example, they are used to formulate entropic uncertainty relations, in high-dimensional quantum key distribution schemes with single photons, and in observing spatial entanglement of photon pairs. They are also useful in the design of optical systems and for optimizing holographic storage efficiency. See also. Other time–frequency transforms: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
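The integral definition given earlier can be checked numerically by brute-force quadrature. The sketch below, which assumes NumPy, approximates F_α[f](u) with a simple Riemann sum on a truncated grid; since the Gaussian e^(−πx²) is the lowest-order Hermite–Gaussian eigenfunction, it should be reproduced (up to discretisation error) for every angle α, and α = π/2 should reduce to the ordinary Fourier transform. The grid parameters are ad hoc, and angles that are multiples of π are excluded.

```python
import numpy as np

def frft(f, alpha, u, x_max=8.0, n=16001):
    """Fractional Fourier transform of angle alpha at the single point u, computed by
    direct quadrature of the defining integral (alpha must not be a multiple of pi)."""
    x = np.linspace(-x_max, x_max, n)
    dx = x[1] - x[0]
    cot, csc = 1.0 / np.tan(alpha), 1.0 / np.sin(alpha)
    prefactor = np.sqrt(1.0 - 1j * cot) * np.exp(1j * np.pi * cot * u ** 2)
    integrand = np.exp(-2j * np.pi * (csc * u * x - 0.5 * cot * x ** 2)) * f(x)
    return prefactor * integrand.sum() * dx

gaussian = lambda x: np.exp(-np.pi * x ** 2)

for alpha in (np.pi / 2, np.pi / 3, 0.8):
    u_values = np.array([0.0, 0.5, 1.0])
    computed = np.array([frft(gaussian, alpha, u) for u in u_values])
    error = np.max(np.abs(computed - gaussian(u_values)))
    print(f"alpha = {alpha:.4f}, deviation from exp(-pi u^2): {error:.1e}")  # tiny
```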
[ { "math_id": 0, "text": "\\mathcal{F}" }, { "math_id": 1, "text": "f: \\mathbb{R} \\mapsto \\mathbb{C}" }, { "math_id": 2, "text": "L^2" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "\\hat{f}" }, { "math_id": 5, "text": "\\hat{f}(\\xi) = \\int_{-\\infty}^{\\infty} f(x)\\ e^{- 2\\pi i x \\xi}\\,\\mathrm{d}x" }, { "math_id": 6, "text": "\\mathcal{F}^{-1}\\, ," }, { "math_id": 7, "text": "f(x) = \\int_{-\\infty}^{\\infty} \\hat{f}(\\xi)\\ e^{2 \\pi i \\xi x}\\,\\mathrm{d}\\xi\\, ." }, { "math_id": 8, "text": "\\mathcal{F}^{n}" }, { "math_id": 9, "text": "\\mathcal{F}^{n}[f] = \\mathcal{F}[\\mathcal{F}^{n-1}[f]]" }, { "math_id": 10, "text": "\\mathcal{F}^{-n} = (\\mathcal{F}^{-1})^n" }, { "math_id": 11, "text": "\\mathcal{F}^{0}[f] = f" }, { "math_id": 12, "text": "\\mathcal{F}^4 [f] = f" }, { "math_id": 13, "text": "\\mathcal{P}" }, { "math_id": 14, "text": "x" }, { "math_id": 15, "text": "\\mathcal{P}[f]\\colon x \\mapsto f(-x)" }, { "math_id": 16, "text": "\\mathcal{F}^0 = \\mathrm{Id}, \\qquad \\mathcal{F}^1 = \\mathcal{F}, \\qquad \\mathcal{F}^2 = \\mathcal{P}, \\qquad \\mathcal{F}^4 = \\mathrm{Id}" }, { "math_id": 17, "text": "\\mathcal{F}^3 = \\mathcal{F}^{-1} = \\mathcal{P} \\circ \\mathcal{F} = \\mathcal{F} \\circ \\mathcal{P}." }, { "math_id": 18, "text": "n = 2\\alpha/\\pi" }, { "math_id": 19, "text": "\\mathcal{F}_\\alpha (u)" }, { "math_id": 20, "text": "\\mathcal{F}_\\alpha[f](u) = \n\\sqrt{1-i\\cot(\\alpha)} e^{i \\pi \\cot(\\alpha) u^2} \n\\int_{-\\infty}^\\infty \ne^{-2\\pi i\\left(\\csc(\\alpha) u x - \\frac{\\cot(\\alpha)}{2} x^2\\right)}\nf(x)\\, \\mathrm{d}x\n" }, { "math_id": 21, "text": "\\mathcal{F}^2(f)=f(-t)~, ~~\\mathcal{F}_{\\alpha} ~ (f) " }, { "math_id": 22, "text": "x_a" }, { "math_id": 23, "text": "\\mathcal{F}_\\alpha" }, { "math_id": 24, "text": "\\mathcal{F}_\\alpha(f)(\\omega) = \n\\sqrt{\\frac{1-i\\cot(\\alpha)}{2\\pi}} \ne^{i \\cot(\\alpha) \\omega^2/2} \n\\int_{-\\infty}^\\infty \ne^{-i\\csc(\\alpha) \\omega t + i \\cot(\\alpha) t^2/2}\nf(t)\\, dt~. \n" }, { "math_id": 25, "text": "\\mathcal{F}_{\\alpha+\\beta} = \\mathcal{F}_\\alpha \\circ \\mathcal{F}_\\beta = \\mathcal{F}_\\beta \\circ \\mathcal{F}_\\alpha." 
}, { "math_id": 26, "text": "\\mathcal{F}_\\alpha \\left [\\sum\\nolimits_k b_kf_k(u) \\right ]=\\sum\\nolimits_k b_k\\mathcal{F}_\\alpha \\left [f_k(u) \\right ]" }, { "math_id": 27, "text": "\\pi / 2" }, { "math_id": 28, "text": "\\mathcal{F}_\\alpha = \\mathcal{F}_{k\\pi/2} = \\mathcal{F}^k = (\\mathcal{F})^k" }, { "math_id": 29, "text": "\\begin{align}\n\\mathcal{F}^2 &= \\mathcal{P} && \\mathcal{P}[f(u)]=f(-u)\\\\\n\\mathcal{F}^3 &= \\mathcal{F}^{-1} = (\\mathcal{F})^{-1} \\\\ \n\\mathcal{F}^4 &= \\mathcal{F}^0 = \\mathcal{I} \\\\\n\\mathcal{F}^i &= \\mathcal{F}^j && i \\equiv j \\mod 4\n\\end{align}" }, { "math_id": 30, "text": "(\\mathcal{F}_\\alpha)^{-1}=\\mathcal{F}_{-\\alpha}" }, { "math_id": 31, "text": "\\mathcal{F}_{\\alpha_1}\\mathcal{F}_{\\alpha_2}=\\mathcal{F}_{\\alpha_2}\\mathcal{F}_{\\alpha_1}" }, { "math_id": 32, "text": " \\left (\\mathcal{F}_{\\alpha_1}\\mathcal{F}_{\\alpha_2} \\right )\\mathcal{F}_{\\alpha_3} = \\mathcal{F}_{\\alpha_1} \\left (\\mathcal{F}_{\\alpha_2}\\mathcal{F}_{\\alpha_3} \\right )" }, { "math_id": 33, "text": "\\int f(t)g^*(t)dt=\\int f_\\alpha(u)g_\\alpha^*(u)du" }, { "math_id": 34, "text": "\\mathcal{F}_\\alpha\\mathcal{P}=\\mathcal{P}\\mathcal{F}_\\alpha" }, { "math_id": 35, "text": "\\mathcal{F}_\\alpha[f(-u)]=f_\\alpha(-u)" }, { "math_id": 36, "text": "\\begin{align}\n\\mathcal{SH}(u_0)[f(u)] &= f(u+u_0) \\\\\n\\mathcal{PH}(v_0)[f(u)] &= e^{j2\\pi v_0u}f(u)\n\\end{align}" }, { "math_id": 37, "text": "\\begin{align}\n\\mathcal{F}_\\alpha \\mathcal{SH}(u_0) &= e^{j\\pi u_0^2 \\sin\\alpha \\cos\\alpha} \\mathcal{PH}(u_0\\sin\\alpha) \\mathcal{SH}(u_0\\cos\\alpha) \\mathcal{F}_\\alpha,\n\\end{align}" }, { "math_id": 38, "text": "\\begin{align}\n\\mathcal{F}_\\alpha [f(u+u_0)] &=e^{j\\pi u_0^2 \\sin\\alpha \\cos\\alpha} e^{j2\\pi uu_0 \\sin\\alpha} f_\\alpha (u+u_0 \\cos\\alpha)\n\\end{align}" }, { "math_id": 39, "text": "\\begin{align}\nM(M)[f(u)] &= |M|^{-\\frac{1}{2}} f \\left (\\tfrac{u}{M} \\right) \\\\\nQ(q)[f(u)] &= e^{-j\\pi qu^2 } f(u)\n\\end{align}" }, { "math_id": 40, "text": "\\begin{align}\n\\mathcal{F}_\\alpha M(M) &= Q \\left (-\\cot \\left (\\frac{1-\\cos^2 \\alpha'}{\\cos^2 \\alpha}\\alpha \\right ) \\right)\\times M \\left (\\frac{\\sin \\alpha}{M\\sin \\alpha'} \\right )\\mathcal{F}_{\\alpha'} \\\\ [6pt]\n\\mathcal{F}_\\alpha \\left [|M|^{-\\frac{1}{2}} f \\left (\\tfrac{u}{M} \\right) \\right ] &= \\sqrt{\\frac{1-j \\cot\\alpha}{1-jM^2 \\cot\\alpha}} e^{j\\pi u^2\\cot \\left (\\frac{1-\\cos^2 \\alpha'}{\\cos^2 \\alpha}\\alpha \\right )} \\times f_a \\left (\\frac{Mu \\sin\\alpha'}{\\sin\\alpha} \\right )\n\\end{align}" }, { "math_id": 41, "text": "f(u/M)" }, { "math_id": 42, "text": "f_\\alpha (u)" }, { "math_id": 43, "text": "f_{\\alpha'}(u)" }, { "math_id": 44, "text": "\\alpha\\neq\\alpha'" }, { "math_id": 45, "text": "\\mathcal{F}_\\alpha f (u) = \\int K_\\alpha (u, x) f(x)\\, \\mathrm{d}x" }, { "math_id": 46, "text": "K_\\alpha (u, x) = \\begin{cases}\\sqrt{1-i\\cot(\\alpha)} \\exp \\left(i \\pi (\\cot(\\alpha)(x^2+ u^2) -2 \\csc(\\alpha) u x) \\right) & \\mbox{if } \\alpha \\mbox{ is not a multiple of }\\pi, \\\\\n\\delta (u - x) & \\mbox{if } \\alpha \\mbox{ is a multiple of } 2\\pi, \\\\\n\\delta (u + x) & \\mbox{if } \\alpha+\\pi \\mbox{ is a multiple of } 2\\pi, \\\\\n\\end{cases}" }, { "math_id": 47, "text": "K_\\alpha~(u, u')=K_\\alpha ~(u', u)" }, { "math_id": 48, "text": "K_\\alpha^{-1} (u, u') = K_\\alpha^* (u, u') = K_{-\\alpha} (u', u) " }, { "math_id": 49, "text": "K_{\\alpha+\\beta} (u,u') = \\int K_\\alpha 
(u, u'') K_\\beta (u'', u')\\,\\mathrm{d}u''." } ]
https://en.wikipedia.org/wiki?curid=1103773
1103898
DD-transpeptidase
Bacterial enzyme DD-transpeptidase (EC 3.4.16.4, "DD-peptidase", "DD-transpeptidase", "DD-carboxypeptidase", "D-alanyl-D-alanine carboxypeptidase", "D-alanyl-D-alanine-cleaving-peptidase", "D-alanine carboxypeptidase", "D-alanyl carboxypeptidase", and "serine-type D-Ala-D-Ala carboxypeptidase".) is a bacterial enzyme that catalyzes the transfer of the R-L-αα-D-alanyl moiety of R-L-αα-D-alanyl-D-alanine carbonyl donors to the γ-OH of their active-site serine and from this to a final acceptor. It is involved in bacterial cell wall biosynthesis, namely, the transpeptidation that crosslinks the peptide side chains of peptidoglycan strands. The antibiotic penicillin irreversibly binds to and inhibits the activity of the transpeptidase enzyme by forming a highly stable penicilloyl-enzyme intermediate. Because of the interaction between penicillin and transpeptidase, this enzyme is also known as penicillin-binding protein (PBP). Mechanism. DD-transpeptidase is mechanistically similar to the proteolytic reactions of the trypsin protein family. Crosslinking of peptidyl moieties of adjacent glycan strands is a two-step reaction. The first step involves the cleavage of the D-alanyl-D-alanine bond of a peptide unit precursor acting as carbonyl donor, the release of the carboxyl-terminal D-alanine, and the formation of the acyl-enzyme. The second step involves the breakdown of the acyl-enzyme intermediate and the formation of a new peptide bond between the carbonyl of the D-alanyl moiety and the amino group of another peptide unit. Most discussion of DD-peptidase mechanisms revolves around the catalysts of proton transfer. During formation of the acyl-enzyme intermediate, a proton must be removed from the active site serine hydroxyl group and one must be added to the amine leaving group. A similar proton movement must be facilitated in deacylation. The identity of the general acid and base catalysts involved in these proton transfers has not yet been elucidated. However, the catalytic triad tyrosine, lysine, and serine, as well as serine, lysine, serine have been proposed. Structure. Transpeptidases are members of the penicilloyl-serine transferase superfamily, which has a signature SxxK conserved motif. With "x" denoting a variable amino acid residue, the transpeptidases of this superfamily show a trend in the form of three motifs: SxxK, SxN (or analogue), and KTG (or analogue). These motifs occur at equivalent places, and are roughly equally spaced, along the polypeptide chain. The folded protein brings these motifs close to each other at the catalytic center between an all-α domain and an α/β domain. The structure of the streptomyces K15 DD-transpeptidase has been studied, and consists of a single polypeptide chain organized into two domains. One domain contains mainly α-helices, and the second one is of α/β-type. The center of the catalytic cleft is occupied by the Ser35-Thr36-Thr37-Lys38 tetrad, which includes the nucleophilic Ser35 residue at the amino-terminal end of helix α2. One side of the cavity is defined by the Ser96-Gly97-Cys98 loop connecting helices α4 and α5. The Lys213-Thr214-Gly215 triad lies on strand β3 on the opposite side of the cavity. The backbone NH group of the essential Ser35 residue and that of Ser216 downstream from the motif Lys213-Thr214-Gly215 occupy positions that are compatible with the oxyanion hole function required for catalysis. 
The enzyme is classified as a DD-transpeptidase because the susceptible peptide bond of the carbonyl donor extends between two carbon atoms with the D-configuration. Biological Function. All bacteria possess at least one, most often several, monofunctional serine DD-peptidases. Disease Relevance. This enzyme is an excellent drug target because it is essential, is accessible from the periplasm, and has no equivalent in mammalian cells. DD-transpeptidase is the target protein of β-lactam antibiotics (e.g. penicillin). This is because the structure of the β-lactam closely resembles the D-Ala-D-Ala residue. β-lactams exert their effect by competitively inactivating the serine DD-transpeptidase catalytic site. Penicillin is a cyclic analogue of the D-Ala-D-Ala terminated carbonyl donors; therefore, in the presence of this antibiotic, the reaction stops at the level of the serine ester-linked penicilloyl enzyme. Thus β-lactam antibiotics force these enzymes to behave like penicillin binding proteins. Kinetically, the interaction between the DD-peptidase and β-lactams is a three-step reaction: formula_0 β-lactams may form an adduct E-I* of high stability with DD-transpeptidase. The half-life of this adduct is on the order of hours, whereas the half-life of the normal reaction is on the order of milliseconds. The interference with the enzyme processes responsible for cell wall formation results in cellular lysis and death due to the triggering of the autolytic system in the bacteria. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
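The three-step kinetic scheme above can be explored with a toy mass-action simulation. The rate constants below are invented for illustration only and are not measured values for any real DD-transpeptidase/β-lactam pair; the point is merely that a fast, reversible binding step followed by slow, practically irreversible acylation accumulates the long-lived E-I* adduct (the very slow final hydrolysis to E + P is neglected on this time scale).

```python
def simulate(k1, k_minus1, k2, e0=1.0, i0=10.0, t_end=50.0, dt=1e-3):
    """Explicit Euler integration of E + I <-> EI -> EI* under mass-action kinetics.
    Concentrations and rate constants are in arbitrary, mutually consistent units."""
    e, i, ei, ei_star = e0, i0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        v_bind = k1 * e * i - k_minus1 * ei   # net rate of the reversible first step
        v_acyl = k2 * ei                      # rate of the (nearly) irreversible second step
        e += dt * (-v_bind)
        i += dt * (-v_bind)
        ei += dt * (v_bind - v_acyl)
        ei_star += dt * v_acyl
    return e, ei, ei_star

# Hypothetical constants: fast but weak binding, slow acylation.
free_e, ei, adduct = simulate(k1=1.0, k_minus1=10.0, k2=0.1)
print(round(free_e, 3), round(ei, 3), round(adduct, 3))   # most enzyme ends up as EI*
```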
[ { "math_id": 0, "text": "E+I\\rightleftharpoons E\\cdot I\\rightarrow E-I*\\rightarrow E + P" } ]
https://en.wikipedia.org/wiki?curid=1103898
11040097
Calo tester
The Calo tester, also known as a ball crater or coating thickness tester, is a simple and inexpensive piece of equipment used to measure the thickness of coatings. Coatings with thicknesses typically between 0.1 and 50 micrometres, such as Physical Vapor Deposition (PVD) coatings or Chemical Vapor Deposition (CVD) coatings, are used in many industries to improve the surface properties of tools and components. The Calo tester is also used to measure the amount of coating wear after a wear test carried out using a Pin-on-Disc Tester. The Calo tester consists of a holder for the surface to be tested and a steel sphere of known diameter that is rotated against the surface by a rotating shaft connected to a motor whilst diamond paste is applied to the contact area. The sphere is rotated for a short period of time (less than 20 seconds for a 0.1 to 5 micrometre thickness) but due to the abrasive nature of the diamond paste this is sufficient time to wear a crater through thin coatings. Calculating coating thickness using the Calo tester. An optical microscope is used to take two measurements across the crater after the Calo test and the coating thickness is calculated using a simple geometrical equation: formula_0 where t = coating thickness, d = diameter of the sphere, x = difference between the radius of the crater and the radius of the part of the crater at the bottom of the coating, and x + y = diameter of the crater. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
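A short helper applying the formula quoted above. The crater measurements in the example are made-up numbers of a realistic order of magnitude (a millimetre-scale crater produced by a 25 mm ball, giving a coating thickness of a few micrometres); all lengths must be expressed in the same unit.

```python
def coating_thickness(x, y, d):
    """Ball-crater thickness t = x * y / d, with x, y and d in the same length unit."""
    return x * y / d

# Illustrative measurements in mm: 25 mm ball, crater widths x = 0.10 mm and y = 0.50 mm.
t_mm = coating_thickness(x=0.10, y=0.50, d=25.0)
print(f"coating thickness = {t_mm * 1000:.2f} micrometres")   # 2.00 micrometres
```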
[ { "math_id": 0, "text": "t = \\frac{x y}d" } ]
https://en.wikipedia.org/wiki?curid=11040097
11041424
Carl Johan Malmsten
Swedish mathematician and politician (1814–1886) Carl Johan Malmsten (April 9, 1814, in Uddetorp, Skara County, Sweden – February 11, 1886, in Uppsala, Sweden) was a Swedish mathematician and politician. He is notable for early research into the theory of functions of a complex variable, for the evaluation of several important logarithmic integrals and series, for his studies in the theory of Zeta-function related series and integrals, as well as for helping Mittag-Leffler start the journal "Acta Mathematica". Malmsten became Docent in 1840 and then Professor of mathematics at Uppsala University in 1842. He was elected a member of the Royal Swedish Academy of Sciences in 1844. He was also a minister without portfolio in 1859–1866 and Governor of Skaraborg County in 1866–1879. Main contributions. Usually, Malmsten is known for his earlier works in complex analysis. However, he also contributed greatly to other branches of mathematics, but his results were undeservedly forgotten and many of them were erroneously attributed to other persons. Thus, it was comparatively recently that it was discovered by Iaroslav Blagouchine that Malmsten was the first to evaluate several important logarithmic integrals and series, which are closely related to the gamma- and zeta-functions, and among which we can find the so-called "Vardi's integral" and the "Kummer's series" for the logarithm of the Gamma function. In particular, in 1842 he evaluated the following lnln-logarithmic integrals: formula_0 formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 formula_7 formula_8 The details and an interesting historical analysis are given in Blagouchine's paper. Many of these integrals were later rediscovered by various researchers, including Vardi, Adamchik, Medina and Moll. Moreover, some authors even named the first of these integrals after Vardi, who re-evaluated it in 1988 (they call it "Vardi's integral"), and so did many well-known internet resources such as the Wolfram MathWorld site or the OEIS Foundation site (taking into account Malmsten's undoubted priority in the evaluation of such logarithmic integrals, it seems that the name "Malmsten's integrals" would be more appropriate for them). Malmsten derived the above formulae by making use of different series representations. At the same time, it has been shown that they can also be evaluated by methods of contour integration, by making use of the Hurwitz Zeta function, by employing polylogarithms and by using L-functions. More complicated forms of Malmsten's integrals appear in works of Adamchik and Blagouchine (more than 70 integrals). Below are several examples of such integrals: formula_9 formula_10 formula_11 formula_12 formula_13 formula_14 where "m" and "n" are positive integers such that "m"&lt;"n", G is Catalan's constant, ζ stands for the Riemann zeta-function, Ψ is the digamma function, and Ψ1 is the trigamma function; see respectively eq. (43), (47) and (48) in Adamchik for the first three integrals, and exercises no. 36-a, 36-b, 11-b and 13-b in Blagouchine for the last four integrals respectively (the third integral being calculated in both works). It is curious that some of Malmsten's integrals lead to the gamma- and polygamma functions of a complex argument, which are not often encountered in analysis. For instance, as shown by Iaroslav Blagouchine, formula_15 or formula_16 (see exercises 7-а and 37 respectively). Malmsten's integrals are also closely connected to the Stieltjes constants.
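The first of the integrals above, the one frequently referred to as Vardi's integral, is easy to verify numerically. The sketch below assumes the mpmath library; both sides agree to roughly the working precision.

```python
from mpmath import mp, quad, log, gamma, pi, sqrt

mp.dps = 30   # working precision in decimal digits

# Left-hand side: the integral of ln(ln(1/x)) / (1 + x^2) over (0, 1).
lhs = quad(lambda x: log(log(1 / x)) / (1 + x ** 2), [0, 1])

# Right-hand side: (pi/2) * ln( sqrt(2*pi) * Gamma(3/4) / Gamma(1/4) ).
rhs = (pi / 2) * log(gamma(mp.mpf(3) / 4) / gamma(mp.mpf(1) / 4) * sqrt(2 * pi))

print(lhs)             # approximately -0.2604
print(rhs)             # the same value
print(abs(lhs - rhs))  # negligible difference
```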
In 1842, Malmsten also evaluated several important logarithmic series, among which we can find these two series formula_17 and formula_18 The latter series was later rediscovered in a slightly different form by Ernst Kummer, who derived a similar expression formula_19 in 1847 (strictly speaking, Kummer's result is obtained from Malmsten's one by putting a=π(2x-1)). Moreover, this series is even known in analysis as "Kummer's series" for the logarithm of the Gamma function, although Malmsten derived it 5 years before Kummer. Malmsten also notably contributed to the theory of zeta-function related series and integrals. In 1842 he proved the following important functional relationship for the L-function formula_20 as well as for the M-function formula_21 where in both formulae 0 &lt; "s" &lt; 1. This relationship had been suggested earlier by Euler (L. Euler, "Remarques sur un beau rapport entre les séries des puissances tant directes que réciproques", Histoire de l'Académie Royale des Sciences et Belles-Lettres, année MDCCLXI, Tome 17, pp. 83-106, A Berlin, chez Haude et Spener, Libraires de la Cour et de l'Académie Royale, 1768 [read in 1749]), but it was Malmsten who proved it (Euler only suggested this formula and verified it for several integer and semi-integer values of s). Curiously enough, the same formula for L(s) was unconsciously rediscovered by Oscar Schlömilch in 1849 (proof provided only in 1858). Four years later, Malmsten derived several other similar reflection formulae, which turn out to be particular cases of Hurwitz's functional equation. Speaking of Malmsten's contribution to the theory of zeta-functions, we cannot fail to mention the very recent discovery of his authorship of the reflection formula for the first generalized Stieltjes constant at rational argument formula_22 where "m" and "n" are positive integers such that "m"&lt;"n". This identity was derived, albeit in a slightly different form, by Malmsten already in 1846 and has also been discovered independently several times by various authors. In particular, in the literature devoted to Stieltjes constants, it is often attributed to Almkvist and Meurman, who derived it in the 1990s.
[ { "math_id": 0, "text": "\\int_0^1 \\!\\frac{\\,\\ln\\ln\\frac{1}{x}\\,}{1+x^2}\\,dx\\, =\n\\,\\int_1^\\infty \\!\\frac{\\,\\ln\\ln{x}\\,}{1+x^2}\\,dx\\, =\n\\,\\frac{\\pi}{\\,2\\,}\\ln\\left\\{ \\frac{\\Gamma{(3/4)}}{\\Gamma{(1/4)}}\\sqrt{2\\pi\\,}\\right\\}" }, { "math_id": 1, "text": "\\int_0^{1}\\frac{\\ln\\ln\\frac{1}{x}}{(1+x)^2}\\,dx = \\int\\limits_1^{\\infty}\n\\!\\frac{\\ln\\ln{x}}{(1+x)^2}\\,dx = \n\\frac{1}{2} \\bigl(\\ln\\pi - \\ln2 -\\gamma\\bigr),\n" }, { "math_id": 2, "text": "\\int\\limits_0^{1}\\! \\frac{\\ln\\ln\\frac{1}{x}}{1-x+x^2}\\,dx = \n\\int_1^{\\infty}\\! \\frac{\\ln\\ln{x}}{1-x+x^2}\\,dx = \n\\frac{2\\pi}{\\sqrt{3}}\\ln \\biggl\\{ \\frac{\\sqrt[6]{32\\pi^5\n}}{\\Gamma{(1/6)}} \\biggr\\}\n" }, { "math_id": 3, "text": "\\int\\limits_0^{1}\\! \\frac{\\ln\\ln\\frac{1}{x}}{1+x+x^2}\\,dx =\n\\int\\limits_1^{\\infty}\\! \\frac{\\ln\\ln{x}}{1+x+x^2}\\,dx =\n\\frac{\\pi}{\\sqrt{3}}\\ln \\biggl\\{ \\frac{\\Gamma{(2/3)}}{\\Gamma\n{(1/3)}}\\sqrt[3]{2\\pi}\n\\biggr\\}\n" }, { "math_id": 4, "text": "\n \\int\\limits_0^1 \\!\\frac{\\ln\\ln\\frac{1}{x}}{1+2x\\cos\n\\varphi+x^2}\n\\,dx \\,=\\int\\limits_1^{\\infty}\\!\\frac{\\ln\\ln{x}}{1+2x\\cos\\varphi+x^2}\\,dx\n =\n\\frac{\\pi}{2\\sin\\varphi}\\ln \\left\\{\\frac{(2\\pi)^{\\frac{\\scriptstyle\\varphi}{\\scriptstyle\\pi}}\n\\,\\Gamma\\!\\left(\\!\\displaystyle\\frac{1}{\\,2\\,}+\\frac{\\varphi}{\\,2\\pi\\,}\\!\\right)}\n{\\Gamma\\!\\left(\\!\\displaystyle\\frac{1}{\\,2\\,}-\\frac{\\varphi}{\\,2\\pi\\,}\\!\\right)}\\right\\} ,\n\\qquad -\\pi<\\varphi<\\pi\n" }, { "math_id": 5, "text": "\\int\\limits_0^{1} \\!\\frac{x^{n-2}\\ln\\ln\\frac{1}{x}}{1-x^2+x^4-\\cdots\n+x^{2n-2}}\\,dx\\, = \\int\\limits_1^{\\infty}\\!\\frac{x^{n-2}\\ln\\ln{x}}{1-x^2+x^4-\\cdots\n+x^{2n-2}}\\,dx =\n" }, { "math_id": 6, "text": "\\quad =\\, \\frac{\\pi}{\\,2n\\,}\\sec\\frac{\\,\\pi\\,}{2n}\\!\\cdot\\ln \\pi + \n \\frac{\\pi}{\\,n\\,}\\cdot\\!\\!\\!\\!\\!\\!\\sum_{l=1}^{\\;\\;\\frac{1}{2}(n-1)} \n\\!\\!\\!\\! (-1)^{l-1} \\cos\\frac{\\,(2l-1)\\pi\\,}{2n}\\cdot \n\\ln\\left\\{\\!\\frac{\\Gamma\\!\\left(1-\\displaystyle\\frac{2l-1}{2n}\\right) }\n{\\Gamma\\!\\left(\\displaystyle\\frac{2l-1}{2n}\\right)}\\right\\} ,\\qquad n=3,5,7,\\ldots \n" }, { "math_id": 7, "text": "\\int\\limits_0^{1} \\!\\frac{x^{n-2} \\ln\\ln\\frac{1}{x}}{\n1+x^2+x^4+\\cdots+x^{2n-2}}\\,dx \\, =\n\\int\\limits_1^{\\infty}\\!\\frac{x^{n-2} \\ln\\ln{x}}{1+x^2+x^4+\\cdots\n+x^{2n-2}}\\,dx =\n" }, { "math_id": 8, "text": " \\qquad =\\begin{cases}\n\\displaystyle \\frac{\\,\\pi\\,}{2n}\\tan\\frac{\\,\\pi\\,}{2n}\\ln2\\pi + \\frac{\\pi}{n}\\sum_{l=1}^{n-1} (-1)^{l-1} \n\\sin\\frac{\\,\\pi l\\,}{n}\\cdot \n\\ln\\left\\{\\!\\frac{\\Gamma\\!\\left(\\!\\displaystyle\\frac{1}{\\,2\\,}+\\displaystyle\\frac{l}{\\,2n}\\!\\right) }{\\Gamma\\!\\left(\\!\\displaystyle\\frac{l}{\\,2n}\\!\\right)}\\right\\}\n,\\quad n=2,4,6,\\ldots \\\\[10mm]\n\\displaystyle \\frac{\\,\\pi\\,}{2n}\\tan\\frac{\\,\\pi\\,}{2n}\\ln\\pi + \\frac{\\pi}{n}\\!\\!\\!\\!\\!\n\\sum_{l=1}^{\\;\\;\\;\\frac{1}{2}(n-1)} \\!\\!\\!\\! 
(-1)^{l-1} \\sin\\frac{\\,\\pi l\\,}{n}\\cdot \n\\ln\\left\\{\\!\\frac{\\Gamma\\!\\left(1-\\displaystyle\\frac{\\,l}{n}\\!\\right) }{\\Gamma\\!\\left(\\!\\displaystyle\\frac{\\,l}{n}\\!\\right)}\\right\\} ,\\qquad n=3,5,7,\\ldots \n\\end{cases}\n" }, { "math_id": 9, "text": " \n \\int\\limits_0^1 \\frac{\\ln\\ln\\frac{1}{x}}{1+x^3}\\,dx \n=\\int\\limits_1^\\infty \\frac{x\\ln\\ln x}{1+x^3}\\,dx \n=\\frac{\\ln2}{6}\\ln\\frac{3}{2}-\\frac{\\pi}{6\\sqrt3}\n\\left\\{\\ln54-8\\ln2\\pi+12\\ln\\Gamma\\left(\\frac{1}{3}\\right) \\right\\}\n" }, { "math_id": 10, "text": " \n \\int\\limits_0^1 \\!\\frac{x\\ln\\ln\\frac{1}{x}}{(1-x+x^2)^2}\\,dx \n=\\int\\limits_1^\\infty \\!\\frac{x\\ln\\ln x}{(1-x+x^2)^2}\\,dx \n=-\\frac{\\gamma}{3}-\\frac{1}{3}\\ln\\frac{6\\sqrt3}{\\pi} + \\frac{\\pi\\sqrt3}{27}\n\\left\\{5\\ln2\\pi-6\\ln\\Gamma\\left(\\frac{1}{6}\\right) \\right\\}\n" }, { "math_id": 11, "text": " \n\\int\\limits_0^1 \\frac{\\left(x^4-6x^2+1\\right)\\ln\\ln\\frac{1}{x}}{\\,(1+x^2)^3\\,}\\, dx=\n\\int\\limits_1^\\infty \\frac{\\left(x^4-6x^2+1\\right)\\ln\\ln{x}}{\\,(1+x^2)^3\\,}\\, dx = \\frac{2 \\,\\mathrm{G}}{\\pi}\n" }, { "math_id": 12, "text": " \n\\int\\limits_0^1 \\frac{x\\left(x^4-4x^2+1\\right)\\ln\\ln\\frac{1}{x}}{\\,(1+x^2)^4\\,}\\, dx =\n\\int\\limits_1^\\infty \\frac{x\\left(x^4-4x^2+1\\right)\\ln\\ln{x}}{\\,(1+x^2)^4\\,}\\, dx = \\frac{7 \\zeta(3)}{8\\pi^2}\n" }, { "math_id": 13, "text": " \n\\begin{array}{ll}\n\\displaystyle\n\\int\\limits_0^1 \\frac{x\\!\\left(x^{\\frac{m}{n}}-x^{-\\frac{m}{n}}\\right)^{\\!2}\\ln\\ln\\frac{1}{x}}{\\,(1-x^2)^2\\,}\\, dx =\n\\int\\limits_1^\\infty \\frac{x\\!\\left(x^{\\frac{m}{n}}-x^{-\\frac{m}{n}}\\right)^{\\!2}\\ln\\ln{x}}{\\,(1-x^2)^2\\,}\\, dx = \\!\\!\\!&\\displaystyle\n\\frac{\\,m\\pi\\,}{\\,n\\,} \\sum_{l=1}^{n-1} \\sin\\dfrac{2\\pi m l}{n}\\cdot\\ln\\Gamma\\!\\left(\\!\\frac{l}{n}\\!\\right) \n- \\,\\frac{\\pi m}{\\,2n\\,}\\cot\\frac{\\pi m}{n}\\cdot\\ln\\pi n \\\\[3mm]\n &\\displaystyle\n- \\,\\frac{\\,1\\,}{2}\\ln\\!\\left(\\!\\frac{\\,2\\,}{\\pi}\\sin\\frac{\\,m\\pi\\,}{n}\\!\\right) \n- \\,\\frac{\\gamma}{2}\n\\end{array}\n" }, { "math_id": 14, "text": " \n\\begin{array}{l}\n\\displaystyle\n\\int\\limits_0^1 \\frac{x^2\\!\\left(x^{\\frac{m}{n}}+x^{-\\frac{m}{n}}\\right)\\ln\\ln\\frac{1}{x}}{\\,(1+x^2)^3\\,}\\, dx =\n\\int\\limits_1^\\infty \\frac{x^2\\!\\left(x^{\\frac{m}{n}}+x^{-\\frac{m}{n}}\\right)\\ln\\ln{x}}{\\,(1+x^2)^3\\,}\\, dx = \n-\\frac{\\,\\pi\\left(n^2-m^2\\right)\\,}{8n^2}\\!\\sum_{l=0}^{2n-1} \\! (-1)^l\n\\cos\\dfrac{(2l+1)m\\pi}{2n} \\cdot\\ln\\Gamma\\!\\left(\\!\\frac{2l+1}{4n}\\right) \\\\[3mm]\n\\displaystyle \\,\\,\n+\\frac{\\,m\\,}{\\,8n^2\\,}\\! \\sum_{l=0}^{2n-1} \\! 
(-1)^l \\sin\\dfrac{(2l+1)m\\pi}{2n}\\cdot \\Psi\\!\\left(\\!\\frac{2l+1}{4n}\\right) \n-\\frac{\\,1\\,}{\\,32\\pi n^2\\,} \\!\\sum_{l=0}^{2n-1}(-1)^l \\cos\\dfrac{(2l+1)m\\pi}{2n}\\cdot \\Psi_1\\!\\left(\\!\\frac{2l+1}{4n}\\right) \n+ \\,\\frac{\\,\\pi (n^2-m^2)\\,}{16n^2}\\sec\\dfrac{m \\pi}{2n}\\cdot\\ln2\\pi n\n\\end{array}\n" }, { "math_id": 15, "text": " \n \\int\\limits_0^1 \\!\\frac{x\\ln\\ln\\frac{1}{x}}{1+4x^2+x^4}\\,dx \n=\\int\\limits_1^{\\infty}\\!\\frac{x\\ln\\ln{x}}{1+4x^2+x^4}\\,dx\n= \\frac{\\,\\pi\\,}{\\,2\\sqrt{3\\,}\\,}\n\\mathrm{Im}\\!\\left[\\ln\\Gamma\\!\\left(\\!\\frac{1}{2}-\\frac{\\ln(2+\\sqrt{3\\,})}{2\\pi i}\\right)\\!\\right] +\\,\n\\frac{\\ln(2+\\sqrt{3\\,})}{\\,4\\sqrt{3\\,}\\,}\\ln\\pi\n" }, { "math_id": 16, "text": " \n\\int\\limits_{0}^{1} \\!\\frac{\\,x \\ln\\ln\\frac{1}{x}\\,}{\\,x^4-2x^2\\cosh{2}+1\\,}\\,dx =\n\\int\\limits_{1}^{\\infty} \\!\\frac{\\,x \\ln\\ln{x}\\,}{\\,x^4-2x^2\\cosh{2}+1\\,}\\,dx\n=-\\frac{\\,\\pi\\,}{2\\,\\sinh{2}\\,}\n\\mathrm{Im}\\!\\left[\\ln\\Gamma\\!\\left(\\!\\frac{i}{2\\pi}\\right) \n- \\ln\\Gamma\\!\\left(\\!\\frac{1}{2}-\\frac{i}{2\\pi}\\right)\\!\\right]\n-\\frac{\\,\\pi^2}{8\\,\\sinh{2}\\,}-\\frac{\\,\\ln2\\pi\\,}{2\\,\\sinh{2}\\,} \n" }, { "math_id": 17, "text": " \n\\sum_{n=0}^{\\infty}(-1)^{n}\\frac{\\ln(2n+1)}{2n+1} \\,=\\,\\frac{\\pi}{4}\\big(\\ln\\pi - \\gamma) -\\pi\\ln\\Gamma\\left(\\frac{3}{4}\\right)\n" }, { "math_id": 18, "text": " \n\\sum_{n=1}^{\\infty}(-1)^{n-1}\n\\frac{\\sin a n \\cdot\\ln{n}}{n} \\,=\\,\\pi\\ln\\left\\{\\frac{\\pi^{\\frac{1}{2}-\\frac{a}{2\\pi}}}{\\Gamma\\left(\\displaystyle\\frac{1}{2}+\\frac{a}{2\\pi}\\right)}\\right\\} - \\frac{a}{2}\\big(\\gamma+\\ln2 \\big) -\\frac{\\pi}{2}\\ln\\cos\\frac{a}{2}\\,,\n\\qquad -\\pi<a<\\pi.\n" }, { "math_id": 19, "text": " \n\\frac{1}{\\pi}\\sum_{n=1}^{\\infty}\\frac{\\sin 2\\pi n x \\cdot\\ln{n}}{n} =\n\\ln\\Gamma(x) - \\frac{1}{2}\\ln(2\\pi) + \\frac{1}{2}\\ln(2\\sin\\pi x) - \\frac{1}{2}(\\gamma+\\ln2\\pi)(1-2x)\\,, \\qquad 0<x<1,\n" }, { "math_id": 20, "text": "L(s)\\equiv\\sum_{n=0}^{\\infty}\\frac{(-1)^n}{(2n+1)^s} \\qquad\\qquad\nL(1-s)=L(s)\\Gamma(s) 2^s \\pi^{-s}\\sin\\frac{\\pi s}{2}, \n" }, { "math_id": 21, "text": "M(s)\\equiv\\frac{2}{\\sqrt{3}}\\sum_{n=1}^{\\infty}\\frac{(-1)^{n+1}}{n^s} \\sin\\frac{\\pi n}{3} \\qquad\\qquad\nM(1-s)=\\displaystyle\\frac{2}{\\sqrt{3}} \\, M(s)\\Gamma(s) 3^s (2\\pi)^{-s}\\sin\\frac{\\pi s}{2}, \n" }, { "math_id": 22, "text": "\n\\gamma_1 \\biggl(\\frac{m}{n}\\biggr)- \\gamma_1 \\biggl(1-\\frac{m}{n} \n\\biggr) =2\\pi\\sum_{l=1}^{n-1} \\sin\\frac{2\\pi m l}{n} \\cdot\\ln\\Gamma \\biggl(\\frac{l}{n} \\biggr)\n-\\pi(\\gamma+\\ln2\\pi n)\\cot\\frac{m\\pi}{n}\n" } ]
https://en.wikipedia.org/wiki?curid=11041424
11042546
Oxygen enhancement ratio
The oxygen enhancement ratio (OER) or oxygen enhancement effect in radiobiology refers to the enhancement of therapeutic or detrimental effect of ionizing radiation due to the presence of oxygen. This so-called oxygen effect is most notable when cells are exposed to an ionizing radiation dose. The OER is traditionally defined as the ratio of radiation doses during lack of oxygen compared to no lack of oxygen for the same biological effect. This may give varying numerical values depending on the chosen biological effect. Additionally, OER may be presented in terms of hyperoxic environments and/or with altered oxygen baseline, complicating the significance of this value. formula_0 The maximum OER depends mainly on the ionizing density or LET of the radiation. Radiation with higher LET and higher relative biological effectiveness (RBE) have a lower OER in mammalian cell tissues. The value of the maximum OER varies from about 1–4. The maximum OER ranges from about 2–4 for low-LET radiations such as X-rays, beta particles and gamma rays, whereas the OER is unity for high-LET radiations such as low energy alpha particles. Uses in medicine. The effect is used in medical physics to increase the effect of radiation therapy in oncology treatments. Additional oxygen abundance creates additional free radicals and increases the damage to the target tissue. In solid tumors the inner parts become less oxygenated than normal tissue and up to three times higher dose is needed to achieve the same tumor control probability as in tissue with normal oxygenation. Explanation of the Oxygen Effect. The best known explanation of the oxygen effect is the oxygen fixation hypothesis which postulates that oxygen permanently fixes radical-induced DNA damage so it becomes permanent. Recently, it has been posited that the oxygen effect involves radiation exposures of cells causing their mitochondria to produce greater amounts of reactive oxygen species. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Eric J. Hall and Amato J. Giaccia: Radiobiology for the radiologist, Lippincott Williams &amp; Wilkins, 6th Ed., 2006
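A trivial helper for the ratio defined above; the dose values in the example are illustrative only and do not refer to any particular cell line or endpoint.

```python
def oxygen_enhancement_ratio(dose_hypoxia, dose_air):
    """OER = dose required under hypoxia / dose required in air, for the same effect."""
    return dose_hypoxia / dose_air

# Illustrative doses (Gy) producing the same level of cell killing.
print(oxygen_enhancement_ratio(dose_hypoxia=7.5, dose_air=2.5))   # 3.0
```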
[ { "math_id": 0, "text": "OER = \\frac{Radiation\\, dose\\, in\\, hypoxia} {Radiation\\, dose\\, in\\, air}" } ]
https://en.wikipedia.org/wiki?curid=11042546
11042759
Addition-subtraction chain
An addition-subtraction chain, a generalization of addition chains to include subtraction, is a sequence "a"0, "a"1, "a"2, "a"3, ... that satisfies formula_0 formula_1 An addition-subtraction chain for "n", of length "L", is an addition-subtraction chain such that formula_2. That is, one can thereby compute "n" by "L" additions and/or subtractions. (Note that "n" need not be positive. In this case, one may also include "a"−1 = 0 in the sequence, so that "n" = −1 can be obtained by a chain of length 1.) By definition, every addition chain is also an addition-subtraction chain, but not vice versa. Therefore, the length of the "shortest" addition-subtraction chain for "n" is bounded above by the length of the shortest addition chain for "n". In general, however, the determination of a minimal addition-subtraction chain (like the problem of determining a minimum addition chain) is a difficult problem for which no efficient algorithms are currently known. The related problem of finding an optimal addition sequence is NP-complete (Downey et al., 1981), but it is not known for certain whether finding optimal addition or addition-subtraction chains is NP-hard. For example, one addition-subtraction chain is: formula_3, formula_4, formula_5, formula_6. This is not a "minimal" addition-subtraction chain for "n"=3, however, because we could instead have chosen formula_7. The smallest "n" for which an addition-subtraction chain is shorter than the minimal addition chain is "n"=31, which can be computed in only 6 additions (rather than 7 for the minimal addition chain): formula_8 Like an addition chain, an addition-subtraction chain can be used for addition-chain exponentiation: given the addition-subtraction chain of length "L" for "n", the power formula_9 can be computed by multiplying or dividing by "x" "L" times, where the subtractions correspond to divisions. This is potentially efficient in problems where division is an inexpensive operation, most notably for exponentiation on elliptic curves where division corresponds to a mere sign change (as proposed by Morain and Olivos, 1990). Some hardware multipliers multiply by "n" using an addition chain described by n in binary: n = 31 = 0 0 0 1 1 1 1 1 (binary). Other hardware multipliers multiply by "n" using an addition-subtraction chain described by n in Booth encoding: n = 31 = 0 0 1 0 0 0 0 −1 (Booth encoding).
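One systematic way to obtain a short addition-subtraction chain, in the spirit of the Booth encoding mentioned above, is signed-binary (non-adjacent form) recoding of n. The sketch below recodes n and then drives an exponentiation with multiplications and divisions; for n = 31 it performs exactly the 6 operations of the chain shown above. This recoding is only a heuristic and does not always yield a minimal chain.

```python
from fractions import Fraction

def naf(n):
    """Non-adjacent form (signed binary) digits of a positive integer n, least significant first."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # digit is +1 or -1
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def power_by_naf(x, n):
    """Compute x**n using the addition-subtraction chain implied by the NAF of n;
    a digit of -1 corresponds to a subtraction in the chain, i.e. a division here."""
    digits = naf(n)[::-1]       # most significant first; the leading digit is +1
    result, operations = x, 0
    for d in digits[1:]:
        result, operations = result * result, operations + 1       # doubling step
        if d == 1:
            result, operations = result * x, operations + 1        # addition step
        elif d == -1:
            result, operations = result / x, operations + 1        # subtraction step
    return result, operations

value, ops = power_by_naf(Fraction(2), 31)
print(value, ops)   # 2147483648 6, matching the length-6 chain for n = 31
```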
[ { "math_id": 0, "text": "a_0 = 1, \\," }, { "math_id": 1, "text": "\\text{for }k > 0,\\ a_k = a_i \\pm a_j\\text{ for some }0 \\leq i,j < k." }, { "math_id": 2, "text": "a_L = n" }, { "math_id": 3, "text": "a_0=1" }, { "math_id": 4, "text": "a_1=2=1+1" }, { "math_id": 5, "text": "a_2=4=2+2" }, { "math_id": 6, "text": "a_3=3=4-1" }, { "math_id": 7, "text": "a_2=3=2+1" }, { "math_id": 8, "text": "a_0=1,\\ a_1=2=1+1,\\ a_2=4=2+2,\\ a_3=8=4+4,\\ a_4=16=8+8,\\ a_5=32=16+16,\\ a_6=31=32-1." }, { "math_id": 9, "text": "x^n" } ]
https://en.wikipedia.org/wiki?curid=11042759
11044843
Hayashi limit
Value in astrophysics The Hayashi limit is a theoretical constraint upon the maximum radius of a star for a given mass. When a star is fully in hydrostatic equilibrium—a condition in which the inward force of gravity is matched by the outward pressure of the gas—it cannot exceed the radius defined by the Hayashi limit. This has important implications for the evolution of a star, both during the formative contraction period and later, when the star has consumed most of its hydrogen supply through nuclear fusion. A Hertzsprung–Russell diagram displays a plot of a star's surface temperature against its luminosity. On this diagram, the Hayashi limit forms a nearly vertical line at about 3,500 K. The outer layers of low-temperature stars are always convective, and models of stellar structure for fully convective stars do not provide a solution to the right of this line. Thus in theory, stars are constrained to remain to the left of this limit during all periods when they are in hydrostatic equilibrium, and the region to the right of the line forms a type of "forbidden zone". Note, however, that there are exceptions to the Hayashi limit. These include collapsing protostars, as well as stars with magnetic fields that interfere with the internal transport of energy through convection. Red giants are stars that have expanded their outer envelope in order to support the nuclear fusion of helium. This moves them up and to the right on the H–R diagram. However, they are constrained by the Hayashi limit not to expand beyond a certain radius. Stars that find themselves to the right of the Hayashi limit have large convection currents in their interior, driven by massive temperature gradients. In addition, such states are unstable, so the stars rapidly adjust, moving in the Hertzsprung–Russell diagram until they reach the Hayashi limit. When lower-mass main-sequence stars start expanding and become red giants, they revisit the Hayashi track. The Hayashi limit constrains the asymptotic-giant-branch evolution of stars, which is important in their late evolution and can be observed, for example, in the ascending branches of the Hertzsprung–Russell diagrams of globular clusters, whose stars have approximately the same age and composition. The Hayashi limit is named after Chūshirō Hayashi, a Japanese astrophysicist. Despite its importance to protostars and late-stage main-sequence stars, the Hayashi limit was only recognized in Hayashi's paper of 1961. This late recognition may be because the properties of the Hayashi track required numerical calculations that had not been fully developed earlier. Derivation of the limit. We can derive the relation between luminosity, temperature and pressure for a simple model of a fully convective star, and from the form of this relation we can infer the Hayashi limit. This is an extremely crude model of what occurs in convective stars, but it has good qualitative agreement with the full model and involves fewer complications. We follow the derivation given by Kippenhahn, Weigert, and Weiss in Stellar Structure and Evolution. Nearly all of the interior of a convective star has an adiabatic stratification (corrections to this are small for fully convective regions), such that formula_0, which holds for an adiabatic expansion of an ideal gas. We assume that this relation holds from the interior up to the surface of the star, which is called the photosphere, and we take the adiabatic gradient ∇adiabatic to be constant throughout the interior with the value 0.4.
This is not exact, but we obtain the correct distinctive behavior. For the interior we consider a simple polytropic relation between P and T: formula_1 with index formula_2. We assume the relation above to hold up to the photosphere, where we assume a simple absorption law for the opacity, formula_3 Then we use the hydrostatic equilibrium equation and integrate it with respect to the radius, which gives formula_4 For the solution in the interior we set formula_5 ; formula_6 in the P–T relation and then eliminate the pressure from this equation. The luminosity is given by the Stefan–Boltzmann law applied to a perfect black body: formula_7. Thus, any value of R corresponds to a certain point in the Hertzsprung–Russell diagram. Finally, after some algebra, the equation for the Hayashi limit in the Hertzsprung–Russell diagram is formula_8 with coefficients formula_9, formula_10 Plugging in formula_11 and formula_12 for a cool, hydrogen-ion-dominated atmospheric opacity model (formula_13), the Hayashi limit comes out as a steep, nearly vertical line in the Hertzsprung–Russell diagram at roughly 3,500 K, as described in the introduction, whose position depends only weakly on the stellar mass. These predictions are supported by numerical simulations of stars. What happens when stars cross the limit. Until now we have made no claims about the stability of the regions to the left of, to the right of, or at the Hayashi limit in the Hertzsprung–Russell diagram. To the left of the Hayashi limit we have formula_14, and some part of the model is radiative. The model is fully convective at the Hayashi limit, with formula_15. Models to the right of the Hayashi limit would have to have formula_16. If a star is formed such that some region in its deep interior has formula_17, large convective fluxes with velocities formula_18 arise. The convective fluxes of energy cool down the interior rapidly, until formula_19 and the star has moved back to the Hayashi limit. In fact, it can be shown from the mixing-length model that even a small excess can transport energy from the deep interior to the surface by convective fluxes. This happens within the short timescale for the adjustment of convection, which is longer than the timescale of hydrodynamic adjustment but much shorter than the thermal timescale of the star. Hence, the limit between an "allowed" stable region (left) and a "forbidden" unstable region (right), for stars of given M and composition that are in hydrostatic equilibrium and have fully adjusted convection, is the Hayashi limit. References. <templatestyles src="Reflist/styles.css" />
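To make the Stefan–Boltzmann step (formula_7) of the derivation concrete, here is a small Python sketch—an illustration added here, not part of the source—that evaluates L = 4πR²σTeff⁴ for stars held at the roughly 3,500 K temperature of the Hayashi line; along this nearly vertical line the luminosity simply scales as the square of the radius.

import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8          # solar radius, m
L_SUN = 3.828e26         # solar luminosity, W

def luminosity(radius_m, t_eff_k):
    # Blackbody luminosity L = 4*pi*R^2*sigma*T_eff^4
    return 4.0 * math.pi * radius_m**2 * SIGMA * t_eff_k**4

# Along the nearly vertical Hayashi line at ~3500 K, L scales as R^2.
for r in (1.0, 10.0, 100.0):   # radii in solar units
    print(r, luminosity(r * R_SUN, 3500.0) / L_SUN)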
[ { "math_id": 0, "text": "\\frac{\\delta ln T}{\\delta ln P} = \\nabla_{adiabatic} = 0.4" }, { "math_id": 1, "text": "P = C T^{(1+n)}" }, { "math_id": 2, "text": "n = 3/2" }, { "math_id": 3, "text": "\\kappa = \\kappa_0 P^a T^b" }, { "math_id": 4, "text": "P_0 = Constant * \\left(\\frac{M}{R^2} T_{eff}^{-b} \\right)^{\\frac{1}{1+a}}" }, { "math_id": 5, "text": "P = P_0" }, { "math_id": 6, "text": "T = T_{eff}" }, { "math_id": 7, "text": "L = 4 \\pi R^2 \\sigma \\, T_{eff}^4 " }, { "math_id": 8, "text": " \\log (T_eff) = A \\log (L) + B \\log (M) + constant " }, { "math_id": 9, "text": "A = \\frac{0.75a - 0.25}{b-5.5a +1.5} " }, { "math_id": 10, "text": " B = \\frac{0.5a - 1.5}{b-5.5a +1.5} " }, { "math_id": 11, "text": "a \\approx 1" }, { "math_id": 12, "text": "b \\approx 3" }, { "math_id": 13, "text": "T < 5000 K" }, { "math_id": 14, "text": "\\nabla < \\nabla{adiabatic}" }, { "math_id": 15, "text": "\\nabla = \\nabla{adiabatic}" }, { "math_id": 16, "text": "\\nabla > \\nabla_{adiabatic}" }, { "math_id": 17, "text": "\\nabla - \\nabla_{adiabatic}>0" }, { "math_id": 18, "text": "v_{convective} \\approx (\\nabla - \\nabla_{adiabatic}) /2" }, { "math_id": 19, "text": "\\nabla = \\nabla_{adiabatic}" } ]
https://en.wikipedia.org/wiki?curid=11044843
1104697
Homotopy groups of spheres
How spheres of various dimensions can wrap around each other In the mathematical field of algebraic topology, the homotopy groups of spheres describe how spheres of various dimensions can wrap around each other. They are examples of topological invariants, which reflect, in algebraic terms, the structure of spheres viewed as topological spaces, forgetting about their precise geometry. Unlike homology groups, which are also topological invariants, the homotopy groups are surprisingly complex and difficult to compute. The n-dimensional unit sphere — called the n-sphere for brevity, and denoted as "S""n" — generalizes the familiar circle ("S"1) and the ordinary sphere ("S"2). The n-sphere may be defined geometrically as the set of points in a Euclidean space of dimension "n" + 1 located at a unit distance from the origin. The i-th "homotopy group" π"i"("S""n") summarizes the different ways in which the i-dimensional sphere "S""i" can be mapped continuously into the sphere "S""n". This summary does not distinguish between two mappings if one can be continuously deformed to the other; thus, only equivalence classes of mappings are summarized. An "addition" operation defined on these equivalence classes makes the set of equivalence classes into an abelian group. The problem of determining π"i"("S""n") falls into three regimes, depending on whether i is less than, equal to, or greater than n. For 0 < "i" < "n", any mapping from "S""i" to "S""n" is homotopic to a constant mapping, that is, a mapping that sends all of "S""i" to a single point of "S""n", so the homotopy group is trivial. When "i" = "n", every map from "S""n" to itself has a degree that measures how many times the sphere is wrapped around itself. This degree identifies the homotopy group π"n"("S""n") with the group of integers under addition. For example, every point on a circle can be mapped continuously onto a point of another circle; as the first point is moved around the first circle, the second point may cycle several times around the second circle, depending on the particular mapping. The most interesting and surprising results occur when "i" > "n"; the first such surprise was the discovery of the Hopf fibration, a mapping that wraps the 3-sphere "S"3 around the ordinary sphere "S"2 in a non-trivial fashion and so is not equivalent to a one-point mapping. The question of computing the homotopy group π"n"+"k"("S""n") for positive k turned out to be a central question in algebraic topology that has contributed to the development of many of its fundamental techniques and has served as a stimulating focus of research. One of the main discoveries is that the homotopy groups π"n"+"k"("S""n") are independent of n for "n" ≥ "k" + 2. These are called the stable homotopy groups of spheres and have been computed for values of k up to 90. The stable homotopy groups form the coefficient ring of an extraordinary cohomology theory, called stable cohomotopy theory. The unstable homotopy groups (for "n" < "k" + 2) are more erratic; nevertheless, they have been tabulated for "k" < 20. Most modern computations use spectral sequences, a technique first applied to homotopy groups of spheres by Jean-Pierre Serre. Several important patterns have been established, yet much remains unknown and unexplained. Background. The study of homotopy groups of spheres builds on a great deal of background material, here briefly reviewed. Algebraic topology provides the larger context, itself built on topology and abstract algebra, with homotopy groups as a basic example. n-sphere. An ordinary sphere in three-dimensional space—the surface, not the solid ball—is just one example of what a sphere means in topology. Geometry defines a sphere rigidly, as a shape. Here are some alternatives. Implicit surface: "x"0² + "x"1² + "x"2² = 1. This is the set of points in 3-dimensional Euclidean space found exactly one unit away from the origin. It is called the 2-sphere, "S"2, for reasons given below.
The same idea applies for any dimension n; the equation "x"0² + "x"1² + ⋯ + "x""n"² = 1 produces the n-sphere as a geometric object in ("n" + 1)-dimensional space. For example, the 1-sphere "S"1 is a circle. This construction moves from geometry to pure topology. The disk "D"2 is the region contained by a circle, described by the inequality "x"0² + "x"1² ≤ 1, and its rim (or "boundary") is the circle "S"1, described by the equality "x"0² + "x"1² = 1. If a balloon is punctured and spread flat it produces a disk; collapsing the rim of the disk to a single point repairs the puncture, like pulling a drawstring, and the resulting construction is written in topology as "D"2/"S"1. The slash, pronounced "modulo", means to take the topological space on the left (the disk) and in it join together as one all the points on the right (the circle). The region is 2-dimensional, which is why topology calls the resulting topological space a 2-sphere. Generalized, "D""n"/"S""n"−1 produces "S""n". For example, "D"1 is a line segment, and the construction joins its ends to make a circle. An equivalent description is that the boundary of an n-dimensional disk is glued to a point, producing a CW complex. This construction, though simple, is of great theoretical importance. Take the circle "S"1 to be the equator, and sweep each point on it to one point above (the North Pole), producing the northern hemisphere, and to one point below (the South Pole), producing the southern hemisphere. For each positive integer n, the n-sphere "x"0² + "x"1² + ⋯ + "x""n"² = 1 has as equator the ("n" − 1)-sphere "x"0² + "x"1² + ⋯ + "x""n"−1² = 1, and the suspension Σ"S""n"−1 produces "S""n". Some theory requires selecting a fixed point on the sphere, calling the pair (sphere, point) a "pointed sphere". For some spaces the choice matters, but for a sphere all points are equivalent so the choice is a matter of convenience. For spheres constructed as a repeated suspension, the point (1, 0, 0, ..., 0), which is on the equator of all the levels of suspension, works well; for the disk with collapsed rim, the point resulting from the collapse of the rim is another obvious choice. Homotopy group. The distinguishing feature of a topological space is its continuity structure, formalized in terms of open sets or neighborhoods. A continuous map is a function between spaces that preserves continuity. A homotopy is a continuous path between continuous maps; two maps connected by a homotopy are said to be homotopic. The idea common to all these concepts is to discard variations that do not affect outcomes of interest. An important practical example is the residue theorem of complex analysis, where "closed curves" are continuous maps from the circle into the complex plane, and where two closed curves produce the same integral result if they are homotopic in the topological space consisting of the plane minus the points of singularity. The first homotopy group, or fundamental group, π1("X") of a (path connected) topological space X thus begins with continuous maps from a pointed circle ("S"1,"s") to the pointed space ("X","x"), where maps from one pair to another map s into x. These maps (or equivalently, closed curves) are grouped together into equivalence classes based on homotopy (keeping the "base point" x fixed), so that two maps are in the same class if they are homotopic. Just as one point is distinguished, so one class is distinguished: all maps (or curves) homotopic to the constant map "S"1↦"x" are called null homotopic. The classes become an abstract algebraic group with the introduction of addition, defined via an "equator pinch".
This pinch maps the equator of a pointed sphere (here a circle) to the distinguished point, producing a "bouquet of spheres" — two pointed spheres joined at their distinguished point. The two maps to be added map the upper and lower spheres separately, agreeing on the distinguished point, and composition with the pinch gives the sum map. More generally, the i-th homotopy group, π"i"("X") begins with the pointed i-sphere ("S""i", "s"), and otherwise follows the same procedure. The null homotopic class acts as the identity of the group addition, and for X equal to "S""n" (for positive n) — the homotopy groups of spheres — the groups are abelian and finitely generated. If for some i all maps are null homotopic, then the group π"i" consists of one element, and is called the trivial group. A continuous map between two topological spaces induces a group homomorphism between the associated homotopy groups. In particular, if the map is a continuous bijection (a homeomorphism), so that the two spaces have the same topology, then their i-th homotopy groups are isomorphic for all i. However, the real plane has exactly the same homotopy groups as a solitary point (as does a Euclidean space of any dimension), and the real plane with a point removed has the same groups as a circle, so groups alone are not enough to distinguish spaces. Although the loss of discrimination power is unfortunate, it can also make certain computations easier. Low-dimensional examples. The low-dimensional examples of homotopy groups of spheres provide a sense of the subject, because these special cases can be visualized in ordinary 3-dimensional space. However, such visualizations are not mathematical proofs, and do not capture the possible complexity of maps between spheres. π1("S"1) = Z. The simplest case concerns the ways that a circle (1-sphere) can be wrapped around another circle. This can be visualized by wrapping a rubber band around one's finger: it can be wrapped once, twice, three times and so on. The wrapping can be in either of two directions, and wrappings in opposite directions will cancel out after a deformation. The homotopy group π1("S"1) is therefore an infinite cyclic group, and is isomorphic to the group Z of integers under addition: a homotopy class is identified with an integer by counting the number of times a mapping in the homotopy class wraps around the circle. This integer can also be thought of as the winding number of a loop around the origin in the plane. The identification (a group isomorphism) of the homotopy group with the integers is often written as an equality: thus π1("S"1) = Z. π2("S"2) = Z. Mappings from a 2-sphere to a 2-sphere can be visualized as wrapping a plastic bag around a ball and then sealing it. The sealed bag is topologically equivalent to a 2-sphere, as is the surface of the ball. The bag can be wrapped more than once by twisting it and wrapping it back over the ball. (There is no requirement for the continuous map to be injective and so the bag is allowed to pass through itself.) The twist can be in one of two directions and opposite twists can cancel out by deformation. The total number of twists after cancellation is an integer, called the "degree" of the mapping. As in the case of mappings from the circle to the circle, this degree identifies the homotopy group with the group of integers, Z. These two results generalize: for all "n" > 0, π"n"("S""n") = Z (see below).
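The identification of a homotopy class of loops with an integer can be made concrete numerically. The following Python sketch (an illustration added here, not part of the article) computes the winding number of a discretely sampled loop in the punctured plane by summing the small angle increments between consecutive sample points; for the standard loop that wraps the circle k times it returns k, which is exactly the degree discussed above. The function name and sampling density are arbitrary choices made for the example.

import cmath, math

def winding_number(points):
    # Winding number about 0 of a discretely sampled closed loop avoiding 0.
    # Consecutive samples must be close enough that each angular step is < pi.
    total = 0.0
    for a, b in zip(points, points[1:] + points[:1]):
        total += cmath.phase(b / a)          # signed angle from a to b
    return round(total / (2.0 * math.pi))

# The loop t -> exp(2*pi*i*k*t) wraps the circle k times.
for k in (-2, -1, 0, 1, 3):
    loop = [cmath.exp(2j * math.pi * k * t / 1000.0) for t in range(1000)]
    assert winding_number(loop) == k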
π1("S"2) = 0. Any continuous mapping from a circle to an ordinary sphere can be continuously deformed to a one-point mapping, and so its homotopy class is trivial. One way to visualize this is to imagine a rubber-band wrapped around a frictionless ball: the band can always be slid off the ball. The homotopy group is therefore a trivial group, with only one element, the identity element, and so it can be identified with the subgroup of Z consisting only of the number zero. This group is often denoted by 0. Showing this rigorously requires more care, however, due to the existence of space-filling curves. This result generalizes to higher dimensions. All mappings from a lower-dimensional sphere into a sphere of higher dimension are similarly trivial: if "i" < "n", then π"i"("S""n") = 0. This can be shown as a consequence of the cellular approximation theorem. π2("S"1) = 0. All the interesting cases of homotopy groups of spheres involve mappings from a higher-dimensional sphere onto one of lower dimension. Unfortunately, the only example which can easily be visualized is not interesting: there are no nontrivial mappings from the ordinary sphere to the circle. Hence, π2("S"1) = 0. This is because "S"1 has the real line as its universal cover which is contractible (it has the homotopy type of a point). In addition, because "S"2 is simply connected, by the lifting criterion, any map from "S"2 to "S"1 can be lifted to a map into the real line and the nullhomotopy descends to the downstairs space (via composition). π3("S"2) = Z. The first nontrivial example with "i" > "n" concerns mappings from the 3-sphere to the ordinary 2-sphere, and was discovered by Heinz Hopf, who constructed a nontrivial map from "S"3 to "S"2, now known as the Hopf fibration. This map generates the homotopy group π3("S"2) = Z. History. In the late 19th century Camille Jordan introduced the notion of homotopy and used the notion of a homotopy group, without using the language of group theory. A more rigorous approach was adopted by Henri Poincaré in his 1895 set of papers "Analysis situs" where the related concepts of homology and the fundamental group were also introduced. Higher homotopy groups were first defined by Eduard Čech in 1932. (His first paper was withdrawn on the advice of Pavel Sergeyevich Alexandrov and Heinz Hopf, on the grounds that the groups were commutative so could not be the right generalizations of the fundamental group.) Witold Hurewicz is also credited with the introduction of homotopy groups in his 1935 paper and also for the Hurewicz theorem which can be used to calculate some of the groups. An important method for calculating the various groups is the concept of stable algebraic topology, which finds properties that are independent of the dimensions. Typically these only hold for larger dimensions. The first such result was Hans Freudenthal's suspension theorem, published in 1937. Stable algebraic topology flourished between 1945 and 1966 with many important results. In 1953 George W. Whitehead showed that there is a metastable range for the homotopy groups of spheres. Jean-Pierre Serre used spectral sequences to show that most of these groups are finite, the exceptions being π"n"("S""n") and π4"n"−1("S"2"n"). Others who worked in this area included José Adem, Hiroshi Toda, Frank Adams, J. Peter May, Mark Mahowald, Daniel Isaksen, Guozhen Wang, and Zhouli Xu. The stable homotopy groups π"n"+"k"("S""n") are known for k up to 90, and, as of 2023, unknown for larger k. General theory.
As noted already, when i is less than n, π"i"("S""n") = 0, the trivial group. The reason is that a continuous mapping from an i-sphere to an n-sphere with "i" < "n" can always be deformed so that it is not surjective. Consequently, its image is contained in "S""n" with a point removed; this is a contractible space, and any mapping to such a space can be deformed into a one-point mapping. The case "i" = "n" has also been noted already, and is an easy consequence of the Hurewicz theorem: this theorem links homotopy groups with homology groups, which are generally easier to calculate; in particular, it shows that for a simply-connected space "X", the first nonzero homotopy group π"k"("X"), with "k" > 0, is isomorphic to the first nonzero homology group "H""k"("X"). For the n-sphere, this immediately implies that for "n" ≥ 2, π"n"("S""n") = "H""n"("S""n") = Z. The homology groups "H""i"("S""n"), with "i" > "n", are all trivial. It therefore came as a great surprise historically that the corresponding homotopy groups are not trivial in general. This is the case that is of real importance: the higher homotopy groups π"i"("S""n"), for "i" > "n", are surprisingly complex and difficult to compute, and the effort to compute them has generated a significant amount of new mathematics. Table. The following table gives an idea of the complexity of the higher homotopy groups even for spheres of dimension 8 or less. In this table, the entries are either the trivial group 0, the infinite cyclic group Z, finite cyclic groups of order n (written as Z"n"), or direct products of such groups (written, for example, as Z24×Z3 or Z2² = Z2×Z2). Extended tables of homotopy groups of spheres are given at the end of the article. The first row of this table is straightforward. The homotopy groups π"i"("S"1) of the 1-sphere are trivial for "i" > 1, because the universal covering space, formula_0, which has the same higher homotopy groups, is contractible. Beyond the first row, the higher homotopy groups ("i" > "n") appear to be chaotic, but in fact there are many patterns, some obvious and some very subtle; for example, π"i"("S"2) = π"i"("S"3) for "i" ≥ 3, an isomorphism induced by the Hopf fibration "S"3 → "S"2. These patterns follow from many different theoretical results. Stable and unstable groups. The fact that the groups below the jagged line in the table above are constant along the diagonals is explained by the suspension theorem of Hans Freudenthal, which implies that the suspension homomorphism from π"n"+"k"("S""n") to π"n"+"k"+1("S""n"+1) is an isomorphism for "n" > "k" + 1. The groups π"n"+"k"("S""n") with "n" > "k" + 1 are called the "stable homotopy groups of spheres", and are denoted π"k"S: they are finite abelian groups for "k" ≠ 0, and have been computed in numerous cases, although the general pattern is still elusive. For "n" ≤ "k" + 1, the groups are called the "unstable homotopy groups of spheres". Hopf fibrations. The classical Hopf fibration is a fiber bundle: formula_1 The general theory of fiber bundles "F" → "E" → "B" shows that there is a long exact sequence of homotopy groups formula_2 For this specific bundle, each group homomorphism π"i"("S"1) → π"i"("S"3), induced by the inclusion "S"1 → "S"3, maps all of π"i"("S"1) to zero, since the lower-dimensional sphere "S"1 can be deformed to a point inside the higher-dimensional one "S"3. This corresponds to the vanishing of π1("S"3).
Thus the long exact sequence breaks into short exact sequences, formula_3 Since "S""n"+1 is a suspension of "S""n", these sequences are split by the suspension homomorphism π"i"−1("S"1) → π"i"("S"2), giving isomorphisms formula_4 Since π"i"−1("S"1) vanishes for i at least 3, the first row shows that π"i"("S"2) and π"i"("S"3) are isomorphic whenever i is at least 3, as observed above. The Hopf fibration may be constructed as follows: pairs of complex numbers ("z"0,"z"1) with |"z"0|² + |"z"1|² = 1 form a 3-sphere, and their ratios cover the complex plane plus infinity, a 2-sphere. The Hopf map "S"3 → "S"2 sends any such pair to its ratio. Similarly (in addition to the Hopf fibration formula_5, where the bundle projection is a double covering), there are generalized Hopf fibrations formula_6 formula_7 constructed using pairs of quaternions or octonions instead of complex numbers. Here, too, π3("S"7) and π7("S"15) are zero. Thus the long exact sequences again break into families of split short exact sequences, implying two families of relations. formula_8 formula_9 The three fibrations have base space "S""n" with "n" = 2"m", for "m" = 1, 2, 3. A fibration does exist for "S"1 ("m" = 0) as mentioned above, but not for "S"16 ("m" = 4) and beyond. Although generalizations of the relations to "S"16 are often true, they sometimes fail; for example, formula_10 Thus there can be no fibration formula_11 the first non-trivial case of the Hopf invariant one problem, because such a fibration would imply that the failed relation is true. Framed cobordism. Homotopy groups of spheres are closely related to cobordism classes of manifolds. In 1938 Lev Pontryagin established an isomorphism between the homotopy group π"n"+"k"("S""n") and the group Ω("S""n"+"k") of cobordism classes of differentiable k-submanifolds of "S""n"+"k" which are "framed", i.e. have a trivialized normal bundle. Every map "f" : "S""n"+"k" → "S""n" is homotopic to a differentiable map with "M""k" = "f"−1(1, 0, ..., 0) ⊂ "S""n"+"k" a framed k-dimensional submanifold. For example, π"n"("S""n") = Z is the cobordism group of framed 0-dimensional submanifolds of "S""n", computed by the algebraic sum of their points, corresponding to the degree of maps "f" : "S""n" → "S""n". The projection of the Hopf fibration "S"3 → "S"2 represents a generator of π3("S"2) = Ω("S"3) = Z which corresponds to the framed 1-dimensional submanifold of "S"3 defined by the standard embedding "S"1 ⊂ "S"3 with a nonstandard trivialization of the normal 2-plane bundle. Until the advent of more sophisticated algebraic methods in the early 1950s (Serre) the Pontrjagin isomorphism was the main tool for computing the homotopy groups of spheres. In 1954 the Pontrjagin isomorphism was generalized by René Thom to an isomorphism expressing other groups of cobordism classes (e.g. of all manifolds) as homotopy groups of spaces and spectra. In more recent work the argument is usually reversed, with cobordism groups computed in terms of homotopy groups. Finiteness and torsion. In 1951, Jean-Pierre Serre showed that homotopy groups of spheres are all finite except for those of the form π"n"("S""n") or π4"n"−1("S"2"n") (for positive n), when the group is the product of the infinite cyclic group with a finite abelian group. In particular the homotopy groups are determined by their p-components for all primes p. The 2-components are hardest to calculate, and in several ways behave differently from the p-components for odd primes.
In the same paper, Serre found the first place that p-torsion occurs in the homotopy groups of n-dimensional spheres, by showing that π"n"+"k"("S""n") has no p-torsion if "k" < 2"p" − 3, and has a unique subgroup of order p if "n" ≥ 3 and "k" = 2"p" − 3. The case of 2-dimensional spheres is slightly different: the first p-torsion occurs for "k" = 2"p" − 3 + 1. In the case of odd torsion there are more precise results; in this case there is a big difference between odd and even dimensional spheres. If p is an odd prime and "n" = 2"i" + 1, then elements of the p-component of π"n"+"k"("S""n") have order at most "p""i". This is in some sense the best possible result, as these groups are known to have elements of this order for some values of k. Furthermore, the stable range can be extended in this case: if n is odd then the double suspension from π"k"("S""n") to π"k"+2("S""n"+2) is an isomorphism of p-components if "k" < "p"("n" + 1) − 3, and an epimorphism if equality holds. The p-torsion of the intermediate group π"k"+1("S""n"+1) can be strictly larger. The results above about odd torsion only hold for odd-dimensional spheres: for even-dimensional spheres, the James fibration gives the torsion at odd primes p in terms of that of odd-dimensional spheres, formula_12 (where ("p") means take the p-component). This exact sequence is similar to the ones coming from the Hopf fibration; the difference is that it works for all even-dimensional spheres, albeit at the expense of ignoring 2-torsion. Combining the results for odd and even dimensional spheres shows that much of the odd torsion of unstable homotopy groups is determined by the odd torsion of the stable homotopy groups. For stable homotopy groups there are more precise results about p-torsion. For example, if "k" < 2"p"("p" − 1) − 2 for a prime p then the p-primary component of the stable homotopy group π"k"S vanishes unless "k" + 1 is divisible by 2("p" − 1), in which case it is cyclic of order p. The J-homomorphism. An important subgroup of π"n"+"k"("S""n"), for "k" ≥ 2, is the image of the J-homomorphism "J" : π"k"(SO("n")) → π"n"+"k"("S""n"), where SO("n") denotes the special orthogonal group. In the stable range "n" ≥ "k" + 2, the homotopy groups π"k"(SO("n")) only depend on "k" (mod 8). This period 8 pattern is known as Bott periodicity, and it is reflected in the stable homotopy groups of spheres via the image of the J-homomorphism, which is: a cyclic group of order 2 if k is congruent to 0 or 1 modulo 8; trivial if k is congruent to 2, 4, 5, or 6 modulo 8; and a cyclic group of order equal to the denominator of "B"2"m"/4"m", where "B"2"m" is a Bernoulli number, if "k" = 4"m" − 1 ≡ 3 (mod 4). This last case accounts for the elements of unusually large finite order in π"n"+"k"("S""n") for such values of k. For example, the stable groups π"n"+11("S""n") have a cyclic subgroup of order 504, the denominator of "B"6/12. The stable homotopy groups of spheres are the direct sum of the image of the J-homomorphism, and the kernel of the Adams e-invariant, a homomorphism from these groups to formula_13. Roughly speaking, the image of the J-homomorphism is the subgroup of "well understood" or "easy" elements of the stable homotopy groups. These well understood elements account for most elements of the stable homotopy groups of spheres in small dimensions. The quotient of π"n"S by the image of the J-homomorphism is considered to be the "hard" part of the stable homotopy groups of spheres. (Adams also introduced certain order 2 elements μ"n" of π"n"S for "n" ≡ 1 or 2 (mod 8), and these are also considered to be "well understood".) Tables of homotopy groups of spheres sometimes omit the "easy" part im("J") to save space. Ring structure.
The direct sum formula_14 of the stable homotopy groups of spheres is a supercommutative graded ring, where multiplication is given by composition of representing maps, and any element of non-zero degree is nilpotent; the nilpotence theorem on complex cobordism implies Nishida's theorem. Example: If η is the generator of π1S (of order 2), then "η"2 is nonzero and generates π2S, and "η"3 is nonzero and 12 times a generator of π3S, while "η"4 is zero because the group π4S is trivial. If f and g and h are elements of π∗S with "f"⋅"g" = 0 and "g"⋅"h" = 0, there is a Toda bracket of these elements. The Toda bracket is not quite an element of a stable homotopy group, because it is only defined up to addition of products of certain other elements. Hiroshi Toda used the composition product and Toda brackets to label many of the elements of homotopy groups. There are also higher Toda brackets of several elements, defined when suitable lower Toda brackets vanish. This parallels the theory of Massey products in cohomology. Every element of the stable homotopy groups of spheres can be expressed using composition products and higher Toda brackets in terms of certain well known elements, called Hopf elements. Computational methods. If X is any finite simplicial complex with finite fundamental group, in particular if X is a sphere of dimension at least 2, then its homotopy groups are all finitely generated abelian groups. To compute these groups, they are often factored into their p-components for each prime p, and each of these p-groups is calculated separately. The first few homotopy groups of spheres can be computed using ad hoc variations of the ideas above; beyond this point, most methods for computing homotopy groups of spheres are based on spectral sequences. This is usually done by constructing suitable fibrations and taking the associated long exact sequences of homotopy groups; spectral sequences are a systematic way of organizing the complicated information that this process generates. The computation of the homotopy groups of "S"2 has been reduced to a combinatorial group theory question: these homotopy groups can be identified as certain quotients of the Brunnian braid groups of "S"2. Under this correspondence, every nontrivial element in π"n"("S"2) for "n" > 2 may be represented by a Brunnian braid over "S"2 that is not Brunnian over the disk "D"2. For example, the Hopf map "S"3 → "S"2 corresponds to the Borromean rings. Applications. The winding number (corresponding to an integer of π1("S"1) = Z) can be used to prove the fundamental theorem of algebra, which states that every non-constant complex polynomial has a zero. The fact that π"n"−1("S""n"−1) = Z implies the Brouwer fixed point theorem that every continuous map from the n-dimensional ball to itself has a fixed point. The groups Θ"n" of h-cobordism classes of oriented homotopy n-spheres are related to the stable homotopy groups of spheres via a map formula_15 where "bP""n"+1 is the cyclic subgroup represented by homotopy spheres that bound a parallelizable manifold, π"n"S is the nth stable homotopy group of spheres, and J is the image of the J-homomorphism. This is an isomorphism unless n is of the form 2"k" − 2, in which case the image has index 1 or 2. 62. (This was the smallest value of k for which the question was open at the time.) Table of homotopy groups. Tables of homotopy groups of spheres are most conveniently organized by showing π"n"+"k"("S""n"). The following table shows many of the groups π"n"+"k"("S""n"). The stable homotopy groups are highlighted in blue, the unstable ones in red.
Each homotopy group is the product of the cyclic groups of the orders given in the table, using the following conventions: an entry that is an integer m denotes the cyclic group Z"m" of that order, the entry ∞ denotes the infinite cyclic group Z, a product of entries denotes the direct product of the corresponding cyclic groups, and an exponent indicates a repeated factor. Example: π19("S"10) = π9+10("S"10) = Z×Z2×Z2×Z2, which is denoted by ∞⋅2³ in the table. Table of stable homotopy groups. The stable homotopy groups π"k"S are the products of cyclic groups of the infinite or prime power orders shown in the table. (For largely historical reasons, stable homotopy groups are usually given as products of cyclic groups of prime power order, while tables of unstable homotopy groups often give them as products of the smallest number of cyclic groups.) For "p" > 5, the part of the p-component that is accounted for by the J-homomorphism is cyclic of order p if 2("p" − 1) divides "k" + 1 and 0 otherwise. The mod 8 behavior of the table comes from Bott periodicity via the J-homomorphism, whose image is underlined. References. Notes. <templatestyles src="Reflist/styles.css" /> Sources. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\mathbb{R}" }, { "math_id": 1, "text": "S^1\\hookrightarrow S^3\\rightarrow S^2." }, { "math_id": 2, "text": " \\cdots \\to \\pi_i(F) \\to \\pi_i(E) \\to \\pi_i(B) \\to \\pi_{i-1}(F) \\to \\cdots." }, { "math_id": 3, "text": "0\\rightarrow \\pi_i(S^3)\\rightarrow \\pi_i(S^2)\\rightarrow \\pi_{i-1}(S^1)\\rightarrow 0 ." }, { "math_id": 4, "text": "\\pi_i(S^2)= \\pi_i(S^3)\\oplus \\pi_{i-1}(S^1) ." }, { "math_id": 5, "text": "S^0\\hookrightarrow S^1\\rightarrow S^1" }, { "math_id": 6, "text": "S^3\\hookrightarrow S^7\\rightarrow S^4" }, { "math_id": 7, "text": "S^7\\hookrightarrow S^{15}\\rightarrow S^8" }, { "math_id": 8, "text": "\\pi_i(S^4)= \\pi_i(S^7)\\oplus \\pi_{i-1}(S^3) ," }, { "math_id": 9, "text": "\\pi_i(S^8)= \\pi_i(S^{15})\\oplus \\pi_{i-1}(S^7) ." }, { "math_id": 10, "text": "\\pi_{30}(S^{16})\\neq \\pi_{30}(S^{31})\\oplus \\pi_{29}(S^{15}) ." }, { "math_id": 11, "text": "S^{15}\\hookrightarrow S^{31}\\rightarrow S^{16} ," }, { "math_id": 12, "text": "\\pi_{2m+k}(S^{2m})(p) = \\pi_{2m+k-1}(S^{2m-1})(p)\\oplus \\pi_{2m+k}(S^{4m-1})(p)" }, { "math_id": 13, "text": "\\mathbb{Q} / \\mathbb{Z}" }, { "math_id": 14, "text": "\\pi_{\\ast}^S=\\bigoplus_{k\\ge 0}\\pi_k^S" }, { "math_id": 15, "text": "\\Theta_n / bP_{n+1} \\to \\pi_n^S / J ," } ]
https://en.wikipedia.org/wiki?curid=1104697
1104704
Covariance and contravariance (computer science)
Programming language concept Many programming language type systems support subtyping. For instance, if the type is a subtype of , then an expression of type should be substitutable wherever an expression of type is used. Variance is how subtyping between more complex types relates to subtyping between their components. For example, how should a list of s relate to a list of s? Or how should a function that returns relate to a function that returns ? Depending on the variance of the type constructor, the subtyping relation of the simple types may be either preserved, reversed, or ignored for the respective complex types. In the OCaml programming language, for example, "list of Cat" is a subtype of "list of Animal" because the list type constructor is covariant. This means that the subtyping relation of the simple types is preserved for the complex types. On the other hand, "function from Animal to String" is a subtype of "function from Cat to String" because the function type constructor is contravariant in the parameter type. Here, the subtyping relation of the simple types is reversed for the complex types. A programming language designer will consider variance when devising typing rules for language features such as arrays, inheritance, and generic datatypes. By making type constructors covariant or contravariant instead of invariant, more programs will be accepted as well-typed. On the other hand, programmers often find contravariance unintuitive, and accurately tracking variance to avoid runtime type errors can lead to complex typing rules. In order to keep the type system simple and allow useful programs, a language may treat a type constructor as invariant even if it would be safe to consider it variant, or treat it as covariant even though that could violate type safety. Formal definition. Suppose codice_0 and codice_1 are types, and codice_2 denotes application of a type constructor codice_2 with type argument codice_4. Within the type system of a programming language, a typing rule for a type constructor codice_2 is: The article considers how this applies to some common type constructors. C# examples. For example, in C#, if is a subtype of , then: The variance of a C# generic interface is declared by placing the (covariant) or (contravariant) attribute on (zero or more of) its type parameters. The above interfaces are declared as , , and . Types with more than one type parameter may specify different variances on each type parameter. For example, the delegate type represents a function with a contravariant input parameter of type and a covariant return value of type . The compiler checks that all types are defined and used consistently with their annotations, and otherwise signals a compilation error. The typing rules for interface variance ensure type safety. For example, an represents a first-class function expecting an argument of type , and a function that can handle any type of animal can always be used instead of one that can only handle cats. Arrays. Read-only data types (sources) can be covariant; write-only data types (sinks) can be contravariant. Mutable data types which act as both sources and sinks should be invariant. To illustrate this general phenomenon, consider the array type. For the type we can make the type , which is an "array of animals". For the purposes of this example, this array supports both reading and writing elements. 
We have the option to treat this as either: is an is a is not a and a is not an If we wish to avoid type errors, then only the third choice is safe. Clearly, not every can be treated as if it were a , since a client reading from the array will expect a , but an may contain e.g. a . So the contravariant rule is not safe. Conversely, a cannot be treated as an . It should always be possible to put a into an . With covariant arrays this cannot be guaranteed to be safe, since the backing store might actually be an array of cats. So the covariant rule is also not safe—the array constructor should be "invariant". Note that this is only an issue for mutable arrays; the covariant rule is safe for immutable (read-only) arrays. Likewise, the contravariant rule would be safe for write-only arrays. Covariant arrays in Java and C#. Early versions of Java and C# did not include generics, also termed parametric polymorphism. In such a setting, making arrays invariant rules out useful polymorphic programs. For example, consider writing a function to shuffle an array, or a function that tests two arrays for equality using the . method on the elements. The implementation does not depend on the exact type of element stored in the array, so it should be possible to write a single function that works on all types of arrays. It is easy to implement functions of type: boolean equalArrays(Object[] a1, Object[] a2); void shuffleArray(Object[] a); However, if array types were treated as invariant, it would only be possible to call these functions on an array of exactly the type . One could not, for example, shuffle an array of strings. Therefore, both Java and C# treat array types covariantly. For instance, in Java is a subtype of , and in C# is a subtype of . As discussed above, covariant arrays lead to problems with writes into the array. Java and C# deal with this by marking each array object with a type when it is created. Each time a value is stored into an array, the execution environment will check that the run-time type of the value is equal to the run-time type of the array. If there is a mismatch, an (Java) or (C#) is thrown: // a is a single-element array of String String[] a = new String[1]; // b is an array of Object Object[] b = a; // Assign an Integer to b. This would be possible if b really were // an array of Object, but since it really is an array of String, // we will get a java.lang.ArrayStoreException. b[0] = 1; In the above example, one can "read" from the array (b) safely. It is only trying to "write" to the array that can lead to trouble. One drawback to this approach is that it leaves the possibility of a run-time error that a stricter type system could have caught at compile-time. Also, it hurts performance because each write into an array requires an additional run-time check. With the addition of generics, Java and C# now offer ways to write this kind of polymorphic function without relying on covariance. The array comparison and shuffling functions can be given the parameterized types &lt;T&gt; boolean equalArrays(T[] a1, T[] a2); &lt;T&gt; void shuffleArray(T[] a); Alternatively, to enforce that a C# method accesses a collection in a read-only way, one can use the interface instead of passing it an array . Function types. Languages with first-class functions have function types like "a function expecting a Cat and returning an Animal" (written in OCaml syntax or in C# syntax). 
Those languages also need to specify when one function type is a subtype of another—that is, when it is safe to use a function of one type in a context that expects a function of a different type. It is safe to substitute a function "f" for a function "g" if "f" accepts a more general type of argument and returns a more specific type than "g". For example, functions of type , , and can be used wherever a was expected. (One can compare this to the robustness principle of communication: "be liberal in what you accept and conservative in what you produce.") The general rule is: formula_0 if formula_1 and formula_2. Using inference rule notation the same rule can be written as: formula_3 In other words, the → type constructor is "contravariant in the parameter (input) type" and "covariant in the return (output) type". This rule was first stated formally by John C. Reynolds, and further popularized in a paper by Luca Cardelli. When dealing with functions that take functions as arguments, this rule can be applied several times. For example, by applying the rule twice, we see that formula_4 if formula_5. In other words, the type formula_6 is "covariant" in the position of formula_7. For complicated types it can be confusing to mentally trace why a given type specialization is or isn't type-safe, but it is easy to calculate which positions are co- and contravariant: a position is covariant if it is on the left side of an even number of arrows applying to it. Inheritance in object-oriented languages. When a subclass overrides a method in a superclass, the compiler must check that the overriding method has the right type. While some languages require that the type exactly matches the type in the superclass (invariance), it is also type safe to allow the overriding method to have a "better" type. By the usual subtyping rule for function types, this means that the overriding method should return a more specific type (return type covariance), and accept a more general argument (parameter type contravariance). In UML notation, the possibilities are as follows (where Class B is the subclass that extends Class A which is the superclass): For a concrete example, suppose we are writing a class to model an animal shelter. We assume that is a subclass of , and that we have a base class (using Java syntax) class AnimalShelter { Animal getAnimalForAdoption() { void putAnimal(Animal animal) { Now the question is: if we subclass , what types are we allowed to give to and Covariant method return type. In a language which allows covariant return types, a derived class can override the method to return a more specific type: class CatShelter extends AnimalShelter { Cat getAnimalForAdoption() { return new Cat(); Among mainstream OO languages, Java, C++ and C# (as of version 9.0 ) support covariant return types. Adding the covariant return type was one of the first modifications of the C++ language approved by the standards committee in 1998. Scala and D also support covariant return types. Contravariant method parameter type. Similarly, it is type safe to allow an overriding method to accept a more general argument than the method in the base class: class CatShelter extends AnimalShelter { void putAnimal(Object animal) { Only a few object-oriented languages actually allow this (for example, Python when typechecked with mypy). C++, Java and most other languages that support overloading and/or shadowing would interpret this as a method with an overloaded or shadowed name. 
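As a minimal illustration of the Python/mypy case just mentioned, here is a sketch added for this article; the class and method names simply mirror the shelter example used in the surrounding text and are not taken from any particular codebase.

# A minimal sketch of parameter-type contravariance in an override,
# as accepted by a static checker such as mypy.

class Animal: ...
class Cat(Animal): ...

class AnimalShelter:
    def put_animal(self, animal: Animal) -> None:
        print("sheltering an animal")

class CatShelter(AnimalShelter):
    # The override accepts a *more general* parameter type (object instead of
    # Animal).  This is contravariant and type safe: any call that is valid
    # against the base-class signature is still valid here, so mypy reports
    # no error for this widening.
    def put_animal(self, animal: object) -> None:
        print("sheltering something")

The checker accepts the override because a method that can handle any object can safely stand in wherever the base-class method is expected, in keeping with the Liskov substitution principle.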
However, Sather supported both covariance and contravariance. Calling convention for overridden methods are covariant with "out" parameters and return values, and contravariant with normal parameters (with the mode "in"). Covariant method parameter type. A couple of mainstream languages, Eiffel and Dart allow the parameters of an overriding method to have a "more" specific type than the method in the superclass (parameter type covariance). Thus, the following Dart code would type check, with overriding the method in the base class: class CatShelter extends AnimalShelter { void putAnimal(covariant Cat animal) { This is not type safe. By up-casting a to an , one can try to place a dog in a cat shelter. That does not meet parameter restrictions, and will result in a runtime error. The lack of type safety (known as the "catcall problem" in the Eiffel community, where "cat" or "CAT" is a Changed Availability or Type) has been a long-standing issue. Over the years, various combinations of global static analysis, local static analysis, and new language features have been proposed to remedy it, and these have been implemented in some Eiffel compilers. Despite the type safety problem, the Eiffel designers consider covariant parameter types crucial for modeling real world requirements. The cat shelter illustrates a common phenomenon: it is "a kind of" animal shelter but has "additional restrictions", and it seems reasonable to use inheritance and restricted parameter types to model this. In proposing this use of inheritance, the Eiffel designers reject the Liskov substitution principle, which states that objects of subclasses should always be less restricted than objects of their superclass. One other instance of a mainstream language allowing covariance in method parameters is PHP in regards to class constructors. In the following example, the __construct() method is accepted, despite the method parameter being covariant to the parent's method parameter. Were this method anything other than __construct(), an error would occur: class Pet class PetDog extends Pet public function __construct(DogInterface $dog) parent::__construct($dog); Another example where covariant parameters seem helpful is so-called binary methods, i.e. methods where the parameter is expected to be of the same type as the object the method is called on. An example is the method: checks whether comes before or after in some ordering, but the way to compare, say, two rational numbers will be different from the way to compare two strings. Other common examples of binary methods include equality tests, arithmetic operations, and set operations like subset and union. In older versions of Java, the comparison method was specified as an interface interface Comparable { int compareTo(Object o); The drawback of this is that the method is specified to take an argument of type . A typical implementation would first down-cast this argument (throwing an error if it is not of the expected type): class RationalNumber implements Comparable { int numerator; int denominator; public int compareTo(Object other) { RationalNumber otherNum = (RationalNumber)other; return Integer.compare(numerator * otherNum.denominator, otherNum.numerator * denominator); In a language with covariant parameters, the argument to could be directly given the desired type , hiding the typecast. (Of course, this would still give a runtime error if was then called on e.g. a Avoiding the need for covariant parameter types. 
Other language features can provide the apparent benefits of covariant parameters while preserving Liskov substitutability. In a language with "generics" (a.k.a. parametric polymorphism) and bounded quantification, the previous examples can be written in a type-safe way. Instead of defining , we define a parameterized class . (One drawback of this is that the implementer of the base class needs to foresee which types will need to be specialized in the subclasses.) class Shelter&lt;T extends Animal&gt; { T getAnimalForAdoption() { void putAnimal(T animal) { class CatShelter extends Shelter&lt;Cat&gt; { Cat getAnimalForAdoption() { void putAnimal(Cat animal) { Similarly, in recent versions of Java the interface has been parameterized, which allows the downcast to be omitted in a type-safe way: class RationalNumber implements Comparable&lt;RationalNumber&gt; { int numerator; int denominator; public int compareTo(RationalNumber otherNum) { return Integer.compare(numerator * otherNum.denominator, otherNum.numerator * denominator); Another language feature that can help is "multiple dispatch". One reason that binary methods are awkward to write is that in a call like , selecting the correct implementation of really depends on the runtime type of both and , but in a conventional OO language only the runtime type of is taken into account. In a language with Common Lisp Object System (CLOS)-style multiple dispatch, the comparison method could be written as a generic function where both arguments are used for method selection. Giuseppe Castagna observed that in a typed language with multiple dispatch, a generic function can have some parameters which control dispatch and some "left-over" parameters which do not. Because the method selection rule chooses the most specific applicable method, if a method overrides another method, then the overriding method will have more specific types for the controlling parameters. On the other hand, to ensure type safety the language still must require the left-over parameters to be at least as general. Using the previous terminology, types used for runtime method selection are covariant while types not used for runtime method selection of the method are contravariant. Conventional single-dispatch languages like Java also obey this rule: only one argument is used for method selection (the receiver object, passed along to a method as the hidden argument ), and indeed the type of is more specialized inside overriding methods than in the superclass. Castagna suggests that examples where covariant parameter types are superior (particularly, binary methods) should be handled using multiple dispatch; which is naturally covariant. However, most programming languages do not support multiple dispatch. Summary of variance and inheritance. The following table summarizes the rules for overriding methods in the languages discussed above. Generic types. In programming languages that support generics (a.k.a. parametric polymorphism), the programmer can extend the type system with new constructors. For example, a C# interface like makes it possible to construct new types like or . The question then arises what the variance of these type constructors should be. There are two main approaches. In languages with "declaration-site variance annotations" (e.g., C#), the programmer annotates the definition of a generic type with the intended variance of its type parameters. 
With "use-site variance annotations" (e.g., Java), the programmer instead annotates the places where a generic type is instantiated. Declaration-site variance annotations. The most popular languages with declaration-site variance annotations are C# and Kotlin (using the keywords and ), and Scala and OCaml (using the keywords and ). C# only allows variance annotations for interface types, while Kotlin, Scala and OCaml allow them for both interface types and concrete data types. Interfaces. In C#, each type parameter of a generic interface can be marked covariant (), contravariant (), or invariant (no annotation). For example, we can define an interface of read-only iterators, and declare it to be covariant (out) in its type parameter. interface IEnumerator&lt;out T&gt; bool MoveNext(); With this declaration, will be treated as covariant in its type parameter, e.g. is a subtype of . The type checker enforces that each method declaration in an interface only mentions the type parameters in a way consistent with the / annotations. That is, a parameter that was declared covariant must not occur in any contravariant positions (where a position is contravariant if it occurs under an odd number of contravariant type constructors). The precise rule is that the return types of all methods in the interface must be "valid covariantly" and all the method parameter types must be "valid contravariantly", where "valid S-ly" is defined as follows: As an example of how these rules apply, consider the interface. interface IList&lt;T&gt; void Insert(int index, T item); IEnumerator&lt;T&gt; GetEnumerator(); The parameter type of must be valid contravariantly, i.e. the type parameter must not be tagged . Similarly, the result type of must be valid covariantly, i.e. (since is a covariant interface) the type must be valid covariantly, i.e. the type parameter must not be tagged . This shows that the interface is not allowed to be marked either co- or contravariant. In the common case of a generic data structure such as , these restrictions mean that an parameter can only be used for methods getting data out of the structure, and an parameter can only be used for methods putting data into the structure, hence the choice of keywords. Data. C# allows variance annotations on the parameters of interfaces, but not the parameters of classes. Because fields in C# classes are always mutable, variantly parameterized classes in C# would not be very useful. But languages which emphasize immutable data can make good use of covariant data types. For example, in all of Scala, Kotlin and OCaml the immutable list type is covariant: List[Cat] is a subtype of List[Animal]. Scala's rules for checking variance annotations are essentially the same as C#'s. However, there are some idioms that apply to immutable datastructures in particular. They are illustrated by the following (excerpt from the) definition of the List[A] class. sealed abstract class List[+A] extends AbstractSeq[A] { def head: A def tail: List[A] /** Adds an element at the beginning of this list. */ def ::[B &gt;: A] (x: B): List[B] = new scala.collection.immutable.::(x, this) First, class members that have a variant type must be immutable. Here, head has the type A, which was declared covariant (+), and indeed head was declared as a method (def). Trying to declare it as a mutable field (var) would be rejected as type error. Second, even if a data structure is immutable, it will often have methods where the parameter type occurs contravariantly. 
For example, consider the method :: which adds an element to the front of a list. (The implementation works by creating a new object of the similarly named "class" ::, the class of nonempty lists.) The most obvious type to give it would be

def :: (x: A): List[A]

However, this would be a type error, because the covariant parameter A appears in a contravariant position (as a function parameter). But there is a trick to get around this problem. We give :: a more general type (the one shown in the List excerpt above), which allows adding an element of any type B as long as B is a supertype of A. Note that this relies on List being covariant, since this has type List[A] and we treat it as having type List[B]. At first glance it may not be obvious that the generalized type is sound, but if the programmer starts out with the simpler type declaration, the type errors will point out the place that needs to be generalized. Inferring variance. It is possible to design a type system where the compiler automatically infers the best possible variance annotations for all datatype parameters. However, the analysis can get complex for several reasons. First, the analysis is nonlocal, since the variance of an interface depends on the variance of all interfaces that it mentions. Second, in order to get unique best solutions the type system must allow "bivariant" parameters (which are simultaneously co- and contravariant). And finally, the variance of type parameters should arguably be a deliberate choice by the designer of an interface, not something that just happens. For these reasons most languages do very little variance inference. C# and Scala do not infer any variance annotations at all. OCaml can infer the variance of parameterized concrete datatypes, but the programmer must explicitly specify the variance of abstract types (interfaces). For example, consider an OCaml datatype which wraps a function:

type ('a, 'b) t = T of ('a -> 'b)

The compiler will automatically infer that t is contravariant in the first parameter and covariant in the second. The programmer can also provide explicit annotations, which the compiler will check are satisfied. Thus the following declaration is equivalent to the previous one:

type (-'a, +'b) t = T of ('a -> 'b)

Explicit annotations in OCaml become useful when specifying interfaces. For example, the standard library interface for association tables includes an annotation saying that the map type constructor is covariant in the result type.

module type S = sig
  type key
  type (+'a) t
  val empty: 'a t
  val mem: key -> 'a t -> bool
end

This ensures that, for example, a map whose values have type cat is a subtype of the same kind of map with values of type animal. Use-site variance annotations (wildcards). One drawback of the declaration-site approach is that many interface types must be made invariant. For example, we saw above that IList needed to be invariant, because it contained both Insert and GetEnumerator. In order to expose more variance, the API designer could provide additional interfaces which provide subsets of the available methods (e.g. an "insert-only list" which only provides Insert). However this quickly becomes unwieldy. Use-site variance means the desired variance is indicated with an annotation at the specific site in the code where the type will be used. This gives users of a class more opportunities for subtyping without requiring the designer of the class to define multiple interfaces with different variance. Instead, at the point a generic type is instantiated to an actual parameterized type, the programmer can indicate that only a subset of its methods will be used.
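For instance, in Scala a method can demand only the covariant, "reading" part of an otherwise invariant mutable collection at its use site. The sketch below is illustrative: ArrayBuffer is the standard invariant mutable Scala collection, while Animal and Cat again stand for the hypothetical classes of the earlier examples.

import scala.collection.mutable.ArrayBuffer

class Animal
class Cat extends Animal

// ArrayBuffer is invariant, but the use-site bound "_ <: Animal" states that only
// the covariant (element-reading) methods will be used here, so an ArrayBuffer[Cat]
// is accepted as well as an ArrayBuffer[Animal].
def firstAnimal(animals: ArrayBuffer[_ <: Animal]): Animal = animals.head

firstAnimal(ArrayBuffer(new Cat, new Cat))   // accepted

The same idea, with a different syntax, is what Java's wildcards provide.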
In effect, each definition of a generic class also makes available interfaces for the covariant and contravariant "parts" of that class. Java provides use-site variance annotations through wildcards, a restricted form of bounded existential types. A parameterized type can be instantiated by a wildcard together with an upper or lower bound, e.g. List<? extends Animal> or List<? super Animal>. An unbounded wildcard like List<?> is equivalent to List<? extends Object>. Such a type represents List<X> for some unknown type X which satisfies the bound. For example, if l has type List<? extends Animal>, then the type checker will accept

Animal a = l.get(3);

because the type X is known to be a subtype of Animal, but

l.add(new Animal());

will be rejected as a type error, since an Animal is not necessarily an X. In general, given some interface I<T>, a reference to an I<? extends Animal> forbids using methods from the interface where T occurs contravariantly in the type of the method. Conversely, if l had type List<? super Animal>, one could call l.add(new Animal()) but not treat the result of l.get(3) as an Animal. While non-wildcard parameterized types in Java are invariant (e.g. there is no subtyping relationship between List<Cat> and List<Animal>), wildcard types can be made more specific by specifying a tighter bound. For example, List<? extends Cat> is a subtype of List<? extends Animal>. This shows that wildcard types are "covariant in their upper bounds" (and also "contravariant in their lower bounds"). In total, given a wildcard type like Collection<? extends Animal>, there are three ways to form a subtype: by specializing the class (e.g. using List instead of Collection), by specifying a tighter bound (? extends Cat), or by replacing the wildcard with a specific type (Collection<Cat>) (see figure). By applying two of the above three forms of subtyping, it becomes possible to, for example, pass an argument of type List<Cat> to a method expecting a Collection<? extends Animal>. This is the kind of expressiveness that results from covariant interface types. The type Collection<? extends Animal> acts as an interface type containing only the covariant methods of Collection, but the implementer of Collection did not have to define it ahead of time. In the common case of a generic data structure, covariant parameters are used for methods getting data out of the structure, and contravariant parameters for methods putting data into the structure. The mnemonic Producer Extends, Consumer Super (PECS), from the book "Effective Java" by Joshua Bloch, gives an easy way to remember when to use covariance and contravariance. Wildcards are flexible, but there is a drawback. While use-site variance means that API designers need not consider variance of type parameters to interfaces, they must often instead use more complicated method signatures. A common example involves the Comparable interface. Suppose we want to write a function that finds the biggest element in a collection. The elements need to implement the compareTo method, so a first try might be

<T extends Comparable<T>> T max(Collection<T> coll);

However, this type is not general enough: one can find the max of a Collection<Calendar>, but not a Collection<GregorianCalendar>. The problem is that GregorianCalendar does not implement Comparable<GregorianCalendar>, but instead the (better) interface Comparable<Calendar>. In Java, unlike in C#, Comparable<Calendar> is not considered a subtype of Comparable<GregorianCalendar>. Instead the type of max has to be modified:

<T extends Comparable<? super T>> T max(Collection<T> coll);

The bounded wildcard ? super T conveys the information that max calls only contravariant methods from the Comparable interface. This particular example is frustrating because "all" the methods in Comparable are contravariant, so that condition is trivially true. A declaration-site system could handle this example with less clutter by annotating only the definition of Comparable. The method max can be changed even further by using an upper bounded wildcard for the method parameter:

<T extends Comparable<? super T>> T max(Collection<? extends T> coll);

Comparing declaration-site and use-site annotations.
Use-site variance annotations provide additional flexibility, allowing more programs to type check. However, they have been criticized for the complexity they add to the language, leading to complicated type signatures and error messages. One way to assess whether the extra flexibility is useful is to see if it is used in existing programs. A survey of a large set of Java libraries found that 39% of wildcard annotations could have been directly replaced by declaration-site annotations. Thus the remaining 61% is an indication of places where Java benefits from having the use-site system available. In a declaration-site language, libraries must either expose less variance, or define more interfaces. For example, the Scala Collections library defines three separate interfaces for classes which employ covariance: a covariant base interface containing common methods, an invariant mutable version which adds side-effecting methods, and a covariant immutable version which may specialize the inherited implementations to exploit structural sharing. This design works well with declaration-site annotations, but the large number of interfaces carries a complexity cost for clients of the library. And modifying the library interface may not be an option; in particular, one goal when adding generics to Java was to maintain binary backwards compatibility. On the other hand, Java wildcards are themselves complex. In a conference presentation Joshua Bloch criticized them as being too hard to understand and use, stating that when adding support for closures "we simply cannot afford another 'wildcards'". Early versions of Scala used use-site variance annotations, but programmers found them difficult to use in practice, while declaration-site annotations were found to be very helpful when designing classes. Later versions of Scala added Java-style existential types and wildcards; however, according to Martin Odersky, if there were no need for interoperability with Java then these would probably not have been included. Ross Tate argues that part of the complexity of Java wildcards is due to the decision to encode use-site variance using a form of existential types. The original proposals used special-purpose syntax for variance annotations, writing List<+Animal> instead of Java's more verbose List<? extends Animal>. Since wildcards are a form of existential types, they can be used for more things than just variance. A type like List<?> ("a list of unknown type") lets objects be passed to methods or stored in fields without exactly specifying their type parameters. This is particularly valuable for classes such as Class<T>, where most of the methods do not mention the type parameter. However, type inference for existential types is a difficult problem. For the compiler implementer, Java wildcards raise issues with type checker termination, type argument inference, and ambiguous programs. In general it is undecidable whether a Java program using generics is well-typed or not, so any type checker will have to go into an infinite loop or time out for some programs. For the programmer, it leads to complicated type error messages. Java type checks wildcard types by replacing the wildcards with fresh type variables (so-called "capture conversion"). This can make error messages harder to read, because they refer to type variables that the programmer did not directly write.
For example, trying to add a Cat to a List<? extends Animal> will give an error like

method List.add (capture#1) is not applicable
    (actual argument Cat cannot be converted to capture#1 by method invocation conversion)
where capture#1 is a fresh type-variable:
    capture#1 extends Animal from capture of ? extends Animal

Since both declaration-site and use-site annotations can be useful, some type systems provide both. Etymology. These terms come from the notion of covariant and contravariant functors in category theory. Consider the category formula_8 whose objects are types and whose morphisms represent the subtype relationship ≤. (This is an example of how any partially ordered set can be considered as a category.) Then for example the function type constructor takes two types "p" and "r" and creates a new type "p" → "r"; so it takes objects in formula_9 to objects in formula_8. By the subtyping rule for function types this operation reverses ≤ for the first parameter and preserves it for the second, so it is a contravariant functor in the first parameter and a covariant functor in the second.
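As a concrete counterpart to this categorical statement, the variance of the function type constructor is directly visible in Scala, where the standard library declares Function1 with a contravariant parameter type and a covariant result type. The sketch below is illustrative; Animal and Cat are again the hypothetical classes used throughout.

class Animal
class Cat extends Animal

val makeCat: Animal => Cat = (a: Animal) => new Cat

// Function1 is declared as Function1[-T1, +R], so Animal => Cat is a subtype of
// Cat => Animal: the parameter type is treated contravariantly, the result type
// covariantly.
val viewed: Cat => Animal = makeCat    // accepted by the compiler
val pet: Animal = viewed(new Cat)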
[ { "math_id": 0, "text": "(P_1 \\rightarrow R_1) \\leq (P_2 \\rightarrow R_2)" }, { "math_id": 1, "text": "P_1 \\geq P_2" }, { "math_id": 2, "text": "R_1 \\leq R_2" }, { "math_id": 3, "text": "\\frac{P_1 \\geq P_2 \\quad R_1 \\leq R_2}{(P_1 \\rightarrow R_1) \\leq (P_2 \\rightarrow R_2)}" }, { "math_id": 4, "text": "((P_1 \\to R) \\to R) \\le ((P_2 \\to R) \\to R)" }, { "math_id": 5, "text": "P_1 \\le P_2" }, { "math_id": 6, "text": "((A \\to B) \\to C)" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "C" }, { "math_id": 9, "text": "C^2" } ]
https://en.wikipedia.org/wiki?curid=1104704
11047538
Earth–ionosphere waveguide
The Earth–ionosphere waveguide is the phenomenon in which certain radio waves can propagate in the space between the ground and the boundary of the ionosphere. Because the ionosphere contains charged particles, it can behave as a conductor. The Earth operates as a ground plane, and the resulting cavity behaves as a large waveguide. Extremely low frequency (ELF) (< 3 kHz) and very low frequency (VLF) (3–30 kHz) signals can propagate efficiently in this waveguide. For instance, lightning strikes launch a signal called radio atmospherics, which can travel many thousands of kilometers, because they are confined between the Earth and the ionosphere. The round-the-world nature of the waveguide produces resonances, like a cavity, which are at ~7 Hz. Introduction. Radio propagation within the ionosphere depends on frequency, angle of incidence, time of day, season, Earth's magnetic field, and solar activity. At vertical incidence, waves with frequencies larger than the electron plasma frequency (formula_0, in Hz) of the F-layer maximum (where formula_1, in formula_2, is the electron density) can propagate through the ionosphere nearly undisturbed. Waves with frequencies smaller than formula_0 are reflected within the ionospheric D-, E-, and F-layers. formula_0 is of the order of 8–15 MHz during daytime conditions. For oblique incidence, the critical frequency becomes larger. Very low frequencies (VLF: 3–30 kHz) and extremely low frequencies (ELF: < 3 kHz) are reflected at the ionospheric D- and lower E-layer. An exception is whistler propagation of lightning signals along the geomagnetic field lines. The wavelengths of VLF waves (10–100 km) are already comparable with the height of the ionospheric D-layer (about 70 km during the day, and 90 km during the night). Therefore, ray theory is only applicable for propagation over short distances, while mode theory must be used for larger distances. The region between Earth's surface and the ionospheric D-layer thus behaves like a waveguide for VLF and ELF waves. In the presence of the ionospheric plasma and the geomagnetic field, electromagnetic waves exist for frequencies which are larger than the gyrofrequency of the ions (about 1 Hz). Waves with frequencies smaller than the gyrofrequency are called hydromagnetic waves. The geomagnetic pulsations with periods of seconds to minutes, as well as Alfvén waves, belong to that type of waves. Transfer function. A short vertical rod antenna can be idealized as a vertical electric Hertzian dipole in which an alternating electric current of frequency f flows. Its radiation of electromagnetic waves within the Earth–ionosphere waveguide can be described by a transfer function T(ρ,ω), defined so that Ez = T(ρ,ω)·Eo, where Ez is the vertical component of the electric field at the receiver at a distance ρ from the transmitter, Eo is the electric field of a Hertzian dipole in free space, and formula_3 is the angular frequency. In free space, it is formula_4. Evidently, the Earth–ionosphere waveguide is dispersive, because the transfer function depends on frequency. This means that the phase and group velocities of the waves are frequency dependent. Ray theory. In the VLF range, the transfer function is the sum of a ground wave which arrives directly at the receiver and multihop sky waves reflected at the ionospheric D-layer (Figure 1). For the real Earth's surface, the ground wave becomes dissipated and depends on the orography along the ray path.
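As a quick numerical check of the scales quoted in the introduction (an illustrative back-of-the-envelope estimate), the free-space wavelength is

\lambda = \frac{c}{f}, \qquad \lambda(30\,\text{kHz}) = \frac{3\times 10^{8}\,\text{m/s}}{3\times 10^{4}\,\text{Hz}} = 10\ \text{km}, \qquad \lambda(3\,\text{kHz}) = 100\ \text{km},

so VLF wavelengths are indeed of the same order as the 70–90 km reflection height, which is why ray theory is restricted to short distances while mode theory must be used at larger ones.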
For VLF waves at shorter distances, the dissipation of the ground wave is, however, of minor importance, and the reflection factor of the Earth is formula_5, in a first approximation. At shorter distances, only the first-hop sky wave is of importance. The D-layer can be simulated by a magnetic wall (formula_6) with a fixed boundary at a virtual height h, which means a phase jump of 180° at the reflection point. In reality, the electron density of the D-layer increases with altitude, and the wave is bounded as shown in Figure 2. The sum of ground wave and first-hop wave displays an interference pattern with interference minima if the difference between the ray paths of ground and first sky wave is half a wavelength (or a phase difference of 180°). The last interference minimum on the ground (z = 0) between the ground wave and the first sky wave occurs at a horizontal distance (Eq. 3) that depends on the virtual height h and on c, the velocity of light. In the example of Figure 3, this is at about 500 km distance. Wave mode theory. The theory of ray propagation of VLF waves breaks down at larger distances, because successive multihop sky waves must be included in the sum, and the sum diverges. In addition, it becomes necessary to take into account the spherical Earth. Mode theory, in which the field is represented as a sum of eigenmodes of the Earth–ionosphere waveguide, is valid in this range of distances. The wave modes have fixed vertical structures of their vertical electric field components, with maximum amplitudes at the bottom and zero amplitudes at the top of the waveguide. In the case of the fundamental first mode, this vertical structure spans a quarter wavelength. With decreasing frequency, the eigenvalue becomes imaginary at the cutoff frequency, where the mode changes to an evanescent wave. For the first mode, this happens at a cutoff frequency of about c/(4h), below which that mode will not propagate (Figure 4). The attenuation of the modes increases with wavenumber n. Therefore, essentially only the first two modes are involved in the wave propagation. The first interference minimum between these two modes is at the same distance as that of the last interference minimum of ray theory (Eq. 3), indicating the equivalence of both theories. As seen in Figure 3, the spacing between the mode interference minima is constant and about 1000 km in this example. The first mode becomes dominant at distances greater than about 1500 km, because the second mode is more strongly attenuated than the first mode. In the range of ELF waves, only mode theory is appropriate. The fundamental mode is the zeroth mode (Figure 4). The D-layer becomes here an electric wall (Ri = 1). Its vertical structure is simply a vertical electric field constant with altitude. In particular, zeroth-mode resonances exist for waves whose wavelengths are integer fractions of the Earth's circumference; their frequencies are approximately nc/(2πa), with a the Earth's radius. The first resonance peaks are at 7.5, 15, and 22.5 Hz. These are the Schumann resonances. The spectral signals from lightning are amplified at those frequencies. Waveguide characteristics. The above discussion merely illustrates a simple picture of mode and ray theory. More detailed treatments require a large computer program. In particular, it is difficult to solve the problem of the horizontal and vertical inhomogeneities of the waveguide. The effect of the Earth's curvature is that the field strength slightly increases near the antipode. Due to the influence of the Earth's magnetic field, the medium becomes anisotropic, so that the ionospheric reflection factor in reality is a matrix.
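Schematically (the symbols here are illustrative and not taken from a particular treatment), the reflection can then be written as a 2×2 matrix relating the reflected to the incident field components parallel and perpendicular to the plane of incidence, with the off-diagonal coefficients describing conversion between the two polarizations:

\begin{pmatrix} E_\parallel^{r} \\ E_\perp^{r} \end{pmatrix} = \begin{pmatrix} R_{\parallel\parallel} & R_{\parallel\perp} \\ R_{\perp\parallel} & R_{\perp\perp} \end{pmatrix} \begin{pmatrix} E_\parallel^{i} \\ E_\perp^{i} \end{pmatrix}

For an isotropic ionosphere the off-diagonal elements would vanish; it is the geomagnetic field that makes them nonzero.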
The matrix character of the reflection means that a vertically polarized incident wave, after reflection at the ionospheric D-layer, is converted into a vertically and a horizontally polarized wave. Moreover, the geomagnetic field gives rise to a nonreciprocity of VLF waves: waves propagating from east to west are more strongly attenuated than vice versa. A phase slip appears near the distance of the deep interference minimum of Eq. 3. Around sunrise and/or sunset, there is sometimes a phase gain or loss of 360° because of the irreversible behavior of the first sky wave. The dispersion characteristics of the Earth–ionosphere waveguide can be used for locating thunderstorm activity by measuring the difference in the group time delay of lightning signals (sferics) at adjacent frequencies, up to distances of 10,000 km. The Schumann resonances can be used to determine global lightning activity. References and notes. "Notes" <templatestyles src="Reflist/styles.css" /> "Citations" <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "f_e" }, { "math_id": 1, "text": "N_e" }, { "math_id": 2, "text": "m^{-3}" }, { "math_id": 3, "text": "\\omega = 2\\pi f" }, { "math_id": 4, "text": "T = 1" }, { "math_id": 5, "text": "R_e = 1" }, { "math_id": 6, "text": "R_i = -1" } ]
https://en.wikipedia.org/wiki?curid=11047538