Dataset fields: id (string, 2-8 chars), title (string, 1-130 chars), text (string, 0-252k chars), formulas (list, 1-823 items), url (string, 38-44 chars).
66452088
Chemical graph generator
Software for generating chemical structures
A chemical graph generator is a software package to generate computer representations of chemical structures adhering to certain boundary conditions. The development of such software packages is a research topic of cheminformatics. Chemical graph generators are used in areas such as virtual library generation in drug design, in molecular design with specified properties, called inverse QSAR/QSPR, as well as in organic synthesis design, retrosynthesis or in systems for computer-assisted structure elucidation (CASE). CASE systems have regained interest for the structure elucidation of unknowns in computational metabolomics, a current area of computational biology. History. Molecular structure generation is a branch of graph generation problems. Molecular structures are graphs with chemical constraints such as valences, bond multiplicity and fragments. These generators are the core of CASE systems. In a generator, the molecular formula is the basic input. If fragments are obtained from the experimental data, they can also be used as inputs to accelerate structure generation. The first structure generators were versions of graph generators modified for chemical purposes. One of the first structure generators was CONGEN, originally developed for the DENDRAL project, the first artificial intelligence project in organic chemistry. DENDRAL was developed as a part of the Mariner program launched by NASA to search for life on Mars. CONGEN dealt well with overlaps in substructures. The overlaps among substructures rather than atoms were used as the building blocks. For the case of stereoisomers, symmetry group calculations were performed for duplicate detection. After DENDRAL, another mathematical method, MASS, a tool for mathematical synthesis and analysis of molecular structures, was reported. As with CONGEN, the MASS algorithm worked as an adjacency matrix generator. Many mathematical generators are descendants of efficient branch-and-bound methods, such as Igor Faradjev's and Ronald C. Read's orderly generation method. Although their reports are from the 1970s, these studies are still the fundamental references for structure generators. In the orderly generation method, specific order-check functions are performed on graph representatives, such as vectors. For example, MOLGEN performs a descending order check while filling rows of adjacency matrices. This descending order check is based on an input valence distribution; a toy version of such a canonicity check is sketched below. The literature classifies generators into two major types: structure assembly and structure reduction. The algorithmic complexity and the run time are the criteria used for comparison. Structure assembly. The generation process starts with a set of atoms from the molecular formula. In structure assembly, atoms are combinatorially connected to consider all possible extensions. If substructures are obtained from the experimental data, the generation starts with these substructures. These substructures provide known bonds in the molecule. One of the earliest attempts was made by Hidetsugu Abe in 1975 using a pattern recognition-based structure generator. The algorithm had two steps: first, the prediction of the substructure from low-resolution spectral data; second, the assembly of these substructures based on a set of construction rules. Hidetsugu Abe and other contributors published the first paper on CHEMICS, which is a CASE tool comprising several structure generation methods.
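The descending-order check mentioned above for MOLGEN can be made concrete with a deliberately naive maximality test: a filled adjacency matrix is accepted only if no valence-preserving relabelling of the atoms yields a lexicographically larger matrix. The Python sketch below is only an illustration of that idea under these simplifying assumptions, not MOLGEN's actual order check; the function names and the brute-force search over permutations are ours.

```python
# Naive canonicity (maximality) test in the spirit of orderly generation: the
# adjacency matrix must be lexicographically maximal among all relabellings
# that respect the valence partition.  Illustrative only; tools such as MOLGEN
# use far more efficient order checks than enumerating every permutation.
from itertools import permutations

def upper_triangle(adj, order):
    """Read the relabelled adjacency matrix row-wise above the diagonal."""
    n = len(adj)
    return [adj[order[i]][order[j]] for i in range(n) for j in range(i + 1, n)]

def is_canonical(adj, valences):
    n = len(adj)
    reference = upper_triangle(adj, list(range(n)))
    for p in permutations(range(n)):
        if [valences[i] for i in p] != list(valences):
            continue                      # relabelling must preserve valences
        if upper_triangle(adj, list(p)) > reference:
            return False                  # a larger representative exists
    return True

# Propane-like carbon skeleton with the central atom listed first:
adjacency = [[0, 1, 1],
             [1, 0, 0],
             [1, 0, 0]]
print(is_canonical(adjacency, [4, 4, 4]))   # True
```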
The program relies on a predefined non-overlapping fragment library. CHEMICS generates different types of component sets ranked from primary to tertiary based on component complexity. The primary set contains atoms, i.e., C, N, O and S, with their hybridization. The secondary and tertiary component sets are built layer-by-layer starting with these primary components. These component sets are represented as vectors and are used as building blocks in the process. Substantial contributions were made by Craig Shelley and Morton Munk, who published a large number of CASE papers in this field. The first of these papers reported a structure generator, ASSEMBLE. The algorithm is considered one of the earliest assembly methods in the field. As the name indicates, the algorithm assembles substructures with overlaps to construct structures. ASSEMBLE overcomes overlapping by including a “neighbouring atom tag”. The generator is purely mathematical and does not involve the interpretation of any spectral data. Spectral data are used for structure scoring and substructure information. Based on the molecular formula, the generator forms bonds between pairs of atoms, and all the extensions are checked against the given constraints. If the process is considered as a tree, the first node of the tree is an atom set with substructures if any are provided by the spectral data. By extending the molecule with a bond, an intermediate structure is built. Each intermediate structure can be represented by a node in the generation tree. ASSEMBLE was developed with a user-friendly interface to facilitate use. The second version of ASSEMBLE was released in 2000. Another assembly method is GENOA. Compared to ASSEMBLE and many other generators, GENOA is a constructive substructure search-based algorithm, and it assembles different substructures by also considering the overlaps. The efficiency and exhaustivity of generators are also related to the data structures. Unlike previous methods, AEGIS was a list-processing generator. Compared to adjacency matrices, list data requires less memory. As no spectral data was interpreted in this system, the user needed to provide substructures as inputs. Structure generators can also vary based on the type of data used, such as HMBC, HSQC and other NMR data. LUCY is an open-source structure elucidation method based on the HMBC data of unknown molecules, and involves an exhaustive 2-step structure generation process where first all combinations of interpretations of HMBC signals are implemented in a connectivity matrix, which is then completed by a deterministic generator filling in missing bond information. This platform could generate structures with any arbitrary size of molecules; however, molecular formulas with more than 30 heavy atoms are too time consuming for practical applications. This limitation highlighted the need for a new CASE system. SENECA was developed to eliminate the shortcomings of LUCY. To overcome the limitations of the exhaustive approach, SENECA was developed as a stochastic method to find optimal solutions. The systems comprise two stochastic methods: simulated annealing and genetic algorithms. First, a random structure is generated; then, its energy is calculated to evaluate the structure and its spectral properties. By transforming this structure into another structure, the process continues until the optimum energy is reached. In the generation, this transformation relies on equations based on Jean-Loup Faulon's rules. 
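The stochastic search just described for SENECA boils down to a simulated-annealing loop. The sketch below is a generic skeleton under our own naming, not SENECA's code: the transform and energy arguments are placeholders standing in for Faulon's transformation rules and the spectrum-based scoring, respectively.

```python
# Generic simulated-annealing skeleton: 'transform' proposes a random change
# to the current structure and 'energy' scores agreement with the data.
import math
import random

def anneal(start, transform, energy, t_start=1.0, t_end=1e-3, cooling=0.995):
    current, best = start, start
    temperature = t_start
    while temperature > t_end:
        candidate = transform(current)                 # random structure move
        delta = energy(candidate) - energy(current)
        # Metropolis criterion: always accept improvements, occasionally
        # accept worse structures to escape local minima.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
            if energy(current) < energy(best):
                best = current
        temperature *= cooling                         # geometric cooling
    return best

# Toy demonstration: find the integer with minimal "energy" |x - 7|.
result = anneal(0, lambda x: x + random.choice([-1, 1]), lambda x: abs(x - 7))
print(result)   # typically 7
```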
LSD (Logic for Structure Determination) is an important contribution from French scientists. The tool uses spectral data information such as HMBC and COSY data to generate all possible structures. LSD is an open source structure generator released under the General Public License (GPL). A well-known commercial CASE system, StrucEluc, also features a NMR based generator. This tool is from ACD Labs and, notably, one of the developers of MASS, Mikhail Elyashberg. COCON is another NMR based structure generator, relying on theoretical data sets for structure generation. Except J-HMBC and J-COSY, all NMR types can be used as inputs. In 1994, Hu and Xu reported an integer partition-based structure generator. The decomposition of the molecular formula into fragments, components and segments was performed as an application of integer partitioning. These fragments were then used as building blocks in the structure generator. This structure generator was part of a CASE system, ESESOC. A series of stochastic generators was reported by Jean-Loup Faulon. The software, MOLSIG, was integrated into this stochastic generator for canonical labelling and duplicate checks. As for many other generators, the tree approach is the skeleton of Jean-Loup Faulon's structure generators. However, considering all possible extensions leads to a combinatorial explosion. Orderly generation is performed to cope with this exhaustivity. Many assembly algorithms, such as OMG, MOLGEN and Jean-Loup Faulon's structure generator, are orderly generation methods. Jean-Loup Faulon's structure generator relies on equivalence classes over atoms. Atoms with the same interaction type and element are grouped in the same equivalence class. Rather than extending all atoms in a molecule, one atom from each class is connected with other atoms. Similar to the former generator, Julio Peironcely's structure generator, OMG, takes atoms and substructures as inputs and extends the structures using a breadth-first search method. This tree extension terminates when all the branches reach saturated structures. OMG generates structures based on the canonical augmentation method from Brendan McKay's NAUTY package. The algorithm calculates canonical labelling and then extends structures by adding one bond. To keep the extension canonical, canonical bonds are added. Although NAUTY is an efficient tool for graph canonical labelling, OMG is approximately 2000 times slower than MOLGEN. The problem is the storage of all the intermediate structures. OMG has since been parallelized, and the developers released PMG (Parallel Molecule Generator). MOLGEN outperforms PMG using only 1 core; however, PMG outperforms MOLGEN by increasing the number of cores to 10. A constructive search algorithm is a branch-and-bound method, such as Igor Faradjev's algorithm, and an additional solution to memory problems. Branch-and-bound methods are matrix generation algorithms. In contrast to previous methods, these methods build all the connectivity matrices without building intermediate structures. In these algorithms, canonicity criteria and isomorphism checks are based on automorphism groups from mathematical group theory. MASS, SMOG and Ivan Bangov's algorithm are good examples in the literature. MASS is a method of mathematical synthesis. First, it builds all incidence matrices for a given molecular formula. The atom valences are then used as the input for matrix generation. 
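The matrix-generation step attributed to MASS above can be caricatured by brute force: enumerate symmetric bond-order assignments whose row sums equal the atom valences. The sketch below is only meant to make that idea concrete; it is not the MASS algorithm, which prunes the search instead of filtering a full enumeration, and a real generator would still need connectivity and duplicate checks afterwards.

```python
# Brute-force enumeration of bond-order assignments whose row sums match the
# atom valences -- a toy stand-in for constructive matrix generators such as
# MASS or SMOG.
from itertools import product

def bond_matrices(valences, max_order=3):
    n = len(valences)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for orders in product(range(max_order + 1), repeat=len(pairs)):
        degree = [0] * n
        for (i, j), order in zip(pairs, orders):
            degree[i] += order
            degree[j] += order
        if degree == list(valences):        # every atom exactly saturated
            yield {pair: order for pair, order in zip(pairs, orders) if order}

# CH2O with explicit hydrogens (valences C=4, O=2, H=1, H=1):
for bonds in bond_matrices([4, 2, 1, 1]):
    print(bonds)                            # {(0, 1): 2, (0, 2): 1, (0, 3): 1}
```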
The matrices are generated by considering all the possible interactions among atoms with respect to the constraints and valences. The benefit of constructive search algorithms is their low memory usage. SMOG is a successor of MASS. Unlike previous methods, MOLGEN is the only maintained efficient generic structure generator, developed as a closed-source platform by a group of mathematicians as an application of computational group theory. MOLGEN is an orderly generation method. Many different versions of MOLGEN have been developed, and they provide various functions. Based on the users' needs, different types of inputs can be used. For example, MOLGEN-MS allows users to input mass spectrometry data of an unknown molecule. Compared to many other generators, MOLGEN approaches the problem from different angles. The key feature of MOLGEN is generating structures without building all the intermediate structures and without generating duplicates. In the field, the studies recent to 2021 are from Kimito Funatsu's research group. As a type of assembly method, building blocks, such as ring systems and atom fragments, are used in the structure generation. Every intermediate structure is extended by adding building blocks in all possible ways. To reduce the number of duplicates, Brendan McKay's canonical path augmentation method is used. To overcome the combinatorial explosion in the generation, applicability domain and ring systems are detected based on inverse QSPR/QSAR analysis. The applicability domain, or target area, is described based on given biological as well as pharmaceutical activity information from QSPR/QSAR. In that study, monotonically changed descriptors (MCD) are used to describe applicability domains. For every extension in intermediate structures, the MCDs are updated. The usage of MCDs reduces the search space in the generation process. In the QSPR/QSAR based structure generation, there is the lack of synthesizability of the generated structures. Usage of retrosynthesis paths in the generation makes the generation process more efficient. For example, a well-known tool called RetroPath is used for molecular structure enumeration and virtual screening based on the given reaction rules. Its core algorithm is a breadth-first method, generating structures by applying reaction rules to each source compound. Structure generation and enumeration are performed based on Brendan McKay's canonical augmentation method. RetroPath 2.0 provides a variety of workflows such as isomer transformation, enumeration, QSAR and metabolomics. Besides these mathematical structure generation methods, the implementations of neural networks, such as generative autoencoder models, are the novel directions of the field. Structure reduction. Unlike these assembly methods, reduction methods make all the bonds between atom pairs, generating a hypergraph. Then, the size of the graph is reduced with respect to the constraints. First, the existence of substructures in the hypergraph is checked. Unlike assembly methods, the generation tree starts with the hypergraph, and the structures decrease in size at each step. Bonds are deleted based on the substructures. If a substructure is no longer in the hypergraph, the substructure is removed from the constraints. Overlaps in the substructures were also considered due to the hypergraphs. The earliest reduction-based structure generator is COCOA, an exhaustive and recursive bond-removal method. 
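The recursive bond-removal idea behind COCOA can be sketched as follows. This toy walks a single greedy branch of the reduction tree for an assumed valence list; the actual reduction generators branch exhaustively over the possible deletions and also honour substructure constraints.

```python
# Toy illustration of structure reduction: start from a hyper structure in
# which every atom pair is bonded, then delete bonds until no atom exceeds
# its valence.

def hyper_structure(n_atoms):
    """All-against-all single bonds between the atoms."""
    return {(i, j) for i in range(n_atoms) for j in range(i + 1, n_atoms)}

def over_valence(bonds, valences):
    """Atoms whose current degree exceeds their allowed valence."""
    degree = [0] * len(valences)
    for i, j in bonds:
        degree[i] += 1
        degree[j] += 1
    return [a for a, d in enumerate(degree) if d > valences[a]]

def reduce_one_branch(bonds, valences):
    bonds = set(bonds)
    while True:
        overloaded = over_valence(bonds, valences)
        if not overloaded:
            return bonds
        atom = overloaded[0]
        # Delete one bond incident to the overloaded atom (a real generator
        # would try every such deletion as a separate branch).
        bonds.remove(next(b for b in sorted(bonds) if atom in b))

valences = [4, 4, 2, 3]          # e.g. the heavy atoms C, C, O, N
print(sorted(reduce_one_branch(hyper_structure(4), valences)))
```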
Generated fragments are described as atom-centred fragments to optimize storage, comparable to circular fingerprints and atom signatures. Rather than storing structures, only the list of first neighbours of each atom is stored. The main disadvantage of reduction methods is the massive size of the hypergraphs. Indeed, for molecules with unknown structures, the size of the hyper structure becomes extremely large, resulting in a proportional increase in the run time. The structure generator GEN by Simona Bohanec combines two tasks: structure assembly and structure reduction. Like COCOA, the initial state of the problem is a hyper structure. Both assembly and reduction methods have advantages and disadvantages, and the GEN tool avoids these disadvantages in the generation step. In other words, structure reduction is efficient when structural constraints are provided, and structure assembly is faster without constraints. First, the useless connections are eliminated, and then the substructures are assembled to build structures. Thus, GEN copes with the constraints in a more efficient way by combining these methods. GEN removes the connections creating the forbidden structures, and then the connection matrices are filled based on substructure information. The method does not accept overlaps among substructures. Once the structure is built in the matrix representation, the saturated molecule is stored in the output list. The COCOA method was further improved and a new generator was built, HOUDINI. It relies on two data structures: a square matrix of compounds representing all bonds in a hyper structure is constructed, and second, substructure representation is used to list atom-centred fragments. In the structure generation, HOUDINI maps all the atom-centred fragments onto the hyper structure. Mathematical basis. Chemical graphs. In a graph representing a chemical structure, the vertices and edges represent atoms and bonds, respectively. The bond order corresponds to the edge multiplicity, and as a result, chemical graphs are vertex and edge-labelled graphs. A vertex and edge-labelled graph formula_0 is described as a chemical graph where formula_1 is the set of vertices, i.e., atoms, and formula_2 is the set of edges, which represents the bonds. In graph theory, the degree of a vertex is its number of connections. In a chemical graph, the maximum degree of an atom is its valence, and the maximum number of bonds a chemical element can make. For example, carbon's valence is 4. In a chemical graph, an atom is saturated if it reaches its valence. A graph is connected if there is at least one path between each pair of vertices. Although chemical mixtures are one of the main interests of many chemists, due to the computational explosion, many structure generators output only connected chemical graphs. Thus, the connectivity check is one of the mandatory intermediate steps in structure generation because the aim is to generate fully saturated molecules. A molecule is saturated if all its atoms are saturated. Symmetry groups for molecular graphs. For a set of elements, a permutation is a rearrangement of these elements. An example is given below: The second line of this table shows a permutation of the first line. The multiplication of permutations, formula_3 and formula_4, is defined as a function composition, as shown below. formula_5 The combination of two permutations is also a permutation. 
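As a concrete illustration of the composition rule formula_5 above, permutation multiplication can be written as nested indexing on tuples. The short sketch below also shows that the product is again a permutation and that the order of the factors matters.

```python
# Permutation multiplication as function composition, (a*b)(x) = a(b(x)),
# with permutations written as tuples over the indices 0..n-1.

def compose(a, b):
    """Return the product a*b: apply b first, then a."""
    return tuple(a[b[x]] for x in range(len(a)))

a = (1, 0, 2)          # swap the first two elements
b = (0, 2, 1)          # swap the last two elements
print(compose(a, b))   # (1, 2, 0) -- again a permutation
print(compose(b, a))   # (2, 0, 1) -- composition is not commutative
```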
A group, formula_6, is a set of elements together with an associative binary operation formula_7 defined on formula_6 such that the following are true: there is an identity element formula_8 satisfying formula_9 for every element formula_10, and every element formula_10 has an inverse formula_11 such that formula_12 is the identity. The order of a group is the number of elements in the group. Let us assume formula_13 is a set of integers. Under the function composition operation, formula_14 is the symmetric group on formula_13, the set of all permutations over formula_13. If the size of formula_13 is formula_15, then the order of formula_14 is formula_16. Set systems consist of a finite set formula_13 and its subsets, called blocks of the set. The set of permutations preserving the set system is used to build the automorphisms of the graph. An automorphism permutes the vertices of a graph; in other words, it maps a graph onto itself. This action preserves adjacency: edges are mapped to edges. If formula_17 is an edge of the graph formula_18, and formula_3 is a permutation of formula_1, then formula_19. A permutation formula_3 of formula_1 is an automorphism of the graph formula_20 if formula_21 is an element of formula_2 whenever formula_22 is an element of formula_2. The automorphism group of a graph formula_6, denoted formula_23, is the set of all automorphisms on formula_1; a brute-force illustration of this definition is sketched below. In molecular graphs, canonical labelling and molecular symmetry detection are implementations of automorphism groups. Although there are well known canonical labelling methods in the field, such as InChI and ALATIS, NAUTY is a commonly used software package for automorphism group calculations and canonical labelling. List of available structure generators. The available software packages and their links are listed below. References.
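The brute-force sketch below (our own illustration, not NAUTY) makes the automorphism definition above concrete: a vertex permutation belongs to the automorphism group exactly when it maps the edge set onto itself.

```python
# Brute-force computation of Aut(G): keep every vertex permutation that maps
# the edge set onto itself.  Purely illustrative; practical tools such as
# NAUTY compute the same group without enumerating all permutations.
from itertools import permutations

def automorphisms(n_vertices, edges):
    edge_set = {frozenset(e) for e in edges}
    group = []
    for p in permutations(range(n_vertices)):
        mapped = {frozenset((p[u], p[v])) for u, v in edge_set}
        if mapped == edge_set:          # edge-preserving permutation
            group.append(p)
    return group

# A path on three vertices 0-1-2 has two automorphisms:
# the identity and the flip exchanging the end vertices.
print(automorphisms(3, [(0, 1), (1, 2)]))  # [(0, 1, 2), (2, 1, 0)]
```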
[ { "math_id": 0, "text": "G = (V,E) " }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "a" }, { "math_id": 4, "text": "b" }, { "math_id": 5, "text": "(a*b)(x)=a(b(x))" }, { "math_id": 6, "text": "G" }, { "math_id": 7, "text": "*" }, { "math_id": 8, "text": "I" }, { "math_id": 9, "text": "g*I=g" }, { "math_id": 10, "text": "g" }, { "math_id": 11, "text": " g^{-1}" }, { "math_id": 12, "text": " g*g^{-1}" }, { "math_id": 13, "text": "X" }, { "math_id": 14, "text": "Sym(X)" }, { "math_id": 15, "text": "n" }, { "math_id": 16, "text": "n!" }, { "math_id": 17, "text": "(u,v)" }, { "math_id": 18, "text": "G=(V,E)" }, { "math_id": 19, "text": "a({u,v})=(a(u),a(v))" }, { "math_id": 20, "text": "G=(E,V)" }, { "math_id": 21, "text": "a(u,v)" }, { "math_id": 22, "text": "{(u,v)}" }, { "math_id": 23, "text": "Aut(G)" } ]
https://en.wikipedia.org/wiki?curid=66452088
664556
Colorfulness
Perceived intensity of a specific color
Colorfulness, chroma and saturation are attributes of perceived color relating to chromatic intensity. As defined formally by the International Commission on Illumination (CIE), they respectively describe three different aspects of chromatic intensity, but the terms are often used loosely and interchangeably in contexts where these aspects are not clearly distinguished. The precise meanings of the terms vary by what other functions they are dependent on. As colorfulness, chroma, and saturation are defined as attributes of perception, they cannot be physically measured as such, but they can be quantified in relation to psychometric scales intended to be perceptually even, for example the chroma scales of the Munsell system. While the chroma and lightness of an object are its colorfulness and brightness judged in proportion to the same thing ("the brightness of a similarly illuminated area that appears white or highly transmitting"), the saturation of the light coming from that object is in effect the chroma of the object judged in proportion to its lightness. On a Munsell hue page, lines of uniform saturation thus tend to radiate from near the black point, while lines of uniform chroma are vertical. Chroma. The naïve definition of saturation does not specify its response function. In the CIE XYZ and RGB color spaces, the saturation is defined in terms of additive color mixing, and has the property of being proportional to any scaling centered at white or the white point illuminant. However, both color spaces are non-linear in terms of psychovisually perceived color differences. It is also possible, and sometimes desirable, to define a saturation-like quantity that is linearized in terms of psychovisual perception. In the CIE 1976 LAB and LUV color spaces, the unnormalized chroma is the radial component of the cylindrical coordinate CIE LCh (lightness, chroma, hue) representation of the LAB and LUV color spaces, also denoted as CIE LCh(ab) or CIE LCh for short, and CIE LCh(uv). The transformation of formula_0 to formula_1 is given by: formula_2 formula_3 and analogously for CIE LCh(uv). The chroma in the CIE LCh(ab) and CIE LCh(uv) coordinates has the advantage of being more psychovisually linear, yet it is non-linear in terms of linear component color mixing. Therefore, chroma in the CIE 1976 LAB and LUV color spaces is very different from the traditional sense of "saturation". In color appearance models. Another, psychovisually even more accurate, but also more complex method to obtain or specify the saturation is to use a color appearance model like CIECAM02. Here, the chroma color appearance parameter might (depending on the color appearance model) be intertwined with e.g. the physical brightness of the illumination or the characteristics of the emitting/reflecting surface, which is more sensible psychovisually. The CIECAM02 chroma formula_4 for example, is computed from a lightness formula_5 in addition to a naively evaluated color magnitude formula_6 In addition, a colorfulness formula_7 parameter exists alongside the chroma formula_8 It is defined as formula_9 where formula_10 is dependent on the viewing condition. Saturation. The saturation of a color is determined by a combination of light intensity and how much it is distributed across the spectrum of different wavelengths. The purest (most saturated) color is achieved by using just one wavelength at a high intensity, such as in laser light.
If the intensity drops, the saturation drops as a result. To desaturate a color of given intensity in a subtractive system (such as watercolor), one can add white, black, gray, or the hue's complement. Various correlates of saturation follow. CIELUV and CIELAB. In CIELUV, saturation is equal to the "chroma" normalized by the "lightness": formula_11 where formula_12 is the chromaticity of the white point, and chroma is defined below. By analogy, in CIELAB this would yield: formula_13 The CIE has not formally recommended this equation since CIELAB has no chromaticity diagram, and this definition therefore lacks direct connection with older concepts of saturation. Nevertheless, this equation provides a reasonable predictor of saturation, and demonstrates that adjusting the lightness in CIELAB while holding ("a"*, "b"*) fixed does affect the saturation. However, the following verbal definition of Manfred Richter and the corresponding formula proposed by Eva Lübbe are in agreement with the human perception of saturation: Saturation is the proportion of pure chromatic color in the total color sensation. formula_14 where formula_15 is the saturation, formula_16 is the lightness and formula_17 is the chroma of the color. CIECAM02. In CIECAM02, saturation equals the square root of the "colorfulness" divided by the "brightness": formula_18 This definition is inspired by experimental work done with the intention of remedying CIECAM97s's poor performance. formula_7 is proportional to the chroma formula_4 thus the CIECAM02 definition bears some similarity to the CIELUV definition. HSL and HSV. Saturation is also one of three coordinates in the HSL and HSV color spaces. However, in the HSL color space saturation exists independently of lightness. That is, both a very light color and a very dark color can be heavily saturated in HSL; whereas in the previous definitions, as well as in the HSV color space, colors approaching white all feature low saturation. Excitation purity. The excitation purity (purity for short) of a stimulus is the difference from the illuminant's white point to the furthest point on the chromaticity diagram with the same dominant wavelength; using the CIE 1931 color space: formula_19 where formula_20 is the chromaticity of the white point and formula_21 is the point on the perimeter whose line segment to the white point contains the chromaticity of the stimulus. Different color spaces, such as CIELAB or CIELUV, may be used, and will yield different results. References.
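The correlates quoted in this article translate directly into code. The sketch below collects them in one place; it assumes ordinary CIELAB inputs and CIE 1931 (x, y) chromaticities, and for excitation purity the caller must supply the spectral-locus point with the same dominant wavelength.

```python
# Colour-intensity correlates from this article: CIE LCh(ab) chroma and hue,
# the chroma/lightness saturation ratio, Lübbe's percentage saturation, and
# CIE 1931 excitation purity.
import math

def lab_to_lch(L, a, b):
    """CIE LCh(ab): chroma is the radius in the (a*, b*) plane, hue the angle."""
    chroma = math.hypot(a, b)                      # C*ab = sqrt(a*^2 + b*^2)
    hue = math.degrees(math.atan2(b, a)) % 360.0   # h_ab = atan2(b*, a*)
    return L, chroma, hue

def saturation_ab(L, a, b):
    """Chroma normalised by lightness (the CIELAB analogue of s_uv)."""
    return math.hypot(a, b) / L

def saturation_luebbe(L, a, b):
    """Lübbe's formula: chromatic share of the total colour sensation, in %."""
    chroma = math.hypot(a, b)
    return 100.0 * chroma / math.hypot(chroma, L)

def excitation_purity(xy, white, spectral):
    """Distance from the white point relative to the spectral-locus point with
    the same dominant wavelength (all points given as (x, y) chromaticities)."""
    num = math.hypot(xy[0] - white[0], xy[1] - white[1])
    den = math.hypot(spectral[0] - white[0], spectral[1] - white[1])
    return num / den

L, a, b = 50.0, 40.0, 30.0
print(lab_to_lch(L, a, b))          # (50.0, 50.0, 36.87)
print(saturation_ab(L, a, b))       # 1.0
print(saturation_luebbe(L, a, b))   # 70.7 (percent)
```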
[ { "math_id": 0, "text": "(a, b)" }, { "math_id": 1, "text": "\\left(C_{ab}, h_{ab}\\right)" }, { "math_id": 2, "text": "C_{ab}^* = \\sqrt{a^{*2} + b^{*2}}" }, { "math_id": 3, "text": "h_{ab} = \\operatorname{atan2}\\left({b^\\star},{a^\\star}\\right)" }, { "math_id": 4, "text": "C," }, { "math_id": 5, "text": "J" }, { "math_id": 6, "text": "t." }, { "math_id": 7, "text": "M" }, { "math_id": 8, "text": "C." }, { "math_id": 9, "text": "M = CF_L^{0.25}," }, { "math_id": 10, "text": "F_L" }, { "math_id": 11, "text": "s_{uv} = \\frac{C^*_{uv}}{L^*} = 13 \\sqrt{(u' - u'_n)^2 + (v' - v'_n)^2}" }, { "math_id": 12, "text": "\\left(u_n, v_n\\right)" }, { "math_id": 13, "text": "s_{ab} = \\frac{C^*_{ab}}{L^*} = \\frac{\\sqrt{{a^*}^2 + {b^*}^2}}{L^*}" }, { "math_id": 14, "text": "S_{ab} = \\frac{C^*_{ab}}{\\sqrt{{C^*_{ab} }^2 + {L^*}^2}} 100\\%" }, { "math_id": 15, "text": "S_{ab}" }, { "math_id": 16, "text": "L^*" }, { "math_id": 17, "text": "C^*_{ab}" }, { "math_id": 18, "text": "s = \\sqrt\\frac{M}{Q}" }, { "math_id": 19, "text": "p_e = \\sqrt{\\frac{\\left(x - x_n\\right)^2 + \\left(y - y_n\\right)^2}{\\left(x_I - x_n\\right)^2 + \\left(y_I - y_n\\right)^2}}" }, { "math_id": 20, "text": "\\left(x_n, y_n\\right)" }, { "math_id": 21, "text": "\\left(x_I, y_I\\right)" } ]
https://en.wikipedia.org/wiki?curid=664556
6646221
Alternatives to general relativity
Proposed theories of gravity Alternatives to general relativity are physical theories that attempt to describe the phenomenon of gravitation in competition with Einstein's theory of general relativity. There have been many different attempts at constructing an ideal theory of gravity. These attempts can be split into four broad categories based on their scope. In this article, straightforward alternatives to general relativity are discussed, which do not involve quantum mechanics or force unification. Other theories which do attempt to construct a theory using the principles of quantum mechanics are known as theories of quantized gravity. Thirdly, there are theories which attempt to explain gravity and other forces at the same time; these are known as classical unified field theories. Finally, the most ambitious theories attempt to both put gravity in quantum mechanical terms and unify forces; these are called theories of everything. None of these alternatives to general relativity have gained wide acceptance. General relativity has withstood many tests, remaining consistent with all observations so far. In contrast, many of the early alternatives have been definitively disproven. However, some of the alternative theories of gravity are supported by a minority of physicists, and the topic remains the subject of intense study in theoretical physics. Motivations. After general relativity, attempts were made either to improve on theories developed before general relativity, or to improve general relativity itself. Many different strategies were attempted, for example the addition of spin to general relativity, combining a general relativity-like metric with a spacetime that is static with respect to the expansion of the universe, getting extra freedom by adding another parameter. At least one theory was motivated by the desire to develop an alternative to general relativity that is free of singularities. Experimental tests improved along with the theories. Many of the different strategies that were developed soon after general relativity were abandoned, and there was a push to develop more general forms of the theories that survived, so that a theory would be ready when any test showed a disagreement with general relativity. By the 1980s, the increasing accuracy of experimental tests had all confirmed general relativity; no competitors were left except for those that included general relativity as a special case. Further, shortly after that, theorists switched to string theory which was starting to look promising, but has since lost popularity. In the mid-1980s a few experiments were suggesting that gravity was being modified by the addition of a fifth force (or, in one case, of a fifth, sixth and seventh force) acting in the range of a few meters. Subsequent experiments eliminated these. Motivations for the more recent alternative theories are almost all cosmological, associated with or replacing such constructs as "inflation", "dark matter" and "dark energy". Investigation of the Pioneer anomaly has caused renewed public interest in alternatives to general relativity. Notation in this article. formula_0 is the speed of light, formula_1 is the gravitational constant. "Geometric variables" are not used. Latin indices go from 1 to 3, Greek indices go from 0 to 3. The Einstein summation convention is used. formula_2 is the Minkowski metric. formula_3 is a tensor, usually the metric tensor. These have signature (−,+,+,+). Partial differentiation is written formula_4 or formula_5. 
Covariant differentiation is written formula_6 or formula_7. General relativity. For comparison with alternatives, the formulas of General Relativity are: formula_8 formula_9 formula_10 which can also be written formula_11 The Einstein–Hilbert action for general relativity is: formula_12 where formula_13 is Newton's gravitational constant, formula_14 is the Ricci curvature of space, formula_15 and formula_16 is the action due to mass. General relativity is a tensor theory, the equations all contain tensors. Nordström's theories, on the other hand, are scalar theories because the gravitational field is a scalar. Other proposed alternatives include scalar–tensor theories that contain a scalar field in addition to the tensors of general relativity, and other variants containing vector fields as well have been developed recently. Classification of theories. Theories of gravity can be classified, loosely, into several categories. Most of the theories described here have: If a theory has a Lagrangian density for gravity, say formula_17, then the gravitational part of the action formula_18 is the integral of that: formula_19. In this equation it is usual, though not essential, to have formula_20 at spatial infinity when using Cartesian coordinates. For example, the Einstein–Hilbert action uses formula_21 where "R" is the scalar curvature, a measure of the curvature of space. Almost every theory described in this article has an action. It is the most efficient known way to guarantee that the necessary conservation laws of energy, momentum and angular momentum are incorporated automatically; although it is easy to construct an action where those conservation laws are violated. Canonical methods provide another way to construct systems that have the required conservation laws, but this approach is more cumbersome to implement. The original 1983 version of MOND did not have an action. A few theories have an action but not a Lagrangian density. A good example is Whitehead, the action there is termed non-local. A theory of gravity is a "metric theory" if and only if it can be given a mathematical representation in which two conditions hold: "Condition 1": There exists a symmetric metric tensor formula_22 of signature (−, +, +, +), which governs proper-length and proper-time measurements in the usual manner of special and general relativity: formula_23 where there is a summation over indices formula_24 and formula_25. "Condition 2": Stressed matter and fields being acted upon by gravity respond in accordance with the equation: formula_26 where formula_27 is the stress–energy tensor for all matter and non-gravitational fields, and where formula_28 is the covariant derivative with respect to the metric and formula_29 is the Christoffel symbol. The stress–energy tensor should also satisfy an energy condition. Metric theories include (from simplest to most complex): Non-metric theories include A word here about Mach's principle is appropriate because a few of these theories rely on Mach's principle (e.g. Whitehead), and many mention it in passing (e.g. Einstein–Grossmann, Brans–Dicke). Mach's principle can be thought of a half-way-house between Newton and Einstein. It goes this way: Theories from 1917 to the 1980s. At the time it was published in the 17th century, Isaac Newton's theory of gravity was the most accurate theory of gravity. Since then, a number of alternatives were proposed. 
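As a small numerical illustration of metric-theory Condition 1 above, the sketch below integrates proper time along a worldline for whatever metric function is supplied; with the Minkowski metric formula_2 it reduces to special-relativistic time dilation, and an alternative metric theory would simply plug in its own formula_3. The function names and the flat example worldline are ours.

```python
# Proper time along a worldline from a supplied metric of signature (-,+,+,+):
# dtau^2 = -(1/c^2) g_mu_nu dx^mu dx^nu, summed over small coordinate steps.
# The metric here is simply Minkowski; a metric theory of gravity would supply
# a position-dependent g_mu_nu instead.
import math

C = 299_792_458.0                        # speed of light in m/s

def minkowski(event):
    """eta_mu_nu, independent of the event (ct, x, y, z)."""
    return [[-1.0, 0.0, 0.0, 0.0],
            [0.0,  1.0, 0.0, 0.0],
            [0.0,  0.0, 1.0, 0.0],
            [0.0,  0.0, 0.0, 1.0]]

def proper_time(worldline, metric=minkowski):
    """worldline: list of events (ct, x, y, z); returns elapsed proper time in s."""
    tau = 0.0
    for a, b in zip(worldline, worldline[1:]):
        dx = [bi - ai for ai, bi in zip(a, b)]
        g = metric(a)
        ds2 = sum(g[m][n] * dx[m] * dx[n] for m in range(4) for n in range(4))
        tau += math.sqrt(max(-ds2, 0.0)) / C      # timelike steps have ds^2 < 0
    return tau

# One second of coordinate time while moving at 0.6 c along x:
events = [(0.0, 0.0, 0.0, 0.0), (C, 0.6 * C, 0.0, 0.0)]
print(proper_time(events))               # 0.8 s, i.e. time dilation at v = 0.6 c
```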
The theories which predate the formulation of general relativity in 1915 are discussed in history of gravitational theory. This section includes alternatives to general relativity published after general relativity but before the observations of galaxy rotation that led to the hypothesis of "dark matter". Those considered here include (see Will Lang): These theories are presented here without a cosmological constant or added scalar or vector potential unless specifically noted, for the simple reason that the need for one or both of these was not recognized before the supernova observations by the Supernova Cosmology Project and High-Z Supernova Search Team. How to add a cosmological constant or quintessence to a theory is discussed under Modern Theories (see also Einstein–Hilbert action). Scalar field theories. The scalar field theories of Nordström have already been discussed. Those of Littlewood, Bergman, Yilmaz, Whitrow and Morduch and Page and Tupper follow the general formula give by Page and Tupper. According to Page and Tupper, who discuss all these except Nordström, the general scalar field theory comes from the principle of least action: formula_30 where the scalar field is, formula_31 and c may or may not depend on formula_32. In Nordström, formula_33 In Littlewood and Bergmann, formula_34 In Whitrow and Morduch, formula_35 In Whitrow and Morduch, formula_36 In Page and Tupper, formula_37 Page and Tupper matches Yilmaz's theory to second order when formula_38. The gravitational deflection of light has to be zero when "c" is constant. Given that variable c and zero deflection of light are both in conflict with experiment, the prospect for a successful scalar theory of gravity looks very unlikely. Further, if the parameters of a scalar theory are adjusted so that the deflection of light is correct then the gravitational redshift is likely to be wrong. Ni summarized some theories and also created two more. In the first, a pre-existing special relativity space-time and universal time coordinate acts with matter and non-gravitational fields to generate a scalar field. This scalar field acts together with all the rest to generate the metric. The action is: formula_39 formula_40 Misner et al. gives this without the formula_41 term. formula_42 is the matter action. formula_43 t is the universal time coordinate. This theory is self-consistent and complete. But the motion of the solar system through the universe leads to serious disagreement with experiment. In the second theory of Ni there are two arbitrary functions formula_44 and formula_45 that are related to the metric by: formula_46 formula_47 Ni quotes Rosen as having two scalar fields formula_32 and formula_48 that are related to the metric by: formula_49 In Papapetrou the gravitational part of the Lagrangian is: formula_50 In Papapetrou there is a second scalar field formula_51. The gravitational part of the Lagrangian is now: formula_52 Bimetric theories. Bimetric theories contain both the normal tensor metric and the Minkowski metric (or a metric of constant curvature), and may contain other scalar or vector fields. Rosen (1975) bimetric theory The action is: formula_53 formula_54 Lightman–Lee developed a metric theory based on the non-metric theory of Belinfante and Swihart. The result is known as BSLL theory. 
Given a tensor field formula_55, formula_56, and two constants formula_57 and formula_58 the action is: formula_59 and the stress–energy tensor comes from: formula_60 In Rastall, the metric is an algebraic function of the Minkowski metric and a Vector field. The Action is: formula_61 where formula_62 and formula_63 (see Will for the field equation for formula_64 and formula_65). Quasilinear theories. In Whitehead, the physical metric formula_66 is constructed (by Synge) algebraically from the Minkowski metric formula_67 and matter variables, so it doesn't even have a scalar field. The construction is: formula_68 where the superscript (−) indicates quantities evaluated along the past formula_67 light cone of the field point formula_69 and formula_70 Nevertheless, the metric construction (from a non-metric theory) using the "length contraction" ansatz is criticised. Deser and Laurent and Bollini–Giambiagi–Tiomno are Linear Fixed Gauge theories. Taking an approach from quantum field theory, combine a Minkowski spacetime with the gauge invariant action of a spin-two tensor field (i.e. graviton) formula_71 to define formula_72 The action is: formula_73 The Bianchi identity associated with this partial gauge invariance is wrong. Linear Fixed Gauge theories seek to remedy this by breaking the gauge invariance of the gravitational action through the introduction of auxiliary gravitational fields that couple to formula_71. A cosmological constant can be introduced into a quasilinear theory by the simple expedient of changing the Minkowski background to a de Sitter or anti-de Sitter spacetime, as suggested by G. Temple in 1923. Temple's suggestions on how to do this were criticized by C. B. Rayner in 1955. Tensor theories. Einstein's general relativity is the simplest plausible theory of gravity that can be based on just one symmetric tensor field (the metric tensor). Others include: Starobinsky (R+R^2) gravity, Gauss–Bonnet gravity, f(R) gravity, and Lovelock theory of gravity. Starobinsky. Starobinsky gravity, proposed by Alexei Starobinsky has the Lagrangian formula_74 and has been used to explain inflation, in the form of Starobinsky inflation. Here formula_75 is a constant. Gauss–Bonnet. Gauss–Bonnet gravity has the action formula_76 where the coefficients of the extra terms are chosen so that the action reduces to general relativity in 4 spacetime dimensions and the extra terms are only non-trivial when more dimensions are introduced. Stelle's 4th derivative gravity. Stelle's 4th derivative gravity, which is a generalization of Gauss–Bonnet gravity, has the action formula_77 f(R). f(R) gravity has the action formula_78 and is a family of theories, each defined by a different function of the Ricci scalar. Starobinsky gravity is actually an formula_79 theory. Infinite derivative gravity. Infinite derivative gravity is a covariant theory of gravity, quadratic in curvature, torsion free and parity invariant, formula_80 and formula_81 in order to make sure that only massless spin −2 and spin −0 components propagate in the graviton propagator around Minkowski background. The action becomes non-local beyond the scale formula_82, and recovers to general relativity in the infrared, for energies below the non-local scale formula_82. 
In the ultraviolet regime, at distances and time scales below non-local scale, formula_83, the gravitational interaction weakens enough to resolve point-like singularity, which means Schwarzschild's singularity can be potentially resolved in infinite derivative theories of gravity. Lovelock. Lovelock gravity has the action formula_84 and can be thought of as a generalization of general relativity. Scalar–tensor theories. These all contain at least one free parameter, as opposed to general relativity which has no free parameters. Although not normally considered a Scalar–Tensor theory of gravity, the 5 by 5 metric of Kaluza–Klein reduces to a 4 by 4 metric and a single scalar. So if the 5th element is treated as a scalar gravitational field instead of an electromagnetic field then Kaluza–Klein can be considered the progenitor of Scalar–Tensor theories of gravity. This was recognized by Thiry. Scalar–Tensor theories include Thiry, Jordan, Brans and Dicke, Bergman, Nordtveldt (1970), Wagoner, Bekenstein and Barker. The action formula_85 is based on the integral of the Lagrangian formula_86. formula_87 formula_88 formula_89 formula_90 where formula_91 is a different dimensionless function for each different scalar–tensor theory. The function formula_92 plays the same role as the cosmological constant in general relativity. formula_93 is a dimensionless normalization constant that fixes the present-day value of formula_1. An arbitrary potential can be added for the scalar. The full version is retained in Bergman and Wagoner. Special cases are: Nordtvedt, formula_94 Since formula_95 was thought to be zero at the time anyway, this would not have been considered a significant difference. The role of the cosmological constant in more modern work is discussed under Cosmological constant. Brans–Dicke, formula_96 is constant Bekenstein variable mass theory Starting with parameters formula_97 and formula_98, found from a cosmological solution, formula_99 determines function formula_100 then formula_101 Barker constant G theory formula_102 Adjustment of formula_91 allows Scalar Tensor Theories to tend to general relativity in the limit of formula_103 in the current epoch. However, there could be significant differences from general relativity in the early universe. So long as general relativity is confirmed by experiment, general Scalar–Tensor theories (including Brans–Dicke) can never be ruled out entirely, but as experiments continue to confirm general relativity more precisely and the parameters have to be fine-tuned so that the predictions more closely match those of general relativity. The above examples are particular cases of Horndeski's theory, the most general Lagrangian constructed out of the metric tensor and a scalar field leading to second order equations of motion in 4-dimensional space. Viable theories beyond Horndeski (with higher order equations of motion) have been shown to exist. Vector–tensor theories. Before we start, Will (2001) has said: "Many alternative metric theories developed during the 1970s and 1980s could be viewed as "straw-man" theories, invented to prove that such theories exist or to illustrate particular properties. Few of these could be regarded as well-motivated theories from the point of view, say, of field theory or particle physics. Examples are the vector–tensor theories studied by Will, Nordtvedt and Hellings." Hellings and Nordtvedt and Will and Nordtvedt are both vector–tensor theories. 
In addition to the metric tensor there is a timelike vector field formula_104 The gravitational action is: formula_105 where formula_106 are constants and formula_107 (See Will for the field equations for formula_108 and formula_104) Will and Nordtvedt is a special case where formula_109 Hellings and Nordtvedt is a special case where formula_110 These vector–tensor theories are semi-conservative, which means that they satisfy the laws of conservation of momentum and angular momentum but can have preferred frame effects. When formula_111 they reduce to general relativity so, so long as general relativity is confirmed by experiment, general vector–tensor theories can never be ruled out. Other metric theories. Others metric theories have been proposed; that of Bekenstein is discussed under Modern Theories. Non-metric theories. Cartan's theory is particularly interesting both because it is a non-metric theory and because it is so old. The status of Cartan's theory is uncertain. Will claims that all non-metric theories are eliminated by Einstein's Equivalence Principle. Will (2001) tempers that by explaining experimental criteria for testing non-metric theories against Einstein's Equivalence Principle. Misner et al. claims that Cartan's theory is the only non-metric theory to survive all experimental tests up to that date and Turyshev lists Cartan's theory among the few that have survived all experimental tests up to that date. The following is a quick sketch of Cartan's theory as restated by Trautman. Cartan suggested a simple generalization of Einstein's theory of gravitation. He proposed a model of space time with a metric tensor and a linear "connection" compatible with the metric but not necessarily symmetric. The torsion tensor of the connection is related to the density of intrinsic angular momentum. Independently of Cartan, similar ideas were put forward by Sciama, by Kibble in the years 1958 to 1966, culminating in a 1976 review by Hehl et al. The original description is in terms of differential forms, but for the present article that is replaced by the more familiar language of tensors (risking loss of accuracy). As in general relativity, the Lagrangian is made up of a massless and a mass part. The Lagrangian for the massless part is: formula_112 The formula_113 is the linear connection. formula_114 is the completely antisymmetric pseudo-tensor (Levi-Civita symbol) with formula_115, and formula_116 is the metric tensor as usual. By assuming that the linear connection is metric, it is possible to remove the unwanted freedom inherent in the non-metric theory. The stress–energy tensor is calculated from: formula_117 The space curvature is not Riemannian, but on a Riemannian space-time the Lagrangian would reduce to the Lagrangian of general relativity. Some equations of the non-metric theory of Belinfante and Swihart have already been discussed in the section on bimetric theories. A distinctively non-metric theory is given by gauge theory gravity, which replaces the metric in its field equations with a pair of gauge fields in flat spacetime. On the one hand, the theory is quite conservative because it is substantially equivalent to Einstein–Cartan theory (or general relativity in the limit of vanishing spin), differing mostly in the nature of its global solutions. On the other hand, it is radical because it replaces differential geometry with geometric algebra. Modern theories 1980s to present. 
This section includes alternatives to general relativity published after the observations of galaxy rotation that led to the hypothesis of "dark matter". There is no known reliable list of comparison of these theories. Those considered here include: Bekenstein, Moffat, Moffat, Moffat. These theories are presented with a cosmological constant or added scalar or vector potential. Motivations. Motivations for the more recent alternatives to general relativity are almost all cosmological, associated with or replacing such constructs as "inflation", "dark matter" and "dark energy". The basic idea is that gravity agrees with general relativity at the present epoch but may have been quite different in the early universe. In the 1980s, there was a slowly dawning realisation in the physics world that there were several problems inherent in the then-current big-bang scenario, including the horizon problem and the observation that at early times when quarks were first forming there was not enough space on the universe to contain even one quark. Inflation theory was developed to overcome these difficulties. Another alternative was constructing an alternative to general relativity in which the speed of light was higher in the early universe. The discovery of unexpected rotation curves for galaxies took everyone by surprise. Could there be more mass in the universe than we are aware of, or is the theory of gravity itself wrong? The consensus now is that the missing mass is "cold dark matter", but that consensus was only reached after trying alternatives to general relativity, and some physicists still believe that alternative models of gravity may hold the answer. In the 1990s, supernova surveys discovered the accelerated expansion of the universe, now usually attributed to dark energy. This led to the rapid reinstatement of Einstein's cosmological constant, and quintessence arrived as an alternative to the cosmological constant. At least one new alternative to general relativity attempted to explain the supernova surveys' results in a completely different way. The measurement of the speed of gravity with the gravitational wave event GW170817 ruled out many alternative theories of gravity as explanations for the accelerated expansion. Another observation that sparked recent interest in alternatives to General Relativity is the Pioneer anomaly. It was quickly discovered that alternatives to general relativity could explain this anomaly. This is now believed to be accounted for by non-uniform thermal radiation. Cosmological constant and quintessence. The cosmological constant formula_118 is a very old idea, going back to Einstein in 1917. The success of the Friedmann model of the universe in which formula_119 led to the general acceptance that it is zero, but the use of a non-zero value came back when data from supernovae indicated that the expansion of the universe is accelerating. In Newtonian gravity, the addition of the cosmological constant changes the Newton–Poisson equation from: formula_120 to formula_121 In general relativity, it changes the Einstein–Hilbert action from formula_122 to formula_123 which changes the field equation from: formula_124 to: formula_125 In alternative theories of gravity, a cosmological constant can be added to the action in the same way. More generally a scalar potential formula_92 can be added to scalar tensor theories. 
This can be done in every alternative the general relativity that contains a scalar field formula_126 by adding the term formula_92 inside the Lagrangian for the gravitational part of the action, the formula_86 part of formula_127 Because formula_92 is an arbitrary function of the scalar field rather than a constant, it can be set to give an acceleration that is large in the early universe and small at the present epoch. This is known as quintessence. A similar method can be used in alternatives to general relativity that use vector fields, including Rastall and vector–tensor theories. A term proportional to formula_128 is added to the Lagrangian for the gravitational part of the action. Farnes' theories. In December 2018, the astrophysicist Jamie Farnes from the University of Oxford proposed a dark fluid theory, related to notions of gravitationally repulsive negative masses that were presented earlier by Albert Einstein. The theory may help to better understand the considerable amounts of unknown dark matter and dark energy in the universe. The theory relies on the concept of negative mass and reintroduces Fred Hoyle's creation tensor in order to allow matter creation for only negative mass particles. In this way, the negative mass particles surround galaxies and apply a pressure onto them, thereby resembling dark matter. As these hypothesised particles mutually repel one another, they push apart the Universe, thereby resembling dark energy. The creation of matter allows the density of the exotic negative mass particles to remain constant as a function of time, and so appears like a cosmological constant. Einstein's field equations are modified to: formula_129 According to Occam's razor, Farnes' theory is a simpler alternative to the conventional LambdaCDM model, as both dark energy and dark matter (two hypotheses) are solved using a single negative mass fluid (one hypothesis). The theory will be directly testable using the world's largest radio telescope, the Square Kilometre Array which should come online in 2022. Relativistic MOND. The original theory of MOND by Milgrom was developed in 1983 as an alternative to "dark matter". Departures from Newton's law of gravitation are governed by an acceleration scale, not a distance scale. MOND successfully explains the Tully–Fisher observation that the luminosity of a galaxy should scale as the fourth power of the rotation speed. It also explains why the rotation discrepancy in dwarf galaxies is particularly large. There were several problems with MOND in the beginning. By 1984, problems 2 and 3 had been solved by introducing a Lagrangian (AQUAL). A relativistic version of this based on scalar–tensor theory was rejected because it allowed waves in the scalar field to propagate faster than light. The Lagrangian of the non-relativistic form is: formula_130 The relativistic version of this has: formula_131 with a nonstandard mass action. Here formula_132 and formula_133 are arbitrary functions selected to give Newtonian and MOND behaviour in the correct limits, and formula_134 is the MOND length scale. By 1988, a second scalar field (PCC) fixed problems with the earlier scalar–tensor version but is in conflict with the perihelion precession of Mercury and gravitational lensing by galaxies and clusters. By 1997, MOND had been successfully incorporated in a stratified relativistic theory [Sanders], but as this is a preferred frame theory it has problems of its own. Bekenstein introduced a tensor–vector–scalar model (TeVeS). 
This has two scalar fields formula_32 and formula_135 and vector field formula_136. The action is split into parts for gravity, scalars, vector and mass. formula_137 The gravity part is the same as in general relativity. formula_138 where formula_139 formula_140 formula_141 are constants, square brackets in indices formula_142 represent anti-symmetrization, formula_95 is a Lagrange multiplier (calculated elsewhere), and L is a Lagrangian translated from flat spacetime onto the metric formula_143. Note that G need not equal the observed gravitational constant formula_144. F is an arbitrary function, and formula_145 is given as an example with the right asymptotic behaviour; note how it becomes undefined when formula_146 The Parametric post-Newtonian parameters of this theory are calculated in, which shows that all its parameters are equal to general relativity's, except for formula_147 both of which expressed in geometric units where formula_148; so formula_149 Moffat's theories. J. W. Moffat developed a non-symmetric gravitation theory. This is not a metric theory. It was first claimed that it does not contain a black hole horizon, but Burko and Ori have found that nonsymmetric gravitational theory can contain black holes. Later, Moffat claimed that it has also been applied to explain rotation curves of galaxies without invoking "dark matter". Damour, Deser & MaCarthy have criticised nonsymmetric gravitational theory, saying that it has unacceptable asymptotic behaviour. The mathematics is not difficult but is intertwined so the following is only a brief sketch. Starting with a non-symmetric tensor formula_3, the Lagrangian density is split into formula_150 where formula_151 is the same as for matter in general relativity. formula_152 where formula_153 is a curvature term analogous to but not equal to the Ricci curvature in general relativity, formula_154 and formula_155 are cosmological constants, formula_156 is the antisymmetric part of formula_157. formula_158 is a connection, and is a bit difficult to explain because it's defined recursively. However, formula_159 Haugan and Kauffmann used polarization measurements of the light emitted by galaxies to impose sharp constraints on the magnitude of some of nonsymmetric gravitational theory's parameters. They also used Hughes-Drever experiments to constrain the remaining degrees of freedom. Their constraint is eight orders of magnitude sharper than previous estimates. Moffat's metric-skew-tensor-gravity (MSTG) theory is able to predict rotation curves for galaxies without either dark matter or MOND, and claims that it can also explain gravitational lensing of galaxy clusters without dark matter. It has variable formula_1, increasing to a final constant value about a million years after the big bang. The theory seems to contain an asymmetric tensor formula_160 field and a source current formula_161 vector. The action is split into: formula_162 Both the gravity and mass terms match those of general relativity with cosmological constant. The skew field action and the skew field matter coupling are: formula_163 formula_164 where formula_165 and formula_166 is the Levi-Civita symbol. The skew field coupling is a Pauli coupling and is gauge invariant for any source current. The source current looks like a matter fermion field associated with baryon and lepton number. Scalar–tensor–vector gravity. Moffat's Scalar–tensor–vector gravity contains a tensor, vector and three scalar fields. But the equations are quite straightforward. 
The action is split into: formula_167 with terms for gravity, the vector field formula_168 the scalar fields formula_169 and mass. formula_170 is the standard gravity term, with the exception that formula_171 is moved inside the integral. formula_172 formula_173 The potential function for the vector field is chosen to be: formula_174 where formula_175 is a coupling constant. The functions assumed for the scalar potentials are not stated. Infinite derivative gravity. In order to remove ghosts in the modified propagator, as well as to obtain asymptotic freedom, Biswas, Mazumdar and Siegel (2005) considered a string-inspired infinite set of higher derivative terms formula_176 where formula_177 is the exponential of an entire function of the D'Alembertian operator. This avoids a black hole singularity near the origin, while recovering the 1/r fall-off of the general relativity potential at large distances. Lousto and Mazzitelli (1997) found an exact solution of these theories representing a gravitational shock wave. General relativity self-interaction (GRSI). The General Relativity Self-interaction or GRSI model is an attempt to explain astrophysical and cosmological observations without dark matter or dark energy by adding self-interaction terms when calculating the gravitational effects in general relativity, analogous to the self-interaction terms in quantum chromodynamics. Additionally, the model explains the Tully–Fisher relation and the radial acceleration relation, observations that are currently challenging to understand within Lambda-CDM. Testing of alternatives to general relativity. Any putative alternative to general relativity would need to meet a variety of tests for it to become accepted. For in-depth coverage of these tests, see Misner et al. Ch.39, Will Table 2.1, and Ni. Most such tests can be categorized as in the following subsections. Self-consistency. Self-consistency among non-metric theories includes eliminating theories allowing tachyons, ghost poles and higher order poles, and those that have problems with behaviour at infinity. Among metric theories, self-consistency is best illustrated by describing several theories that fail this test. The classic example is the spin-two field theory of Fierz and Pauli; the field equations imply that gravitating bodies move in straight lines, whereas the equations of motion insist that gravity deflects bodies away from straight-line motion. Yilmaz (1971) contains a tensor gravitational field used to construct a metric; it is mathematically inconsistent because the functional dependence of the metric on the tensor field is not well defined. Completeness. To be complete, a theory of gravity must be capable of analysing the outcome of every experiment of interest. It must therefore mesh with electromagnetism and all other physics. For instance, any theory that cannot predict from first principles the movement of planets or the behaviour of atomic clocks is incomplete. Many early theories are incomplete in that it is unclear whether the density formula_178 used by the theory should be calculated from the stress–energy tensor formula_179 as formula_180 or as formula_181, where formula_182 is the four-velocity, and formula_183 is the Kronecker delta. The theories of Thirry (1948) and Jordan are incomplete unless Jordan's parameter formula_67 is set to -1, in which case they match the theory of Brans–Dicke and so are worthy of further consideration. Milne is incomplete because it makes no gravitational red-shift prediction.
The theories of Whitrow and Morduch, Kustaanheimo, and Kustaanheimo and Nuotio are either incomplete or inconsistent. The incorporation of Maxwell's equations is incomplete unless it is assumed that they are imposed on the flat background space-time, and when that is done they are inconsistent, because they predict zero gravitational redshift when the wave version of light (Maxwell theory) is used, and nonzero redshift when the particle version (photon) is used. Another more obvious example is Newtonian gravity with Maxwell's equations; light as photons is deflected by gravitational fields (by half that of general relativity) but light as waves is not. Classical tests. There are three "classical" tests (dating back to the 1910s or earlier) of the ability of gravity theories to handle relativistic effects; they are gravitational redshift, gravitational lensing (generally tested around the Sun), and anomalous perihelion advance of the planets. Each theory should reproduce the observed results in these areas, which have to date always aligned with the predictions of general relativity. In 1964, Irwin I. Shapiro found a fourth test, called the Shapiro delay. It is usually regarded as a "classical" test as well. Agreement with Newtonian mechanics and special relativity. As an example of disagreement with Newtonian experiments, Birkhoff's theory predicts relativistic effects fairly reliably but demands that sound waves travel at the speed of light. This was the consequence of an assumption made to simplify handling the collision of masses. The Einstein equivalence principle. Einstein's Equivalence Principle has three components. The first is the uniqueness of free fall, also known as the Weak Equivalence Principle. This is satisfied if inertial mass is equal to gravitational mass. "η" is a parameter used to test the maximum allowable violation of the Weak Equivalence Principle. The first tests of the Weak Equivalence Principle were done by Eötvös before 1900 and limited "η" to less than 5×10^-9. Modern tests have reduced that to less than 5×10^-13. The second is Lorentz invariance. In the absence of gravitational effects the speed of light is constant. The test parameter for this is "δ". The first tests of Lorentz invariance were done by Michelson and Morley before 1890 and limited "δ" to less than 5×10^-3. Modern tests have reduced this to less than 1×10^-21. The third is local position invariance, which includes spatial and temporal invariance. The outcome of any local non-gravitational experiment is independent of where and when it is performed. Spatial local position invariance is tested using gravitational redshift measurements. The test parameter for this is "α". Upper limits on this found by Pound and Rebka in 1960 limited "α" to less than 0.1. Modern tests have reduced this to less than 1×10^-4. Schiff's conjecture states that any complete, self-consistent theory of gravity that embodies the Weak Equivalence Principle necessarily embodies Einstein's Equivalence Principle. This is likely to be true if the theory has full energy conservation. Metric theories satisfy the Einstein Equivalence Principle. Extremely few non-metric theories satisfy this. For example, the non-metric theory of Belinfante & Swihart is eliminated by the "THεμ" formalism for testing Einstein's Equivalence Principle. Gauge theory gravity is a notable exception, where the strong equivalence principle is essentially the minimal coupling of the gauge covariant derivative. Parametric post-Newtonian formalism.
See also Tests of general relativity, Misner et al. and Will for more information. Work on developing a standardized rather than ad hoc set of tests for evaluating alternative gravitation models began with Eddington in 1922 and resulted in a standard set of Parametric post-Newtonian numbers in Nordtvedt and Will and Will and Nordtvedt. Each parameter measures a different aspect of how much a theory departs from Newtonian gravity. Because these parameters describe deviations from Newtonian theory, they only measure weak-field effects. The effects of strong gravitational fields are examined later. These ten are: formula_184 Strong gravity and gravitational waves. Parametric post-Newtonian analysis is only a measure of weak-field effects. Strong gravity effects can be seen in compact objects such as white dwarfs, neutron stars, and black holes. Experimental tests such as the stability of white dwarfs, the spin-down rate of pulsars, the orbits of binary pulsars and the existence of a black hole horizon can be used as tests of alternatives to general relativity. General relativity predicts that gravitational waves travel at the speed of light. Many alternatives to general relativity say that gravitational waves travel faster than light, possibly breaking causality. After the multi-messenger detection of the GW170817 coalescence of neutron stars, where light and gravitational waves were measured to travel at the same speed to within one part in 10^15, many of those modified theories of gravity were excluded. Cosmological tests. Useful cosmological scale tests are just beginning to become available. Given the limited astronomical data and the complexity of the theories, comparisons involve complex parameters. For example, Reyes et al. analyzed 70,205 luminous red galaxies with a cross-correlation involving galaxy velocity estimates and gravitational potentials estimated from lensing, and yet the results are still tentative. For those theories that aim to replace dark matter, observations like the galaxy rotation curve, the Tully–Fisher relation, the faster rotation rate of dwarf galaxies, and the gravitational lensing due to galactic clusters act as constraints. For those theories that aim to replace inflation, the size of ripples in the spectrum of the cosmic microwave background radiation is the strictest test. For those theories that incorporate or aim to replace dark energy, the supernova brightness results and the age of the universe can be used as tests. Another test is the flatness of the universe. With general relativity, the combination of baryonic matter, dark matter and dark energy adds up to make the universe exactly flat. Results of testing theories. Parametric post-Newtonian parameters for a range of theories. General Relativity is now more than 100 years old, during which time one alternative theory of gravity after another has failed to agree with ever more accurate observations. One illustrative example is the Parameterized post-Newtonian formalism. The following table lists Parametric post-Newtonian values for a large number of theories. If the value in a cell matches that in the column heading then the full formula is too complicated to include here. † The theory is incomplete, and formula_191 can take one of two values. The value closest to zero is listed. All experimental tests agree with general relativity so far, and so Parametric post-Newtonian analysis immediately eliminates all the scalar field theories in the table.
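As a concrete illustration of how such a parametric post-Newtonian comparison works, the short Python sketch below tabulates general relativity's values (γ = β = 1, the remaining eight parameters zero) and flags the parameters of a candidate theory whose deviation from these values exceeds a quoted bound. The bound values, the helper function and the Brans–Dicke-style example are illustrative assumptions, not authoritative figures.

```python
# General relativity's parametric post-Newtonian values: gamma = beta = 1,
# while eta, alpha_1..3 and zeta_1..4 all vanish.
GR_PPN = {
    "gamma": 1.0, "beta": 1.0, "eta": 0.0,
    "alpha_1": 0.0, "alpha_2": 0.0, "alpha_3": 0.0,
    "zeta_1": 0.0, "zeta_2": 0.0, "zeta_3": 0.0, "zeta_4": 0.0,
}

# Illustrative (not authoritative) observational bounds on deviations from GR.
BOUNDS = {"gamma": 2.3e-5, "beta": 1e-4, "alpha_1": 1e-4, "alpha_2": 1e-7}

def ruled_out(theory_ppn, bounds=BOUNDS, gr=GR_PPN):
    """Return the parameters whose deviation from GR exceeds the quoted bound."""
    return [name for name, limit in bounds.items()
            if abs(theory_ppn.get(name, gr[name]) - gr[name]) > limit]

# A schematic scalar-tensor (Brans-Dicke-like) theory has gamma = (1 + w)/(2 + w),
# so any modest coupling constant w is immediately excluded by the gamma bound.
w = 5.0
scalar_tensor = dict(GR_PPN, gamma=(1 + w) / (2 + w))
print(ruled_out(scalar_tensor))  # ['gamma']
```

The same pattern, applied with the full set of ten parameters and published bounds, is how tabulated theory values are confronted with experiment.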
A full list of Parametric post-Newtonian parameters is not available for Whitehead, Deser–Laurent and Bollini–Giambiagi–Tiomino, but in these three cases formula_192, which is in strong conflict with general relativity and experimental results. In particular, these theories predict incorrect amplitudes for the Earth's tides. (A minor modification of Whitehead's theory avoids this problem. However, the modification predicts the Nordtvedt effect, which has been experimentally constrained.) Theories that fail other tests. The stratified theories of Ni and of Lee, Lightman and Ni are non-starters because they all fail to explain the perihelion advance of Mercury. The bimetric theories of Lightman and Lee, Rosen, and Rastall all fail some of the tests associated with strong gravitational fields. The scalar–tensor theories include general relativity as a special case, but only agree with the Parametric post-Newtonian values of general relativity when they are equal to general relativity to within experimental error. As experimental tests get more accurate, the deviation of the scalar–tensor theories from general relativity is being squashed to zero. The same is true of vector–tensor theories: the deviation of the vector–tensor theories from general relativity is being squashed to zero. Further, vector–tensor theories are semi-conservative; they have a nonzero value for formula_190, which can have a measurable effect on the Earth's tides. Non-metric theories, such as that of Belinfante and Swihart, usually fail to agree with experimental tests of Einstein's equivalence principle. And that leaves, as a likely valid alternative to general relativity, nothing except possibly Cartan. That was the situation until cosmological discoveries pushed the development of modern alternatives. References. External links.
[ { "math_id": 0, "text": "c\\;" }, { "math_id": 1, "text": "G\\;" }, { "math_id": 2, "text": "\\eta_{\\mu\\nu}\\;" }, { "math_id": 3, "text": "g_{\\mu\\nu}\\;" }, { "math_id": 4, "text": "\\partial_\\mu \\varphi\\;" }, { "math_id": 5, "text": "\\varphi_{,\\mu}\\;" }, { "math_id": 6, "text": "\\nabla_\\mu \\varphi\\;" }, { "math_id": 7, "text": "\\varphi_{;\\mu}\\;" }, { "math_id": 8, "text": "\\delta \\int ds = 0 \\," }, { "math_id": 9, "text": "{ds}^2 = g_{\\mu \\nu} \\, dx^\\mu \\, dx^\\nu \\," }, { "math_id": 10, "text": "R_{\\mu\\nu} = \\frac{8 \\pi G}{c^4} \\left( T_{\\mu \\nu} - \\frac {1}{2} g_{\\mu \\nu}T \\right) \\," }, { "math_id": 11, "text": "T^{\\mu\\nu} = {c^4 \\over 8 \\pi G} \\left( R^{\\mu \\nu}-\\frac {1}{2} g^{\\mu \\nu} R \\right) \\,." }, { "math_id": 12, "text": "S = {c^4 \\over 16 \\pi G} \\int R \\sqrt{-g} \\ d^4 x + S_m \\," }, { "math_id": 13, "text": "G \\," }, { "math_id": 14, "text": "R = R_{\\mu}^{~\\mu} \\," }, { "math_id": 15, "text": "g = \\det ( g_{\\mu \\nu} ) \\," }, { "math_id": 16, "text": "S_m \\," }, { "math_id": 17, "text": "L\\," }, { "math_id": 18, "text": "S\\," }, { "math_id": 19, "text": "S = \\int L \\sqrt{-g} \\, \\mathrm{d}^4x " }, { "math_id": 20, "text": "g = -1\\," }, { "math_id": 21, "text": "L\\,\\propto\\, R " }, { "math_id": 22, "text": "g_{\\mu\\nu}\\," }, { "math_id": 23, "text": "{d\\tau}^2 = - g_{\\mu \\nu} \\, dx^\\mu \\, dx^\\nu \\," }, { "math_id": 24, "text": "\\mu" }, { "math_id": 25, "text": "\\nu" }, { "math_id": 26, "text": "0 = \\nabla_\\nu T^{\\mu \\nu} = {T^{\\mu \\nu}}_{,\\nu} + \\Gamma^{\\mu}_{\\sigma \\nu} T^{\\sigma \\nu} + \\Gamma^{\\nu}_{\\sigma \\nu} T^{\\mu \\sigma} \\," }, { "math_id": 27, "text": "T^{\\mu \\nu} \\," }, { "math_id": 28, "text": "\\nabla_{\\nu}" }, { "math_id": 29, "text": "\\Gamma^{\\alpha}_{\\sigma \\nu} \\," }, { "math_id": 30, "text": "\\delta\\int f \\left(\\tfrac{\\varphi}{c^2} \\right) \\, ds=0" }, { "math_id": 31, "text": "\\varphi = \\frac{GM} r" }, { "math_id": 32, "text": "\\varphi" }, { "math_id": 33, "text": "f(\\varphi/c^2)=\\exp(-\\varphi/c^2), \\qquad c=c_\\infty" }, { "math_id": 34, "text": "f\\left( \\frac \\varphi {c^2} \\right) = \\exp\\left(-\\frac{\\varphi}{c^2} - \\frac{(c/\\varphi^2)^2} 2 \\right) \\qquad c=c_\\infty\\," }, { "math_id": 35, "text": "f\\left(\\frac \\varphi {c^2} \\right) = 1, \\qquad c^2=c_\\infty^2-2\\varphi\\," }, { "math_id": 36, "text": "f\\left( \\frac \\varphi {c^2} \\right)=\\exp\\left(-\\frac \\varphi {c^2} \\right), \\qquad c^2=c_\\infty^2-2\\varphi\\," }, { "math_id": 37, "text": "f\\left( \\frac \\varphi {c^2} \\right) = \\frac \\varphi {c^2} + \\alpha\\left( \\frac \\varphi {c^2} \\right)^2, \\qquad \\frac{c_\\infty^2}{c^2} = 1+ 4 \\left( \\frac \\varphi {c_\\infty^2} \\right) + (15+2\\alpha) \\left( \\frac \\varphi {c_\\infty^2} \\right)^2" }, { "math_id": 38, "text": "\\alpha=-7/2" }, { "math_id": 39, "text": "S={1\\over 16\\pi G}\\int d^4 x \\sqrt{-g}L_\\varphi+S_m" }, { "math_id": 40, "text": "L_\\varphi=\\varphi R-2g^{\\mu\\nu} \\, \\partial_\\mu\\varphi \\, \\partial_\\nu\\varphi" }, { "math_id": 41, "text": "\\varphi R" }, { "math_id": 42, "text": "S_m" }, { "math_id": 43, "text": "\\Box\\varphi=4\\pi T^{\\mu\\nu} \\left [\\eta_{\\mu\\nu}e^{-2\\varphi}+ \\left (e^{2\\varphi}+e^{-2\\varphi} \\right ) \\, \\partial_\\mu t \\, \\partial_\\nu t \\right ]" }, { "math_id": 44, "text": "f(\\varphi)" }, { "math_id": 45, "text": "k(\\varphi)" }, { "math_id": 46, "text": "ds^2=e^{-2f(\\varphi)}dt^2-e^{2f(\\varphi)} \\left [dx^2+dy^2+dz^2 
\\right ]" }, { "math_id": 47, "text": "\\eta^{\\mu\\nu}\\partial_\\mu\\partial_\\nu\\varphi=4\\pi\\rho^*k(\\varphi)" }, { "math_id": 48, "text": "\\psi" }, { "math_id": 49, "text": "ds^2=\\varphi^2 \\, dt^2-\\psi^2 \\left [dx^2+dy^2+dz^2 \\right ]" }, { "math_id": 50, "text": "L_\\varphi=e^\\varphi \\left(\\tfrac{1}{2} e^{-\\varphi} \\, \\partial_\\alpha \\varphi \\, \\partial_\\alpha\\varphi + \\tfrac{3}{2} e^{\\varphi} \\, \\partial_0\\varphi \\, \\partial_0\\varphi \\right )" }, { "math_id": 51, "text": "\\chi" }, { "math_id": 52, "text": "L_\\varphi=e^{\\frac{1}{2}(3\\varphi+\\chi)} \\left (-\\tfrac{1}{2} e^{-\\varphi} \\, \\partial_\\alpha \\varphi \\, \\partial_\\alpha\\varphi -e^{-\\varphi} \\, \\partial_\\alpha\\varphi \\, \\partial_\\chi\\varphi + \\tfrac{3}{2} e^{-\\chi} \\, \\partial_0 \\varphi \\, \\partial_0\\varphi \\right )\\," }, { "math_id": 53, "text": "S={1\\over 64\\pi G} \\int d^4 x \\, \\sqrt{-\\eta}\\eta^{\\mu\\nu}g^{\\alpha\\beta}g^{\\gamma\\delta} (g_{\\alpha\\gamma |\\mu} g_{\\alpha\\delta |\\nu} -\\textstyle\\frac{1}{2}g_{\\alpha\\beta |\\mu}g_{\\gamma\\delta |\\nu})+S_m" }, { "math_id": 54, "text": "\\Box_\\eta g_{\\mu\\nu}-g^{\\alpha\\beta}\\eta^{\\gamma\\delta}g_{\\mu\\alpha |\\gamma}g_{\\nu\\beta |\\delta}=-16\\pi G\\sqrt{g/\\eta}(T_{\\mu\\nu}-\\textstyle\\frac{1}{2}g_{\\mu\\nu} T)\\," }, { "math_id": 55, "text": "B_{\\mu\\nu}\\," }, { "math_id": 56, "text": "B=B_{\\mu\\nu}\\eta^{\\mu\\nu}\\," }, { "math_id": 57, "text": "a\\," }, { "math_id": 58, "text": "f\\," }, { "math_id": 59, "text": "S={1\\over 16\\pi G}\\int d^4 x\\sqrt{-\\eta}(aB^{\\mu\\nu|\\alpha}B_{\\mu\\nu|\\alpha} + fB_{,\\alpha} B^{,\\alpha}) + S_m" }, { "math_id": 60, "text": "a\\Box_\\eta B^{\\mu\\nu}+f\\eta^{\\mu\\nu}\\Box_\\eta B=-4\\pi G\\sqrt{g/\\eta} \\, T^{\\alpha\\beta} \\left( \\frac{\\partial g_{\\alpha\\beta}}{\\partial B_\\mu\\nu} \\right)" }, { "math_id": 61, "text": "S={1\\over 16\\pi G}\\int d^4 x \\, \\sqrt{-g} F(N)K^{\\mu;\\nu}K_{\\mu;\\nu}+S_m" }, { "math_id": 62, "text": "F(N)=- \\frac N {2+N} " }, { "math_id": 63, "text": "N=g^{\\mu\\nu} K_\\mu K_\\nu\\;" }, { "math_id": 64, "text": "T^{\\mu\\nu}\\;" }, { "math_id": 65, "text": "K_\\mu\\;" }, { "math_id": 66, "text": "g\\;" }, { "math_id": 67, "text": "\\eta\\;" }, { "math_id": 68, "text": "g_{\\mu\\nu}(x^\\alpha) = \\eta_{\\mu\\nu}-2\\int_{\\Sigma^-}{y_\\mu^- y_\\nu^-\\over(w^-)^3} \\left[ \\sqrt{-g}\\rho u^\\alpha \\, d\\Sigma_\\alpha \\right]^-" }, { "math_id": 69, "text": "x^\\alpha\\;" }, { "math_id": 70, "text": "\n\\begin{align}\n(y^\\mu)^-& =x^\\mu-(x^\\mu)^-, \\qquad (y^\\mu)^-(y_\\mu)^-=0,\\\\[5pt]\nw^- & =(y^\\mu)^-(u_\\mu)^-, \\qquad (u_\\mu) = \\frac{dx^\\mu}{d\\sigma}, \\\\[5pt]\nd\\sigma^2 & =\\eta_{\\mu\\nu} \\, dx^\\mu \\, dx^\\nu\n\\end{align}\n" }, { "math_id": 71, "text": "h_{\\mu\\nu}\\;" }, { "math_id": 72, "text": "g_{\\mu\\nu} = \\eta_{\\mu\\nu}+h_{\\mu\\nu}\\;" }, { "math_id": 73, "text": "S={1\\over 16\\pi G} \\int d^4 x\\sqrt{-\\eta} \\left[2h_{|\\nu}^{\\mu\\nu}h_{\\mu\\lambda}^{|\\lambda} -2h_{|\\nu}^{\\mu\\nu}h_{\\lambda|\\mu}^{\\lambda}+h_{\\nu|\\mu}^\\nu h_\\lambda^{\\lambda|\\mu} -h^{\\mu\\nu|\\lambda}h_{\\mu\\nu|\\lambda} \\right] + S_m\\;" }, { "math_id": 74, "text": "\\mathcal{L}= \\sqrt{-g}\\left[R+\\frac{R^2}{6M^2}\\right]\n" }, { "math_id": 75, "text": "M" }, { "math_id": 76, "text": "\n\\mathcal{L} =\\sqrt{-g}\\left[R+ R^2 - 4R^{\\mu\\nu}R_{\\mu\\nu} + R^{\\mu\\nu\\rho\\sigma}R_{\\mu\\nu\\rho\\sigma} \\right].\n" }, { "math_id": 77, "text": "\n\\mathcal{L} =\\sqrt{-g}\\left[ R +f_1 R^2 
+ f_2 R^{\\mu\\nu}R_{\\mu\\nu} + f_3 R^{\\mu\\nu\\rho\\sigma}R_{\\mu\\nu\\rho\\sigma} \\right].\n" }, { "math_id": 78, "text": "\n\\mathcal{L}= \\sqrt{-g} f(R)\n" }, { "math_id": 79, "text": "f(R)" }, { "math_id": 80, "text": "\n\\mathcal{L} =\\sqrt{-g} \\left[ M_p^2 R + Rf_1\\left( \\frac \\Box {M_s^2}\\right)R + R^{\\mu\\nu}f_2 \\left( \\frac \\Box {M_s^2} \\right) R_{\\mu\\nu} + R^{\\mu\\nu\\rho\\sigma} f_3\\left( \\frac \\Box {M_s^2}\\right) R_{\\mu\\nu\\rho\\sigma} \\right].\n" }, { "math_id": 81, "text": "\n2f_1 \\left( \\frac \\Box {M_s^2} \\right) + f_2 \\left( \\frac \\Box {M_s^2} \\right) + 2f_3 \\left( \\frac \\Box {M_s^2} \\right) = 0,\n" }, { "math_id": 82, "text": " M_s" }, { "math_id": 83, "text": "M_s^{-1}" }, { "math_id": 84, "text": "\n\\mathcal{L}=\\sqrt{-g}\\ (\\alpha _{0}+\\alpha _{1}R+\\alpha _{2}\\left(\nR^{2}+R_{\\alpha \\beta \\mu \\nu }R^{\\alpha \\beta \\mu \\nu }-4R_{\\mu \\nu }R^{\\mu\n\\nu }\\right) +\\alpha _{3}\\mathcal{O}(R^{3})),\n" }, { "math_id": 85, "text": "S\\;" }, { "math_id": 86, "text": "L_\\varphi\\;" }, { "math_id": 87, "text": "S={1\\over 16\\pi G}\\int d^4 x\\sqrt{-g}L_\\varphi+S_m\\;" }, { "math_id": 88, "text": "L_\\varphi=\\varphi R-{\\omega(\\varphi)\\over\\varphi} g^{\\mu\\nu} \\, \\partial_\\mu\\varphi \\, \\partial_\\nu\\varphi + 2\\varphi \\lambda(\\varphi)\\;" }, { "math_id": 89, "text": "S_m=\\int d^4 x \\, \\sqrt{g} \\, G_N L_m\\;" }, { "math_id": 90, "text": "T^{\\mu\\nu}\\ \\stackrel{\\mathrm{def}}{=}\\ {2\\over\\sqrt{g}}{\\delta S_m\\over\\delta g_{\\mu\\nu}}" }, { "math_id": 91, "text": "\\omega(\\varphi)\\;" }, { "math_id": 92, "text": "\\lambda(\\varphi)\\;" }, { "math_id": 93, "text": "G_N\\;" }, { "math_id": 94, "text": "\\lambda=0\\;" }, { "math_id": 95, "text": "\\lambda" }, { "math_id": 96, "text": "\\omega\\;" }, { "math_id": 97, "text": "r\\;" }, { "math_id": 98, "text": "q\\;" }, { "math_id": 99, "text": "\\varphi=[1-qf(\\varphi)]f(\\varphi)^{-r}\\;" }, { "math_id": 100, "text": "f\\;" }, { "math_id": 101, "text": "\\omega(\\varphi)=-\\textstyle\\frac{3}{2}-\\textstyle\\frac{1}{4}f(\\varphi)[(1-6q) qf(\\varphi)-1] [r+(1-r) qf(\\varphi)]^{-2}\\;" }, { "math_id": 102, "text": "\\omega(\\varphi)= \\frac{4-3\\varphi}{2\\varphi-2} " }, { "math_id": 103, "text": "\\omega\\rightarrow\\infty\\;" }, { "math_id": 104, "text": "K_\\mu." }, { "math_id": 105, "text": "S=\\frac{1}{16\\pi G}\\int d^4 x\\sqrt{-g}\\left [R+\\omega K_\\mu K^\\mu R+\\eta K^\\mu K^\\nu R_{\\mu\\nu}-\\epsilon F_{\\mu\\nu}F^{\\mu\\nu}+\\tau K_{\\mu;\\nu} K^{\\mu;\\nu} \\right ]+S_m" }, { "math_id": 106, "text": "\\omega, \\eta, \\epsilon, \\tau" }, { "math_id": 107, "text": "F_{\\mu\\nu}=K_{\\nu;\\mu}-K_{\\mu;\\nu}." 
}, { "math_id": 108, "text": "T^{\\mu\\nu}" }, { "math_id": 109, "text": "\\omega=\\eta=\\epsilon=0; \\quad \\tau=1" }, { "math_id": 110, "text": "\\tau=0; \\quad\\epsilon=1; \\quad \\eta=-2\\omega" }, { "math_id": 111, "text": "\\omega=\\eta=\\epsilon=\\tau=0" }, { "math_id": 112, "text": "\n\\begin{align}\nL & ={1\\over 32\\pi G}\\Omega_\\nu^\\mu g^{\\nu\\xi}x^\\eta x^\\zeta \\varepsilon_{\\xi\\mu\\eta\\zeta} \\\\[5pt]\n\\Omega_\\nu^\\mu & =d \\omega^\\mu_\\nu + \\omega^\\eta_\\xi \\\\[5pt]\n\\nabla x^\\mu & =-\\omega^\\mu_\\nu x^\\nu\n\\end{align}\n" }, { "math_id": 113, "text": "\\omega^\\mu_\\nu\\;" }, { "math_id": 114, "text": "\\varepsilon_{\\xi\\mu\\eta\\zeta}\\;" }, { "math_id": 115, "text": "\\varepsilon_{0123}=\\sqrt{-g}\\;" }, { "math_id": 116, "text": "g^{\\nu\\xi}\\," }, { "math_id": 117, "text": "T^{\\mu\\nu}={1\\over 16\\pi G} (g^{\\mu\\nu}\\eta^\\xi_\\eta-g^{\\xi\\mu}\\eta^\\nu_\\eta-g^{\\xi\\nu} \\eta^\\mu_\\eta) \\Omega^\\eta_\\xi\\;" }, { "math_id": 118, "text": "\\Lambda\\;" }, { "math_id": 119, "text": "\\Lambda=0\\;" }, { "math_id": 120, "text": "\\nabla^2\\varphi=4\\pi\\rho\\ G;" }, { "math_id": 121, "text": "\\nabla^2\\varphi + \\frac{1}{2}\\Lambda c^2 = 4\\pi\\rho\\ G;" }, { "math_id": 122, "text": "S={1\\over 16\\pi G}\\int R\\sqrt{-g} \\, d^4x \\, +S_m\\;" }, { "math_id": 123, "text": "S={1\\over 16\\pi G}\\int (R-2\\Lambda)\\sqrt{-g}\\,d^4x \\, +S_m\\;" }, { "math_id": 124, "text": "T^{\\mu\\nu}={1\\over 8\\pi G} \\left(R^{\\mu\\nu}-\\frac {1}{2} g^{\\mu\\nu} R \\right)\\;" }, { "math_id": 125, "text": "T^{\\mu\\nu}={1\\over 8\\pi G}\\left(R^{\\mu\\nu}-\\frac {1}{2} g^{\\mu\\nu} R + g^{\\mu\\nu} \\Lambda \\right)\\;" }, { "math_id": 126, "text": "\\varphi\\;" }, { "math_id": 127, "text": "S={1\\over 16\\pi G}\\int d^4x \\, \\sqrt{-g} \\, L_\\varphi+S_m\\;" }, { "math_id": 128, "text": "K^\\mu K^\\nu g_{\\mu\\nu}\\;" }, { "math_id": 129, "text": "R_{\\mu \\nu} - \\frac{1}{2} R g_{\\mu \\nu} = \\frac{8 \\pi G}{c^4} \\left( T_{\\mu \\nu}^{+} + T_{\\mu \\nu}^{-} + C_{\\mu \\nu} \\right)" }, { "math_id": 130, "text": "L=-{a_0^2\\over 8\\pi G}f\\left\\lbrack \\frac{|\\nabla\\varphi|^2}{a_0^2}\\right\\rbrack-\\rho\\varphi" }, { "math_id": 131, "text": "L=-{a_0^2\\over 8\\pi G}\\tilde f \\left( \\ell_0^2 g^{\\mu\\nu}\\,\\partial_\\mu\\varphi\\, \\partial_\\nu\\varphi \\right )" }, { "math_id": 132, "text": "f" }, { "math_id": 133, "text": "\\tilde f" }, { "math_id": 134, "text": "l_0 = c^2/a_0\\;" }, { "math_id": 135, "text": "\\sigma\\;" }, { "math_id": 136, "text": "U_\\alpha" }, { "math_id": 137, "text": "S=S_g+S_s+S_v+S_m" }, { "math_id": 138, "text": "\\begin{align}\nS_s &= -\\frac{1}{2}\\int \\left [\\sigma^2 h^{\\alpha\\beta}\\varphi_{,\\alpha}\\varphi_{,\\beta} + \\frac12G \\ell_0^{-2}\\sigma^4F(kG\\sigma^2)\\right ]\\sqrt{-g}\\,d^4x \\\\[5pt]\nS_v &= -\\frac{K}{32\\pi G}\\int \\left [g^{\\alpha\\beta}g^{\\mu\\nu}U_{[\\alpha,\\mu]}U_{[\\beta,\\nu]} -\\frac{2\\lambda}{K} \\left (g^{\\mu\\nu} U_\\mu U_\\nu+1 \\right ) \\right ]\\sqrt{-g}\\,d^4x \\\\[5pt]\nS_m &= \\int L \\left (\\tilde g_{\\mu\\nu},f^\\alpha,f^\\alpha_{|\\mu},\\ldots \\right)\\sqrt{-g}\\,d^4x\n\\end{align}" }, { "math_id": 139, "text": "h^{\\alpha\\beta} = g^{\\alpha\\beta}-U^\\alpha U^\\beta" }, { "math_id": 140, "text": "\\tilde g^{\\alpha\\beta}=e^{2\\varphi}g^{\\alpha\\beta}+2U^\\alpha U^\\beta\\sinh(2\\varphi)" }, { "math_id": 141, "text": "k, K" }, { "math_id": 142, "text": "U_{[\\alpha,\\mu]}" }, { "math_id": 143, "text": "\\tilde g^{\\alpha\\beta}" }, { "math_id": 144, "text": 
"G_{Newton}" }, { "math_id": 145, "text": "F(\\mu)=\\frac{3}{4}{\\mu^2(\\mu-2)^2\\over 1-\\mu}" }, { "math_id": 146, "text": "\\mu=1" }, { "math_id": 147, "text": "\\begin{align}\n\\alpha_1 &= \\frac{4G}{K} \\left ((2K-1) e^{-4\\varphi_0} - e^{4\\varphi_0} + 8 \\right ) - 8 \\\\[5pt]\n\\alpha_2 &= \\frac{6 G}{2 - K} - \\frac{2 G (K + 4) e^{4 \\varphi_0}}{(2 - K)^2} - 1\n\\end{align}" }, { "math_id": 148, "text": "c = G_{Newtonian} = 1" }, { "math_id": 149, "text": " G^{-1} = \\frac{2}{2-K} + \\frac{k}{4\\pi}." }, { "math_id": 150, "text": "L=L_R+L_M\\;" }, { "math_id": 151, "text": "L_M\\;" }, { "math_id": 152, "text": "L_R = \\sqrt{-g} \\left[R(W)-2\\lambda-\\frac14\\mu^2g^{\\mu\\nu}g_{[\\mu\\nu]}\\right] - \\frac16g^{\\mu\\nu}W_\\mu W_\\nu\\;" }, { "math_id": 153, "text": "R(W)\\;" }, { "math_id": 154, "text": "\\lambda\\;" }, { "math_id": 155, "text": "\\mu^2\\;" }, { "math_id": 156, "text": "g_{[\\nu\\mu]}\\;" }, { "math_id": 157, "text": "g_{\\nu\\mu}\\;" }, { "math_id": 158, "text": "W_\\mu\\;" }, { "math_id": 159, "text": "W_\\mu\\approx-2g^{,\\nu}_{[\\mu\\nu]}\\;" }, { "math_id": 160, "text": "A_{\\mu\\nu}\\;" }, { "math_id": 161, "text": "J_\\mu\\;" }, { "math_id": 162, "text": "S=S_G+S_F+S_{FM}+S_M\\;" }, { "math_id": 163, "text": "S_F=\\int d^4x\\,\\sqrt{-g} \\left( \\frac1{12}F_{\\mu\\nu\\rho}F^{\\mu\\nu\\rho} - \\frac14\\mu^2 A_{\\mu\\nu}A^{\\mu\\nu} \\right)\\;" }, { "math_id": 164, "text": "S_{FM}=\\int d^4x\\,\\epsilon^{\\alpha\\beta\\mu\\nu}A_{\\alpha\\beta}\\partial_\\mu J_\\nu\\;" }, { "math_id": 165, "text": "F_{\\mu\\nu\\rho}=\\partial_\\mu A_{\\nu\\rho}+\\partial_\\rho A_{\\mu\\nu}" }, { "math_id": 166, "text": "\\epsilon^{\\alpha\\beta\\mu\\nu}\\;" }, { "math_id": 167, "text": " S=S_G+S_K+S_S+S_M" }, { "math_id": 168, "text": "K_\\mu," }, { "math_id": 169, "text": "G, \\omega, \\mu" }, { "math_id": 170, "text": "S_G" }, { "math_id": 171, "text": "G" }, { "math_id": 172, "text": "S_K=-\\int d^4x\\,\\sqrt{-g}\\omega \\left( \\frac14 B_{\\mu\\nu} B^{\\mu\\nu} + V(K) \\right), \\qquad \\text{where } \\quad B_{\\mu\\nu}=\\partial_\\mu K_\\nu-\\partial_\\nu K_\\mu." }, { "math_id": 173, "text": "S_S = -\\int d^4x\\,\\sqrt{-g} \\frac{1}{G^3} \\left( \\frac12g^{\\mu\\nu}\\,\\nabla_\\mu G\\,\\nabla_\\nu G -V(G) \\right) + \\frac{1}{G} \\left( \\frac{1}{2} g^{\\mu\\nu}\\,\\nabla_\\mu\\omega\\,\\nabla_\\nu\\omega -V(\\omega) \\right) +{1\\over\\mu^2G} \\left( \\frac12g^{\\mu\\nu}\\,\\nabla_\\mu\\mu\\,\\nabla_\\nu\\mu - V(\\mu) \\right)." }, { "math_id": 174, "text": "V(K) = -\\frac12\\mu^2\\varphi^\\mu\\varphi_\\mu - \\frac14g \\left (\\varphi^\\mu \\varphi_\\mu \\right )^2" }, { "math_id": 175, "text": "g" }, { "math_id": 176, "text": "S = \\int \\mathrm{d}^4x \\sqrt{-g} \\left(\\frac{R}{2} + R F (\\Box) R \\right)" }, { "math_id": 177, "text": " F (\\Box)" }, { "math_id": 178, "text": "\\rho" }, { "math_id": 179, "text": "T" }, { "math_id": 180, "text": "\\rho=T_{\\mu\\nu}u^\\mu u^\\nu" }, { "math_id": 181, "text": "\\rho=T_{\\mu\\nu}\\delta^{\\mu \\nu}" }, { "math_id": 182, "text": "u" }, { "math_id": 183, "text": "\\delta" }, { "math_id": 184, "text": "\\gamma, \\beta,\\eta,\\alpha_1,\\alpha_2,\\alpha_3,\\zeta_1,\\zeta_2,\\zeta_3,\\zeta_4." 
}, { "math_id": 185, "text": "\\gamma" }, { "math_id": 186, "text": "\\beta" }, { "math_id": 187, "text": "\\eta" }, { "math_id": 188, "text": "\\alpha_1,\\alpha_2,\\alpha_3" }, { "math_id": 189, "text": "\\zeta_1,\\zeta_2,\\zeta_3,\\zeta_4,\\alpha_3" }, { "math_id": 190, "text": "\\alpha_2" }, { "math_id": 191, "text": "\\zeta_{ 4}" }, { "math_id": 192, "text": "\\beta=\\xi" } ]
https://en.wikipedia.org/wiki?curid=6646221
66468324
Sparsity matroid
A sparsity matroid is a mathematical structure that captures how densely a multigraph is populated with edges. To unpack this a little, sparsity is a measure of density of a graph that bounds the number of edges in any subgraph. The property of having a particular matroid as its density measure is invariant under graph isomorphisms and so is a graph invariant. The graphs we are concerned with generalise simple directed graphs by allowing multiple same-oriented edges between pairs of vertices. Matroids are a quite general mathematical abstraction that describes the amount of independence in, variously, points in geometric space and paths in a graph; when applied to characterising sparsity, matroids describe certain sets of sparse graphs. These matroids are connected to the structural rigidity of graphs and their ability to be decomposed into edge-disjoint spanning trees via the Tutte and Nash-Williams theorem. There is a family of efficient algorithms, known as pebble games, for determining if a multigraph meets the given sparsity condition. Definitions. formula_0-sparse multigraph. A multigraph formula_1 is formula_0-sparse, where formula_2 and formula_3 are non-negative integers, if for every subgraph formula_4 of formula_5, we have formula_6. formula_0-tight multigraph. A multigraph formula_1 is formula_0-tight if it is formula_0-sparse and formula_7. formula_8-sparse and tight multigraph. A multigraph formula_9 is formula_8-sparse if there exists a subset formula_10 such that the subgraph formula_11 is formula_12-sparse and the subgraph formula_13 is formula_14-sparse. The multigraph formula_5 is formula_8-tight if, additionally, formula_15. formula_0-sparsity matroid. The formula_0-sparsity matroid is a matroid whose ground set is the edge set of the complete multigraph on formula_16 vertices, with loop multiplicity formula_17 and edge multiplicity formula_18, and whose independent sets are formula_0-sparse multigraphs on formula_16 vertices. The bases of the matroid are the formula_0-tight multigraphs and the circuits are the formula_0-sparse multigraphs formula_1 that satisfy formula_19. The first examples of sparsity matroids can be found in. Not all pairs formula_0 induce a matroid. Pairs (k,l) that form a matroid. The following result provides sufficient restrictions on formula_0 for the existence of a matroid. Theorem. The formula_0-sparse multigraphs on formula_16 vertices are the independent sets of a matroid if formula_20 and formula_21; if formula_22 and formula_23; or if formula_24 or formula_25, and formula_26. Some consequences of this theorem are that formula_27-sparse multigraphs form a matroid while formula_28-sparse multigraphs do not. Hence, the bases, i.e., formula_27-tight multigraphs, must all have the same number of edges and can be constructed using techniques discussed below. On the other hand, without this matroidal structure, maximally formula_28-sparse multigraphs will have different numbers of edges, and it is interesting to identify the one with the maximum number of edges. Connections to rigidity and decomposition. Structural rigidity is about determining if almost all, i.e. generic, embeddings of a (simple or multi) graph in some formula_29-dimensional metric space are rigid. More precisely, this theory gives combinatorial characterizations of such graphs. In Euclidean space, Maxwell showed that independence in a sparsity matroid is necessary for a graph to be generically rigid in any dimension. Maxwell Direction. If a graph formula_5 is generically minimally rigid in formula_29-dimensions, then it is independent in the formula_30-sparsity matroid.
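To make the counting condition concrete, here is a brute-force Python sketch that tests the formula_0-sparsity and tightness counts defined above on an explicit multigraph. The function names are illustrative, and the convention of clamping the bound at zero for very small vertex subsets is an assumption of this sketch rather than a universal definition.

```python
from itertools import combinations

def is_kl_sparse(vertices, edges, k, l):
    """Check |E'| <= k|V'| - l for every induced subgraph; loops and parallel
    edges are counted, and the bound is clamped at 0 for tiny subsets."""
    vertices = list(vertices)
    for size in range(1, len(vertices) + 1):
        for subset in combinations(vertices, size):
            s = set(subset)
            induced = [e for e in edges if e[0] in s and e[1] in s]
            if len(induced) > max(k * len(s) - l, 0):
                return False
    return True

def is_kl_tight(vertices, edges, k, l):
    """(k,l)-tight: (k,l)-sparse with exactly k|V| - l edges in total."""
    return is_kl_sparse(vertices, edges, k, l) and len(edges) == k * len(vertices) - l

# K_4 with one edge removed (two triangles sharing an edge) has 5 = 2*4 - 3 edges
# and satisfies the (2,3) counts, the case appearing in the Maxwell direction above.
V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
print(is_kl_tight(V, E, 2, 3))              # True
print(is_kl_sparse(V, E + [(2, 3)], 2, 3))  # False: the full K_4 has one edge too many
```

This enumeration is exponential in the number of vertices; the pebble games mentioned in the introduction perform the same test efficiently, and a sketch of one is given at the end of this article.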
The converse of this theorem was proved in formula_31-dimensions, yielding a complete combinatorial characterization of generically rigid graphs in formula_32. However, the converse is not true for formula_33; see combinatorial characterizations of generically rigid graphs. Other sparsity matroids have been used to give combinatorial characterizations of generically rigid multigraphs for various types of frameworks; see rigidity for other types of frameworks. The following table summarizes these results by stating the type of generic rigid framework in a given dimension and the equivalent sparsity condition. Let formula_34 be the multigraph obtained by duplicating the edges of a multigraph formula_5 formula_2 times. The Tutte and Nash-Williams theorem shows that formula_36-tight graphs are equivalent to graphs that can be decomposed into formula_2 edge-disjoint spanning trees, called formula_2-arborescences. A formula_37-arborescence is a multigraph formula_5 such that adding formula_38 edges to formula_5 yields a formula_2-arborescence. For formula_39, a formula_0-sparse multigraph is a formula_40-arborescence; this was first shown for formula_27-sparse graphs. Additionally, many of the rigidity and sparsity results above can be written in terms of edge-disjoint spanning trees. Constructing sparse multigraphs. This section gives methods to construct various sparse multigraphs using operations defined in constructing generically rigid graphs. Since these operations are defined for a given dimension, let a formula_41-extension be a formula_2-dimensional formula_42-extension, i.e., a formula_42-extension where the new vertex is connected to formula_2 distinct vertices. Likewise, a formula_43-extension is a formula_2-dimensional formula_44-extension. General (k,l)-sparse multigraphs. The first construction is for formula_45-tight graphs. A generalized formula_43-extension is a triple formula_46, where formula_47 edges are removed, for formula_48, and the new vertex formula_49 is connected to the vertices of these formula_47 edges and to formula_50 distinct vertices. The usual formula_43-extension is a formula_51-extension. Theorem. A multigraph is formula_36-tight if and only if it can be constructed from a single vertex via a sequence of formula_41- and formula_46-extensions. This theorem was then extended to general formula_0-tight graphs. Consider another generalization of a formula_44-extension denoted by formula_52, for formula_53, where formula_47 edges are removed, the new vertex formula_49 is connected to the vertices of these formula_47 edges, formula_54 loops are added to formula_49, and formula_49 is connected to formula_55 other distinct vertices. Also, let formula_56 denote a multigraph with a single node and formula_3 loops. Theorem. A multigraph formula_5 is formula_0-tight for Neither of these constructions is sufficient when the graph is simple. The next results are for formula_0-sparse hypergraphs. A hypergraph is formula_62-uniform if each of its edges contains exactly formula_62 vertices. First, conditions are established for the existence of formula_0-tight hypergraphs. Theorem. There exists an formula_63 such that for all formula_64, there exist formula_62-uniform hypergraphs on formula_16 vertices that are formula_0-tight. The next result extends the Tutte and Nash-Williams theorem to hypergraphs. Theorem.
If formula_5 is a formula_0-tight hypergraph, for formula_65, then formula_5 is a formula_40-arborescence, where the formula_66 added edges contain at least two vertices. A map-hypergraph is a hypergraph that admits an orientation such that each vertex has an out-degree of formula_44. A formula_2-map-hypergraph is a map-hypergraph that can be decomposed into formula_2 edge-disjoint map-hypergraphs. Theorem. If formula_5 is a formula_0-tight hypergraph, for formula_67, then formula_68 is the union of an formula_3-arborescence and a formula_69-map-hypergraph. (2,3)-sparse graphs. The first result shows that formula_27-tight graphs, i.e., generically minimally rigid graphs in formula_32, have Henneberg constructions. Theorem. A graph formula_5 is formula_27-tight if and only if it can be constructed from the complete graph formula_70 via a sequence of formula_42- and formula_44-extensions. The next result shows how to construct formula_27-circuits. In this setting, a formula_31-sum combines two graphs by identifying a formula_70 subgraph in each graph and then removing the combined edge from the resulting graph. Theorem. A graph formula_5 is a formula_27-circuit if and only if it can be constructed from disjoint copies of the complete graph formula_71 via a sequence of formula_44-extensions within connected components and formula_31-sums of connected components. The method for constructing formula_72-connected formula_27-circuits is even simpler. Theorem. A graph formula_5 is a formula_72-connected formula_27-circuit if and only if it can be constructed from the complete graph formula_71 via a sequence of formula_44-extensions. These circuits also have the following combinatorial property. Theorem. If a graph formula_5 is a formula_27-circuit that is not the complete graph formula_71, then formula_5 has at least formula_72 vertices of degree formula_72 such that performing a formula_44-reduction on any one of these vertices yields another formula_27-circuit. (2,2)-sparse graphs. The next results show how to construct formula_35-circuits using two different construction methods. For the first method, the base graphs and the three join operations are shown in Figure 2. A formula_44-join identifies a formula_70 subgraph in formula_73 with an edge of a formula_71 subgraph in formula_74, and removes the other two vertices of the formula_71. A formula_31-join identifies an edge of a formula_71 subgraph in formula_73 with an edge of a formula_71 subgraph in formula_74, and removes the other vertices on both formula_71 subgraphs. A formula_72-join takes a degree formula_72 vertex formula_75 in formula_73 and a degree formula_72 vertex formula_76 in formula_74 and removes them, then it adds formula_72 edges between formula_73 and formula_74 such that there is a bijection between the neighbors of formula_75 and formula_76. The second method uses formula_31-dimensional vertex-splitting, defined in constructing generically rigid graphs, and a vertex-to-formula_71 operation, which replaces a vertex formula_77 of a graph with a formula_71 graph and connects each neighbor of formula_77 to any vertex of the formula_71. Theorem. A graph formula_5 is a formula_35-circuit if and only if (2,1)-sparse graphs. The following result gives a construction method for formula_78-tight graphs and extends the Tutte and Nash-Williams theorem to these graphs.
For the construction, the base graphs are formula_79 with an edge removed or the formula_31-sum of two formula_71 graphs (the shared edge is not removed), see the middle graph in Figure 2. Also, an edge-joining operation adds a single edge between two graphs. Theorem. A graph formula_5 is formula_78-tight if and only if Pebble games. There is a family of efficient network-flow based algorithms for identifying formula_0-sparse graphs, where formula_80. The first of these types of algorithms was for formula_27-sparse graphs. These algorithms are explained on the Pebble game page.
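As a sketch of the pebble-game idea referred to above, the following Python implementation of a Lee–Streinu-style formula_0-pebble game takes a loop-free multigraph edge list and decides formula_0-sparsity; it assumes 0 ≤ l < 2k and is an illustrative sketch rather than a reference implementation.

```python
def kl_pebble_game(n, edges, k, l):
    """Decide (k, l)-sparsity of a loop-free multigraph on vertices 0..n-1.
    Every vertex starts with k pebbles; an edge may be inserted (and oriented)
    only after l + 1 pebbles have been gathered on its two endpoints."""
    pebbles = [k] * n
    succ = [[] for _ in range(n)]          # orientation of the inserted edges

    def collect(u, v):
        """Move one free pebble onto u by searching along directed edges,
        never taking a pebble from v; reverse the path that is used."""
        parent = {u: None, v: None}
        stack = [u]
        while stack:
            x = stack.pop()
            for y in succ[x]:
                if y in parent:
                    continue
                parent[y] = x
                if pebbles[y] > 0:              # free pebble found at y
                    pebbles[y] -= 1
                    pebbles[u] += 1
                    while parent[y] is not None:  # reverse the search path
                        p = parent[y]
                        succ[p].remove(y)
                        succ[y].append(p)
                        y = p
                    return True
                stack.append(y)
        return False

    for (u, v) in edges:
        while pebbles[u] + pebbles[v] < l + 1:
            if not (collect(u, v) or collect(v, u)):
                return False                    # edge rejected: not (k, l)-sparse
        if pebbles[u] > 0:                      # pay for the edge and orient it
            pebbles[u] -= 1
            succ[u].append(v)
        else:
            pebbles[v] -= 1
            succ[v].append(u)
    return True

# The (2,3) case: K_4 minus an edge is accepted, the full K_4 is rejected.
print(kl_pebble_game(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)], 2, 3))          # True
print(kl_pebble_game(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], 2, 3))  # False
```

Gathering a pebble is an augmenting-path search in the partially oriented graph, which is the network-flow flavor alluded to above.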
[ { "math_id": 0, "text": "(k,l)" }, { "math_id": 1, "text": "G=(V,E)" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "l" }, { "math_id": 4, "text": "G'=(V',E')" }, { "math_id": 5, "text": "G" }, { "math_id": 6, "text": "|E'| \\leq k|V'| - l" }, { "math_id": 7, "text": "|E|=k|V|-l" }, { "math_id": 8, "text": "[a,b]" }, { "math_id": 9, "text": "G=(V,E \\cup F)" }, { "math_id": 10, "text": "F' \\subset F" }, { "math_id": 11, "text": "G'=(V,E \\cup F')" }, { "math_id": 12, "text": "(a,a)" }, { "math_id": 13, "text": "G'' =(V,F \\setminus F')" }, { "math_id": 14, "text": "(b,b)" }, { "math_id": 15, "text": "|E \\cup F| = (a+b)|V|-(a+b)" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "k-l" }, { "math_id": 18, "text": "2k-l" }, { "math_id": 19, "text": "|E|=k|V|-l+1" }, { "math_id": 20, "text": "n \\geq 1" }, { "math_id": 21, "text": "l \\in [0,k]" }, { "math_id": 22, "text": "n \\geq 2" }, { "math_id": 23, "text": "l \\in (k,\\frac{3}{2}k)" }, { "math_id": 24, "text": "n = 2" }, { "math_id": 25, "text": "n \\geq \\frac{l}{2k-l}" }, { "math_id": 26, "text": "l \\in [\\frac{3}{2}k,2k)" }, { "math_id": 27, "text": "(2,3)" }, { "math_id": 28, "text": "(3,6)" }, { "math_id": 29, "text": "d" }, { "math_id": 30, "text": "\\left(d,{d+1 \\choose 2}\\right)" }, { "math_id": 31, "text": "2" }, { "math_id": 32, "text": "\\mathbb{R}^2" }, { "math_id": 33, "text": "d \\geq 3" }, { "math_id": 34, "text": "kG" }, { "math_id": 35, "text": "(2,2)" }, { "math_id": 36, "text": "(k,k)" }, { "math_id": 37, "text": "(k,a)" }, { "math_id": 38, "text": "a" }, { "math_id": 39, "text": "l \\in [k,2k)" }, { "math_id": 40, "text": "(k,l-k)" }, { "math_id": 41, "text": "(k,0)" }, { "math_id": 42, "text": "0" }, { "math_id": 43, "text": "(k,1)" }, { "math_id": 44, "text": "1" }, { "math_id": 45, "text": "\\left(k,k\\right)" }, { "math_id": 46, "text": "(k,1,j)" }, { "math_id": 47, "text": "j" }, { "math_id": 48, "text": "1 \\leq j \\leq k" }, { "math_id": 49, "text": "u" }, { "math_id": 50, "text": "k - j" }, { "math_id": 51, "text": "(k,1,1)" }, { "math_id": 52, "text": "(k,1,i,j)" }, { "math_id": 53, "text": "0 \\leq j \\leq i \\leq k" }, { "math_id": 54, "text": "i-j" }, { "math_id": 55, "text": "k-i" }, { "math_id": 56, "text": "P_l" }, { "math_id": 57, "text": "l \\in [1,k]" }, { "math_id": 58, "text": "P_{k-l}" }, { "math_id": 59, "text": "0 \\leq j \\leq i \\leq k-1" }, { "math_id": 60, "text": "i-j \\leq k-l" }, { "math_id": 61, "text": "l=0" }, { "math_id": 62, "text": "s" }, { "math_id": 63, "text": "n_0" }, { "math_id": 64, "text": "n \\geq n_0" }, { "math_id": 65, "text": "l \\geq k" }, { "math_id": 66, "text": "l-k" }, { "math_id": 67, "text": "k \\geq l" }, { "math_id": 68, "text": "g" }, { "math_id": 69, "text": "(k-l)" }, { "math_id": 70, "text": "K_2" }, { "math_id": 71, "text": "K_4" }, { "math_id": 72, "text": "3" }, { "math_id": 73, "text": "G_1" }, { "math_id": 74, "text": "G_2" }, { "math_id": 75, "text": "v_1" }, { "math_id": 76, "text": "v_2" }, { "math_id": 77, "text": "v" }, { "math_id": 78, "text": "(2,1)" }, { "math_id": 79, "text": "K_5" }, { "math_id": 80, "text": "l \\in [0,2k]" } ]
https://en.wikipedia.org/wiki?curid=66468324
66474041
Supernova neutrinos
Astronomical neutrinos produced during core-collapse supernova explosion Supernova neutrinos are weakly interacting elementary particles produced during a core-collapse supernova explosion. A massive star collapses at the end of its life, emitting on the order of 10^58 neutrinos and antineutrinos in all lepton flavors. The luminosities of the different neutrino and antineutrino species are roughly the same. They carry away about 99% of the gravitational energy of the dying star as a burst lasting tens of seconds. The typical supernova neutrino energies are of the order of 10 MeV. Supernovae are considered the strongest and most frequent source of cosmic neutrinos in the MeV energy range. Since neutrinos are generated in the core of a supernova, they play a crucial role in the star's collapse and explosion. Neutrino heating is believed to be a critical factor in supernova explosions. Therefore, observation of neutrinos from supernovae provides detailed information about core collapse and the explosion mechanism. Further, neutrinos undergoing collective flavor conversions in a supernova's dense interior offer opportunities to study neutrino–neutrino interactions. The only supernova neutrino event detected so far is SN 1987A. Nevertheless, with current detector sensitivities, it is expected that thousands of neutrino events from a galactic core-collapse supernova would be observed. The next generation of experiments is designed to be sensitive to neutrinos from supernova explosions as far as Andromeda or beyond. The observation of supernovae will broaden our understanding of various astrophysical and particle physics phenomena. Further, coincident detection of supernova neutrinos in different experiments would provide an early alarm to astronomers about a supernova. History. Stirling A. Colgate and Richard H. White, and independently W. David Arnett, identified the role of neutrinos in core collapse, which resulted in the subsequent development of the theory of the supernova explosion mechanism. In February 1987, the observation of supernova neutrinos experimentally verified the theoretical relationship between neutrinos and supernovae. The Nobel Prize-winning event, known as SN 1987A, was the collapse of the blue supergiant star Sanduleak -69° 202, in the Large Magellanic Cloud outside our Galaxy, 51 kpc away. An enormous number of lightweight, weakly interacting neutrinos were produced, carrying away almost all of the energy of the supernova. Two kiloton-scale water Cherenkov detectors, Kamiokande II and IMB, along with the smaller Baksan Observatory, detected a total of 25 neutrino events over a period of about 13 seconds. Only electron-type neutrinos were detected because the neutrino energies were below the threshold of muon or tau production. The SN 1987A neutrino data, although sparse, confirmed the salient features of the basic supernova model of gravitational collapse and associated neutrino emission. It put strong constraints on neutrino properties such as charge and decay rate. The observation is considered a breakthrough in the field of supernova and neutrino physics. Properties. Neutrinos are fermions, i.e. elementary particles with a spin of 1/2. They interact only through the weak interaction and gravity. A core-collapse supernova emits a burst of ~formula_0 neutrinos and antineutrinos on a time scale of tens of seconds. Supernova neutrinos carry away about 99% of the gravitational energy of the dying star in the form of kinetic energy.
Energy is divided roughly equally between the three flavors of neutrinos and three flavors of antineutrinos. Their average energy is of the order of 10 MeV. The neutrino luminosity of a supernova is typically on the order of formula_1 formula_2 or formula_3. Core-collapse events are the strongest and most frequent source of cosmic neutrinos in the MeV energy range. During a supernova, neutrinos are produced in enormous numbers inside the core. Therefore, they have a fundamental influence on the collapse and supernova explosions. Neutrino heating is predicted to be responsible for the supernova explosion. Neutrino oscillations during the collapse and explosion generate gravitational wave bursts. Furthermore, neutrino interactions set the neutron-to-proton ratio, determining the nucleosynthesis outcome of heavier elements in the neutrino-driven wind. Production. Supernova neutrinos are produced when a massive star collapses at the end of its life, ejecting its outer mantle in an explosion. Wilson's delayed neutrino explosion mechanism has been used for 30 years to explain core-collapse supernovae. Near the end of its life, a massive star is made up of onion-layered shells of elements with an iron core. During the early stage of the collapse, electron neutrinos are created through electron capture on protons bound inside iron nuclei: formula_4 The above reaction produces neutron-rich nuclei, leading to neutronization of the core. Therefore, this is known as the "neutronization phase". Some of these nuclei undergo beta decay and produce anti-electron neutrinos: formula_5 The above processes reduce the core energy and its lepton density. Hence, the electron degeneracy pressure is unable to stabilize the stellar core against the gravitational force, and the star collapses. When the density of the central region of the collapse becomes sufficiently high, the diffusion time of neutrinos exceeds the collapse time. Therefore, the neutrinos become trapped inside the core. When the central region of the core reaches nuclear densities (~10^14 g/cm^3), the nuclear pressure causes the collapse to decelerate. This generates a shock wave in the outer core (the region of the iron core), which triggers the supernova explosion. The trapped electron neutrinos are released in the form of a "neutrino burst" in the first tens of milliseconds. It is found from simulations that the neutrino burst and iron photo-disintegration weaken the shock wave within milliseconds of propagation through the iron core. The weakening of the shock wave results in mass infall, which forms a neutron star. This is known as the "accretion phase" and lasts between a few tens and a few hundreds of milliseconds. The high-density region traps neutrinos. When the temperature reaches 10 MeV, thermal photons generate electron–positron pairs. Neutrinos and antineutrinos are created through the weak interaction of electron–positron pairs: formula_6 The luminosity of the electron flavors is significantly higher than that of the non-electron flavors. As the neutrino temperature rises in the compressionally heated core, neutrinos energize the shock wave through charged-current reactions with free nucleons: formula_7 formula_8 When the thermal pressure created by neutrino heating increases above the pressure of the infalling material, the stalled shock wave is rejuvenated, and neutrinos are released. The neutron star cools down as the neutrino-pair production and neutrino release continue. Therefore, this is known as the "cooling phase".
The luminosities of the different neutrino and antineutrino species are roughly the same. The supernova neutrino luminosity drops significantly after several tens of seconds. Oscillation. Knowledge of the flux and flavor content of the neutrinos behind the shock wave is essential to implement the neutrino-driven heating mechanism in computer simulations of supernova explosions. Neutrino oscillations in dense matter are an active field of research. Neutrinos undergo flavor conversions after they thermally decouple from the proto-neutron star. Within the neutrino-bulb model, neutrinos of all flavors decouple at a single sharp surface near the surface of the star. Also, the neutrinos travelling in different directions are assumed to travel the same path length in reaching a certain distance R from the center. This assumption is known as the single-angle approximation, which, along with the spherical symmetry of the supernova, allows neutrinos emitted in the same flavor to be treated as an ensemble and their evolution to be described only as a function of distance. The flavor evolution of neutrinos for each energy mode is described by the density matrix: formula_9 Here, formula_10 is the initial neutrino luminosity at the surface of the proto-neutron star, which drops exponentially. Assuming a decay time formula_11, the total energy emitted per unit time in a particular flavor is given by formula_12. formula_13 represents the average energy. Therefore, the fraction gives the number of neutrinos emitted per unit of time in that flavor. formula_14 is the normalized energy distribution for the corresponding flavor. The same formula holds for antineutrinos too. The neutrino luminosity is found from the following relation: formula_15 The integral is multiplied by 6 because the released binding energy is divided equally between the three flavors of neutrinos and three flavors of antineutrinos. The evolution of the density operator is given by Liouville's equation: formula_16 The Hamiltonian formula_17 covers vacuum oscillations, charged-current interactions of neutrinos with electrons and protons, as well as neutrino–neutrino interactions. Neutrino self-interactions are non-linear effects that result in collective flavor conversions. They are significant only when the interaction frequency exceeds the vacuum oscillation frequency. Typically, they become negligible beyond a few hundred kilometers from the center. Thereafter, Mikheyev–Smirnov–Wolfenstein resonances with the matter in the stellar envelope can describe the neutrino evolution. Detection. There are several different ways to observe supernova neutrinos. Almost all of them involve the inverse beta decay reaction for the detection of neutrinos. The reaction is a charged-current weak interaction, in which an electron antineutrino interacts with a proton, producing a positron and a neutron: formula_8 The positron retains most of the energy of the incoming neutrino. It produces a cone of Cherenkov light, which is detected by photomultiplier tubes (PMTs) arrayed on the walls of the detector. Neutrino oscillations in Earth matter may affect the supernova neutrino signals detected in experimental facilities. With current detector sensitivities, it is expected that thousands of neutrino events from a galactic core-collapse supernova would be observed. Large-scale detectors such as Hyper-Kamiokande or IceCube can detect up to formula_18 events. Unfortunately, SN 1987A is the only supernova neutrino event detected so far.
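To give a sense of where the "thousands of events" figure quoted above comes from, the following back-of-the-envelope Python sketch folds an assumed electron-antineutrino fluence into a crude quadratic low-energy approximation of the inverse-beta-decay cross section. The fiducial numbers (total emitted energy, mean energy, distance, water mass) and the cross-section prefactor are assumptions chosen for illustration, not parameters of any particular detector or supernova model.

```python
import math

# Assumed fiducial values for the illustration (not tied to a specific model).
E_total_erg  = 3e53      # total energy emitted in neutrinos [erg]
mean_E_MeV   = 12.0      # assumed mean electron-antineutrino energy [MeV]
distance_kpc = 10.0      # distance to a galactic supernova [kpc]
water_kton   = 32.0      # assumed water mass of a large Cherenkov detector [kton]

MeV_per_erg = 1.0 / 1.602e-6
cm_per_kpc  = 3.086e21

# Equipartition: one sixth of the energy is carried by electron antineutrinos.
n_emitted = (E_total_erg / 6.0) * MeV_per_erg / mean_E_MeV
fluence   = n_emitted / (4.0 * math.pi * (distance_kpc * cm_per_kpc) ** 2)  # per cm^2

# Free-proton (hydrogen) targets: two per water molecule.
n_protons = 2.0 * (water_kton * 1e9 / 18.0) * 6.022e23

# Crude low-energy inverse-beta-decay cross section, ~9.5e-44 cm^2 * (E/MeV)^2.
sigma_cm2 = 9.5e-44 * mean_E_MeV ** 2

events = n_protons * fluence * sigma_cm2
print(f"roughly {events:.0f} inverse-beta-decay events")  # several thousand
```

Folding in the full energy spectrum, detection thresholds and the other interaction channels would refine this estimate, but the order of magnitude agrees with the expectation stated above.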
There have not been any galactic supernovae in the Milky Way in the last 120 years, despite the expected rate of 0.8–3 per century. Nevertheless, a supernova at a distance of 10 kpc would enable a detailed study of the neutrino signal, providing unique physics insights. Additionally, the next generation of underground experiments, like Hyper-Kamiokande, are designed to be sensitive to neutrinos from supernova explosions as far as Andromeda or beyond. Further, they are also expected to have good supernova pointing capability. Significance. Since supernova neutrinos originate deep inside the stellar core, they are a relatively reliable messenger of the supernova mechanism. Due to their weakly interacting nature, the neutrino signals from a galactic supernova can give information about the physical conditions at the center of core collapse, which would be otherwise inaccessible. Furthermore, they are the only source of information for core-collapse events that do not result in a supernova or for supernovae in dust-obscured regions. Future observations of supernova neutrinos will constrain the different theoretical models of core collapse and the explosion mechanism by testing them against the direct empirical information from the supernova core. Due to their weakly interacting nature, the nearly light-speed neutrinos emerge promptly after the collapse. In contrast, there may be a delay of hours or days before the photon signal emerges from the stellar envelope. Therefore, a supernova will be observed in neutrino observatories before the optical signal, even after travelling millions of light years. The coincident detection of neutrino signals from different experiments would provide an early alarm to astronomers to direct telescopes to the right part of the sky to capture the supernova's light. The Supernova Early Warning System is a project that aims to connect neutrino detectors around the world and trigger the electromagnetic counterpart experiments in case of a sudden influx of neutrinos in the detectors. The flavor evolution of neutrinos propagating through the dense and turbulent interior of the supernova is dominated by the collective behavior associated with neutrino–neutrino interactions. Therefore, supernova neutrinos offer an opportunity to examine neutrino flavor mixing under high-density conditions. Being sensitive to the neutrino mass ordering, they can provide information about neutrino properties. Further, they can act as a standard candle to measure cosmic distance, since the neutronization burst signal does not depend on the progenitor. Diffuse supernova neutrino background. The Diffuse Supernova Neutrino Background (DSNB) is a cosmic background of (anti)neutrinos formed by the accumulation of neutrinos emitted from all past core-collapse supernovae. Its existence was predicted even before the observation of supernova neutrinos. The DSNB can be used to study physics on cosmological scales. It provides an independent test of the supernova rate. It can also give information about neutrino emission properties, stellar dynamics and failed progenitors. Super-Kamiokande has placed an observational upper limit on the DSNB flux of formula_19 above 19.3 MeV of neutrino energy. The theoretically estimated flux is only half this value. Therefore, the DSNB signal is expected to be detected in the near future with detectors like JUNO and SuperK-Gd. Notes. References.
[ { "math_id": 0, "text": "10^{52}\n" }, { "math_id": 1, "text": "10^{45} \\text{J} " }, { "math_id": 2, "text": "\\text{s}^{-1}" }, { "math_id": 3, "text": "10^{45} \\text{W}" }, { "math_id": 4, "text": "\\mathrm{e}^- + \\mathrm{p} \\rightarrow \\nu_e + \\mathrm{n}\n" }, { "math_id": 5, "text": "\\mathrm{n} \\rightarrow \\mathrm{p} + \\mathrm{e}^- + \\bar\\nu_e\n" }, { "math_id": 6, "text": "\\mathrm{e}^- + \\mathrm{e}^+ \\rightarrow \\bar\\nu_\\alpha + \\nu_\\alpha\n" }, { "math_id": 7, "text": " \\nu_e + \\mathrm{n} \\rightarrow \\mathrm{p} + \\mathrm{e}^-\n" }, { "math_id": 8, "text": "\\bar\\nu_\\mathrm{e} + \\mathrm{p} \\rightarrow \\mathrm{e}^+ + \\mathrm{n}\n" }, { "math_id": 9, "text": "\\hat{\\rho}_t(E,R) = \\sum_{\\alpha=e, \\mu, \\tau} \\frac{L_{\\nu_\\alpha}e^{\\frac{-t}{\\tau}}}{\\langle E_{\\nu_\\alpha}\\rangle}f_{\\nu_\\alpha}(E)\n|\\nu_\\alpha \\rangle \\langle \\nu_\\alpha| " }, { "math_id": 10, "text": "L_{\\nu_\\alpha} " }, { "math_id": 11, "text": "\\tau " }, { "math_id": 12, "text": "L_{\\nu_\\alpha}e^{\\frac{-t}{\\tau}} " }, { "math_id": 13, "text": "\\langle E_{\\nu_\\alpha}\\rangle\n " }, { "math_id": 14, "text": "f_{\\nu_\\alpha}(E) " }, { "math_id": 15, "text": "E_B = 6 \\times \\int_0^\\infin L_{\\nu_\\alpha} e^{-t/\\tau}dt " }, { "math_id": 16, "text": "\\frac{d}{dr}\\hat{\\rho}_t(E,r) = -i[\\hat{H}_t(E,r),\\hat{\\rho}_t(E,r)] " }, { "math_id": 17, "text": "\\hat{H}_t(E,r) " }, { "math_id": 18, "text": "10^{5}" }, { "math_id": 19, "text": "5.5 \\;\\mathrm{cm}^{-2} \\mathrm{s}^{-1}" } ]
https://en.wikipedia.org/wiki?curid=66474041
66478194
Cayley configuration space
Possible distances in a bar-joint system In the mathematical theory of structural rigidity, the Cayley configuration space of a linkage over a set of its non-edges formula_0, called Cayley parameters, is the set of distances attained by formula_0 over all its frameworks, under some formula_1-norm. In other words, each framework of the linkage prescribes a unique set of distances to the non-edges of formula_2, so the set of all frameworks can be described by the set of distances attained by any subset of these non-edges. Note that this description may not be a bijection. The motivation for using distance parameters is to define a continuous quadratic branched covering from the configuration space of a linkage to a simpler, often convex, space. Hence, obtaining a framework from a Cayley configuration space of a linkage over some set of non-edges is often a matter of solving quadratic equations. Cayley configuration spaces have a close relationship to the flattenability and combinatorial rigidity of graphs. Definitions. Cayley configuration space. Definition via linkages. Consider a linkage formula_3, with graph formula_2 and formula_4-edge-length vector formula_5 (i.e., formula_1-distances raised to the formula_6 power, for some formula_1-norm) and a set of non-edges formula_0 of formula_2. The Cayley configuration space of formula_3 over formula_0 in formula_7 under some formula_1-norm, denoted by formula_8, is the set of formula_4-distance vectors formula_9 attained by the non-edges formula_0 over all frameworks of formula_3 in formula_7. In the presence of inequality formula_4-distance constraints, i.e., an interval formula_10, the Cayley configuration space formula_11 is defined analogously. In other words, formula_8 is the projection of the Cayley-Menger semialgebraic set, with fixed formula_3 or formula_12, onto the non-edges formula_0, called the Cayley parameters. Definition via projections of the distance cone. Consider the cone formula_13 of vectors of pairwise formula_4-distances between formula_14 points. Also consider the formula_15-stratum of this cone formula_16, i.e., the subset of vectors of formula_4-distances between formula_14 points in formula_7. For any graph formula_2, consider the projection formula_17 of formula_16 onto the edges of formula_2, i.e., the set of all vectors formula_18 of formula_4-distances for which the linkage formula_19 has a framework in formula_7. Next, for any point formula_18 in formula_17 and any set of non-edges formula_0 of formula_2, consider the fiber of formula_18 in formula_16 along the coordinates of formula_0, i.e., the set of vectors formula_20 of formula_4-distances for which the linkage formula_21 has a framework in formula_7. The Cayley configuration space formula_8 is the projection of this fiber onto the set of non-edges formula_0, i.e., the set of formula_4-distances attained by the non-edges in formula_0 over all frameworks of formula_19 in formula_7. In the presence of inequality formula_4-distance constraints, i.e., an interval formula_22, the Cayley configuration space formula_23 is the projection of a set of fibers onto the set of non-edges formula_0. Definition via branching covers. A Cayley configuration space of a linkage in formula_7 is the base space of a branching cover whose total space is the configuration space of the linkage in formula_7. Oriented Cayley configuration space. 
For a 1-dof tree-decomposable graph formula_2 with base non-edge formula_24, each point of a framework of a linkage formula_3 in formula_7 under the formula_25-norm can be placed iteratively according to an orientation vector formula_26, also called a realization type. The entries of formula_26 are local orientations of triples of points for all construction steps of the framework. A formula_26-oriented Cayley configuration space of formula_3 over formula_24, denoted by formula_27, is the Cayley configuration space of formula_3 over formula_24 restricted to frameworks respecting formula_26. In other words, for any value of formula_24 in formula_27, the corresponding frameworks of formula_3 respect formula_26 and form a subset of the frameworks in formula_28. Minimal complete Cayley vector. For a 1-dof tree-decomposable graph formula_29 with low Cayley complexity on a base non-edge formula_24, a minimal complete Cayley vector formula_0 is a list of formula_30 non-edges of formula_2 such that the graph formula_31 is generically globally rigid. Properties. Single interval property. A pair formula_32, consisting of a graph formula_2 and a non-edge formula_24, has the single interval property in formula_7 under some formula_1-norm if, for every linkage formula_3, the Cayley configuration space formula_33 is a single interval. Inherent convexity. A graph formula_2 has an inherent convex Cayley configuration space in formula_7 under some formula_1-norm if, for every partition of the edges of formula_2 into formula_34 and formula_0 and every linkage formula_35, the Cayley configuration space formula_36 is convex. Genericity with respect to convexity. Let formula_37 be a graph and formula_0 be a nonempty set of non-edges of formula_2. Also let formula_38 be a framework in formula_7 of any linkage whose constraint graph is formula_2 and consider its corresponding formula_4-edge-length vector formula_39 in the cone formula_13, where formula_40. As defined in Sitharam & Willoughby, the framework formula_38 is generic with respect to the property of convex Cayley configuration spaces if there is a neighborhood formula_41 of formula_39 in formula_13 such that the Cayley configuration space formula_42 is convex if and only if the Cayley configuration space formula_44 is convex for every formula_43. Theorem. Every generic framework of a graph formula_2 in formula_7 has a convex Cayley configuration space over a set of non-edges formula_0 if and only if every linkage formula_3 does. Theorem. Convexity of Cayley configuration spaces is not a generic property of frameworks. "Proof." Consider the graph in Figure 1. Also consider the framework formula_46 in formula_45 whose pairwise formula_25-distance vector formula_47 assigns distance 3 to the unlabeled edges, 4 to formula_48, and 1 to formula_49, and the 2-dimensional framework formula_50 whose pairwise formula_25-distance vector formula_51 assigns distance 3 to the unlabeled edges, 4 to formula_48, and 4 to formula_49. The Cayley configuration space formula_52 consists of 2 intervals: one interval represents frameworks with vertex formula_53 on the right side of the line defined by vertices formula_54 and formula_55, and the other interval represents frameworks with vertex formula_53 on the left side of this line. The intervals are disjoint due to the triangle inequalities induced by the distances assigned to the edges formula_48 and formula_49. Furthermore, formula_46 is a generic framework with respect to convex Cayley configuration spaces over formula_24 in formula_45: there is a neighborhood of frameworks around formula_46 whose Cayley configuration spaces formula_56 are 2 intervals. 
On the other hand, the Cayley configuration space formula_57 is a single interval: the triangle-inequalities induced by the quadrilateral containing formula_24 define a single interval that is contained in the interval defined by the triangle inequalities induced by the distances assigned to the edges formula_48 and formula_49. Furthermore, formula_50 is a generic framework with respect to convex Cayley configuration spaces over formula_24 in formula_45: there is a neighborhood of frameworks around formula_50 whose Cayley configuration spaces over formula_24 in formula_45 are a single interval. Thus, one generic framework has a convex Cayley configuration space while another does not. Generic completeness. A generically complete, or just complete, Cayley configuration space is a Cayley configuration of a linkage formula_3 over a set of non-edges formula_0 such that each point in this space generically corresponds to finitely many frameworks of formula_3 and the space has full measure. Equivalently, the graph formula_31 is generically minimally-rigid. Results for the Euclidean norm. The following results are for Cayley configuration spaces of linkages over non-edges under the formula_25-norm, also called the Euclidean norm. Single interval theorems. Let formula_2 be a graph. Consider a 2-sum decomposition of formula_2, i.e., recursively decomposing formula_2 into its 2-sum components. The minimal elements of this decomposition are called the minimal 2-sum components of formula_2. Theorem. For formula_59, the pair formula_32, consisting of a graph formula_2 and a non-edge formula_24, has the single interval property in formula_7 if and only if all minimal 2-sum components of formula_60 that contain formula_24 are partial 2-trees. The latter condition is equivalent to requiring that all minimal 2-sum components of formula_60 that contain formula_24 are 2-flattenable, as partial 2-trees are exactly the class of 2-flattenable graphs (see results on 2-flattenability). This result does not generalize for dimensions formula_61. The forbidden minors for 3-flattenability are the complete graph formula_62 and the 1-skeleton of the octahedron formula_63 (see results on 3-flattenability). Figure 2 shows counterexamples for formula_64. Denote the graph on the left by formula_2 and the graph on the right by formula_34. Both pairs formula_32 and formula_65 have the single interval property in formula_58: the vertices of formula_24 can rotate in 3-dimensions around a plane. Also, both formula_60 and formula_66 are themselves minimal 2-sum components containing formula_24. However, neither formula_60 nor formula_66 is 3-flattenable: contracting formula_24 in formula_60 yields formula_62 and contracting formula_24 in formula_66 yields formula_63. Example. Consider the graph formula_2 in Figure 3 whose non-edges are formula_67 and formula_68. The graph formula_2 is its own and only minimal 2-sum component containing either non-edge. Additionally, the graph formula_69 is a 2-tree, so formula_2 is a partial 2-tree. Hence, by the theorem above both pairs formula_70 and formula_71 have the single interval property in formula_45. The following conjecture characterizes pairs formula_32 with the single interval property in formula_7 for arbitrary formula_15. Conjecture. 
The pair formula_32, consisting of a graph formula_2 and a non-edge formula_24, has the single interval property in formula_7 if and only if for any minimal 2-sum component of formula_60 that contains formula_24 and is not formula_15-flattenable, formula_24 must be either removed, duplicated, or contracted to obtain a forbidden minor for formula_15-flattenability from formula_2. 1-dof tree-decomposable linkages in R2. The following results concern oriented Cayley configuration spaces of 1-dof tree-decomposable linkages over some base non-edge in formula_45. Refer to tree-decomposable graphs for the definition of generic linkages used below. Theorem. For a generic 1-dof tree-decomposable linkage formula_3 with base non-edge formula_24, the following hold: This theorem yields an algorithm to compute (oriented) Cayley configuration spaces of 1-dof tree-decomposable linkages over a base non-edge by simply constructing oriented frameworks of all extreme linkages. This algorithm can take time exponential in the size of the linkage and in the output Cayley configuration space. For a 1-dof tree-decomposable graph formula_2, three complexity measures of its oriented Cayley configuration spaces are: Bounds on these complexity measures are given in Sitharam, Wang & Gao. Another algorithm to compute these oriented Cayley configuration spaces achieves linear Cayley complexity in the size of the underlying graph. Theorem. For a generic 1-dof tree-decomposable linkage formula_3, where the graph formula_2 has low Cayley complexity on a base non-edge formula_24, the following hold: An algorithm is given in Sitharam, Wang & Gao to find these motion paths. The idea is to start from one framework located in one interval of the Cayley configuration space, travel along the interval to its endpoint, and jump to another interval, repeating these last two steps until the target framework is reached. This algorithm utilizes the following facts: (i) there is a continuous motion path between any two frameworks in the same interval, (ii) extreme linkages only exist at the endpoints of an interval, and (iii) during the motion, the low Cayley complexity linkage only changes its realization type when jumping to a new interval and exactly one local orientation changes sign during this jump. Example. Figure 4 shows an oriented framework of a 1-dof tree-decomposable linkage with base non-edge formula_74, located in an interval of the Cayley configuration space, and two other frameworks whose orientations are about to change. The vertices corresponding to construction steps are labelled in order of construction. More specifically, the first framework has the realization type formula_75. There is a continuous motion path to the second framework, which has the realization type formula_76. Hence, this framework corresponds to an interval endpoint, and jumping to a new interval results in the realization type formula_77. Likewise, the third framework corresponds to an interval endpoint with the realization type formula_78, and jumping to a new interval results in the realization type formula_79. Theorem. (1) For a generic 1-path, 1-dof tree-decomposable linkage formula_3 with low Cayley complexity, there exists a bijective correspondence between the set of frameworks of formula_3 and points on a 2-dimensional curve, whose points are the minimum complete Cayley distance vectors. 
(2) For a generic 1-dof tree-decomposable linkage formula_3 with low Cayley complexity, there exists a bijective correspondence between the set of frameworks of formula_3 and points on an formula_14-dimensional curve, whose points are the minimum complete Cayley distance vectors, where formula_14 is the number of last level vertices of the graph formula_2. Results for general "p"-norms. These results are extended to general formula_1-norms. Theorem. For general formula_1-norms, a graph formula_2 has an inherent convex Cayley configuration space in formula_7 if and only if formula_2 is formula_15-flattenable. The "only if" direction was proved in Sitharam & Gao using the fact that the formula_80 distance cone formula_81 is convex. As a direct consequence, formula_15-flattenable graphs and graphs with inherent convex Cayley configuration spaces in formula_7 have the same forbidden minor characterization. See Graph flattenability for results on these characterizations, as well as a more detailed discussion on the connection between Cayley configuration spaces and flattenability. Example. Consider the graph in Figure 3 with both non-edges added as edges. The resulting graph is a 2-tree, which is 2-flattenable under the formula_82 and formula_25 norms, see Graph flattenability. Hence, the theorem above indicates that the graph has an inherent convex Cayley configuration space in formula_45 under the formula_82 and formula_25 norms. In particular, the Cayley configuration space over one or both of the non-edges formula_67 and formula_68 is convex. Applications. The EASAL algorithm makes use of the techniques developed in Sitharam & Gao for dealing with convex Cayley configuration spaces to describe the dimensional, topological, and geometric structure of Euclidean configuration spaces in formula_58. More precisely, for two sets of formula_14 points formula_83 and formula_53 in formula_58 with interval distance constraints between pairs of points coming from different sets, EASAL outputs all the frameworks of this linkage such that no pair of constrained points is too close together and at least one pair of constrained points is sufficiently close together. This algorithm has applications in molecular self-assembly.
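As a concrete illustration of a Cayley configuration space over a single non-edge (an illustrative sketch, not an algorithm from the cited works), consider a four-cycle linkage with the diagonal as the Cayley parameter. The four-cycle together with its diagonal is a 2-tree, so by the single interval theorem above the Euclidean Cayley configuration space over the diagonal in formula_45 is one interval: the intersection of the two triangle-inequality intervals coming from the two triangles that share the diagonal.

```python
# Illustrative sketch (not from the cited references): the Euclidean Cayley
# configuration space of a four-cycle linkage over its diagonal non-edge.
# Vertex labels v1..v4 and edge lengths a, b, c, e are assumed for this example.
def cayley_interval(a, b, c, e):
    """a=|v1v2|, b=|v2v3|, c=|v3v4|, e=|v4v1|; the Cayley parameter is |v1v3|."""
    lo = max(abs(a - b), abs(c - e))   # triangle inequalities in v1v2v3 and v1v3v4
    hi = min(a + b, c + e)
    return (lo, hi) if lo <= hi else None   # None: the linkage has no framework

print(cayley_interval(3, 3, 3, 3))   # (0, 6): a single interval, as predicted
print(cayley_interval(3, 4, 1, 1))   # (1, 2): a tighter single interval
print(cayley_interval(1, 1, 5, 1))   # None: these edge lengths cannot be realized
```

A return value of None indicates that the given edge lengths admit no framework at all.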
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "l_p" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "(G,\\delta)" }, { "math_id": 4, "text": "l^p_p" }, { "math_id": 5, "text": "\\delta" }, { "math_id": 6, "text": "p^{th}" }, { "math_id": 7, "text": "\\mathbb{R}^d" }, { "math_id": 8, "text": "\\Phi^d_{F,l_p} (G,\\delta_G)" }, { "math_id": 9, "text": "\\delta_F" }, { "math_id": 10, "text": "[\\delta_l,\\delta_r]" }, { "math_id": 11, "text": "\\Phi^d_{F,l_p} \\left(G,[\\delta^l_G,\\delta^r_G]\\right)" }, { "math_id": 12, "text": "\\left(G,[\\delta^l_G,\\delta^r_G]\\right)" }, { "math_id": 13, "text": "\\Phi_{n,l_p}" }, { "math_id": 14, "text": "n" }, { "math_id": 15, "text": "d" }, { "math_id": 16, "text": "\\Phi^d_{n,l_p}" }, { "math_id": 17, "text": "\\Phi^d_{G,l_p}" }, { "math_id": 18, "text": "\\delta_G" }, { "math_id": 19, "text": "(G,\\delta_G)" }, { "math_id": 20, "text": "\\delta_{G \\cup F}" }, { "math_id": 21, "text": "(G \\cup F,\\delta_{G \\cup F})" }, { "math_id": 22, "text": "[\\delta_{l,G},\\delta_{r,G}]" }, { "math_id": 23, "text": "\\Phi^d_{F,l_p} (G,[\\delta_{l,G},\\delta_{r,G}])" }, { "math_id": 24, "text": "f" }, { "math_id": 25, "text": "l_2" }, { "math_id": 26, "text": "\\sigma" }, { "math_id": 27, "text": "\\Phi^2_{f,\\sigma} (G,\\delta)" }, { "math_id": 28, "text": "\\Phi^2_{f} (G,\\delta)" }, { "math_id": 29, "text": "G = (V,E)" }, { "math_id": 30, "text": "O(|V|)" }, { "math_id": 31, "text": "G \\cup F" }, { "math_id": 32, "text": "(G,f)" }, { "math_id": 33, "text": "\\Phi^d_{f,l_p} (G,\\delta)" }, { "math_id": 34, "text": "H" }, { "math_id": 35, "text": "(H,\\delta)" }, { "math_id": 36, "text": "\\Phi^d_{F,l_p} (H,\\delta)" }, { "math_id": 37, "text": "G=(V,E)" }, { "math_id": 38, "text": "r" }, { "math_id": 39, "text": "\\delta_r" }, { "math_id": 40, "text": "n=|V|" }, { "math_id": 41, "text": "\\Omega" }, { "math_id": 42, "text": "\\Phi^d_{F,l_p} (G,\\delta_r)" }, { "math_id": 43, "text": "\\delta_q \\in \\Omega" }, { "math_id": 44, "text": "\\Phi^d_{F,l_p} (G,\\delta_q)" }, { "math_id": 45, "text": "\\mathbb{R}^2" }, { "math_id": 46, "text": "p" }, { "math_id": 47, "text": "\\delta_p" }, { "math_id": 48, "text": "x" }, { "math_id": 49, "text": "y" }, { "math_id": 50, "text": "q" }, { "math_id": 51, "text": "\\delta_q" }, { "math_id": 52, "text": "\\Phi^2_f (G,\\delta_p)" }, { "math_id": 53, "text": "B" }, { "math_id": 54, "text": "C" }, { "math_id": 55, "text": "D" }, { "math_id": 56, "text": "\\Phi^2_f (G,\\delta)" }, { "math_id": 57, "text": "\\Phi^2_f (G,\\delta_q)" }, { "math_id": 58, "text": "\\mathbb{R}^3" }, { "math_id": 59, "text": "d \\leq 2" }, { "math_id": 60, "text": "G \\cup f" }, { "math_id": 61, "text": "d \\geq 3" }, { "math_id": 62, "text": "K_5" }, { "math_id": 63, "text": "K_{2,2,2}" }, { "math_id": 64, "text": "d=3" }, { "math_id": 65, "text": "(H,f)" }, { "math_id": 66, "text": "H \\cup f" }, { "math_id": 67, "text": "d_1" }, { "math_id": 68, "text": "d_2" }, { "math_id": 69, "text": "G \\cup d_1 \\cup d_2" }, { "math_id": 70, "text": "(G,d_1)" }, { "math_id": 71, "text": "(G,d_2)" }, { "math_id": 72, "text": "v" }, { "math_id": 73, "text": "g" }, { "math_id": 74, "text": "(v_0,v'_0)" }, { "math_id": 75, "text": "(1,-1,-1,1)" }, { "math_id": 76, "text": "(0,-1,-1,1)" }, { "math_id": 77, "text": "(-1,-1,-1,1)" }, { "math_id": 78, "text": "(-1,-1,0,1)" }, { "math_id": 79, "text": "(-1,-1,1,1)" }, { "math_id": 80, "text": "l^2_2" }, { "math_id": 81, "text": "\\Phi_{n,l_2}" }, { "math_id": 82, "text": "l_1" }, { "math_id": 
83, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=66478194
66487331
Graph flattenability
Flattenability in some formula_0-dimensional normed vector space is a property of graphs which states that any embedding, or drawing, of the graph in some high dimension formula_1 can be "flattened" down to live in formula_0-dimensions, such that the distances between pairs of points connected by edges are preserved. A graph formula_2 is formula_0-flattenable if every distance constraint system (DCS) with formula_2 as its constraint graph has a formula_0-dimensional framework. Flattenability was first called realizability, but the name was changed to avoid confusion with a graph having "some" DCS with a formula_0-dimensional framework. Flattenability has connections to structural rigidity, tensegrities, Cayley configuration spaces, and a variant of the graph realization problem. Definitions. A distance constraint system formula_3, where formula_4 is a graph and formula_5 is an assignment of distances onto the edges of formula_2, is formula_0-flattenable in some normed vector space formula_6 if there exists a framework of formula_3 in formula_0-dimensions. A graph formula_4 is formula_0-flattenable in formula_6 if every distance constraint system with formula_2 as its constraint graph is formula_0-flattenable. Flattenability can also be defined in terms of Cayley configuration spaces; see connection to Cayley configuration spaces below. Properties. Closure under subgraphs. Flattenability is closed under taking subgraphs. To see this, observe that for some graph formula_2, all possible embeddings of a subgraph formula_7 of formula_2 are contained in the set of all embeddings of formula_2. Minor-closed. Flattenability is a minor-closed property by a similar argument as above. Flattening dimension. The flattening dimension of a graph formula_2 in some normed vector space is the lowest dimension formula_0 such that formula_2 is formula_0-flattenable. The flattening dimension of a graph is closely related to its gram dimension. The following is an upper bound on the flattening dimension of an arbitrary graph under the formula_8-norm. Theorem. The flattening dimension of a graph formula_9 under the formula_8-norm is at most formula_10. For a detailed treatment of this topic, see Chapter 11.2 of Deza & Laurent. Euclidean flattenability. This section concerns flattenability results in Euclidean space, where distance is measured using the formula_8 norm, also called the Euclidean norm. 1-flattenable graphs. The following theorem is folklore and shows that the only forbidden minor for 1-flattenability is the complete graph formula_11. Theorem. A graph is 1-flattenable if and only if it is a forest. "Proof." A proof can be found in Belk & Connelly. For one direction, a forest is a collection of trees, and any distance constraint system whose graph is a tree can be realized in 1-dimension. For the other direction, if a graph formula_2 is not a forest, then it contains a cycle and hence has the complete graph formula_11 as a minor. Since flattenability is closed under taking subgraphs, it suffices to show that the cycle is not 1-flattenable. Consider the DCS that assigns the distance 1 to three edges of the cycle and the distance 0 to all of its other edges. This DCS has a realization in 2-dimensions, in which the three unit-length edges form the 1-skeleton of a triangle, but it has no realization in 1-dimension, since the signed edge lengths around a cycle on a line must sum to zero. This proof allowed for distances on edges to be 0, but the argument holds even when this is not allowed. See Belk & Connelly for a detailed explanation. 2-flattenable graphs. The following theorem is folklore and shows that the only forbidden minor for 2-flattenability is the complete graph formula_12. Theorem. 
A graph is 2-flattenable if and only if it is a partial 2-tree. "Proof." A proof can be found in Belk & Connelly. For one direction, since flattenability is closed under taking subgraphs, it is sufficient to show that 2-trees are 2-flattenable. A 2-tree with formula_13 vertices can be constructed recursively by taking a 2-tree with formula_14 vertices and connecting a new vertex to the vertices of an existing edge. The base case is the complete graph formula_11. Proceed by induction on the number of vertices formula_13. When formula_15, consider any distance assignment formula_16 on the edges of formula_11. Note that if formula_16 does not obey the triangle inequality, then this DCS does not have a realization in any dimension. Without loss of generality, place the first vertex formula_17 at the origin and the second vertex formula_18 along the x-axis such that the distance constraint formula_19 is satisfied. The third vertex formula_20 can be placed at an intersection of the circles with centers formula_17 and formula_18 and radii formula_21 and formula_22 respectively. This method of placement is called a ruler and compass construction. Hence, formula_11 is 2-flattenable. Now, assume a 2-tree with formula_23 vertices is 2-flattenable. By definition, a 2-tree with formula_24 vertices is a 2-tree with formula_23 vertices, say formula_25, and an additional vertex formula_26 connected to the vertices of an existing edge in formula_25. By the inductive hypothesis, formula_25 is 2-flattenable. Finally, by a similar ruler and compass construction argument as in the base case, formula_26 can be placed such that it lies in the plane. Thus, 2-trees are 2-flattenable by induction. If a graph formula_2 is not a partial 2-tree, then it contains formula_12 as a minor. Assigning the distance of 1 to the edges of the formula_12 minor and the distance of 0 to all other edges yields a DCS with a realization in 3-dimensions as the 1-skeleton of a tetrahedron. However, this DCS has no realization in 2-dimensions: when attempting to place the fourth vertex using a ruler and compass construction, the three circles defined by the distance constraints on the fourth vertex do not share a common intersection point. Example. Consider the graph in Figure 2. Adding the edge formula_27 turns it into a 2-tree; hence, it is a partial 2-tree. Thus, it is 2-flattenable. Example. The wheel graph formula_28 contains formula_12 as a minor. Thus, it is not 2-flattenable. 3-flattenable graphs. The class of 3-flattenable graphs strictly contains the class of partial 3-trees. More precisely, the forbidden minors for partial 3-trees are the complete graph formula_29, the 1-skeleton of the octahedron formula_30, formula_31, and formula_32, but formula_31 and formula_32 are 3-flattenable. These graphs are shown in Figure 3. Furthermore, the following theorem from Belk & Connelly shows that the only forbidden minors for 3-flattenability are formula_29 and formula_30. Theorem. A graph is 3-flattenable if and only if it does not have formula_29 or formula_30 as a minor. "Proof Idea:" The proof given in Belk & Connelly assumes that formula_31 and formula_32 are 3-realizable. This is proven in the same article using mathematical tools from rigidity theory, specifically those concerning tensegrities. The complete graph formula_29 is not 3-flattenable, and the same argument that shows formula_12 is not 2-flattenable and formula_11 is not 1-flattenable works here: assigning the distance 1 to the edges of formula_29 yields a DCS with no realization in 3-dimensions. 
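The claim about formula_29 can also be checked numerically. The sketch below (an illustrative check, not part of the cited proof) applies classical multidimensional scaling to the unit-distance DCS on formula_29: the minimal Euclidean embedding dimension is the rank of the doubly centered squared-distance matrix, which comes out as 4, so there is no realization in 3-dimensions.

```python
import numpy as np

# Numerical check (illustrative, not part of the cited proof): the unit-distance
# DCS on K_5 needs four dimensions.  Classical multidimensional scaling recovers
# the minimal embedding dimension as the rank of the doubly centered squared-
# distance matrix, i.e. of the Gram matrix of a centered realization.
n  = 5
D2 = np.ones((n, n)) - np.eye(n)           # squared distances: 1 off the diagonal
J  = np.eye(n) - np.ones((n, n)) / n       # centering matrix
B  = -0.5 * J @ D2 @ J                     # Gram matrix of a centered realization
dim = int((np.linalg.eigvalsh(B) > 1e-9).sum())
print(f"minimal embedding dimension of unit-distance K_5: {dim}")   # prints 4
```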
Figure 4 gives a visual proof that the graph formula_30 is not 3-flattenable. Vertices 1, 2, and 3 form a degenerate triangle. For the edges between vertices 1- 5, edges formula_33 and formula_34 are assigned the distance formula_35 and all other edges are assigned the distance 1. Vertices 1- 5 have unique placements in 3-dimensions, up to congruence. Vertex 6 has 2 possible placements in 3-dimensions: 1 on each side of the plane formula_36 defined by vertices 1, 2, and 4. Hence, the edge formula_37 has two distance values that can be realized in 3-dimensions. However, vertex 6 can revolve around the plane formula_36 in 4-dimensions while satisfying all constraints, so the edge formula_37 has infinitely many distance values that can only be realized in 4-dimensions or higher. Thus, formula_30 is not 3-flattenable. The fact that these graphs are not 3-flattenable proves that any graph with either formula_29 or formula_30 as a minor is not 3-flattenable. The other direction shows that if a graph formula_2 does not have formula_29 or formula_30 as a minor, then formula_2 can be constructed from partial 3-trees, formula_31, and formula_32 via 1-sums, 2-sums, and 3-sums. These graphs are all 3-flattenable and these operations preserve 3-flattenability, so formula_2 is 3-flattenable. The techniques in this proof yield the following result from Belk & Connelly. Theorem. Every 3-realizable graph is a subgraph of a graph that can be obtained by a sequence of 1-sums, 2-sums, and 3-sums of the graphs formula_12, formula_31, and formula_32. Example. The previous theorem can be applied to show that the 1-skeleton of a cube is 3-flattenable. Start with the graph formula_12, which is the 1-skeleton of a tetrahedron. On each face of the tetrahedron, perform a 3-sum with another formula_12 graph, i.e. glue two tetrahedra together on their faces. The resulting graph contains the cube as a subgraph and is 3-flattenable. In higher dimensions. Giving a forbidden minor characterization of formula_0-flattenable graphs, for dimension formula_38, is an open problem. For any dimension formula_0, formula_39 and the 1-skeleton of the formula_0-dimensional analogue of an octahedron are forbidden minors for formula_0-flattenability. It is conjectured that the number of forbidden minors for formula_0-flattenability grows asymptotically to the number of forbidden minors for partial formula_0-trees, and there are over formula_40 forbidden minors for partial 4-trees. An alternative characterization of formula_0-flattenable graphs relates flattenability to Cayley configuration spaces. See the section on the connection to Cayley configuration spaces. Connection to the graph realization problem. Given a distance constraint system and a dimension formula_0, the graph realization problem asks for a formula_0-dimensional framework of the DCS, if one exists. There are algorithms to realize formula_0-flattenable graphs in formula_0-dimensions, for formula_41, that run in polynomial time in the size of the graph. For formula_42, realizing each tree in a forest in 1-dimension is trivial to accomplish in polynomial time. An algorithm for formula_43 is mentioned in Belk & Connelly. For formula_44, the algorithm in So & Ye obtains a framework formula_45 of a DCS using semidefinite programming techniques and then utilizes the "folding" method described in Belk to transform formula_45 into a 3-dimensional framework. Flattenability under "p"-norms. 
This section concerns flattenability results for graphs under general formula_46-norms, for formula_47. Connection to algebraic geometry. Determining the flattenability of a graph under a general formula_46-norm can be accomplished using methods in algebraic geometry, as suggested in Belk & Connelly. The question of whether a graph formula_4 is formula_0-flattenable is equivalent to determining if two semi-algebraic sets are equal. One algorithm to compare two semi-algebraic sets takes formula_48 time. Connection to Cayley configuration spaces. For general formula_49-norms, there is a close relationship between flattenability and Cayley configuration spaces. The following theorem and its corollary are found in Sitharam & Willoughby. Theorem. A graph formula_2 is formula_0-flattenable if and only if for every subgraph formula_50 of formula_2 resulting from removing a set of edges formula_51 from formula_2 and any formula_52-distance vector formula_53 such that the DCS formula_54 has a realization, the formula_0-dimensional Cayley configuration space of formula_54 over formula_51 is convex. Corollary. A graph formula_2 is not formula_0-flattenable if there exists some subgraph formula_50 of formula_2 and some formula_52-distance vector formula_16 such that the formula_0-dimensional Cayley configuration space of formula_54 over formula_51 is not convex. 2-Flattenability under the l1 and l∞ norms. The formula_55 and formula_56 norms are equivalent up to rotating axes in 2-dimensions, so 2-flattenability results for either norm hold for both. This section uses the formula_55-norm. The complete graph formula_12 is 2-flattenable under the formula_55-norm and formula_29 is 3-flattenable, but not 2-flattenable. These facts contribute to the following results on 2-flattenability under the formula_55-norm found in Sitharam & Willoughby. Observation. The set of 2-flattenable graphs under the formula_55-norm (and formula_56-norm) strictly contains the set of 2-flattenable graphs under the formula_8-norm. Theorem. A 2-sum of 2-flattenable graphs is 2-flattenable if and only if at most one graph has a formula_12 minor. The fact that formula_12 is 2-flattenable but formula_29 is not has implications on the forbidden minor characterization for 2-flattenable graphs under the formula_55-norm. Specifically, the minors of formula_29 could be forbidden minors for 2-flattenability. The following results explore these possibilities and give the complete set of forbidden minors. Theorem. The banana graph, or formula_29 with one edge removed, is not 2-flattenable. Observation. The graph obtained by removing two edges that are incident to the same vertex from formula_29 is 2-flattenable. Observation. Connected graphs on 5 vertices with 7 edges are 2-flattenable. The only minor of formula_29 left is the wheel graph formula_28, and the following result shows that this is one of the forbidden minors. Theorem. A graph is 2-flattenable under the formula_55- or formula_56-norm if and only if it does not have either of the following graphs as minors: Connection to structural rigidity. This section relates flattenability to concepts in structural (combinatorial) rigidity theory, such as the rigidity matroid. The following results concern the formula_52-distance cone formula_57, i.e., the set of all formula_52-distance vectors that can be realized as a configuration of formula_13 points in some dimension. A proof that this set is a cone can be found in Ball. 
The formula_0-stratum of this cone formula_58 is the set of vectors that can be realized as a configuration of formula_13 points in formula_0-dimensions. The projection of formula_57 or formula_58 onto the edges of a graph formula_2 is the set of formula_52 distance vectors that can be realized as the edge-lengths of some embedding of formula_2. A generic property of a graph formula_2 is one that almost all frameworks of distance constraint systems, whose graph is formula_2, have. A framework of a DCS formula_3 under an formula_49-norm is a generic framework (with respect to formula_0-flattenability) if the following two conditions hold: Condition (1) ensures that the neighborhood formula_59 has full rank. In other words, formula_59 has dimension equal to the flattening dimension of the complete graph formula_60 under the formula_49-norm. See Kitson for a more detailed discussion of generic frameworks for formula_49-norms. The following results are found in Sitharam & Willoughby. Theorem. A graph formula_2 is formula_0-flattenable if and only if every generic framework of formula_2 is formula_0-flattenable. Theorem. formula_0-flattenability is not a generic property of graphs, even for the formula_8-norm. Theorem. A generic formula_0-flattenable framework of a graph formula_2 exists if and only if formula_2 is independent in the generic formula_0-dimensional rigidity matroid. Corollary. A graph formula_2 is formula_0-flattenable only if formula_2 is independent in the formula_0-dimensional rigidity matroid. Theorem. For general formula_49-norms, a graph formula_2 is
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "d'" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "(G,\\delta)" }, { "math_id": 4, "text": "G=(V,E)" }, { "math_id": 5, "text": "\\delta: E \\rightarrow \\mathbb{R}^{|E|}" }, { "math_id": 6, "text": "\\mathbb{R}^d" }, { "math_id": 7, "text": "H" }, { "math_id": 8, "text": "l_2" }, { "math_id": 9, "text": "G = \\left(V,E\\right)" }, { "math_id": 10, "text": "O \\left( \\sqrt{\\left| E \\right|} \\right)" }, { "math_id": 11, "text": "K_3" }, { "math_id": 12, "text": "K_4" }, { "math_id": 13, "text": "n" }, { "math_id": 14, "text": "n-1" }, { "math_id": 15, "text": "n=3" }, { "math_id": 16, "text": "\\delta" }, { "math_id": 17, "text": "v_1" }, { "math_id": 18, "text": "v_2" }, { "math_id": 19, "text": "\\delta_{12}" }, { "math_id": 20, "text": "v_3" }, { "math_id": 21, "text": "\\delta_{13}" }, { "math_id": 22, "text": "\\delta_{23}" }, { "math_id": 23, "text": "k" }, { "math_id": 24, "text": "k+1" }, { "math_id": 25, "text": "T" }, { "math_id": 26, "text": "u" }, { "math_id": 27, "text": "\\bar{AC}" }, { "math_id": 28, "text": "W_5" }, { "math_id": 29, "text": "K_5" }, { "math_id": 30, "text": "K_{2,2,2}" }, { "math_id": 31, "text": "V_8" }, { "math_id": 32, "text": "C_5 \\times C_2" }, { "math_id": 33, "text": "(1,4)" }, { "math_id": 34, "text": "(3,4)" }, { "math_id": 35, "text": "\\sqrt{2}" }, { "math_id": 36, "text": "\\Pi" }, { "math_id": 37, "text": "(5,6)" }, { "math_id": 38, "text": "d>3" }, { "math_id": 39, "text": "K_{d+2}" }, { "math_id": 40, "text": "75" }, { "math_id": 41, "text": "d \\leq 3" }, { "math_id": 42, "text": "d=1" }, { "math_id": 43, "text": "d=2" }, { "math_id": 44, "text": "d=3" }, { "math_id": 45, "text": "r" }, { "math_id": 46, "text": "p" }, { "math_id": 47, "text": "1 \\leq p \\leq \\infty" }, { "math_id": 48, "text": "(4|E|)^{O\\left(nd|V|^2\\right)}" }, { "math_id": 49, "text": "l_p" }, { "math_id": 50, "text": "H=G \\setminus F" }, { "math_id": 51, "text": "F" }, { "math_id": 52, "text": "l^p_p" }, { "math_id": 53, "text": "\\delta_H" }, { "math_id": 54, "text": "(H,\\delta_H)" }, { "math_id": 55, "text": "l_1" }, { "math_id": 56, "text": "l_{\\infty}" }, { "math_id": 57, "text": "\\Phi_{n,l_p}" }, { "math_id": 58, "text": "\\Phi^d_{n,l_p}" }, { "math_id": 59, "text": "\\Omega" }, { "math_id": 60, "text": "K_n" } ]
https://en.wikipedia.org/wiki?curid=66487331
66488238
Comodule over a Hopf algebroid
In mathematics, at the intersection of algebraic topology and algebraic geometry, there is the notion of a Hopf algebroid which encodes the information of a presheaf of groupoids whose object sheaf and arrow sheaf are represented by algebras. Because any such presheaf will have an associated site, we can consider quasi-coherent sheaves on the site, giving a topos-theoretic notion of modules. Dually (p. 2), comodules over a Hopf algebroid are the purely algebraic analogue of this construction, giving a purely algebraic description of quasi-coherent sheaves on a stack: this is one of the first motivations behind the theory. Definition. Given a commutative Hopf-algebroid formula_0, a left comodule formula_1 (p. 302) is a left formula_2-module formula_1 together with an formula_2-linear map formula_3 which satisfies the counit property formula_4 and the coassociativity property formula_5. A right comodule is defined similarly, but instead there is a map formula_6 satisfying analogous axioms. Structure theorems. Flatness of Γ gives an abelian category. One of the main structure theorems for comodules (p. 303) is that if formula_7 is a flat formula_2-module, then the category of comodules formula_8 of the Hopf-algebroid is an abelian category. Relation to stacks. There is a structure theorem (p. 7) relating comodules of Hopf-algebroids and modules of presheaves of groupoids. If formula_0 is a Hopf-algebroid, there is an equivalence between the category of comodules formula_8 and the category of quasi-coherent sheaves formula_9 for the presheaf of groupoids formula_10 associated to this Hopf-algebroid. Examples. From BP-homology. Associated to the Brown-Peterson spectrum is the Hopf-algebroid formula_11 classifying p-typical formal group laws. Note that formula_12 where formula_13 is the localization of formula_14 by the prime ideal formula_15. Let formula_16 denote the ideal formula_17. Since formula_18 is a primitive in formula_19, there is an associated Hopf-algebroid formula_0 given by formula_20. There is a structure theorem on the Adams-Novikov spectral sequence relating the Ext-groups of comodules on formula_11 to Johnson-Wilson homology, giving a more tractable spectral sequence. This happens through an equivalence of categories of comodules of formula_0 to the category of comodules of formula_21, giving the isomorphism formula_22, assuming formula_1 and formula_23 satisfy some technical hypotheses (p. 24).
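A standard first example, stated here only as an illustration and not taken from the article: formula_7 itself, viewed as a left formula_2-module via the left unit, is a left comodule over formula_0 with structure map the comultiplication; the two comodule properties above then reduce to the counit and coassociativity axioms of the Hopf algebroid.

```latex
% Gamma as a left comodule over (A, Gamma), with structure map the
% comultiplication (a standard example, assumed here for illustration):
\psi = \Delta \colon \Gamma \longrightarrow \Gamma \otimes_A \Gamma,
\qquad (\varepsilon \otimes \mathrm{id}_\Gamma) \circ \Delta = \mathrm{id}_\Gamma,
\qquad (\Delta \otimes \mathrm{id}_\Gamma) \circ \Delta
      = (\mathrm{id}_\Gamma \otimes \Delta) \circ \Delta .
```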
[ { "math_id": 0, "text": "(A,\\Gamma)" }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "\\psi: M \\to \\Gamma\\otimes_AM" }, { "math_id": 4, "text": "(\\varepsilon\\otimes Id_M)\\circ \\psi = Id_M" }, { "math_id": 5, "text": "(\\Delta\\otimes Id_M) \\circ \\psi = (Id_\\Gamma \\otimes \\psi) \\circ \\psi" }, { "math_id": 6, "text": "\\phi: M \\to M \\otimes_A \\Gamma" }, { "math_id": 7, "text": "\\Gamma" }, { "math_id": 8, "text": "\\text{Comod}(A,\\Gamma)" }, { "math_id": 9, "text": "\\text{QCoh}(\\text{Spec}(A),\\text{Spec}(\\Gamma))" }, { "math_id": 10, "text": "\\text{Spec}(\\Gamma)\\rightrightarrows \\text{Spec}(A)" }, { "math_id": 11, "text": "(BP_*,BP_*(BP))" }, { "math_id": 12, "text": "\\begin{align}\nBP_* &= \\mathbb{Z}_{(p)}[v_1,v_2,\\ldots] \\\\\nBP_*(BP) &= BP_*[t_1,t_2,\\ldots]\n\\end{align}" }, { "math_id": 13, "text": "\\mathbb{Z}_{(p)}" }, { "math_id": 14, "text": "\\mathbb{Z}" }, { "math_id": 15, "text": "(p)" }, { "math_id": 16, "text": "I_n" }, { "math_id": 17, "text": "I_n = (p,v_1,\\ldots, v_{n-1})" }, { "math_id": 18, "text": "v_n" }, { "math_id": 19, "text": "BP_*/I_n" }, { "math_id": 20, "text": "(v_n^{-1}BP_*/I_n, v_n^{-1}BP_*(BP)/I_n)" }, { "math_id": 21, "text": "(v_n^{-1}E(m)_*/I_n, v_n^{-1}E(m)_*(E(m)/I_n)" }, { "math_id": 22, "text": "\\text{Ext}^{*,*}_{BP_*BP}(M,N) \\cong\n\\text{Ext}^{*,*}_{E(m)_*E(m)}(E(m)_*\\otimes_{BP_*} M,E(m)_*\\otimes_{BP_*}N)" }, { "math_id": 23, "text": "N" } ]
https://en.wikipedia.org/wiki?curid=66488238
66489065
Geometric rigidity
In discrete geometry, geometric rigidity is a theory for determining if a geometric constraint system (GCS) has finitely many formula_0-dimensional solutions, or frameworks, in some metric space. A framework of a GCS is rigid in formula_0-dimensions, for a given formula_0, if it is an isolated solution of the GCS, factoring out the set of trivial motions, or isometry group, of the metric space, e.g. translations and rotations in Euclidean space. In other words, a rigid framework formula_1 of a GCS has no nearby framework of the GCS that is reachable via a non-trivial continuous motion of formula_1 that preserves the constraints of the GCS. Structural rigidity is another theory of rigidity that concerns generic frameworks, i.e., frameworks whose rigidity properties are representative of all frameworks with the same constraint graph. Results in geometric rigidity apply to all frameworks; in particular, to non-generic frameworks. Geometric rigidity was first explored by Euler, who conjectured that all polyhedra in formula_2-dimensions are rigid. Much work has gone into proving the conjecture, leading to many interesting results discussed below. However, a counterexample was eventually found. There are also some generic rigidity results with no combinatorial components, so they are related to both geometric and structural rigidity. Definitions. The definitions below are with respect to bar-joint frameworks in formula_0-dimensional Euclidean space, and will be generalized for other frameworks and metric spaces as needed. Consider a linkage formula_3, i.e. a constraint graph formula_4 with distance constraints formula_5 assigned to its edges, and the configuration space formula_6 consisting of frameworks formula_1 of formula_3. The frameworks in formula_6 consist of maps formula_7 that satisfy formula_8 for all edges formula_9 of formula_10. In other words, formula_11 is a placement of the vertices of formula_10 as points in formula_0-dimensions that satisfy all distance constraints formula_5. The configuration space formula_6 is an algebraic set. Continuous and trivial motions. A continuous motion is a continuous path in formula_6 that describes the physical motion between two frameworks of formula_3 that preserves all constraints. A trivial motion is a continuous motion resulting from the formula_12 Euclidean isometries, i.e. translations and rotations. In general, any metric space has a set of trivial motions coming from the isometry group of the space. Local rigidity. A framework of a GCS is locally rigid, or just rigid, if all its continuous motions are trivial. Testing for local rigidity is co-NP hard. Rigidity map. The rigidity map formula_13 takes a framework formula_1 and outputs the squared distances formula_14 between all pairs of points that are connected by an edge. Rigidity matrix. The Jacobian, or derivative, of the rigidity map yields a system of linear equations of the form formula_15 for all edges formula_9 of formula_10. The rigidity matrix formula_16 is an formula_17 matrix that encodes the information in these equations. Each edge of formula_10 corresponds to a row of formula_16 and each vertex corresponds to formula_0 columns of formula_16. The row corresponding to the edge formula_9 is defined as follows. formula_18 Infinitesimal motion. An infinitesimal motion is an assignment formula_19 of velocities to the vertices of a framework formula_1 such that formula_20. Hence, the kernel of the rigidity matrix is the space of infinitesimal motions. 
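To make these definitions concrete, the sketch below (an illustrative sketch with assumed coordinates, using one common sign convention for the rows; it is not code from the references) assembles the rigidity matrix of small planar bar-joint frameworks and reads off the dimension of the space of infinitesimal motions as the kernel dimension. A triangle admits only the three trivial infinitesimal motions of the plane, a four-cycle has one additional non-trivial infinitesimal motion, and bracing the four-cycle with a diagonal removes it.

```python
import numpy as np

# Illustrative sketch (vertex coordinates are assumed): build a planar rigidity
# matrix with p_i - p_j in vertex i's columns and p_j - p_i in vertex j's columns
# for each edge (i, j), then compare dim(kernel) with the 3 trivial plane motions.
def rigidity_matrix(points, edges):
    n, d = points.shape
    R = np.zeros((len(edges), n * d))
    for row, (i, j) in enumerate(edges):
        R[row, d*i:d*i+d] = points[i] - points[j]
        R[row, d*j:d*j+d] = points[j] - points[i]
    return R

def dim_infinitesimal_motions(points, edges):
    R = rigidity_matrix(points, edges)
    return points.size - np.linalg.matrix_rank(R)   # dimension of ker(R)

triangle = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5]])
square   = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

print(dim_infinitesimal_motions(triangle, [(0, 1), (1, 2), (2, 0)]))                 # 3: rigid
print(dim_infinitesimal_motions(square, [(0, 1), (1, 2), (2, 3), (3, 0)]))           # 4: flexible
print(dim_infinitesimal_motions(square, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))   # 3: rigid
```

The factor of 2 that arises from differentiating squared distances is omitted here, since it does not change the kernel.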
A trivial infinitesimal motion is defined analogously to a trivial continuous motion. Stress. A stress is an assignment formula_21 to the edges of a framework formula_1. A stress is proper if its entries are nonnegative, and it is a self stress if it satisfies formula_22. A stress satisfying this equation is also called a resolvable stress, equilibrium stress, prestress, or sometimes just a stress. Stress matrix. For a stress formula_23 applied to the edges of a framework formula_1 with the constraint graph formula_4, define the formula_24 stress matrix formula_25 as formula_26 It is easily verified that for any two formula_27 and any stress formula_23, formula_28 The rigidity matrix as a linear transformation. The rigidity matrix can be viewed as a linear transformation from formula_29 to formula_30. The domain of this transformation is the set of formula_31 column vectors, called velocity or displacement vectors, denoted by formula_32, and the image is the set of formula_33 edge distortion vectors, denoted by formula_34. The entries of the vector formula_32 are velocities assigned to the vertices of a framework formula_1, and the equation formula_35 describes how the edges are compressed or stretched as a result of these velocities. The dual linear transformation leads to a different physical interpretation. The codomain of the linear transformation is the set of formula_33 column vectors, or stresses, denoted by formula_23, that apply a stress formula_36 to each edge formula_9 of a framework formula_1. The stress formula_36 applies forces to the vertices of formula_9 that are equal in magnitude but opposite in direction, depending on whether formula_9 is being compressed or stretched by formula_36. Consider the equation formula_37 where formula_38 is a formula_31 vector. The terms on the left corresponding to the formula_0 columns of a vertex formula_39 in formula_40 yield the entry in formula_38 that is the net force formula_41 applied to formula_39 by the stresses on edges incident to formula_39. Hence, the domain of the dual linear transformation is the set of stresses on edges and the image is the set of net forces on vertices. A net force formula_38 can be viewed as being able to counteract, or resolve, the force formula_42, so the image of the dual linear transformation is really the set of resolvable forces. The relationship between these dual linear transformations is described by the work done by a velocity vector formula_32 under a net force formula_38: formula_43 where formula_23 is a stress and formula_34 is an edge distortion. In terms of the stress matrix, this equation becomes formula_44. Types of rigidity. This section covers the various types of rigidity and how they are related. Infinitesimal rigidity. Infinitesimal rigidity is the strongest form of rigidity; it requires that a framework admit no non-trivial infinitesimal motions. It is also called first-order rigidity because of its relation to the rigidity matrix. More precisely, consider the linear equations formula_45 resulting from the equation formula_20. These equations state that the projections of the velocities formula_46 and formula_47 onto the edge formula_9 cancel out. 
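Before moving on to the different notions of rigidity, the stress and stress matrix definitions above can also be illustrated with a small computation (again with assumed coordinates; an illustrative sketch, not code from the references). For a complete graph on four vertices realized as a triangle with one interior point, the rigidity matrix has one more row than its rank, so the kernel of its transpose contains a non-zero equilibrium stress; the corresponding stress matrix then maps every coordinate vector of the framework to zero, a standard property of equilibrium stresses.

```python
import numpy as np

# Illustrative sketch with assumed coordinates: an equilibrium (self) stress of
# K_4 realized as a triangle with an interior point, found as a null vector of
# R^T, and its stress matrix.
points = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [2.0, 1.0]])
edges  = [(0, 1), (1, 2), (2, 0), (0, 3), (1, 3), (2, 3)]
n, d   = points.shape

R = np.zeros((len(edges), n * d))                 # rigidity matrix, one row per edge
for row, (i, j) in enumerate(edges):
    R[row, d*i:d*i+d] = points[i] - points[j]
    R[row, d*j:d*j+d] = points[j] - points[i]

_, s, Vt = np.linalg.svd(R.T)                     # ker(R^T) = space of self stresses
omega = Vt[(s > 1e-10).sum():][0]                 # one equilibrium stress (1-dimensional here)

Omega = np.zeros((n, n))                          # stress matrix of omega
for w, (i, j) in zip(omega, edges):
    Omega[i, j] -= w; Omega[j, i] -= w
    Omega[i, i] += w; Omega[j, j] += w

print(np.allclose(R.T @ omega, 0))                # True: net forces vanish at every vertex
print(np.allclose(Omega @ points, 0))             # True: Omega annihilates the coordinates
```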
Each of the following statements is sufficient for a formula_0-dimensional framework to be infinitesimally rigid in formula_0-dimensions: In general, any type of framework is infinitesimally rigid in formula_0-dimensions if the space of its infinitesimal motions is the space of trivial infinitesimal motions of the metric space. The following theorem by Asimow and Roth relates infinitesimal rigidity and rigidity. Theorem. If a framework is infinitesimally rigid, then it is rigid. The converse of this theorem is not true in general; however, it is true for generic rigid frameworks (with respect to infinitesimal rigidity); see combinatorial characterizations of generically rigid graphs. Static rigidity. A formula_0-dimensional framework formula_1 is statically rigid in formula_0-dimensions if every force vector formula_38 on the vertices of formula_1 that is orthogonal to the trivial motions can be resolved by the net force of some proper stress formula_23; or written mathematically, for every such force vector formula_38 there exists a proper stress formula_23 such that formula_49 Equivalently, the rank of formula_40 must be formula_48. Static rigidity is equivalent to infinitesimal rigidity. Second-order rigidity. Second-order rigidity is weaker than infinitesimal and static rigidity. The second derivative of the rigidity map consists of equations of the form formula_50 The vector formula_51 assigns an acceleration to each vertex of a framework formula_1. These equations can be written in terms of matrices: formula_52, where formula_53 is defined similarly to the rigidity matrix. Each of the following statements is sufficient for a formula_0-dimensional framework to be second-order rigid in formula_0-dimensions: The third statement shows that for each such formula_32, formula_56 is not in the column span of formula_40, i.e., it is not an edge distortion resulting from formula_32. This follows from the Fredholm alternative: since the column span of formula_40 is orthogonal to the kernel of formula_57, i.e., the set of equilibrium stresses, either formula_52 for some acceleration formula_51 or there is an equilibrium stress formula_23 satisfying the third condition. The third condition can be written in terms of the stress matrix: formula_58. Solving for formula_23 is a non-linear problem in formula_32 with no known efficient algorithm. Prestress stability. Prestress stability is weaker than infinitesimal and static rigidity but stronger than second-order rigidity. Consider the third sufficient condition for second-order rigidity. A formula_0-dimensional framework formula_1 is prestress stable if there exists an equilibrium stress formula_23 such that for all non-trivial velocities formula_32, formula_58. Prestress stability can be verified via semidefinite programming techniques. Global rigidity. A formula_0-dimensional framework formula_1 of a linkage formula_3 is globally rigid in formula_0-dimensions if all frameworks in the configuration space formula_6 are equivalent up to trivial motions, i.e., factoring out the trivial motions, there is only one framework of formula_3. Theorem. Global rigidity is a generic property of graphs. Minimal rigidity. A formula_0-dimensional framework formula_1 is minimally rigid in formula_0-dimensions if formula_1 is rigid and removing any edge from formula_1 results in a framework that is not rigid. Redundant rigidity. There are two types of redundant rigidity: vertex-redundant and edge-redundant rigidity. 
A formula_0-dimensional framework formula_1 is edge-redundantly rigid in formula_0-dimensions if formula_1 is rigid and removing any edge from formula_1 results in another rigid framework. Vertex-redundant rigidity is defined analogously. Rigidity for various types of frameworks. Polyhedra. This section concerns the rigidity of polyhedra in formula_2-dimensions; see polyhedral systems for a definition of this type of GCS. A polyhedron is rigid if its underlying bar-joint framework is rigid. One of the earliest results for rigidity was a conjecture by Euler in 1766. Conjecture. A closed spatial figure allows no changes, as long as it is not ripped apart. Much work has gone into proving this conjecture, which has now been proved false by counterexample. The first major result is by Cauchy in 1813 and is known as Cauchy's theorem. Cauchy's Theorem. If there is an isometry between the surfaces of two strictly convex polyhedra which is an isometry on each of the faces, then the two polyhedra are congruent. There were minor errors with Cauchy's proof. A complete proof, and a slightly generalized result, were given later. The following corollary of Cauchy's theorem relates this result to rigidity. Corollary. The 2-skeleton of a strictly convex polyhedral framework in formula_2-dimensions is rigid. In other words, if we treat the convex polyhedron as a set of rigid plates, i.e., as a variant of a body-bar-hinge framework, then the framework is rigid. The next result, by Bricard in 1897, shows that the strict convexity condition can be dropped for formula_59-skeletons of the octahedron. Theorem. The formula_59-skeleton of any polyhedral framework of the octahedron in formula_2-dimensions is rigid. However, there exists a framework of the octahedron whose formula_60-skeleton is not rigid in formula_2-dimensions. The proof of the latter part of this theorem shows that these flexible frameworks exist due to self-intersections. Progress on Euler's conjecture did not pick up again until the late 19th century. The next theorem and corollary concern triangulated polyhedra. Theorem. If vertices are inserted in the edges of a strictly convex polyhedron and the faces are triangulated, then the formula_60-skeleton of the resulting polyhedron is infinitesimally rigid. Corollary. If a convex polyhedron in formula_2-dimensions has the property that the faces containing a given vertex do not all lie in the same plane, then the formula_59-skeleton of that polyhedron is infinitesimally rigid. The following result shows that the triangulation condition in the above theorem is necessary. Theorem. The formula_60-skeleton of a strictly convex polyhedron embedded in formula_2-dimensions which has at least one non-triangular face is not rigid. The following conjecture extends Cauchy's result to more general polyhedra. Conjecture. Two combinatorially equivalent polyhedra with equal corresponding dihedral angles are isogonal. This conjecture has been proved for some special cases. The next result applies in the generic setting, i.e., to almost all polyhedra with the same combinatorial structure; see structural rigidity. Theorem. Every closed simply connected polyhedral surface with a formula_2-dimensional framework is generically rigid. This theorem demonstrates that Euler's conjecture is true for almost all polyhedra. However, a non-generic polyhedron was found that is not rigid in formula_2-dimensions, disproving the conjecture. 
This polyhedron is topologically a sphere, which shows that the generic result above is optimal. Details on how to construct this polyhedron can be found in the literature. An interesting property of this polyhedron is that its volume remains constant along any continuous motion path, leading to the following conjecture. Bellows Conjecture. Every orientable closed polyhedral surface flexes with constant volume. This conjecture was first proven for spherical polyhedra and then in general. Tensegrities. This section concerns the rigidity of tensegrities; see tensegrity systems for a definition of this type of GCS. Definitions. The definitions below can be found in the literature. Infinitesimal motion. An infinitesimal motion of a tensegrity framework formula_1 is a velocity vector formula_19 such that for each edge formula_9 of the framework, Second-order motion. A second-order motion of a tensegrity framework formula_1 is a solution formula_54 to the following constraints: Global rigidity. A formula_0-dimensional tensegrity framework formula_1 of a tensegrity GCS is globally rigid in formula_0-dimensions if every other formula_0-dimensional framework formula_69 of the same GCS that is dominated by formula_1 can be obtained via a trivial motion of formula_1. Universal rigidity. A formula_0-dimensional tensegrity framework formula_1 of a tensegrity GCS is universally rigid if it is globally rigid in any dimension. Dimensional rigidity. A formula_0-dimensional tensegrity framework formula_1 of a tensegrity GCS is dimensionally rigid in formula_0-dimensions if any other formula_70-dimensional tensegrity framework formula_69, for any formula_70 satisfying the constraints of the GCS, has an affine span of dimension at most formula_0. Super stable. A formula_0-dimensional tensegrity framework formula_1 is super stable in formula_0-dimensions if it is rigid in formula_0-dimensions as a bar-joint framework and has a proper equilibrium stress formula_23 such that the stress matrix formula_25 is positive semidefinite and has rank formula_71. Rigidity theorems. Generic results. Infinitesimal rigidity is not a generic property of tensegrities; see structural rigidity. In other words, not all generic tensegrities with the same constraint graph have the same infinitesimal rigidity properties. Hence, some work has gone into identifying specific classes of graphs for which infinitesimal rigidity is a generic property of tensegrities. Graphs satisfying this condition are called strongly rigid. Testing a graph for strong rigidity is NP-hard, even in formula_60 dimension. The following result equates generic redundant rigidity of graphs to infinitesimally rigid tensegrities. Theorem. A graph formula_10 has an infinitesimally rigid tensegrity framework in formula_0-dimensions, for some partition of the edges of formula_10 into bars, cables, and struts if and only if formula_10 is generically edge-redundantly rigid in formula_0-dimensions. The next result demonstrates when rigidity and infinitesimal rigidity of tensegrities are equivalent. Theorem. Let formula_1 be a formula_0-dimensional tensegrity framework where: the vertices of formula_10 are realized as a strictly convex polygon; the bars form a Hamilton cycle on the boundary of this polygon; and there are no struts. Then, formula_1 is rigid in formula_0-dimensions if and only if it is infinitesimally rigid in formula_0-dimensions. The following is a necessary condition for rigidity. Theorem. Let formula_1 be a formula_0-dimensional tensegrity framework with at least one cable or strut. 
If formula_1 is rigid in formula_0-dimensions, then it has a non-zero proper equilibrium stress. Rigidity of tensegrities can also be written in terms of bar-joint frameworks as follows. Theorem. Let formula_1 be a formula_0-dimensional tensegrity framework with at least one cable or strut. Then formula_1 is infinitesimally rigid in formula_0-dimensions if it is rigid in formula_0-dimensions as a bar-joint framework and has a strict proper stress. The following is a sufficient condition for second-order rigidity. Theorem. Let formula_1 be a formula_0-dimensional tensegrity framework. If for all non-trivial infinitesimal motions formula_32 of formula_1, there exists a proper equilibrium stress formula_23 such that formula_72 then formula_1 is second-order rigid. An interesting application of tensegrities is in sphere-packings in polyhedral containers. Such a packing can be modelled as a tensegrity with struts between pairs of tangent spheres and between the boundaries of the container and the spheres tangent to them. This model has been studied to compute local maximal densities of these packings. The next result demonstrates when tensegrity frameworks have the same equilibrium stresses. Theorem. Let formula_1 be a formula_0-dimensional tensegrity framework with a proper stress formula_23 such that the stress matrix formula_25 is positive semidefinite. Then, formula_23 is a proper stress of all formula_0-dimensional tensegrity frameworks dominated by formula_1. Global rigidity theorems. The following is a sufficient condition for global rigidity of generic tensegrity frameworks based on stress matrices. Theorem. Let formula_1 be a formula_0-dimensional generic tensegrity framework with a proper equilibrium stress formula_23. If the stress matrix formula_25 has rank formula_71, then formula_1 is globally rigid in formula_0 dimensions. While this theorem is for the generic setting, it does not offer a combinatorial characterization of generic global rigidity, so it is not quite a result of structural rigidity. Universal and dimensional rigidity. Let formula_1 be a formula_0-dimensional generic tensegrity framework, such that the affine span of formula_11 is formula_73, with a proper equilibrium stress formula_23 and the stress matrix formula_25. A finite set of non-zero vectors in formula_73 lie on a conic at infinity if, treating them as points in formula_74-dimensional projective space, they lie on a conic. Consider the following three statements: If Statements 1 and 2 hold, then formula_1 is dimensionally rigid in formula_0-dimensions, and if Statement 3 also holds, then formula_1 is universally rigid in formula_0-dimensions.
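The rank characterizations used above (infinitesimal rigidity, and equivalently static rigidity, for bar-joint frameworks) can be checked numerically on small examples. The following Python sketch builds the rigidity matrix of a planar bar-joint framework and tests whether its rank equals d|V| − d(d+1)/2; the function names are illustrative and are not taken from any particular rigidity software package.

```python
import numpy as np

def rigidity_matrix(points, edges):
    """|E| x d|V| rigidity matrix: the row of edge (u, v) carries p(u) - p(v)
    in u's block of columns and p(v) - p(u) in v's block."""
    n, d = points.shape
    R = np.zeros((len(edges), d * n))
    for row, (u, v) in enumerate(edges):
        diff = points[u] - points[v]
        R[row, d * u:d * u + d] = diff
        R[row, d * v:d * v + d] = -diff
    return R

def is_infinitesimally_rigid(points, edges):
    """Rank test: a framework whose points affinely span the space is
    infinitesimally rigid iff rank R = d|V| - d(d+1)/2 (2|V| - 3 in the plane)."""
    n, d = points.shape
    R = rigidity_matrix(points, edges)
    return np.linalg.matrix_rank(R) == d * n - d * (d + 1) // 2

# A triangle with generic coordinates: infinitesimally rigid in the plane.
triangle = np.array([[0.0, 0.0], [1.0, 0.1], [0.4, 1.3]])
print(is_infinitesimally_rigid(triangle, [(0, 1), (1, 2), (0, 2)]))      # True

# A 4-cycle: a flexible linkage, hence not infinitesimally rigid.
quad = np.array([[0.0, 0.0], [1.0, 0.1], [1.1, 1.0], [0.2, 1.2]])
print(is_infinitesimally_rigid(quad, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # False
```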
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "(G,p)" }, { "math_id": 2, "text": "3" }, { "math_id": 3, "text": "(G,\\delta)" }, { "math_id": 4, "text": "G=(V,E)" }, { "math_id": 5, "text": "\\delta" }, { "math_id": 6, "text": "\\mathcal{C} (G,\\delta)" }, { "math_id": 7, "text": "p:V \\rightarrow \\mathbb{R}^{d|V|}" }, { "math_id": 8, "text": "\\|p(u) - p(v)\\|^2 = \\delta_{uv}," }, { "math_id": 9, "text": "(u,v)" }, { "math_id": 10, "text": "G" }, { "math_id": 11, "text": "p" }, { "math_id": 12, "text": "d+1 \\choose 2" }, { "math_id": 13, "text": "\\rho:\\mathbb{R}^{d|V|} \\rightarrow \\mathbb{R}^{|E|}" }, { "math_id": 14, "text": "\\|p(u) - p(v)\\|^2" }, { "math_id": 15, "text": "(p(u)-p(v)) \\cdot (p'(v) - p'(u))=0," }, { "math_id": 16, "text": "R(G,p)" }, { "math_id": 17, "text": "|E| \\times d|V|" }, { "math_id": 18, "text": "\n\\begin{bmatrix}\n\\, & \\dots & \\text{columns for } u & \\dots & \\text{columns for } v & \\dots \\\\\n\\vdots & \\, & \\, & \\vdots & \\, & \\, \\\\\n\\text{row for }(u,v) & 0 \\dots 0 & p(u) - p(v) & 0 \\dots 0 & p(v) - p(u) & 0 \\dots 0 \\\\\n\\vdots & \\, & \\, & \\vdots & \\, & \\,\n\\end{bmatrix}\n" }, { "math_id": 19, "text": "p':V \\rightarrow \\mathbb{R}^d" }, { "math_id": 20, "text": "R(G,p)p'=0" }, { "math_id": 21, "text": "\\omega:E \\rightarrow \\mathbb{R}" }, { "math_id": 22, "text": "\\omega R(G,p)=0" }, { "math_id": 23, "text": "\\omega" }, { "math_id": 24, "text": "|V| \\times |V|" }, { "math_id": 25, "text": "\\Omega" }, { "math_id": 26, "text": "\n\\Omega_{uv} = \n\\begin{cases}\n-\\omega_{uv} & \\text{if } u \\neq v \\\\\n\\sum_{v \\in V} {\\omega_{uv}} & \\text{otherwise}\n\\end{cases}\n" }, { "math_id": 27, "text": "p,q \\in \\mathbb{R}^{d|V|}" }, { "math_id": 28, "text": "\\omega R(p) q = p^T \\Omega q." }, { "math_id": 29, "text": "\\mathbb{R}^{d|V|}" }, { "math_id": 30, "text": "\\mathbb{R}^{|E|}" }, { "math_id": 31, "text": "1 \\times d|V|" }, { "math_id": 32, "text": "p'" }, { "math_id": 33, "text": "1 \\times |E|" }, { "math_id": 34, "text": "e'" }, { "math_id": 35, "text": "R(G,p)p'= e'" }, { "math_id": 36, "text": "\\omega_{uv}" }, { "math_id": 37, "text": "\\omega^T R(p) = f," }, { "math_id": 38, "text": "f" }, { "math_id": 39, "text": "v" }, { "math_id": 40, "text": "R(p)" }, { "math_id": 41, "text": "f_v" }, { "math_id": 42, "text": "-f" }, { "math_id": 43, "text": "W = fp' = (\\omega R(p)) p' = \\omega (R(p) p') = \\omega e'," }, { "math_id": 44, "text": "W = p^T \\Omega p'" }, { "math_id": 45, "text": "(p(u)-p(v)) \\cdot (p'(u)-p'(v)) = 0" }, { "math_id": 46, "text": "p'(u)" }, { "math_id": 47, "text": "p'(v)" }, { "math_id": 48, "text": "d|V| - {d+1 \\choose 2}" }, { "math_id": 49, "text": "f + \\omega R(p) = 0." }, { "math_id": 50, "text": "(p(u)-p(v)) \\cdot (p' '(u) - p' '(v)) + (p'(u) - p'(v)) \\cdot (p'(u) - p'(v)) = 0." 
}, { "math_id": 51, "text": "p' '" }, { "math_id": 52, "text": "R(p) p' ' = -R(p')p'" }, { "math_id": 53, "text": "R(p')" }, { "math_id": 54, "text": "(p',p' ')" }, { "math_id": 55, "text": "\\omega^T R(p')p' > 0" }, { "math_id": 56, "text": "R(p')p'" }, { "math_id": 57, "text": "R(p)^T" }, { "math_id": 58, "text": "p'^T \\Omega p' > 0" }, { "math_id": 59, "text": "2" }, { "math_id": 60, "text": "1" }, { "math_id": 61, "text": "(p_u - p_v) \\cdot (p'_u - p'_v) = 0" }, { "math_id": 62, "text": "(p_u - p_v) \\cdot (p'_u - p'_v) \\leq 0" }, { "math_id": 63, "text": "(p_u - p_v) \\cdot (p'_u - p'_v) \\geq 0" }, { "math_id": 64, "text": "\\|p'_u - p'_v\\|^2 + (p_u - p_v) \\cdot (p' '_u - p' '_v) = 0" }, { "math_id": 65, "text": "\\|p'_u - p_v\\|^2 + (p_u - p_v) \\cdot (p' '_u - p'_v) \\leq 0" }, { "math_id": 66, "text": "(p_u - p_v) \\cdot (p'_u - p'_v) < 0" }, { "math_id": 67, "text": "\\|p'_u - p_v\\|^2 + (p_u - p_v) \\cdot (p' '_u - p'_v) \\geq 0" }, { "math_id": 68, "text": "(p_u - p_v) \\cdot (p'_u - p'_v) > 0" }, { "math_id": 69, "text": "(G,q)" }, { "math_id": 70, "text": "D" }, { "math_id": 71, "text": "|V|-d-1" }, { "math_id": 72, "text": "\\sum_{u,v \\in V} \\omega_{uv} (p'_u - p'_v) \\cdot (p'_u - p'_v) > 0," }, { "math_id": 73, "text": "\\mathbb{R}^d" }, { "math_id": 74, "text": "(d-1)" }, { "math_id": 75, "text": "rank(\\Omega)=|V|-d-1" } ]
https://en.wikipedia.org/wiki?curid=66489065
664897
Secure Communication based on Quantum Cryptography
Secure Communication based on Quantum Cryptography (SECOQC) is a project that aims to develop quantum cryptography (see there for further details). The European Union decided in 2004 to invest 11 million EUR in the project as a way of circumventing espionage attempts by ECHELON. Christian Monyk, the coordinator of SECOQC, said people and organizations in Austria, Belgium, the United Kingdom, Canada, the Czech Republic, Denmark, France, Germany, Italy, Russia, Sweden, and Switzerland would participate in the project. On October 8, 2008, SECOQC was launched in Vienna. Limitations of classical quantum cryptography. Quantum cryptography, usually known as quantum key distribution (QKD), provides powerful security, but it has some limitations. By the no-cloning theorem, QKD can only provide one-to-one connections, so the number of links grows as formula_0, where formula_1 is the number of nodes. Furthermore, when a new node wants to join the QKD network, issues arise such as the construction of a new quantum communication line. SECOQC was started to overcome these issues. Brief architecture of SECOQC network. The SECOQC network architecture can be divided into two parts: trusted private networks and quantum networks connected via QBBs (quantum backbones). The private networks are conventional networks with end-nodes and a QBB. Each QBB enables quantum channel communication with another QBB and consists of a number of QKD devices that are connected with other QKD devices over one-to-one connections. In this way, SECOQC provides easier registration of new end-nodes in a QKD network and quick recovery from threats to quantum channel links. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
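To make the scaling limitation concrete, the Python sketch below compares the quadratic number of pairwise QKD links in a fully meshed network with a simple backbone layout. The chain-of-QBBs topology used here is only an illustrative assumption for the comparison, not SECOQC's actual deployment, and the function names are hypothetical.

```python
def links_full_mesh(n):
    """One-to-one QKD links needed so every pair of n nodes shares a direct key."""
    return n * (n - 1) // 2  # grows as N(N-1)/2

def links_chain_backbone(n):
    """Illustrative alternative: n private networks, one QBB each, QBBs connected
    in a chain, with keys relayed hop by hop through the trusted backbone."""
    return n - 1

for n in (4, 8, 16, 32):
    print(n, links_full_mesh(n), links_chain_backbone(n))
# prints: 4 6 3 / 8 28 7 / 16 120 15 / 32 496 31
```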
[ { "math_id": 0, "text": "N(N-1)/2" }, { "math_id": 1, "text": "N" } ]
https://en.wikipedia.org/wiki?curid=664897
66490111
Geiringer–Laman theorem
The Geiringer–Laman theorem gives a combinatorial characterization of generically rigid graphs in formula_0-dimensional Euclidean space, with respect to bar-joint frameworks. This theorem was first proved by Hilda Pollaczek-Geiringer in 1927, and later by Gerard Laman in 1970. An efficient algorithm called the Pebble Game is used to identify this class of graphs. This theorem has been the inspiration for many Geiringer-Laman type results for other types of frameworks with generalized Pebble games. Statement of the theorem. This theorem relies on definitions of genericity that can be found on the structural rigidity page. Let formula_1 denote the vertex set of a set of edges formula_2. Geiringer-Laman Theorem. A graph formula_3 is generically rigid in formula_0-dimensions with respect to bar-joint frameworks if and only if formula_4 has a spanning subgraph formula_5 such that: formula_6; and for all formula_7, formula_8. The spanning subgraph formula_9 satisfying the conditions of the theorem is called a Geiringer-Laman, or minimally rigid, graph. Graphs satisfying the second condition form the independent sets of a sparsity matroid, and are called formula_10-sparse. A graph satisfying both conditions is also called a formula_10-tight graph. The direction of the theorem which states that a generically rigid graph is formula_10-tight is called the Maxwell direction, because James Clerk Maxwell gave an analogous necessary condition of formula_11-sparsity for a graph to be independent in the formula_12-dimensional generic rigidity matroid. The other direction of the theorem is the more difficult direction to prove. For dimensions formula_13, a graph that is formula_11-tight is not necessarily generically minimally rigid, i.e., the converse of the Maxwell Direction is not true. Example. Consider the graphs in Figure 1. The graph in (c) is generically minimally rigid, but it is not infinitesimally rigid. The red velocity vectors depict a non-trivial infinitesimal flex. Removing the red edge in (a) yields a generically minimally rigid spanning graph. Adding the dashed red edge in (b) makes the graph generically minimally rigid. Theorem. Let formula_4 be a graph. The following statements are equivalent: The equivalence of the first and second statements is the Geiringer-Laman theorem. The equivalence of the first and third statements was first proved by Crapo via the Geiringer-Laman theorem, and later by Tay via a more direct approach. Outline of proof. The proof of the Geiringer-Laman theorem given below is based on Laman's proof. Furthermore, the details of the proofs below are based on publicly available lecture notes. Consider a bar-joint system formula_16 and a framework formula_17 of this system, where formula_18 is a map that places the vertices of formula_4 in the plane such that the distance constraints formula_19 are satisfied. For convenience, we refer to formula_20 as a framework of formula_4. The proof of the Geiringer-Laman theorem follows the outline below. Step 1 sets up the generic setting of rigidity so that we can focus on generic infinitesimal rigidity rather than generic rigidity. This is an easier approach, because infinitesimal rigidity involves a system of linear equations, rather than the quadratic equations that arise for regular rigidity. In particular, we can prove structural properties about the rigidity matrix of a generic framework. These results were first proved by Asimow and Roth; see Combinatorial characterizations of generically rigid graphs. 
Note that in Step 1.4 the framework must be generic with respect to infinitesimal rigidity. In particular, a framework formula_17 that is rigid and generic with respect to rigidity is not necessarily infinitesimally rigid. Step 2 is the Maxwell Direction of the proof, which follows from simple counting arguments on the rigidity matrix. Step 3 shows that generically minimally rigid graphs are exactly the graphs that can be constructed starting from a single edge using two simple operations, which are defined below. Step 4 shows that graphs with this type of construction are generically infinitesimally rigid. Finally, once Step 1 is proved, Steps 2-4 prove the Geiringer-Laman theorem. Equivalence of generic rigidity and generic infinitesimal rigidity. Let formula_3 be a graph. First, we show that generic frameworks with respect to infinitesimal rigidity form an open and dense set in formula_21. One necessary and sufficient condition for a framework formula_20 of formula_4 to be infinitesimally rigid is for its rigidity matrix formula_22 to have max rank over all frameworks of formula_4. Proposition 1. For any framework formula_20 of formula_4 and any neighborhood formula_23, there exists a framework formula_24 in formula_23 such that the rigidity matrix formula_25 has max rank. "Proof." If the rigidity matrix formula_22 does not have max rank, then it has a set of dependent rows corresponding to a subset of edges formula_26 such that for some other rigidity matrix formula_25, the rows corresponding to formula_27 are independent. Let formula_28 be the set of frameworks such that the rows corresponding to formula_27 in their rigidity matrices are dependent. In other words, formula_28 is the set of frameworks formula_20 such that the minor of the rows corresponding to formula_27 in formula_22 is formula_29. Hence, formula_28 is a curve in formula_21, because a minor is a polynomial in the entries of a matrix. Let formula_30 be the union of these curves over all subsets of edges of formula_2. If the rigidity matrix formula_22 does not have max rank for some framework formula_20, then formula_20 is contained in formula_30. Finally, since formula_30 is a finite set of curves, the proposition is proved. Proposition 2. Infinitesimal rigidity is a generic property of graphs. "Proof." We show that if one generic framework with respect to infinitesimal rigidity is infinitesimally rigid, then all generic frameworks are infinitesimally rigid. If a framework formula_20 of a graph formula_4 is infinitesimally rigid, then formula_22 has max rank. Note that the kernel of the rigidity matrix is the space of infinitesimal motions of a framework, which has dimension formula_31 for infinitesimally rigid frameworks. Hence, by the Rank–nullity theorem, if one generic framework is infinitesimally rigid then all generic frameworks are infinitesimally rigid. Proposition 3. If a framework formula_20 of a graph formula_4 is infinitesimally rigid, then it is rigid. "Proof." Assume that formula_20 is not rigid, so there exists a framework formula_24 in a neighborhood formula_23 such that formula_32 and formula_24 cannot be obtained via a trivial motion of formula_20. Since formula_24 is in formula_23, there exists a formula_33 and formula_34 such that formula_35. Applying some algebra yields: formula_36 Hence, formula_37 We can choose a sequence of formula_38 such that formula_39 and formula_40. This causes formula_41 and formula_42. 
Hence, formula_43 The first and last expressions in the equations above state that formula_44 is an infinitesimal motion of the framework formula_20. Since there is no trivial motion between formula_20 and formula_24, formula_44 is not a trivial infinitesimal motion. Thus, formula_20 is not infinitesimally rigid. Proposition 4. If a framework formula_20 of a graph formula_4 is rigid and generic with respect to infinitesimal rigidity, then formula_20 is infinitesimally rigid. "Proof." This follows from the implicit function theorem. First, we will factor out all trivial motions of formula_20. Since formula_22 has max rank, no formula_45 points of formula_20 are collinear. Hence, we can pin formula_0 points of formula_20 to factor out trivial motions: one point at the origin and another along the formula_46-axis at a distance from the origin consistent with all constraints. This yields a pinned framework formula_24 that lives in formula_47. This can be done for all frameworks in a neighborhood formula_23 of formula_20 to obtain a neighborhood formula_48 of formula_24 of pinned frameworks. The set of such frameworks is still a smooth manifold, so the rigidity map and rigidity matrix can be redefined on the new domain. Specifically, the rigidity matrix formula_25 of a pinned framework formula_24 has formula_49 columns and rank equal to that of formula_22, where formula_20 is the unpinned framework corresponding to formula_24. In this pinned setting, a framework is rigid if it is the only nearby solution to the rigidity map. Now, assume an unpinned framework formula_20 is not infinitesimally rigid, so that formula_50. Then formula_51, where formula_24 is the pinned version of formula_20. We now set up to apply the implicit function theorem. Our continuously differentiable function is the rigidity map formula_52. The Jacobian of formula_53 is the rigidity matrix. Consider the subset of edges formula_26 corresponding to the formula_54 independent rows of formula_25, yielding the submatrix formula_55. We can find formula_54 independent columns of formula_55. Denote the entries in these columns by the vectors formula_56. Denote the entries of the remaining columns by the vectors formula_57. The formula_58 submatrix of formula_55 induced by the formula_59 is invertible, so by the implicit function theorem, there exists a continuously differentiable function formula_60 such that formula_61 and formula_62. Hence, the framework formula_63 of the subgraph formula_5 is not rigid, and since the rows of formula_55 span the row space of formula_25, formula_24 is also not rigid. This contradicts our assumption, so formula_20 is infinitesimally rigid. Proposition 5. Rigidity is a generic property of graphs. "Proof." Let formula_20 be a rigid framework of formula_4 that is generic with respect to rigidity. By definition, there is a neighborhood of rigid frameworks formula_23 of formula_20. By Proposition 1, there is a framework formula_24 in formula_23 that is generic with respect to infinitesimal rigidity, so by Proposition 4, formula_24 is infinitesimally rigid. Hence, by Proposition 2, all frameworks that are generic with respect to infinitesimal rigidity are infinitesimally rigid, and by Proposition 3 they are also rigid. Finally, every neighborhood formula_64 of every framework formula_65 that is generic with respect to rigidity contains a framework formula_66 that is generic with respect to infinitesimal rigidity, by Proposition 1. Thus, if formula_20 is rigid then formula_65 is rigid. Theorem 1. 
A graph formula_4 is generically rigid if and only if it is generically infinitesimally rigid. "Proof." The proof follows a similar argument to the proof of Proposition 5. If formula_4 is generically rigid, then there exists a generic framework formula_20 with respect to rigidity that is rigid by definition. By Propositions 1 and 4, in any neighborhood of formula_20 there is a framework formula_24 that is generic with respect to infinitesimal rigidity and infinitesimally rigid. Hence, by Proposition 2, formula_4 is generically infinitesimally rigid. For the other direction, assume to the contrary that formula_4 is generically infinitesimally rigid, but not generically rigid. Then there exists a generic framework formula_20 with respect to rigidity that is not rigid by definition. By Proposition 1, in any neighborhood of formula_20 there is a framework formula_24 that is generic with respect to infinitesimal rigidity. By assumption formula_24 is infinitesimally rigid, and by Proposition 3, formula_24 is also rigid. Thus, formula_20 must be rigid and, by Proposition 5, all frameworks that are generic with respect to rigidity are rigid. This contradicts our assumption that formula_4 is not generically rigid. Maxwell direction. The Maxwell Direction of the Geiringer-Laman theorem follows from a simple counting argument on the rigidity matrix. Maxwell Direction. If a graph formula_4 has a generic infinitesimally rigid framework, then formula_4 has a Geiringer-Laman subgraph. "Proof." Let formula_20 be a generic infinitesimally rigid framework of formula_4. By definition, formula_22 has max rank, i.e., formula_67. In particular, formula_22 has formula_49 independent rows. Each row of formula_22 corresponds to an edge of formula_4, so the submatrix formula_68 with just the independent rows corresponds to a subgraph formula_5 such that formula_69. Furthermore, any subgraph formula_70 of formula_9 corresponds to a submatrix formula_71 of formula_68. Since the rows of formula_68 are independent, so are the rows of formula_71. Hence, formula_72, which clearly satisfies formula_8. Equivalence of generic infinitesimal rigidity and Henneberg constructions. Now we begin the proof of the other direction of the Geiringer-Laman theorem by first showing that a generically minimally rigid graph has a Henneberg construction. A Henneberg graph formula_4 has the following recursive definition: formula_4 is a single edge; or formula_4 is obtained from a Henneberg graph by adding a new vertex formula_74 adjacent to two existing vertices; or formula_4 is obtained from a Henneberg graph by removing an edge formula_73 and adding a new vertex formula_74 adjacent to formula_75 and one other vertex. The two operations above are called a formula_29-extension and a formula_76-extension respectively. The following propositions are proved in the literature: Proposition 6. A generically minimally rigid graph has no vertex with degree formula_76 and at least one vertex with degree less than formula_77. Proposition 7. If formula_4 is a generically minimally rigid graph with a vertex formula_78 of degree formula_45, connected to vertices formula_79 and formula_80, then for at least one pair of the neighbors of formula_78, without loss of generality say formula_81, there is no subgraph formula_82 of formula_4 that contains formula_83 and formula_84 and satisfies formula_85. Theorem 2. A generically minimally rigid graph formula_4 with at least formula_0 vertices has a Henneberg construction. "Proof." We proceed by induction on the number of vertices formula_86. The base case formula_87 is the base case of the Henneberg construction, a single edge. Assume formula_3 has a Henneberg construction when formula_88, and we will prove it for formula_89. When formula_89, formula_4 has a vertex formula_78 with degree formula_0 or formula_45, by Proposition 6. 
Case 1: formula_78 has degree 2. Let formula_82 be the subgraph of formula_4 obtained by removing formula_78, so formula_90 and formula_91. Since formula_4 is minimally rigid, we have formula_92 Furthermore, any subgraph formula_93 of formula_9 is also a subgraph of formula_4, so formula_94 by assumption. Hence, formula_9 is minimally rigid, by the Maxwell Direction, and formula_9 has a Henneberg construction by the inductive hypothesis. Now simply notice that formula_4 can be obtained from formula_9 via a formula_29-extension. Case 2: formula_78 has degree 3. Let the edges incident to formula_78 be formula_95 and formula_96. By Proposition 7, for one pair of the neighbors of formula_78, without loss of generality say formula_81, there is no subgraph formula_97 of formula_4 that contains formula_83 and formula_84 and satisfies formula_85. Note that formula_81 cannot be an edge, or else the subgraph on just that edge satisfies the previous equality. Consider the graph formula_82 obtained by removing formula_78 from formula_4 and adding the edge formula_81. We have formula_98. Furthermore, as with Case 1, any subgraph of formula_9 that does not contain formula_81 satisfies the second condition for minimal rigidity by assumption. For a subgraph of formula_99 that does contain formula_81, removing this edge yields a subgraph formula_100 of formula_4. By Proposition 7, formula_101, so formula_102. Hence, formula_9 is minimally rigid, and formula_9 has a Henneberg construction by the inductive hypothesis. Finally, notice that formula_4 can be obtained from formula_9 via a formula_76-extension. Combining Cases 1 and 2 proves the theorem by induction. It is also easy to prove the converse of Theorem 2 by induction. Proposition 8. A graph with a Henneberg construction is generically minimally rigid. Henneberg constructible graphs are generically infinitesimally rigid. To complete the proof of the Geiringer-Laman theorem, we show that if a graph has a Henneberg construction then it is generically infinitesimally rigid. The proof of this result relies on the following proposition. Proposition 9. If formula_103 are three non-collinear formula_0-dimensional points and formula_104 are three formula_0-dimensional vectors, then the following statements are equivalent: formula_105 for all formula_106; and the function formula_107 vanishes at every point formula_20. Theorem 3. If a graph formula_4 with at least formula_45 vertices has a Henneberg construction, then formula_4 is generically infinitesimally rigid. "Proof." We proceed by induction on the number of vertices formula_86. The graph in the base case formula_108 is a triangle, which is generically infinitesimally rigid. Assume that when formula_88, formula_4 is generically infinitesimally rigid, and we will prove it for formula_109. For formula_89, consider the graph formula_82 that formula_4 was obtained from via formula_29- or formula_76-extension. By the inductive hypothesis, formula_9 is generically infinitesimally rigid. Hence, formula_9 has a generic infinitesimally rigid framework formula_110 such that the kernel of formula_111 has dimension formula_45. Let formula_78 be the vertex added to formula_9 to obtain formula_4. We must choose a placement formula_112 in formula_0-dimensions such that formula_113 is a generic infinitesimally rigid framework of formula_4. Case 1: formula_4 is obtained from formula_9 via a formula_29-extension. 
Choosing such a placement for formula_112 is equivalent to adding rows corresponding to the equations formula_114 to the rigidity matrix formula_111, where formula_115 and formula_74 are the neighbors of formula_78 after the formula_29-extension and formula_116 is the velocity assigned to formula_112 by an infinitesimal motion. Our goal is to choose formula_112 such that the dimension of the space of infinitesimal motions of formula_22 is the same as that of formula_111. We can choose formula_112 such that it is not collinear with formula_117 and formula_118, which ensures that there is only one solution to these equations. Hence, the kernel of formula_22 has dimension formula_45, so formula_20 is a generic infinitesimally rigid framework of formula_4. Case 2: formula_4 is obtained from formula_9 via a formula_76-extension. Let the edges incident to formula_78 after the formula_76-extension be formula_119, and formula_96, and let formula_81 be the edge that was removed. Removing the edge formula_81 from formula_9 yields the subgraph formula_120. Let formula_121 be the framework of formula_120 that is equivalent to formula_65, except for the removed edge. The rigidity matrix formula_122 can be obtained from formula_111 by removing the row corresponding to the removed edge. By Proposition 8, formula_9 is generically minimally rigid, so the rows of formula_111 are independent. Hence, the rows of formula_122 are independent and its kernel has dimension formula_77. Let formula_123 be a basis for the space of infinitesimal motions of formula_121 such that formula_124 is a basis for the space of trivial infinitesimal motions. Then, any infinitesimal motion of formula_121 can be written as a linear combination of these formula_77 basis vectors. Choosing a placement for formula_112 that results in a generic infinitesimally rigid framework of formula_4 is equivalent to adding rows corresponding to the equations formula_125 to the rigidity matrix formula_122. Our goal is to choose formula_112 such that the dimension of the space of infinitesimal motions of formula_22 is formula_76 less than that of formula_122. After rearranging, these equations have a solution if and only if formula_126 Note that formula_127 can be written as formula_128, for constants formula_129. Furthermore, we can move the summation outside of the determinant to obtain formula_130 Since formula_124 form a basis for the trivial infinitesimal motions, the first three terms in the summation are formula_29, leaving only formula_131 Solutions to this equation form a curve in formula_0-dimensions. We can choose formula_112 not along this curve so that formula_132, which ensures that there is only one solution to this equation. Hence, by Proposition 9, the kernel of formula_22 has dimension formula_45, so formula_20 is a generic infinitesimally rigid framework of formula_4. Combining Cases 1 and 2 proves the theorem by induction. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
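The counting conditions of the theorem can be verified directly on small graphs; the Pebble Game does this efficiently, but the naive check below is easier to state. The following Python sketch (with illustrative names, not a standard library interface) tests whether a graph is (2,3)-tight by brute force over edge subsets.

```python
from itertools import combinations

def is_2_3_tight(num_vertices, edges):
    """Check the Geiringer-Laman counts: |E| = 2|V| - 3 and
    |F| <= 2|V(F)| - 3 for every non-empty subset F of edges.
    Exponential-time brute force, suitable only for small graphs."""
    if len(edges) != 2 * num_vertices - 3:
        return False
    for k in range(1, len(edges) + 1):
        for subset in combinations(edges, k):
            spanned = {v for e in subset for v in e}
            if len(subset) > 2 * len(spanned) - 3:
                return False
    return True

# K4 minus one edge has 5 = 2*4 - 3 edges and is minimally rigid in the plane.
print(is_2_3_tight(4, [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]))             # True

# K4 has 6 edges, one more than 2*4 - 3, so it is rigid but not minimally rigid.
print(is_2_3_tight(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))     # False
```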
[ { "math_id": 0, "text": "2" }, { "math_id": 1, "text": "V(E)" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "G=(V,E)" }, { "math_id": 4, "text": "G" }, { "math_id": 5, "text": "G'=(V,E')" }, { "math_id": 6, "text": "|E'|=2|V|-3" }, { "math_id": 7, "text": "F \\subset E'" }, { "math_id": 8, "text": "|F| \\leq 2|V(F)| - 3" }, { "math_id": 9, "text": "G'" }, { "math_id": 10, "text": "(2,3)" }, { "math_id": 11, "text": "\\textstyle\\big(d, {d+1 \\choose 2}\\big)" }, { "math_id": 12, "text": "d" }, { "math_id": 13, "text": "d \\geq 3" }, { "math_id": 14, "text": "T_1,T_2," }, { "math_id": 15, "text": "T_3" }, { "math_id": 16, "text": "(G=(V,E),\\delta)" }, { "math_id": 17, "text": "(G,p)" }, { "math_id": 18, "text": "p:V \\rightarrow \\mathbb{R}^{2|V|}" }, { "math_id": 19, "text": "\\delta" }, { "math_id": 20, "text": "p" }, { "math_id": 21, "text": "\\mathbb{R}^{2|V|}" }, { "math_id": 22, "text": "R(p)" }, { "math_id": 23, "text": "N(p)" }, { "math_id": 24, "text": "q" }, { "math_id": 25, "text": "R(q)" }, { "math_id": 26, "text": "E' \\subset E" }, { "math_id": 27, "text": "E'" }, { "math_id": 28, "text": "\\mathcal{X}_{E'}" }, { "math_id": 29, "text": "0" }, { "math_id": 30, "text": "\\mathcal{X}" }, { "math_id": 31, "text": "{3 \\choose 2}" }, { "math_id": 32, "text": "\\rho (p) = \\rho(q)" }, { "math_id": 33, "text": "\\delta>0" }, { "math_id": 34, "text": "\\|h\\| < \\delta" }, { "math_id": 35, "text": "q = p + h" }, { "math_id": 36, "text": "\n\\begin{align}\n\\rho (p)_{ij} - \\rho (q)_{ij} &= \\|p_i - p_j\\|^2 - \\|q_i - q_j\\|^2 \\\\\n&= \\|h_i - h_j\\|^2 - 2(p_i - p_j)(h_i - h_j) \\\\\n&= 0.\n\\end{align}\n" }, { "math_id": 37, "text": "\\frac{\\|h_i - h_j\\|^2}{\\|h\\|} = 2(p_i - p_j) \\frac{(h_i - h_j)}{\\|h\\|} = 0." }, { "math_id": 38, "text": "\\delta_n" }, { "math_id": 39, "text": "\\delta_{n+1} < \\delta_n" }, { "math_id": 40, "text": "\\lim_{n \\rightarrow \\infty} \\delta_n = 0" }, { "math_id": 41, "text": "\\lim_{n \\rightarrow \\infty} \\|h_n\\| = 0" }, { "math_id": 42, "text": "\\lim_{n \\rightarrow \\infty} \\frac{\\|h_{n,i} - h_{n,j}\\|^2}{\\|h_n\\|} = (h^{\\star}_i - h^{\\star}_j)" }, { "math_id": 43, "text": "\n\\begin{align}\n2(p_i - p_j)( h^{\\star}_i - h^{\\star}_j) &= \\lim_{n \\rightarrow \\infty} 2(p_i - p_j) \\frac{\\|h_{n,i} - h_{n,j}\\|^2}{\\|h_n\\|} \\\\\n&= \\lim_{n \\rightarrow \\infty} \\frac{\\|h_{n,i} - h_{n,j}\\|^2}{\\|h_n\\|} \\\\\n&=0.\n\\end{align}\n" }, { "math_id": 44, "text": "h^{\\star}" }, { "math_id": 45, "text": "3" }, { "math_id": 46, "text": "x" }, { "math_id": 47, "text": "\\mathbb{R}^{2|V|-3}" }, { "math_id": 48, "text": "N(q)" }, { "math_id": 49, "text": "2|V|-3" }, { "math_id": 50, "text": "rank(R(p)) = k < 2|V| - 3" }, { "math_id": 51, "text": "rank(R(q))=k" }, { "math_id": 52, "text": "\\rho:\\mathbb{R}^{2|V|-3} \\rightarrow \\mathbb{R}^{|E|}" }, { "math_id": 53, "text": "\\rho" }, { "math_id": 54, "text": "k" }, { "math_id": 55, "text": "R(q)_{E'}" }, { "math_id": 56, "text": "y=y_1,\\dots,y_k" }, { "math_id": 57, "text": "x=x_1,\\dots,x_{2|V|-3-k}" }, { "math_id": 58, "text": "k \\times k" }, { "math_id": 59, "text": "y_1,\\dots,y_k" }, { "math_id": 60, "text": "g" }, { "math_id": 61, "text": "y=g(x)" }, { "math_id": 62, "text": "\\rho(x,y)_{E'} = \\rho(q)_{E'}" }, { "math_id": 63, "text": "q_{E'})" }, { "math_id": 64, "text": "N(p')" }, { "math_id": 65, "text": "p'" }, { "math_id": 66, "text": "q'" }, { "math_id": 67, "text": "rank(R(p))=2|V|-3" }, { "math_id": 68, "text": "R(p)_{G'}" }, { "math_id": 69, "text": "|E'| 
= 2|V|-3" }, { "math_id": 70, "text": "H=(V',F)" }, { "math_id": 71, "text": "R(p)_H" }, { "math_id": 72, "text": "rank(R(p)_H) = |F|" }, { "math_id": 73, "text": "(u,v)" }, { "math_id": 74, "text": "w" }, { "math_id": 75, "text": "u,v," }, { "math_id": 76, "text": "1" }, { "math_id": 77, "text": "4" }, { "math_id": 78, "text": "v" }, { "math_id": 79, "text": "u_1,u_2," }, { "math_id": 80, "text": "u_3" }, { "math_id": 81, "text": "(u_1,u_2)" }, { "math_id": 82, "text": "G'=(V',E')" }, { "math_id": 83, "text": "u_1" }, { "math_id": 84, "text": "u_2" }, { "math_id": 85, "text": "|F|=2|U|-3" }, { "math_id": 86, "text": "|V|" }, { "math_id": 87, "text": "|V|=2" }, { "math_id": 88, "text": "|V|=k" }, { "math_id": 89, "text": "|V|=k+1" }, { "math_id": 90, "text": "|V'|=|V|-1" }, { "math_id": 91, "text": "|E'|=|E|-2" }, { "math_id": 92, "text": "|E'|=|E|-2=2|V|-3-2=2|V'|-3." }, { "math_id": 93, "text": "H=(V' ',F)" }, { "math_id": 94, "text": "|F| \\leq 2|V' '| - 4," }, { "math_id": 95, "text": "(v,u_1),(v,u_2)," }, { "math_id": 96, "text": "(v,u_3)" }, { "math_id": 97, "text": "H=(U,F)" }, { "math_id": 98, "text": "|E'|=|E|-3+1=2|V|-3 - 2 = 2|V'| - 3" }, { "math_id": 99, "text": "G' '=(V' ',E' ')" }, { "math_id": 100, "text": "G' ' '=(V' ' ',E' ' ')" }, { "math_id": 101, "text": "|E' ' '| \\leq 2|V' ' '| - 2" }, { "math_id": 102, "text": "|E' '| \\leq 2|V' '| - 3" }, { "math_id": 103, "text": "p_1,p_2,p_3" }, { "math_id": 104, "text": "u_1,u_2,u_3" }, { "math_id": 105, "text": "(p_i - p_j) \\cdot (u_i - u_j)=0" }, { "math_id": 106, "text": "i,j \\in \\{1,2,3\\}" }, { "math_id": 107, "text": "\nf(p)=\n\\begin{vmatrix}\np-p_1 & (p - p_1) \\cdot u_1 \\\\\np-p_2 & (p - p_2) \\cdot u_2 \\\\\np-p_3 & (p - p_3) \\cdot u_3\n\\end{vmatrix}\n" }, { "math_id": 108, "text": "|V|=3" }, { "math_id": 109, "text": "k+1" }, { "math_id": 110, "text": "p' \\in \\mathbb{R}^{2k}" }, { "math_id": 111, "text": "R(p')" }, { "math_id": 112, "text": "p_v" }, { "math_id": 113, "text": "p=p' \\cup p_v" }, { "math_id": 114, "text": "\n\\begin{align}\n& (p_v - p_u) \\cdot (q_v - q_u) = 0\\\\\n& (p_v - p_w) \\cdot (q_v - q_w) = 0\n\\end{align}\n" }, { "math_id": 115, "text": "u" }, { "math_id": 116, "text": "q_v" }, { "math_id": 117, "text": "p_u" }, { "math_id": 118, "text": "p_w" }, { "math_id": 119, "text": "(v,u_1),(v,u_2)" }, { "math_id": 120, "text": "G' '" }, { "math_id": 121, "text": "p' '" }, { "math_id": 122, "text": "R(p' ')" }, { "math_id": 123, "text": "\\lambda_1,\\lambda_2,\\lambda_3,\\lambda_4" }, { "math_id": 124, "text": "\\lambda_1,\\lambda_2,\\lambda_3" }, { "math_id": 125, "text": "\n\\begin{align}\n& (p_v - p_{u_1}) \\cdot (q_v - q_{u_1}) = 0\\\\\n& (p_v - p_{u_2}) \\cdot (q_v - q_{u_2}) = 0\\\\\n& (p_v - p_{u_3}) \\cdot (q_v - q_{u_3}) = 0\n\\end{align}\n" }, { "math_id": 126, "text": "\n\\begin{vmatrix}\np_v-p_{u_1} & (p_v-p_{u_1}) \\cdot q_{u_1} \\\\\np_v-p_{u_2} & (p_v-p_{u_2}) \\cdot q_{u_2} \\\\\np_v-p_{u_3} & (p_v-p_{u_3}) \\cdot q_{u_3}\n\\end{vmatrix}\n=0.\n" }, { "math_id": 127, "text": "q_{u_j}" }, { "math_id": 128, "text": "\\sum_i^4 {c_i \\lambda_{i,u_j}}" }, { "math_id": 129, "text": "c_1,c_2,c_3,c_4" }, { "math_id": 130, "text": "\n\\sum_i^4 c_i\n\\begin{vmatrix}\np_v-p_{u_1} & (p_v-p_{u_1}) \\cdot \\lambda_{i,u_1} \\\\\np_v-p_{u_2} & (p_v-p_{u_2}) \\cdot \\lambda_{i,u_2} \\\\\np_v-p_{u_3} & (p_v-p_{u_3}) \\cdot \\lambda_{i,u_3}\n\\end{vmatrix}\n=0.\n" }, { "math_id": 131, "text": "\nc_4\n\\begin{vmatrix}\np_v-p_{u_1} & (p_v-p_{u_1}) \\cdot \\lambda_{4,u_1} \\\\\np_v-p_{u_2} & 
(p_v-p_{u_2}) \\cdot \\lambda_{4,u_2} \\\\\np_v-p_{u_3} & (p_v-p_{u_3}) \\cdot \\lambda_{4,u_3}\n\\end{vmatrix}\n=0.\n" }, { "math_id": 132, "text": "c_4=0" } ]
https://en.wikipedia.org/wiki?curid=66490111
66501922
Minimal polynomial of 2cos(2pi/n)
Polynomial relating the real parts of the roots of unity. In number theory, the real parts of the roots of unity are related to one another by means of the minimal polynomial of formula_0 The roots of the minimal polynomial are twice the real parts of the roots of unity, where the real part of a root of unity is just formula_1 with formula_2 coprime with formula_3 Formal definition. For an integer formula_4, the minimal polynomial formula_5 of formula_6 is the non-zero monic polynomial of smallest degree for which formula_7. For every n, the polynomial formula_5 is monic, has integer coefficients, and is irreducible over the integers and the rational numbers. All its roots are real; they are the real numbers formula_8 with formula_2 coprime with formula_9 and either formula_10 or formula_11 These roots are twice the real parts of the primitive nth roots of unity. The number of integers formula_2 relatively prime to formula_9 is given by Euler's totient function formula_12 it follows that the degree of formula_5 is formula_13 for formula_14 and formula_15 for formula_16 The first two polynomials are formula_17 and formula_18 The polynomials formula_5 are typical examples of irreducible polynomials whose roots are all real and which have a cyclic Galois group. Examples. The first few polynomials formula_5 are formula_19 Explicit form if "n" is odd. If formula_9 is an odd prime, the polynomial formula_20 can be written in terms of binomial coefficients following a "zigzag path" through Pascal's triangle: Putting formula_21 and formula_22 we then have formula_23 for primes formula_24. If formula_9 is odd but not a prime, the same polynomial formula_25, as can be expected, is reducible and, corresponding to the structure of the cyclotomic polynomials formula_26 reflected by the formula formula_27, turns out to be just the product of all formula_28 for the divisors formula_29 of formula_9, including formula_9 itself: formula_30 This means that the formula_28 are exactly the irreducible factors of formula_25, which makes it easy to obtain formula_28 for any odd formula_31, knowing its degree formula_32. For example, formula_33 Explicit form if "n" is even. From the formula below in terms of Chebyshev polynomials and the product formula for odd formula_9 above, we can derive for even formula_9 formula_34 Independently of this, if formula_35 is an even prime power, we have for formula_36 the recursion (see OEIS) formula_37, starting with formula_38. Roots. The roots of formula_5 are given by formula_39, where formula_40 and formula_41. Since formula_5 is monic, we have formula_42 Combining this result with the fact that the function formula_43 is even, we find that formula_39 is an algebraic integer for any positive integer formula_9 and any integer formula_2. Relation to the cyclotomic polynomials. For a positive integer formula_9, let formula_44, a primitive formula_9-th root of unity. Then the minimal polynomial of formula_45 is given by the formula_9-th cyclotomic polynomial formula_46. Since formula_47, the relation between formula_48 and formula_45 is given by formula_49. This relation can be exhibited in the following identity proved by Lehmer, which holds for any non-zero complex number formula_50: formula_51 Relation to Chebyshev polynomials. In 1993, Watkins and Zeitlin established the following relation between formula_5 and Chebyshev polynomials of the first kind. 
If formula_52 is odd, then formula_53 and if formula_54 is even, then formula_55 If formula_56 is a power of formula_57, we moreover have directly formula_58 Absolute value of the constant coefficient. The absolute value of the constant coefficient of formula_5 can be determined as follows: formula_59 Generated algebraic number field. The algebraic number field formula_60 is the maximal real subfield of a cyclotomic field formula_61. If formula_62 denotes the ring of integers of formula_63, then formula_64. In other words, the set formula_65 is an integral basis of formula_62. In view of this, the discriminant of the algebraic number field formula_63 is equal to the discriminant of the polynomial formula_5, that is, formula_66 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
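The definitions above can be checked numerically. The Python sketch below computes the integer coefficients of formula_5 for formula_9 ≥ 3 by forming the monic polynomial with roots formula_39 (formula_2 coprime with formula_9, 1 ≤ formula_2 &lt; formula_9/2) and rounding the coefficients; the helper name is illustrative.

```python
import numpy as np
from math import gcd, cos, pi

def psi_coefficients(n):
    """Integer coefficients (highest degree first) of the minimal polynomial of
    2*cos(2*pi/n), for n >= 3, built from its real roots 2*cos(2*pi*k/n)."""
    roots = [2 * cos(2 * pi * k / n)
             for k in range(1, (n + 1) // 2) if gcd(k, n) == 1]
    return [round(c) for c in np.poly(roots)]  # np.poly: monic poly from roots

print(psi_coefficients(5))   # [1, 1, -1]      -> x^2 + x - 1
print(psi_coefficients(7))   # [1, 1, -2, -1]  -> x^3 + x^2 - 2x - 1
print(psi_coefficients(12))  # [1, 0, -3]      -> x^2 - 3
```

The printed coefficients agree with the examples listed above, and the number of roots equals half of Euler's totient of n, matching the stated degree.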
[ { "math_id": 0, "text": "2\\cos(2\\pi/n)." }, { "math_id": 1, "text": "\\cos\\left(2k\\pi/n\\right)" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "n." }, { "math_id": 4, "text": "n \\geq 1" }, { "math_id": 5, "text": "\\Psi_n(x)" }, { "math_id": 6, "text": "2\\cos(2\\pi/n)" }, { "math_id": 7, "text": "\\Psi_n\\!\\left(2\\cos(2\\pi/n)\\right) = 0" }, { "math_id": 8, "text": "2\\cos\\left(2k\\pi/n\\right)" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "1 \\le k < n" }, { "math_id": 11, "text": "k=n=1." }, { "math_id": 12, "text": "\\varphi(n);" }, { "math_id": 13, "text": "1" }, { "math_id": 14, "text": "n=1,2" }, { "math_id": 15, "text": "\\varphi(n)/2" }, { "math_id": 16, "text": "n\\geq 3." }, { "math_id": 17, "text": "\\Psi_1(x) = x - 2" }, { "math_id": 18, "text": "\\Psi_2(x) = x + 2." }, { "math_id": 19, "text": " \\begin{align}\n\\Psi_1(x) &= x - 2 \\\\\n\\Psi_2(x) &= x + 2 \\\\\n\\Psi_3(x) &= x + 1 \\\\\n\\Psi_4(x) &= x \\\\\n\\Psi_5(x) &= x^2 + x - 1 \\\\\n\\Psi_6(x) &= x - 1 \\\\\n\\Psi_7(x) &= x^3 + x^2 - 2x - 1 \\\\\n\\Psi_8(x) &= x^2 - 2 \\\\\n\\Psi_9(x) &= x^3 - 3x + 1 \\\\\n\\Psi_{10}(x) &= x^2 - x - 1 \\\\\n\\Psi_{11}(x) &= x^5 + x^4 - 4x^3 - 3x^2 +3x + 1 \\\\\n\\Psi_{12}(x) &= x^2 - 3\\\\\n\\Psi_{13}(x) &= x^6 + x^5 - 5 x^4 - 4 x^3 + 6 x^2 + 3 x - 1\\\\\n\\Psi_{14}(x) &= x^3 - x^2 - 2 x + 1 \\\\\n\\Psi_{15}(x) &= x^4 - x^3 - 4 x^2 + 4 x + 1 \\\\\n\\Psi_{16}(x) &= x^4 - 4 x^2 + 2 \\\\\n\\Psi_{17}(x) &= x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1 \\\\\n\\Psi_{18}(x) &= x^3 - 3 x - 1 \\\\\n\\Psi_{19}(x) &= x^9 + x^8 - 8 x^7 - 7 x^6 + 21 x^5 + 15 x^4 - 20 x^3 - 10 x^2 + 5 x + 1 \\\\\n\\Psi_{20}(x) &= x^4 - 5 x^2 + 5\n\\end{align}" }, { "math_id": 20, "text": "\\Psi_{n}(x)" }, { "math_id": 21, "text": "n=2m+1" }, { "math_id": 22, "text": "\\begin{align}\n\\chi_{n}(x):&= \\binom {m}{0}x^{m}+\\binom {m-1}{0}x^{m-1}-\\binom {m-1}{1}x^{m-2}-\\binom {m-2}{1}x^{m-3}+\\binom {m-2}{2}x^{m-4}+\\binom {m-3}{2}x^{m-5}--++\\cdots\\\\\n &= \\sum_{k=0}^m (-1)^{\\lfloor k/2\\rfloor}\\binom {m-\\lfloor (k+1)/2\\rfloor}{\\lfloor k/2\\rfloor} x^{m-k}\\\\\n &= \\binom {m}{m}x^{m}+\\binom {m-1}{m-1}x^{m-1}-\\binom {m-1}{m-2}x^{m-2}-\\binom {m-2}{m-3}x^{m-3}+\\binom {m-2}{m-4}x^{m-4}+\\binom {m-3}{m-5}x^{m-5}--++\\cdots\\\\\n&= \\sum_{k=0}^m (-1)^{\\lfloor (m-k)/2\\rfloor}\\binom {\\lfloor (m+k)/2\\rfloor}{k} x^{k},\n\\end{align}" }, { "math_id": 23, "text": "\\Psi_{p}(x)=\\chi_{p}(x)" }, { "math_id": 24, "text": "p" }, { "math_id": 25, "text": "\\chi_{n}(x)" }, { "math_id": 26, "text": "\\Phi_{d}(x)" }, { "math_id": 27, "text": "\\prod_{d\\mid n}\\Phi_d(x) = x^n - 1" }, { "math_id": 28, "text": "\\Psi_{d}(x)" }, { "math_id": 29, "text": "d>1" }, { "math_id": 30, "text": "\\prod _{d\\mid n\\atop d>1}\\Psi_{d}( x)=\\chi_{n}( x)." }, { "math_id": 31, "text": "d" }, { "math_id": 32, "text": "\\frac{1}{2}\\varphi(d)" }, { "math_id": 33, "text": "\\begin{align}\n\\chi_{15}( x)&= x^7 + x^6 - 6x^5 - 5x^4 + 10x^3 + 6x^2 - 4x - 1\\\\\n &= (x + 1)(x^2 + x - 1)(x^4 - x^3 - 4x^2 + 4x + 1)\\\\\n&=\\Psi_{3}(x)\\cdot\\Psi_{5}(x)\\cdot\\Psi_{15}(x).\n\\end{align}" }, { "math_id": 34, "text": "\\prod _{d\\mid n\\atop d>1}\\Psi_{d}( x)= \\Big(\\chi_{n+1}(x)+\\chi_{n-1}(x)\\Big)." 
}, { "math_id": 35, "text": "n=2^k" }, { "math_id": 36, "text": "k\\ge 2" }, { "math_id": 37, "text": "\\Psi_{2^{k+1}}( x)=(\\Psi_{2^k}( x))^2-2" }, { "math_id": 38, "text": "\\Psi_{4}( x)=x" }, { "math_id": 39, "text": "2\\cos\\left(\\frac{2\\pi k}{n}\\right)" }, { "math_id": 40, "text": "1 \\leq k < \\frac{n}{2}" }, { "math_id": 41, "text": "\\gcd(k, n) = 1" }, { "math_id": 42, "text": "\\Psi_n(x) = \\displaystyle\\prod_{\\begin{array}{c} 1 \\leq k < \\frac{n}{2}\\\\ \\gcd(k, n) = 1 \\end{array} } \\left(x - 2\\cos\\left(\\frac{2\\pi k}{n}\\right)\\right)." }, { "math_id": 43, "text": "\\cos(x)" }, { "math_id": 44, "text": "\\zeta_n = \\exp\\left(\\frac{2\\pi i}{n}\\right) = \\cos\\left(\\frac{2\\pi}{n}\\right) + \\sin\\left(\\frac{2\\pi}{n}\\right)i" }, { "math_id": 45, "text": "\\zeta_n" }, { "math_id": 46, "text": "\\Phi_n(x)" }, { "math_id": 47, "text": "\\zeta_n^{-1} = \\cos\\left(\\frac{2\\pi}{n}\\right) - \\sin\\left(\\frac{2\\pi}{n}\\right)i" }, { "math_id": 48, "text": "2\\cos\\left(\\frac{2\\pi}{n}\\right)" }, { "math_id": 49, "text": "2\\cos\\left(\\frac{2\\pi}{n}\\right) = \\zeta_n + \\zeta_n^{-1}" }, { "math_id": 50, "text": "z" }, { "math_id": 51, "text": "\\Psi_n\\left(z + z^{-1}\\right) = z^{-\\frac{\\varphi(n)}{2}}\\Phi_n(z)" }, { "math_id": 52, "text": "n = 2s + 1" }, { "math_id": 53, "text": "\\prod_{d \\mid n}\\Psi_d(2x) = 2(T_{s + 1}(x) - T_s(x))," }, { "math_id": 54, "text": "n = 2s" }, { "math_id": 55, "text": "\\prod_{d \\mid n}\\Psi_d(2x) = 2(T_{s + 1}(x) - T_{s - 1}(x))." }, { "math_id": 56, "text": "n " }, { "math_id": 57, "text": "2" }, { "math_id": 58, "text": " \\Psi_{2^{k+1}}(2x) = 2T_{2^{k-1}}(x) ." }, { "math_id": 59, "text": "|\\Psi_n(0)| = \\begin{cases}0 & \\text{if}\\ n = 4,\\\\2 & \\text{if}\\ n = 2^k, k \\geq 0, k \\neq 2,\\\\ p & \\text{if}\\ n = 4p^k, k \\geq 1, p > 2\\ \\text{prime,}\\\\1 & \\text{otherwise.}\\end{cases}" }, { "math_id": 60, "text": "K_n = \\mathbb{Q}\\left(\\zeta_n + \\zeta_n^{-1}\\right)" }, { "math_id": 61, "text": "\\mathbb Q(\\zeta_n)" }, { "math_id": 62, "text": "\\mathcal O_{K_n}" }, { "math_id": 63, "text": "K_n" }, { "math_id": 64, "text": "\\mathcal O_{K_n} = \\mathbb Z\\left[\\zeta_n + \\zeta_n^{-1}\\right]" }, { "math_id": 65, "text": "\\left\\{1, \\zeta_n + \\zeta_n^{-1}, \\ldots, \\left(\\zeta_n + \\zeta_n^{-1}\\right)^{\\frac{\\varphi(n)}{2} - 1}\\right\\}" }, { "math_id": 66, "text": "D_{K_n} = \\begin{cases}2^{(m - 1)2^{m - 2} - 1} & \\text{if}\\ n = 2^m, m > 2,\\\\p^{(mp^m - (m + 1)p^{m - 1} - 1)/2} & \\text{if}\\ n = p^m\\ \\text{or}\\ 2p^m, p > 2\\ \\text{prime},\\\\ \\left(\\prod_{i = 1}^{\\omega(n)}p_i^{e_i - 1/(p_i - 1)}\\right)^{\\frac{\\varphi(n)}{2}} & \\text{if}\\ \\omega(n) > 1, k \\neq 2p^m.\\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=66501922
665025
Rhombic triacontahedron
Catalan solid with 30 faces The rhombic triacontahedron, sometimes simply called the triacontahedron as it is the most common thirty-faced polyhedron, is a convex polyhedron with 30 rhombic faces. It has 60 edges and 32 vertices of two types. It is a Catalan solid, and the dual polyhedron of the icosidodecahedron. It is a zonohedron. The ratio of the long diagonal to the short diagonal of each face is exactly equal to the golden ratio, φ, so that the acute angles on each face measure 2 arctan(1/φ) = arctan(2), or approximately 63.43°. A rhombus so obtained is called a "golden rhombus". Being the dual of an Archimedean solid, the rhombic triacontahedron is "face-transitive", meaning the symmetry group of the solid acts transitively on the set of faces. This means that for any two faces, A and B, there is a rotation or reflection of the solid that leaves it occupying the same region of space while moving face A to face B. The rhombic triacontahedron is somewhat special in being one of the nine edge-transitive convex polyhedra, the others being the five Platonic solids, the cuboctahedron, the icosidodecahedron, and the rhombic dodecahedron. The rhombic triacontahedron is also interesting in that its vertices include the arrangement of four Platonic solids. It contains ten tetrahedra, five cubes, an icosahedron and a dodecahedron. The centers of the faces contain five octahedra. It can be made from a truncated octahedron by dividing the hexagonal faces into three rhombi. Cartesian coordinates. Let φ be the golden ratio. The 12 points given by (0, ±1, ±"φ") and cyclic permutations of these coordinates are the vertices of a regular icosahedron. Its dual regular dodecahedron, whose edges intersect those of the icosahedron at right angles, has as vertices the 8 points (±1, ±1, ±1) together with the 12 points (0, ±"φ", ±1/"φ") and cyclic permutations of these coordinates. All 32 points together are the vertices of a rhombic triacontahedron centered at the origin. The length of its edges is √(1 + 1/φ²) = √(3 − φ) ≈ 1.176. Its faces have diagonals with lengths 2 and 2/φ. Dimensions. If the edge length of a rhombic triacontahedron is a, its surface area, volume, radius of an inscribed sphere (tangent to each of the rhombic triacontahedron's faces) and midradius, which touches the middle of each edge, are: formula_0 where φ is the golden ratio. The insphere is tangent to the faces at their face centroids. Short diagonals belong only to the edges of the inscribed regular dodecahedron, while long diagonals are included only in edges of the inscribed icosahedron. Dissection. The rhombic triacontahedron can be dissected into 20 golden rhombohedra: 10 acute ones and 10 obtuse ones. Orthogonal projections. The rhombic triacontahedron has four symmetry positions, two centered on vertices, one mid-face, and one mid-edge. Embedded in projection "10" are the "fat" rhombus and "skinny" rhombus which tile together to produce the non-periodic tessellation often referred to as Penrose tiling. Stellations. The rhombic triacontahedron has 227 fully supported stellations. Another stellation of the rhombic triacontahedron is the compound of five cubes. The total number of stellations of the rhombic triacontahedron is . Related polyhedra. This polyhedron is a part of a sequence of rhombic polyhedra and tilings with ["n", 3] Coxeter group symmetry. The cube can be seen as a rhombic hexahedron where the rhombi are also rectangles. Uses. 
Danish designer Holger Strøm used the rhombic triacontahedron as a basis for the design of his buildable lamp IQ-light (IQ for "interlocking quadrilaterals"). Woodworker Jane Kostick builds boxes in the shape of a rhombic triacontahedron. The simple construction is based on the less than obvious relationship between the rhombic triacontahedron and the cube. Roger von Oech's "Ball of Whacks" comes in the shape of a rhombic triacontahedron. The rhombic triacontahedron is used as the "d30" thirty-sided die, which is sometimes useful in roleplaying games and elsewhere.
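The closed-form dimensions quoted above are straightforward to evaluate. The following Python sketch computes the surface area, volume, inradius and midradius for a given edge length a; the function and variable names are illustrative only.

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2  # golden ratio

def rhombic_triacontahedron_metrics(a=1.0):
    """Surface area, volume, inradius and midradius for edge length a,
    using the closed forms given in the Dimensions section above."""
    surface_area = 12 * sqrt(5) * a ** 2            # ~26.8328 a^2
    volume = 4 * sqrt(5 + 2 * sqrt(5)) * a ** 3     # ~12.3107 a^3
    inradius = phi ** 2 / sqrt(1 + phi ** 2) * a    # ~1.37638 a
    midradius = (1 + 1 / sqrt(5)) * a               # ~1.44721 a
    return surface_area, volume, inradius, midradius

print(rhombic_triacontahedron_metrics())
# approximately (26.8328, 12.3107, 1.37638, 1.44721)
```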
[ { "math_id": 0, "text": "\\begin{align}\nS &= 12\\sqrt{5}\\,a^2 &&\\approx 26.8328 a^2 \\\\[6px]\nV &= 4\\sqrt{5+2\\sqrt{5}}\\,a^3 &&\\approx 12.3107 a^3 \\\\[6px]\nr_\\mathrm{i} &= \\frac{\\varphi^2}{\\sqrt{1 + \\varphi^2}}\\,a = \\sqrt{1 + \\frac{2}{\\sqrt{5}}}\\,a &&\\approx 1.37638 a \\\\[6px]\nr_\\mathrm{m} &= \\left(1+\\frac{1}{\\sqrt{5}}\\right)\\,a &&\\approx 1.44721 a\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=665025
665091
Exponential hierarchy
In computational complexity theory, the exponential hierarchy is a hierarchy of complexity classes that is an exponential time analogue of the polynomial hierarchy. As elsewhere in complexity theory, “exponential” is used with two different meanings (linear exponential bounds formula_0 for a constant "c", and full exponential bounds formula_1), leading to two versions of the exponential hierarchy. This hierarchy is sometimes also referred to as the "weak" exponential hierarchy, to differentiate it from the "strong" exponential hierarchy. EH. The complexity class EH is the union of the classes formula_2 for all "k", where formula_3 (i.e., languages computable in nondeterministic time formula_0 for some constant "c" with a formula_4 oracle) and formula_5. One also defines formula_6 and formula_7 An equivalent definition is that a language "L" is in formula_2 if and only if it can be written in the form formula_8 where formula_9 is a predicate computable in time formula_10 (which implicitly bounds the length of "yi"). Also equivalently, EH is the class of languages computable on an alternating Turing machine in time formula_0 for some "c" with constantly many alternations. EXPH. EXPH is the union of the classes formula_11, where formula_12 (languages computable in nondeterministic time formula_1 for some constant "c" with a formula_4 oracle), formula_13, and again: formula_14 A language "L" is in formula_11 if and only if it can be written as formula_15 where formula_16 is computable in time formula_17 for some "c", which again implicitly bounds the length of "yi". Equivalently, EXPH is the class of languages computable in time formula_1 on an alternating Turing machine with constantly many alternations. E ⊆ NE ⊆ EH ⊆ ESPACE, EXP ⊆ NEXP ⊆ EXPH ⊆ EXPSPACE, EH ⊆ EXPH. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. "Complexity Zoo": Class EH
[ { "math_id": 0, "text": "2^{cn}" }, { "math_id": 1, "text": "2^{n^c}" }, { "math_id": 2, "text": "\\Sigma^\\mathsf{E}_k" }, { "math_id": 3, "text": "\\Sigma^\\mathsf{E}_k=\\mathsf{NE}^{\\Sigma^\\mathsf{P}_{k-1}}" }, { "math_id": 4, "text": "\\Sigma^\\mathsf{P}_{k-1}" }, { "math_id": 5, "text": "\\Sigma^\\mathsf{E}_0 = \\mathsf{E}" }, { "math_id": 6, "text": "\\Pi^\\mathsf{E}_k=\\mathsf{coNE}^{\\Sigma^\\mathsf{P}_{k-1}}" }, { "math_id": 7, "text": "\\Delta^\\mathsf{E}_k=\\mathsf{E}^{\\Sigma^\\mathsf{P}_{k-1}}." }, { "math_id": 8, "text": "x\\in L\\iff\\exists y_1\\forall y_2\\dots Qy_k R(x,y_1,\\ldots,y_k)," }, { "math_id": 9, "text": "R(x,y_1,\\ldots,y_n)" }, { "math_id": 10, "text": "2^{c|x|}" }, { "math_id": 11, "text": "\\Sigma^{\\mathsf{EXP}}_k" }, { "math_id": 12, "text": "\\Sigma^{\\mathsf{EXP}}_k=\\mathsf{NEXP}^{\\Sigma^\\mathsf{P}_{k-1}}" }, { "math_id": 13, "text": "\\Sigma^{\\mathsf{EXP}}_0 = \\mathsf{EXP}" }, { "math_id": 14, "text": "\\Pi^{\\mathsf{EXP}}_k=\\mathsf{coNEXP}^{\\Sigma^\\mathsf{P}_{k-1}}, \\Delta^{\\mathsf{EXP}}_k=\\mathsf{EXP}^{\\Sigma^\\mathsf{P}_{k-1}}." }, { "math_id": 15, "text": "x\\in L\\iff\\exists y_1 \\forall y_2 \\dots Qy_k R(x,y_1,\\ldots,y_k)," }, { "math_id": 16, "text": "R(x,y_1,\\ldots,y_k)" }, { "math_id": 17, "text": "2^{|x|^c}" } ]
https://en.wikipedia.org/wiki?curid=665091
665096
ELEMENTARY
In computational complexity theory, the complexity class ELEMENTARY of elementary recursive functions is the union of the classes formula_0 The name was coined by László Kalmár, in the context of recursive functions and undecidability; most problems in it are far from elementary. Some natural recursive problems lie outside ELEMENTARY, and are thus NONELEMENTARY. Most notably, there are primitive recursive problems that are not in ELEMENTARY. We know LOWER-ELEMENTARY ⊊ EXPTIME ⊊ ELEMENTARY ⊊ PR ⊊ R. Whereas ELEMENTARY contains bounded applications of exponentiation (for example, formula_1), PR allows more general hyperoperators (for example, tetration) which are not contained in ELEMENTARY. Definition. The definitions of elementary recursive functions are the same as for primitive recursive functions, except that primitive recursion is replaced by bounded summation formula_2 and bounded product formula_3. All functions work over the natural numbers. The basic functions, all of them elementary recursive, are: the zero function, the successor function, the projection functions, and the subtraction function, which returns "n" − "m" if "m" ≤ "n" and 0 otherwise. From these basic functions, we can build other elementary recursive functions by composition, bounded summation and bounded product. Basis for ELEMENTARY. The class of elementary functions coincides with the closure with respect to composition of the projections and one of the following function sets: formula_4, formula_5, formula_6, where formula_7 is the subtraction function defined above. Lower elementary recursive functions. "Lower elementary recursive" functions follow the definitions as above, except that bounded product is disallowed. That is, a lower elementary recursive function must be a zero, successor, or projection function, a composition of other lower elementary recursive functions, or the bounded sum of another lower elementary recursive function. Lower elementary recursive functions are also known as Skolem elementary functions. Whereas elementary recursive functions have potentially more than exponential growth, the lower elementary recursive functions have polynomial growth. The class of lower elementary functions has a description in terms of composition of simple functions analogous to the one given above for elementary functions. Namely, a polynomial-bounded function is lower elementary if and only if it can be expressed using a composition of the following functions: projections, formula_8, formula_9, formula_10, formula_11, formula_12, one exponential function (formula_13 or formula_14), with the following restriction on the structure of formulas: the formula can have no more than two floors with respect to an exponent (for example, formula_15 has 1 floor, formula_16 has 2 floors, formula_17 has 3 floors). Here formula_11 is a bitwise AND of n and m. Descriptive characterization. In descriptive complexity, ELEMENTARY is equal to the class HO of languages that can be described by a formula of higher-order logic. This means that every language in the ELEMENTARY complexity class corresponds to a higher-order formula that is true for, and only for, the elements of the language. More precisely, formula_18, where ⋯ indicates a tower of i exponentiations and formula_19 is the class of queries that begin with existential quantifiers of ith order and then a formula of ("i" − 1)th order. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
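As an illustration of the definition above, here is a minimal Python sketch of the two operations that replace primitive recursion; the names bounded_sum and bounded_product are chosen for this sketch only. It shows how a single bounded product already yields exponentiation, matching the remark that ELEMENTARY contains bounded applications of exponentiation.

# Bounded summation: f(m, *xs) = sum_{i=0}^{m} g(i, *xs)
def bounded_sum(g):
    return lambda m, *xs: sum(g(i, *xs) for i in range(m + 1))

# Bounded product: f(m, *xs) = prod_{i=0}^{m} g(i, *xs)
def bounded_product(g):
    def f(m, *xs):
        result = 1
        for i in range(m + 1):
            result *= g(i, *xs)
        return result
    return f

# Multiplication via bounded summation of a projection: mult(m, n) = n * (m + 1)
mult = bounded_sum(lambda i, n: n)

# Exponentiation via one bounded product of the constant-2 function: 2 ** (m + 1)
power_of_two = bounded_product(lambda i: 2)

print(mult(4, 3))        # 15  (= 3 * 5)
print(power_of_two(9))   # 1024 (= 2 ** 10)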
[ { "math_id": 0, "text": " \\begin{align}\n \\mathsf{ELEMENTARY} & = \\bigcup_{k \\in \\mathbb{N}} k\\mathsf{\\mbox{-}EXP} \\\\\n & = \\mathsf{DTIME}\\left(2^n\\right)\\cup\\mathsf{DTIME}\\left(2^{2^n}\\right)\\cup\n \\mathsf{DTIME}\\left(2^{2^{2^n}}\\right)\\cup\\cdots\n \\end{align}\n" }, { "math_id": 1, "text": "O(2^{2^n})" }, { "math_id": 2, "text": "f(m, x_1, \\ldots, x_n) = \\sum\\limits_{i=0}^mg(i, x_1, \\ldots, x_n)" }, { "math_id": 3, "text": "f(m, x_1, \\ldots, x_n) = \\prod\\limits_{i=0}^mg(i, x_1, \\ldots, x_n)" }, { "math_id": 4, "text": "\\{ n+1, n \\,\\stackrel{.}{-}\\, m, \\lfloor n/m \\rfloor, n^m \\}" }, { "math_id": 5, "text": "\\{ n+m, n \\,\\stackrel{.}{-}\\, m, \\lfloor n/m\\rfloor, 2^n \\}" }, { "math_id": 6, "text": "\\{ n+m, n^2, n \\,\\bmod\\, m, 2^n \\}" }, { "math_id": 7, "text": "n \\, \\stackrel{.}{-} \\, m = \\max\\{n-m, 0\\}" }, { "math_id": 8, "text": "n+1" }, { "math_id": 9, "text": "nm" }, { "math_id": 10, "text": "n \\,\\stackrel{.}{-}\\, m" }, { "math_id": 11, "text": "n\\wedge m" }, { "math_id": 12, "text": "\\lfloor n/m \\rfloor" }, { "math_id": 13, "text": "2^n" }, { "math_id": 14, "text": "n^m" }, { "math_id": 15, "text": "xy(z+1)" }, { "math_id": 16, "text": "(x+y)^{yz+x}+z^{x+1}" }, { "math_id": 17, "text": "2^{2^x}" }, { "math_id": 18, "text": "\\mathsf{NTIME}\\left(2^{2^{\\cdots{2^{O(n)}}}}\\right) = \\exists{}\\mathsf{HO}^i" }, { "math_id": 19, "text": "\\exists{}\\mathsf{HO}^i" } ]
https://en.wikipedia.org/wiki?curid=665096
665108
Joseph Larmor
Irish physicist and mathematician (1857–1942) Sir Joseph Larmor (11 July 1857 – 19 May 1942) was an Irish physicist and mathematician who made breakthroughs in the understanding of electricity, dynamics, thermodynamics, and the electron theory of matter. His most influential work was "Aether and Matter", a theoretical physics book published in 1900. Biography. He was born in Magheragall in County Antrim the son of Hugh Larmor, a Belfast shopkeeper and his wife, Anna Wright. The family moved to Belfast circa 1860, and he was educated at the Royal Belfast Academical Institution, and then studied mathematics and experimental science at Queen's College, Belfast (BA 1874, MA 1875), where one of his teachers was John Purser. He subsequently studied at St John's College, Cambridge, where in 1880 he was Senior Wrangler (J. J. Thomson was second wrangler that year) and Smith's Prizeman, getting his MA in 1883. After teaching physics for a few years at Queen's College, Galway, he accepted a lectureship in mathematics at Cambridge in 1885. In 1892 he was elected a Fellow of the Royal Society of London, and he served as one of the Secretaries of the society. He was made an Honorary Fellow of the Royal Society of Edinburgh in 1910. In 1903 he was appointed Lucasian Professor of Mathematics at Cambridge, a post he retained until his retirement in 1932. He never married. He was knighted by King Edward VII in 1909. Motivated by his strong opposition to Home Rule for Ireland, in February 1911 Larmor ran for and was elected as Member of Parliament for Cambridge University (UK Parliament constituency) with the Conservative party. He remained in parliament until the 1922 general election, at which point the Irish question had been settled. Upon his retirement from Cambridge in 1932 Larmor moved back to County Down in Northern Ireland. He received the honorary Doctor of Laws (LLD) from the University of Glasgow in June 1901. He was elected an International Honorary Member of the American Academy of Arts and Sciences in 1903, an International Member of the United States National Academy of Sciences in 1908, and an International Member of the American Philosophical Society in 1913. He was awarded the Poncelet Prize for 1918 by the French Academy of Sciences. Larmor was a Plenary Speaker in 1920 at the ICM at Strasbourg and an Invited Speaker at the ICM in 1924 in Toronto and at the ICM in 1928 in Bologna. He died in Holywood, County Down on 19 May 1942. Work. Larmor proposed that the aether could be represented as a homogeneous fluid medium which was perfectly incompressible and elastic. Larmor believed the aether was separate from matter. He united Lord Kelvin's model of spinning gyrostats (see Vortex theory of the atom) with this theory. Larmor held that matter consisted of particles moving in the aether. Larmor believed the source of electric charge was a "particle" (which as early as 1894 he was referring to as the electron). Larmor held that the flow of charged particles constitutes the current of conduction (but was not part of the atom). Larmor calculated the rate of energy radiation from an accelerating electron. Larmor explained the splitting of the spectral lines in a magnetic field by the oscillation of electrons.Larmor also created the first solar system model of the atom in 1897. 
He also postulated the proton, calling it a “positive electron.” He said the destruction of this type of atom making up matter “is an occurrence of infinitely small probability.” In 1919, Larmor proposed that sunspots are sustained by self-regenerative dynamo action on the Sun's surface. Quotes from Larmor's voluminous works include: Discovery of Lorentz transformation. Parallel to the development of Lorentz ether theory, Larmor published an approximation to the Lorentz transformations in the "Philosophical Transactions of the Royal Society" in 1897, namely formula_0 for the spatial part and formula_1 for the temporal part, where formula_2 and the local time formula_3. He obtained the full Lorentz transformation in 1900 by inserting formula_4 into his expression of local time such that formula_5, and as before formula_6 and formula_7. This was done around the same time as Hendrik Lorentz (1899, 1904) and five years before Albert Einstein (1905). Larmor, however, did not possess the correct velocity transformations (which include the law of addition of velocities); these were later discovered by Henri Poincaré. Larmor predicted the phenomenon of time dilation, at least for orbiting electrons, by writing (Larmor 1897): "... individual electrons describe corresponding parts of their orbits in times shorter for the [rest] system in the ratio (1 – "v"^2/"c"^2)^(1/2)". He also verified that the FitzGerald–Lorentz contraction (length contraction) should occur for bodies whose atoms were held together by electromagnetic forces. In his book "Aether and Matter" (1900), he again presented the Lorentz transformations, time dilation and length contraction (treating these as dynamic rather than kinematic effects). Larmor was opposed to the spacetime interpretation of the Lorentz transformation in special relativity because he continued to believe in an absolute aether. He was also critical of the curvature of space of general relativity, to the extent that he claimed that an absolute time was essential to astronomy (Larmor 1924, 1927). Publications. Larmor edited the collected works of George Stokes, James Thomson and William Thomson. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
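For comparison with the two-step historical form described above, the modern textbook statement of the Lorentz transformation can be written in the same notation (this restatement is added only as a reference point, not as a claim about Larmor's own derivation):
\[
x_1 = \gamma\,(x - v t), \qquad
t_1 = \gamma\left(t - \frac{v x}{c^2}\right), \qquad
\gamma = \left(1 - \frac{v^2}{c^2}\right)^{-1/2} = \epsilon^{1/2},
\]
so a clock at rest in the moving system runs slow by the factor (1 − "v"^2/"c"^2)^(1/2), which is exactly the time-dilation ratio Larmor quoted for orbiting electrons.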
[ { "math_id": 0, "text": "x_{1} =x\\epsilon^{\\frac{1}{2}}" }, { "math_id": 1, "text": "dt_{1} =dt^{\\prime}\\epsilon^{-\\frac{1}{2}}" }, { "math_id": 2, "text": "\\epsilon =\\left(1-v^{2}/c^{2}\\right)^{-1}" }, { "math_id": 3, "text": "t^{\\prime} =t-vx/c^{2}" }, { "math_id": 4, "text": "\\epsilon" }, { "math_id": 5, "text": "t^{\\prime\\prime} =t^{\\prime}-\\epsilon vx^{\\prime}/c^{2}" }, { "math_id": 6, "text": "x_{1} =\\epsilon^{\\frac{1}{2}}x^{\\prime}" }, { "math_id": 7, "text": "dt_{1} =\\epsilon^{-\\frac{1}{2}}dt^{\\prime\\prime}" } ]
https://en.wikipedia.org/wiki?curid=665108
66510903
Mary Mulry
American statistician Mary Helen Mulry (also published as Mary Mulry-Liggan) is an American demographic statistician who works for the United States Census Bureau and has published scholarly works about census accuracy. Education and career. Mulry majored in mathematics at Texas Christian University, graduating in 1972 as the university's top mathematics student. She went to Indiana University Bloomington for graduate study in mathematics, earning a master's degree in mathematics in 1975, a second master's degree in statistics in 1977, and a Ph.D. in mathematics in 1978. Her dissertation, "Equivariant formula_0-Extension Properties", concerned equivariant topology and was supervised by Jan Jaworowski. Since completing her doctorate, Mulry has alternated between working for industry (at the System Planning Corporation, Lockheed Martin, M/A/R/C Research, and as an independent consultant) and for the United States Census Bureau (1980–1983, 1984–1997, and 2001–present). Since 2001 she has been a principal researcher for the Census Bureau, in the Center for Statistical Research and Methodology. Mulry chaired the methodology section of the Washington Statistical Society in 1986–1987. She was vice president of the American Statistical Association from 2011 to 2013. Recognition. Mulry was elected as a Fellow of the American Statistical Association in 1994. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Z}_p" } ]
https://en.wikipedia.org/wiki?curid=66510903
66515835
Proportional item allocation
Fair item allocation problem Proportional item allocation is a fair item allocation problem, in which the fairness criterion is proportionality - each agent should receive a bundle that they value at least as much as 1/"n" of the entire allocation, where "n" is the number of agents. Since the items are indivisible, a proportional assignment may not exist. The simplest case is when there is a single item and at least two agents: if the item is assigned to one agent, the other will have a value of 0, which is less than 1/2. Therefore, the literature considers various relaxations of the proportionality requirement. Proportional allocation. An allocation of objects is called proportional (PROP) if every agent "i" values his bundle at least 1/"n" of the total. Formally, for all "i" (where "M" is the set of all goods): formula_0. A proportional division may not exist. For example, if the number of people is larger than the number of items, then some people will get no item at all and their value will be zero. Nevertheless, such a division exists with high probability for indivisible items under certain assumptions on the valuations of the agents. Deciding whether a PROP allocation exists: cardinal utilities. Suppose the agents have cardinal utility functions on items. Then, the problem of deciding whether a proportional allocation exists is NP-complete: it can be reduced from the partition problem. Deciding whether a PROP allocation exists: ordinal rankings. Suppose the agents have ordinal rankings on items. An allocation is called necessary-proportional (or sd-proportional) if it is proportional according to "all" valuations consistent with the rankings. It is called possibly-proportional if it is proportional according to "at least one" set of consistent valuations. Relation to other fairness criteria. With additive valuations: PROP1 allocations. An allocation is called proportional up to the best "c" items (PROPc) if for every agent "i", there exists a subset of at most "c" items that, if given to "i", brings the total value of "i" to at least 1/"n" of the total. Formally, for all "i" (where "M" is the set of all goods): formula_1. An equivalent definition is: the value of each agent "i" is at least (1/"n" of the total) minus (the most valuable "c" items not assigned to "i"): formula_2. PROP0 is equivalent to proportionality, which might not exist. In contrast, a PROP1 allocation always exists and can be found e.g. by round-robin item allocation. The interesting question is how to combine it with efficiency conditions such as Pareto efficiency (PE). Finding efficient PROP1 allocations. Conitzer, Freeman and Shah proved that, in the context of fair public decision making, there always exists a PROP1 allocation that is also PE. Barman and Krishnamurthy presented a strongly-polynomial-time algorithm finding a PE+PROP1 allocation for "goods" (objects with positive utility). Branzei and Sandomirskiy extended the condition of PROP1 to "chores" (objects with negative utility). Formally, for all "i": formula_3. They presented an algorithm finding a PE+PROP1 allocation of chores. The algorithm is strongly-polynomial-time if either the number of objects or the number of agents (or both) is fixed. Aziz, Caragiannis, Igarashi and Walsh extended the condition of PROP1 to "mixed valuations" (objects can have both positive and negative utilities). In this setting, an allocation is called PROP1 if, for each agent "i", if we remove one negative item from i's bundle, or add one positive item to i's bundle, then i's utility is at least 1/"n" of the total. 
Their Generalized Adjusted Winner algorithm finds a PE+EF1 allocation for two agents; such an allocation is also PROP1. Aziz, Moulin and Sandomirskiy presented a strongly-polynomial-time algorithm for finding an allocation that is fractionally-PE (stronger than PE) and PROP1, with general mixed valuations, even if the number of agents or objects is not fixed, and even if the agents have different entitlements. Relation to other fairness criteria. With additive valuations: PROP*("n"-1) allocations. An allocation is called proportional from all except "c" items (PROP*"c") for an agent "i" if there exists a set of at most "c" items such that, if it is removed from the set of "all" items, then "i" values his bundle at least 1/"n" of the remainder. Formally, for all "i": formula_4. PROP*("n"-1) is slightly stronger than PROP1: when "n"=2, PROP*("n"-1) is equivalent to EF1, but PROP1 is weaker. A PROP*("n"-1) allocation always exists and can be found e.g. by round-robin item allocation. Relation to other fairness criteria. With additive valuations: The following maximin-share approximations are implied by PROP*("n"-1): PROPx allocations. An allocation is called proportional up to the worst item (PROPx) if, for every agent "i" and every single item not allocated to "i", giving that item to "i" would bring his value to at least 1/"n" of the total. Formally, for all "i": formula_5. An equivalent definition is: the value of each agent "i" is at least (1/"n" of the total) minus (the "least" valuable item not assigned to "i"): formula_6. Obviously, PROPx is stronger than PROP1. Moreover, while PROP1 allocations always exist, PROPx allocations may not exist. PROPm allocations. An allocation is called proportional up to the maximin item (PROPm) if the value of each agent "i" is at least (1/"n" of the total) minus (the "maximin" item not assigned to "i"), where the maximin item is the maximum, over all the other "n"-1 agents "j", of the least-valuable item allocated to "j". Formally: formula_7. Obviously, PROPx is stronger than PROPm, which is stronger than PROP1. A PROPm allocation exists when the number of agents is at most 5. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
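The round-robin procedure mentioned above can be sketched in a few lines of Python; the code below is an illustrative implementation under the additive-valuation model of this article, together with a helper that checks the PROP1 condition for goods (all function names are chosen for this sketch).

# Additive valuations: values[i][o] = value of agent i for item o.
def round_robin(values):
    n, m = len(values), len(values[0])
    bundles = [[] for _ in range(n)]
    remaining = set(range(m))
    agent = 0
    while remaining:
        # The current agent picks the remaining item they value most.
        best = max(remaining, key=lambda o: values[agent][o])
        bundles[agent].append(best)
        remaining.remove(best)
        agent = (agent + 1) % n
    return bundles

def is_prop1(values, bundles):
    n, m = len(values), len(values[0])
    for i in range(n):
        own = sum(values[i][o] for o in bundles[i])
        outside = [values[i][o] for o in range(m) if o not in bundles[i]]
        best_outside = max(outside, default=0)
        # PROP1: own value plus one best unassigned item reaches 1/n of total.
        if own + best_outside < sum(values[i]) / n - 1e-9:
            return False
    return True

values = [[8, 7, 4, 1], [6, 6, 6, 6]]   # two agents, four goods
bundles = round_robin(values)
print(bundles, is_prop1(values, bundles))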
[ { "math_id": 0, "text": "V_i(X_i) \\geq V_i(M)/n" }, { "math_id": 1, "text": "\\exists Y\\subseteq M \\setminus X_i: ~~~|Y|\\leq c, ~~~V_i(X_i \\cup Y) \\geq V_i(M)/n" }, { "math_id": 2, "text": "V_i(X_i)~ \\geq~ V_i(M)/n~ - ~\\max_{Y \\subseteq M\\setminus X_i, |Y|\\leq c}V_i(Y)" }, { "math_id": 3, "text": "\\exists Y\\subseteq X_i: ~~~|Y|\\leq 1, ~~~V_i(X_i \\setminus Y) \\geq V_i(M)/n" }, { "math_id": 4, "text": "\\exists Y\\subseteq M: ~~~|Y|\\leq c, ~~~V_i(X_i) \\geq V_i(M\\setminus Y)/n" }, { "math_id": 5, "text": "\\forall Y\\subseteq M \\setminus X_i: ~~~|Y| = 1, ~~~V_i(X_i \\cup Y) \\geq V_i(M)/n" }, { "math_id": 6, "text": "V_i(X_i)~ \\geq~ V_i(M)/n~ - ~\\min_{Y \\subseteq M\\setminus X_i, |Y|= 1}V_i(Y)" }, { "math_id": 7, "text": "V_i(X_i)~ \\geq~ V_i(M)/n~ - ~\\max_{j\\neq i} \\min_{Y \\subseteq X_j, |Y|= 1}V_i(Y)" } ]
https://en.wikipedia.org/wiki?curid=66515835
665204
Join (SQL)
SQL clause A join clause in the Structured Query Language (SQL) combines columns from one or more tables into a new table. The operation corresponds to a join operation in relational algebra. Informally, a join stitches two tables and puts on the same row records with matching fields: codice_0, codice_1, codice_2, codice_3 and codice_4. Example tables. To explain join types, the rest of this article uses the following tables: codice_5 is the primary key of the codice_6 table, whereas codice_7 is a foreign key. Note that in codice_8, "Williams" has not yet been assigned to a department. Also, no employees have been assigned to the "Marketing" department. These are the SQL statements to create the above tables: CREATE TABLE department( DepartmentID INT PRIMARY KEY NOT NULL, DepartmentName VARCHAR(20) ); CREATE TABLE employee ( LastName VARCHAR(20), DepartmentID INT REFERENCES department(DepartmentID) ); INSERT INTO department VALUES (31, 'Sales'), (33, 'Engineering'), (34, 'Clerical'), (35, 'Marketing'); INSERT INTO employee VALUES ('Rafferty', 31), ('Jones', 33), ('Heisenberg', 33), ('Robinson', 34), ('Smith', 34), ('Williams', NULL); Cross join. codice_9 returns the Cartesian product of rows from tables in the join. In other words, it will produce rows which combine each row from the first table with each row from the second table. Example of an explicit cross join: SELECT * FROM employee CROSS JOIN department; Example of an implicit cross join: SELECT * FROM employee, department; The cross join can be replaced with an inner join with an always-true condition: SELECT * FROM employee INNER JOIN department ON 1=1; codice_9 does not itself apply any predicate to filter rows from the joined table. The results of a codice_9 can be filtered using a codice_12 clause, which may then produce the equivalent of an inner join. In the standard, cross joins are part of the optional F401, "Extended joined table", package. Normal uses are for checking the server's performance. Inner join. An inner join (or join) requires each row in the two joined tables to have matching column values, and is a commonly used join operation in applications but should not be assumed to be the best choice in all situations. Inner join creates a new result table by combining column values of two tables (A and B) based upon the join-predicate. The query compares each row of A with each row of B to find all pairs of rows that satisfy the join-predicate. When the join-predicate is satisfied by matching non-NULL values, column values for each matched pair of rows of A and B are combined into a result row. The result of the join can be defined as the outcome of first taking the Cartesian product (or cross join) of all rows in the tables (combining every row in table A with every row in table B) and then returning all rows that satisfy the join predicate. Actual SQL implementations normally use other approaches, such as hash joins or sort-merge joins, since computing the Cartesian product is slower and would often require a prohibitively large amount of memory to store. SQL specifies two different syntactical ways to express joins: the "explicit join notation" and the "implicit join notation". The "implicit join notation" is no longer considered a best practice, although database systems still support it. 
The "explicit join notation" uses the codice_13 keyword, optionally preceded by the codice_0 keyword, to specify the table to join, and the codice_15 keyword to specify the predicates for the join, as in the following example: SELECT employee.LastName, employee.DepartmentID, department.DepartmentName FROM employee INNER JOIN department ON employee.DepartmentID = department.DepartmentID; The "implicit join notation" simply lists the tables for joining, in the codice_16 clause of the codice_17 statement, using commas to separate them. Thus it specifies a cross join, and the codice_12 clause may apply additional filter-predicates (which function comparably to the join-predicates in the explicit notation). The following example is equivalent to the previous one, but this time using implicit join notation: SELECT employee.LastName, employee.DepartmentID, department.DepartmentName FROM employee, department WHERE employee.DepartmentID = department.DepartmentID; The queries given in the examples above will join the Employee and department tables using the DepartmentID column of both tables. Where the DepartmentID of these tables match (i.e. the join-predicate is satisfied), the query will combine the "LastName", "DepartmentID" and "DepartmentName" columns from the two tables into a result row. Where the DepartmentID does not match, no result row is generated. Thus the result of the execution of the query above will be: The employee "Williams" and the department "Marketing" do not appear in the query execution results. Neither of these has any matching rows in the other respective table: "Williams" has no associated department, and no employee has the department ID 35 ("Marketing"). Depending on the desired results, this behavior may be a subtle bug, which can be avoided by replacing the inner join with an outer join. Inner join and NULL values. Programmers should take special care when joining tables on columns that can contain NULL values, since NULL will never match any other value (not even NULL itself), unless the join condition explicitly uses a combination predicate that first checks that the joins columns are codice_19 before applying the remaining predicate condition(s). The Inner Join can only be safely used in a database that enforces referential integrity or where the join columns are guaranteed not to be NULL. Many transaction processing relational databases rely on atomicity, consistency, isolation, durability (ACID) data update standards to ensure data integrity, making inner joins an appropriate choice. However, transaction databases usually also have desirable join columns that are allowed to be NULL. Many reporting relational database and data warehouses use high volume extract, transform, load (ETL) batch updates which make referential integrity difficult or impossible to enforce, resulting in potentially NULL join columns that an SQL query author cannot modify and which cause inner joins to omit data with no indication of an error. The choice to use an inner join depends on the database design and data characteristics. A left outer join can usually be substituted for an inner join when the join columns in one table may contain NULL values. Any data column that may be NULL (empty) should never be used as a link in an inner join, unless the intended result is to eliminate the rows with the NULL value. If NULL join columns are to be deliberately removed from the result set, an inner join can be faster than an outer join because the table join and filtering is done in a single step. 
Conversely, an inner join can result in disastrously slow performance or even a server crash when used in a large volume query in combination with database functions in an SQL Where clause. A function in an SQL Where clause can result in the database ignoring relatively compact table indexes. The database may read and inner join the selected columns from both tables before reducing the number of rows using the filter that depends on a calculated value, resulting in a relatively enormous amount of inefficient processing. When a result set is produced by joining several tables, including master tables used to look up full-text descriptions of numeric identifier codes (a Lookup table), a NULL value in any one of the foreign keys can result in the entire row being eliminated from the result set, with no indication of error. A complex SQL query that includes one or more inner joins and several outer joins has the same risk for NULL values in the inner join link columns. A commitment to SQL code containing inner joins assumes NULL join columns will not be introduced by future changes, including vendor updates, design changes and bulk processing outside of the application's data validation rules such as data conversions, migrations, bulk imports and merges. One can further classify inner joins as equi-joins, as natural joins, or as cross-joins. Equi-join. An equi-join is a specific type of comparator-based join, that uses only equality comparisons in the join-predicate. Using other comparison operators (such as codice_20) disqualifies a join as an equi-join. The query shown above has already provided an example of an equi-join: SELECT * FROM employee JOIN department ON employee.DepartmentID = department.DepartmentID; We can write equi-join as below, SELECT * FROM employee, department WHERE employee.DepartmentID = department.DepartmentID; If columns in an equi-join have the same name, SQL-92 provides an optional shorthand notation for expressing equi-joins, by way of the codice_21 construct: SELECT * FROM employee INNER JOIN department USING (DepartmentID); The codice_21 construct is more than mere syntactic sugar, however, since the result set differs from the result set of the version with the explicit predicate. Specifically, any columns mentioned in the codice_21 list will appear only once, with an unqualified name, rather than once for each table in the join. In the case above, there will be a single codice_24 column and no codice_25 or codice_26. The codice_21 clause is not supported by MS SQL Server and Sybase. Natural join. The natural join is a special case of equi-join. Natural join (⋈) is a binary operator that is written as ("R" ⋈ "S") where "R" and "S" are relations. The result of the natural join is the set of all combinations of tuples in "R" and "S" that are equal on their common attribute names. For an example consider the tables "Employee" and "Dept" and their natural join: This can also be used to define composition of relations. For example, the composition of "Employee" and "Dept" is their join as shown above, projected on all but the common attribute "DeptName". In category theory, the join is precisely the fiber product. The natural join is arguably one of the most important operators since it is the relational counterpart of logical AND. Note that if the same variable appears in each of two predicates that are connected by AND, then that variable stands for the same thing and both appearances must always be substituted by the same value. 
In particular, the natural join allows the combination of relations that are associated by a foreign key. For example, in the above example a foreign key probably holds from "Employee"."DeptName" to "Dept"."DeptName" and then the natural join of "Employee" and "Dept" combines all employees with their departments. This works because the foreign key holds between attributes with the same name. If this is not the case, such as in the foreign key from "Dept"."manager" to "Employee"."Name", then these columns have to be renamed before the natural join is taken. Such a join is sometimes also referred to as an equi-join. More formally, the semantics of the natural join are defined as follows: formula_0, where "Fun" is a predicate that is true for a relation "r" if and only if "r" is a function. It is usually required that "R" and "S" must have at least one common attribute, but if this constraint is omitted, and "R" and "S" have no common attributes, then the natural join becomes exactly the Cartesian product. The natural join can be simulated with Codd's primitives as follows. Let "c"1, ..., "c""m" be the attribute names common to "R" and "S", "r"1, ..., "r""n" be the attribute names unique to "R" and let "s"1, ..., "s""k" be the attributes unique to "S". Furthermore, assume that the attribute names "x"1, ..., "x""m" are neither in "R" nor in "S". In a first step the common attribute names in "S" can now be renamed: formula_1 Then we take the Cartesian product of "R" and the renamed relation "T", select the tuples that agree on the common attributes (that is, "c"1 = "x"1, ..., "c""m" = "x""m"), call the result "P", and finally project away the renamed attributes: formula_2 A natural join is a type of equi-join where the join predicate arises implicitly by comparing all columns in both tables that have the same column-names in the joined tables. The resulting joined table contains only one column for each pair of equally named columns. In the case that no columns with the same names are found, the result is a cross join. Most experts agree that NATURAL JOINs are dangerous and therefore strongly discourage their use. The danger comes from inadvertently adding a new column, named the same as another column in the other table. An existing natural join might then "naturally" use the new column for comparisons, making comparisons/matches using different criteria (from different columns) than before. Thus an existing query could produce different results, even though the data in the tables have not been changed, but only augmented. The use of column names to automatically determine table links is not an option in large databases with hundreds or thousands of tables where it would place an unrealistic constraint on naming conventions. Real world databases are commonly designed with foreign key data that is not consistently populated (NULL values are allowed), due to business rules and context. It is common practice to modify column names of similar data in different tables and this lack of rigid consistency relegates natural joins to a theoretical concept for discussion. The above sample query for inner joins can be expressed as a natural join in the following way: SELECT * FROM employee NATURAL JOIN department; As with the explicit codice_21 clause, only one DepartmentID column occurs in the joined table, with no qualifier: PostgreSQL, MySQL and Oracle support natural joins; Microsoft T-SQL and IBM DB2 do not. The columns used in the join are implicit so the join code does not show which columns are expected, and a change in column names may change the results. In the standard, natural joins are part of the optional F401, "Extended joined table", package. 
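To make the matching rule concrete, the following Python sketch simulates a natural join over lists of dictionaries standing in for tables; it only illustrates the semantics described above, not any database engine, and note that Python's None compares equal to itself, unlike SQL NULL, so the sketch mimics the behaviour only for the sample data used here.

# Toy natural join: rows match when they agree on every shared column name,
# and each shared column appears only once in the output row.
def natural_join(R, S):
    common = set(R[0]) & set(S[0]) if R and S else set()
    result = []
    for r in R:
        for s in S:
            if all(r[c] == s[c] for c in common):
                merged = dict(r)
                merged.update(s)
                result.append(merged)
    return result

employee = [{"LastName": "Rafferty", "DepartmentID": 31},
            {"LastName": "Jones", "DepartmentID": 33},
            {"LastName": "Williams", "DepartmentID": None}]
department = [{"DepartmentID": 31, "DepartmentName": "Sales"},
              {"DepartmentID": 33, "DepartmentName": "Engineering"},
              {"DepartmentID": 35, "DepartmentName": "Marketing"}]

for row in natural_join(employee, department):
    print(row)
# Only Rafferty/Sales and Jones/Engineering are produced; Williams (no
# department) and Marketing (no employee) drop out, as in an inner join.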
In many database environments the column names are controlled by an outside vendor, not the query developer. A natural join assumes stability and consistency in column names which can change during vendor mandated version upgrades. Outer join. The joined table retains each row—even if no other matching row exists. Outer joins subdivide further into left outer joins, right outer joins, and full outer joins, depending on which table's rows are retained: left, right, or both (in this case "left" and "right" refer to the two sides of the codice_13 keyword). Like inner joins, one can further sub-categorize all types of outer joins as equi-joins, natural joins, codice_30 ("θ"-join), etc. No implicit join-notation for outer joins exists in standard SQL. Left outer join. The result of a left outer join (or simply left join) for tables A and B always contains all rows of the "left" table (A), even if the join-condition does not find any matching row in the "right" table (B). This means that if the codice_15 clause matches 0 (zero) rows in B (for a given row in A), the join will still return a row in the result (for that row)—but with NULL in each column from B. A left outer join returns all the values from an inner join plus all values in the left table that do not match to the right table, including rows with NULL (empty) values in the link column. For example, this allows us to find an employee's department, but still shows employees that have not been assigned to a department (contrary to the inner-join example above, where unassigned employees were excluded from the result). Example of a left outer join (the codice_32 keyword is optional), with the additional result row (compared with the inner join) italicized: SELECT * FROM employee LEFT OUTER JOIN department ON employee.DepartmentID = department.DepartmentID; Alternative syntaxes. Oracle supports the deprecated syntax: SELECT * FROM employee, department WHERE employee.DepartmentID = department.DepartmentID(+) Sybase supports the syntax (Microsoft SQL Server deprecated this syntax since version 2000): SELECT * FROM employee, department WHERE employee.DepartmentID *= department.DepartmentID IBM Informix supports the syntax: SELECT * FROM employee, OUTER department WHERE employee.DepartmentID = department.DepartmentID Right outer join. A right outer join (or right join) closely resembles a left outer join, except with the treatment of the tables reversed. Every row from the "right" table (B) will appear in the joined table at least once. If no matching row from the "left" table (A) exists, NULL will appear in columns from A for those rows that have no match in B. A right outer join returns all the values from the right table and matched values from the left table (NULL in the case of no matching join predicate). For example, this allows us to find each employee and his or her department, but still show departments that have no employees. Below is an example of a right outer join (the codice_32 keyword is optional), with the additional result row italicized: SELECT * FROM employee RIGHT OUTER JOIN department ON employee.DepartmentID = department.DepartmentID; Right and left outer joins are functionally equivalent. Neither provides any functionality that the other does not, so right and left outer joins may replace each other as long as the table order is switched. Full outer join. Conceptually, a full outer join combines the effect of applying both left and right outer joins. 
Where rows in the full outer joined tables do not match, the result set will have NULL values for every column of the table that lacks a matching row. For those rows that do match, a single row will be produced in the result set (containing columns populated from both tables). For example, this allows us to see each employee who is in a department and each department that has an employee, but also see each employee who is not part of a department and each department which doesn't have an employee. Example of a full outer join (the codice_32 keyword is optional): SELECT * FROM employee FULL OUTER JOIN department ON employee.DepartmentID = department.DepartmentID; Some database systems do not support the full outer join functionality directly, but they can emulate it through the use of an inner join and UNION ALL selects of the "single table rows" from left and right tables respectively. The same example can appear as follows: SELECT employee.LastName, employee.DepartmentID, department.DepartmentName, department.DepartmentID FROM employee INNER JOIN department ON employee.DepartmentID = department.DepartmentID UNION ALL SELECT employee.LastName, employee.DepartmentID, cast(NULL as varchar(20)), cast(NULL as integer) FROM employee WHERE NOT EXISTS ( SELECT * FROM department WHERE employee.DepartmentID = department.DepartmentID) UNION ALL SELECT cast(NULL as varchar(20)), cast(NULL as integer), department.DepartmentName, department.DepartmentID FROM department WHERE NOT EXISTS ( SELECT * FROM employee WHERE employee.DepartmentID = department.DepartmentID) Another approach could be UNION ALL of left outer join and right outer join MINUS inner join. Self-join. A self-join is joining a table to itself. Example. If there were two separate tables for employees and a query which requested employees in the first table having the same country as employees in the second table, a normal join operation could be used to find the answer table. However, all the employee information is contained within a single large table. Consider a modified codice_8 table such as the following: An example solution query could be as follows: SELECT F.EmployeeID, F.LastName, S.EmployeeID, S.LastName, F.Country FROM Employee F INNER JOIN Employee S ON F.Country = S.Country WHERE F.EmployeeID &lt; S.EmployeeID ORDER BY F.EmployeeID, S.EmployeeID; Which results in the following table being generated. For this example: Only one of the two middle pairings is needed to satisfy the original question, and the topmost and bottommost are of no interest at all in this example. Alternatives. The effect of an outer join can also be obtained using a UNION ALL between an INNER JOIN and a SELECT of the rows in the "main" table that do not fulfill the join condition. For example, SELECT employee.LastName, employee.DepartmentID, department.DepartmentName FROM employee LEFT OUTER JOIN department ON employee.DepartmentID = department.DepartmentID; can also be written as SELECT employee.LastName, employee.DepartmentID, department.DepartmentName FROM employee INNER JOIN department ON employee.DepartmentID = department.DepartmentID UNION ALL SELECT employee.LastName, employee.DepartmentID, cast(NULL as varchar(20)) FROM employee WHERE NOT EXISTS ( SELECT * FROM department WHERE employee.DepartmentID = department.DepartmentID) Implementation. Much work in database-systems has aimed at efficient implementation of joins, because relational systems commonly call for joins, yet face difficulties in optimising their efficient execution. 
The problem arises because inner joins operate both commutatively and associatively. In practice, this means that the user merely supplies the list of tables for joining and the join conditions to use, and the database system has the task of determining the most efficient way to perform the operation. The choices become more complex as the number of tables involved in a query increases, with each table having different characteristics in record count, average record length (considering NULL fields) and available indexes. Where Clause filters can also significantly impact query volume and cost. A query optimizer determines how to execute a query containing joins. A query optimizer has two basic freedoms: Many join-algorithms treat their inputs differently. One can refer to the inputs to a join as the "outer" and "inner" join operands, or "left" and "right", respectively. In the case of nested loops, for example, the database system will scan the entire inner relation for each row of the outer relation. One can classify query-plans involving joins as follows: These names derive from the appearance of the query plan if drawn as a tree, with the outer join relation on the left and the inner relation on the right (as convention dictates). Join algorithms. Three fundamental algorithms for performing a binary join operation exist: nested loop join, sort-merge join and hash join. Worst-case optimal join algorithms are asymptotically faster than binary join algorithms for joins between more than two relations in the worst case. Join indexes. Join indexes are database indexes that facilitate the processing of join queries in data warehouses: they are currently (2012) available in implementations by Oracle and Teradata. In the Teradata implementation, specified columns, aggregate functions on columns, or components of date columns from one or more tables are specified using a syntax similar to the definition of a database view: up to 64 columns/column expressions can be specified in a single join index. Optionally, a column that defines the primary key of the composite data may also be specified: on parallel hardware, the column values are used to partition the index's contents across multiple disks. When the source tables are updated interactively by users, the contents of the join index are automatically updated. Any query whose WHERE clause specifies any combination of columns or column expressions that are an exact subset of those defined in a join index (a so-called "covering query") will cause the join index, rather than the original tables and their indexes, to be consulted during query execution. The Oracle implementation limits itself to using bitmap indexes. A "bitmap join index" is used for low-cardinality columns (i.e., columns containing fewer than 300 distinct values, according to the Oracle documentation): it combines low-cardinality columns from multiple related tables. The example Oracle uses is that of an inventory system, where different suppliers provide different parts. The schema has three linked tables: two "master tables", Part and Supplier, and a "detail table", Inventory. The last is a many-to-many table linking Supplier to Part, and contains the most rows. Every part has a Part Type, and every supplier is based in the US, and has a State column. There are not more than 60 states+territories in the US, and not more than 300 Part Types. 
The bitmap join index is defined using a standard three-table join on the three tables above, and specifying the Part_Type and Supplier_State columns for the index. However, it is defined on the Inventory table, even though the columns Part_Type and Supplier_State are "borrowed" from Supplier and Part respectively. As for Teradata, an Oracle bitmap join index is only utilized to answer a query when the query's WHERE clause specifies columns limited to those that are included in the join index. Straight join. Some database systems allow the user to force the system to read the tables in a join in a particular order. This is used when the join optimizer chooses to read the tables in an inefficient order. For example, in MySQL the command codice_42 reads the tables in exactly the order listed in the query. References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
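As a complement to the join-algorithm discussion above (nested loop, sort-merge and hash join), here is a minimal, illustrative hash-join sketch in Python; it is not tied to any particular database engine, and the table and column names simply reuse the employee/department example.

# Classic two-phase hash join: build a hash table on the smaller input,
# then probe it with the larger one (equi-join on a single key).
def hash_join(build_rows, probe_rows, build_key, probe_key):
    buckets = {}
    for b in build_rows:                       # build phase
        if b[build_key] is not None:
            buckets.setdefault(b[build_key], []).append(b)
    joined = []
    for p in probe_rows:                       # probe phase
        for b in buckets.get(p[probe_key], []):
            joined.append({**b, **p})
    return joined

department = [{"DepartmentID": 31, "DepartmentName": "Sales"},
              {"DepartmentID": 33, "DepartmentName": "Engineering"}]
employee = [{"LastName": "Rafferty", "DepartmentID": 31},
            {"LastName": "Jones", "DepartmentID": 33},
            {"LastName": "Williams", "DepartmentID": None}]

print(hash_join(department, employee, "DepartmentID", "DepartmentID"))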
[ { "math_id": 0, "text": "R \\bowtie S = \\left\\{ t \\cup s \\mid t \\in R \\ \\land \\ s \\in S \\ \\land \\ \\mathit{Fun}(t \\cup s) \\right\\}" }, { "math_id": 1, "text": "T = \\rho_{x_1/c_1,\\ldots,x_m/c_m}(S) = \\rho_{x_1/c_1}(\\rho_{x_2/c_2}(\\ldots\\rho_{x_m/c_m}(S)\\ldots))" }, { "math_id": 2, "text": "U = \\pi_{r_1,\\ldots,r_n,c_1,\\ldots,c_m,s_1,\\ldots,s_k}(P)" } ]
https://en.wikipedia.org/wiki?curid=665204
6652599
Quasinorm
In linear algebra, functional analysis and related areas of mathematics, a quasinorm is similar to a norm in that it satisfies the norm axioms, except that the triangle inequality is replaced by formula_0 for some formula_1 Definition. A quasi-seminorm on a vector space formula_2 is a real-valued map formula_3 on formula_2 that satisfies the following conditions: (1) non-negativity: formula_3 takes only non-negative values; (2) absolute homogeneity: "p"("sx") = |"s"| "p"("x") for every vector "x" and every scalar "s"; (3) there exists a real number "k" ≥ 1 such that "p"("x" + "y") ≤ "k"["p"("x") + "p"("y")] for all vectors "x" and "y". A quasinorm is a quasi-seminorm that also satisfies positive definiteness: if "x" satisfies "p"("x") = 0, then "x" = 0. A pair formula_4 consisting of a vector space formula_2 and an associated quasi-seminorm formula_3 is called a quasi-seminormed vector space. If the quasi-seminorm is a quasinorm then it is also called a quasinormed vector space. Multiplier The infimum of all values of formula_5 that satisfy condition (3) is called the multiplier of formula_6 The multiplier itself will also satisfy condition (3) and so it is the unique smallest real number that satisfies this condition. The term formula_5-quasi-seminorm is sometimes used to describe a quasi-seminorm whose multiplier is equal to formula_7 A norm (respectively, a seminorm) is just a quasinorm (respectively, a quasi-seminorm) whose multiplier is formula_8 Thus every seminorm is a quasi-seminorm and every norm is a quasinorm (and a quasi-seminorm). Topology. If formula_3 is a quasinorm on formula_2 then formula_3 induces a vector topology on formula_2 whose neighborhood basis at the origin is given by the sets: formula_9 as formula_10 ranges over the positive integers. A topological vector space with such a topology is called a quasinormed topological vector space or just a quasinormed space. Every quasinormed topological vector space is pseudometrizable. A complete quasinormed space is called a quasi-Banach space. Every Banach space is a quasi-Banach space, although not conversely. Related definitions. A quasinormed space formula_11 is called a quasinormed algebra if the vector space formula_12 is an algebra and there is a constant formula_13 such that formula_14 for all formula_15 A complete quasinormed algebra is called a quasi-Banach algebra. Characterizations. A topological vector space (TVS) is a quasinormed space if and only if it has a bounded neighborhood of the origin. Examples. Since every norm is a quasinorm, every normed space is also a quasinormed space. formula_16 spaces with formula_17 The formula_16 spaces for formula_17 are quasinormed spaces (indeed, they are even F-spaces) but they are not, in general, normable (meaning that there might not exist any norm that defines their topology). For formula_18 the Lebesgue space formula_19 is a complete metrizable TVS (an F-space) that is not locally convex (in fact, its only convex open subsets are itself formula_19 and the empty set) and the only continuous linear functional on formula_19 is the constant formula_20 function. In particular, the Hahn-Banach theorem does not hold for formula_19 when formula_21 References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
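As a worked example of condition (3) and the multiplier discussed above, the formula_16 quasinorm with formula_17 satisfies the quasi-triangle inequality with constant 2^(1/p − 1); the standard two-line computation, reproduced here for illustration in LaTeX, reads:
\[
\|f + g\|_p^p \;\leq\; \|f\|_p^p + \|g\|_p^p
\qquad (t \mapsto t^p \text{ is subadditive for } 0 < p < 1),
\]
\[
\|f + g\|_p \;\leq\; \bigl(\|f\|_p^p + \|g\|_p^p\bigr)^{1/p}
\;\leq\; 2^{\frac{1}{p} - 1}\bigl(\|f\|_p + \|g\|_p\bigr)
\qquad (t \mapsto t^{1/p} \text{ is convex}).
\]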
[ { "math_id": 0, "text": "\\|x + y\\| \\leq K(\\|x\\| + \\|y\\|)" }, { "math_id": 1, "text": "K > 1." }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "(X, p)" }, { "math_id": 5, "text": "k" }, { "math_id": 6, "text": "p." }, { "math_id": 7, "text": "k." }, { "math_id": 8, "text": "1." }, { "math_id": 9, "text": "\\{x \\in X : p(x) < 1/n\\}" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "(A, \\| \\,\\cdot\\, \\|)" }, { "math_id": 12, "text": "A" }, { "math_id": 13, "text": "K > 0" }, { "math_id": 14, "text": "\\|x y\\| \\leq K \\|x\\| \\cdot \\|y\\|" }, { "math_id": 15, "text": "x, y \\in A." }, { "math_id": 16, "text": "L^p" }, { "math_id": 17, "text": "0 < p < 1" }, { "math_id": 18, "text": "0 < p < 1," }, { "math_id": 19, "text": "L^p([0, 1])" }, { "math_id": 20, "text": "0" }, { "math_id": 21, "text": "0 < p < 1." } ]
https://en.wikipedia.org/wiki?curid=6652599
66527786
Video super-resolution
Generating high-resolution video frames from given low-resolution ones Video super-resolution (VSR) is the process of generating high-resolution video frames from the given low-resolution video frames. Unlike single-image super-resolution (SISR), the main goal is not only to restore more fine details while saving coarse ones, but also to preserve motion consistency. There are many approaches for this task, but the problem remains popular and challenging. Mathematical explanation. Most research considers the degradation process of frames as formula_0 where: formula_1 — original high-resolution frame sequence, formula_2 — blur kernel, formula_3 — convolution operation, formula_4 — downscaling operation, formula_5 — additive noise, formula_6 — low-resolution frame sequence. Super-resolution is an inverse operation, so its problem is to estimate frame sequence formula_7 from frame sequence formula_6 so that formula_7 is close to original formula_1. The blur kernel, the downscaling operation and the additive noise should be estimated for the given input to achieve better results. Video super-resolution approaches tend to have more components than their image counterparts, as they need to exploit the additional temporal dimension. Complex designs are not uncommon. The most essential components of VSR are guided by four basic functionalities: Propagation, Alignment, Aggregation, and Upsampling. Methods. When working with video, temporal information can be used to improve upscaling quality. Single-image super-resolution methods could be used too, generating high-resolution frames independently from their neighbours, but this is less effective and introduces temporal instability. There are a few traditional methods, which consider the video super-resolution task as an optimization problem. In recent years, deep learning based methods for video upscaling have outperformed traditional ones. Traditional methods. There are several traditional methods for video upscaling. These methods try to use some natural preferences and effectively estimate motion between frames. The high-resolution frame is reconstructed based on both natural preferences and estimated motion. Frequency domain. First, the low-resolution frame is transformed to the frequency domain. The high-resolution frame is estimated in this domain. Finally, this result frame is transformed back to the spatial domain. Some methods use the Fourier transform, which helps to extend the spectrum of the captured signal and thus increase resolution. There are different approaches for these methods: using weighted least squares theory, the total least squares (TLS) algorithm, space-varying or spatio-temporal varying filtering. Other methods use the wavelet transform, which helps to find similarities in neighboring local areas. Later, the second-generation wavelet transform was used for video super-resolution. Spatial domain. Iterative back-projection methods assume some function between low-resolution and high-resolution frames and try to improve their guessed function in each step of an iterative process. Projections onto convex sets (POCS), which defines a specific cost function, can also be used for iterative methods. Iterative adaptive filtering algorithms use a Kalman filter to estimate the transformation from the low-resolution frame to the high-resolution one. To improve the final result, these methods consider the temporal correlation among low-resolution sequences. Some approaches also consider the temporal correlation among the high-resolution sequence. 
A common way to approximate the Kalman filter is to use least mean squares (LMS). One can also use steepest descent, least squares (LS), or recursive least squares (RLS). Direct methods estimate motion between frames, upscale a reference frame, and warp neighboring frames to the high-resolution reference one. To construct the result, these upscaled frames are fused together by a median filter, weighted median filter, adaptive normalized averaging, AdaBoost classifier or SVD-based filters. Non-parametric algorithms join motion estimation and frame fusion into one step, which is performed by considering patch similarities. Weights for fusion can be calculated by nonlocal-means filters. To strengthen the search for similar patches, one can use a rotation-invariant similarity measure or an adaptive patch size. Calculating intra-frame similarity helps to preserve small details and edges. Parameters for fusion can also be calculated by kernel regression. Probabilistic methods use statistical theory to solve the task. Maximum likelihood (ML) methods estimate the most probable image. Another group of methods use maximum a posteriori (MAP) estimation. The regularization parameter for MAP can be estimated by Tikhonov regularization. Markov random fields (MRFs) are often used along with MAP and help to preserve similarity in neighboring patches. Huber MRFs are used to preserve sharp edges. A Gaussian MRF removes noise but can smooth some edges. Deep learning based methods. Aligned by motion estimation and motion compensation. In approaches with alignment, neighboring frames are first aligned with the target one. One can align frames by performing motion estimation and motion compensation (MEMC) or by using deformable convolution (DC). Motion estimation gives information about the motion of pixels between frames. Motion compensation is a warping operation, which aligns one frame to another based on motion information. Examples of such methods: Aligned by deformable convolution. Another way to align neighboring frames with the target one is deformable convolution. While the usual convolution has a fixed kernel, a deformable convolution first estimates shifts for the kernel and then performs the convolution. Examples of such methods: Aligned by homography. Some methods align frames by computing the homography between frames. Spatial non-aligned. Methods without alignment do not perform alignment as a first step and just process the input frames. 3D convolutions. While 2D convolutions work in the spatial domain, 3D convolutions use both spatial and temporal information. They perform motion compensation and maintain temporal consistency. Recurrent neural networks. Recurrent convolutional neural networks perform video super-resolution by storing temporal dependencies. Videos. Non-local methods extract both spatial and temporal information. The key idea is to use all possible positions as a weighted sum. This strategy may be more effective than local approaches. The progressive fusion non-local method extracts spatio-temporal features by non-local residual blocks and then fuses them by a progressive fusion residual block (PFRB). The result of these blocks is a residual image. The final result is gained by adding the bicubically upsampled input frame. Metrics. The common way to estimate the performance of video super-resolution algorithms is to use a few metrics: Currently, there are not many objective metrics to verify a video super-resolution method's ability to restore real details. Research is currently underway in this area. 
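Since PSNR comes up repeatedly in the benchmark results below, a short illustrative NumPy implementation may be helpful; the function name and the 8-bit peak value of 255 are assumptions of this sketch, not part of any benchmark's official code.

import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio between two 8-bit frames, in dB."""
    ref = reference.astype(np.float64)
    out = restored.astype(np.float64)
    mse = np.mean((ref - out) ** 2)
    if mse == 0:
        return float("inf")        # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: compare a ground-truth frame with a noisy copy.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(round(psnr(frame, noisy), 2))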
Another way to assess the performance of a video super-resolution algorithm is to organize a subjective evaluation. People are asked to compare the corresponding frames, and the final mean opinion score (MOS) is calculated as the arithmetic mean over all ratings. Datasets. While deep learning approaches to video super-resolution outperform traditional ones, it's crucial to form a high-quality dataset for evaluation. It's important to verify models' ability to restore small details, text, and objects with complicated structure, and to cope with large motion and noise. Benchmarks. A few benchmarks in video super-resolution were organized by companies and conferences. The purposes of such challenges are to compare diverse algorithms and to find the state-of-the-art for the task. NTIRE 2019 Challenge. The NTIRE 2019 Challenge was organized by CVPR and proposed two tracks for video super-resolution: clean (only bicubic degradation) and blur (blur added first). Each track had more than 100 participants and 14 final results were submitted. The REDS dataset was collected for this challenge. It consists of 30 videos of 100 frames each. The resolution of ground-truth frames is 1280×720. The tested scale factor is 4. To evaluate models' performance, PSNR and SSIM were used. The best participants' results are presented in the table: Youku-VESR Challenge 2019. The Youku-VESR Challenge was organized to check models' ability to cope with degradation and noise, which are typical for the Youku online video-watching application. The proposed dataset consists of 1000 videos, each 4–6 seconds long. The resolution of ground-truth frames is 1920×1080. The tested scale factor is 4. PSNR and VMAF metrics were used for performance evaluation. Top methods are presented in the table: AIM 2019 Challenge. The challenge was held by ECCV and had two tracks on video extreme super-resolution: the first track checks the fidelity with the reference frame (measured by PSNR and SSIM). The second track checks the perceptual quality of videos (MOS). The dataset consists of 328 video sequences of 120 frames each. The resolution of ground-truth frames is 1920×1080. The tested scale factor is 16. Top methods are presented in the table: AIM 2020 Challenge. The challenge's conditions are the same as in the AIM 2019 Challenge. Top methods are presented in the table: MSU Video Super-Resolution Benchmark. The MSU Video Super-Resolution Benchmark was organized by MSU and proposed three types of motion, two ways to lower resolution, and eight types of content in the dataset. The resolution of ground-truth frames is 1920×1280. The tested scale factor is 4. 14 models were tested. To evaluate models' performance, PSNR and SSIM were used with shift compensation. A few new metrics were also proposed: ERQAv1.0, QRCRv1.0, and CRRMv1.0. Top methods are presented in the table: MSU Super-Resolution for Video Compression Benchmark. The MSU Super-Resolution for Video Compression Benchmark was organized by MSU. This benchmark tests models' ability to work with compressed videos. The dataset consists of 9 videos, compressed with different video codec standards and different bitrates. Models are ranked by BSQ-rate over subjective score. The resolution of ground-truth frames is 1920×1080. The tested scale factor is 4. 17 models were tested. 5 video codecs were used to compress ground-truth videos. Top combinations of super-resolution methods and video codecs are presented in the table: Application. 
In many areas of working with video, we deal with different types of video degradation, including downscaling. The resolution of a video can be degraded because of imperfections of measuring devices, such as optical degradation and the limited size of camera sensors. Bad light and weather conditions add noise to video. Object and camera motion also decrease video quality. Super-resolution techniques help to restore the original video. They are useful in a wide range of applications and also help to solve the tasks of object detection and face and character recognition (as a preprocessing step). Interest in super-resolution is growing with the development of high-definition computer displays and TVs. Video super-resolution finds its practical use in some modern smartphones and cameras, where it is used to reconstruct digital photographs. Reconstructing details on digital photographs is a difficult task since these photographs are already incomplete: the camera sensor elements measure only the intensity of the light, not directly its color. A process called demosaicing is used to reconstruct the photos from partial color information. A single frame doesn't give us enough data to fill in the missing colors; however, we can receive some of the missing information from multiple images taken one after the other. This process is known as burst photography and can be used to restore a single image of good quality from multiple sequential frames. When we capture a lot of sequential photos with a smartphone or handheld camera, there is always some movement present between the frames because of the hand motion. We can take advantage of this hand tremor by combining the information in those images. We choose a single image as the "base" or reference frame and align every other frame relative to it. There are situations where hand motion is simply not present because the device is stabilized (e.g. placed on a tripod). There is a way to simulate natural hand motion by intentionally moving the camera slightly. The movements are extremely small so they don't interfere with regular photos. You can observe these motions on the Google Pixel 3 phone by holding it perfectly still (e.g. pressing it against a window) and maximally pinch-zooming the viewfinder. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\{y\\} = (\\{x\\} * k)\\downarrow{_s} + \\{n\\} " }, { "math_id": 1, "text": "\\{x\\}" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "*" }, { "math_id": 4, "text": "\\downarrow{_s}" }, { "math_id": 5, "text": "\\{n\\}" }, { "math_id": 6, "text": "\\{y\\}" }, { "math_id": 7, "text": "\\{\\overline{x}\\}" } ]
https://en.wikipedia.org/wiki?curid=66527786
66535076
Single-particle trajectory
Single-particle trajectories (SPTs) consist of a collection of successive discrete points causal in time. These trajectories are acquired from images in experimental data. In the context of cell biology, the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule. Molecules can now be visualized using recent super-resolution microscopy, which allows routine collection of thousands of short and long trajectories. These trajectories explore part of a cell, either on the membrane or in three dimensions, and their paths are critically influenced by the local crowded organization and molecular interactions inside the cell, as emphasized in various cell types such as neuronal cells, astrocytes, immune cells and many others. SPTs allow observing moving molecules inside cells and collecting statistics. These trajectories are used to investigate cytoplasm or membrane organization, but also cell nucleus dynamics, remodeler dynamics or mRNA production. Due to the constant improvement of the instrumentation, the spatial resolution is continuously decreasing, now reaching values of approximately 20 nm, while the acquisition time step is usually in the range of 10 to 50 ms to capture short events occurring in live tissues. A variant of super-resolution microscopy called sptPALM is used to detect the local and dynamically changing organization of molecules in cells, or events of DNA binding by transcription factors in the mammalian nucleus. Super-resolution image acquisition and particle tracking are crucial to guarantee high-quality data. Assembling points into a trajectory based on tracking algorithms. Once points are acquired, the next step is to reconstruct a trajectory. This step is done using known tracking algorithms to connect the acquired points. Tracking algorithms are based on a physical model of trajectories perturbed by additive random noise. Extract physical parameters from redundant SPTs. The redundancy of many short SPTs is a key feature for extracting biophysical parameters from empirical data at a molecular level. In contrast, long isolated trajectories have been used to extract information along trajectories, destroying the natural spatial heterogeneity associated with the various positions. The main statistical tool is to compute the mean-square displacement (MSD) or second-order statistical moment: formula_0 (average over realizations), where formula_1 is called the anomalous exponent. For a Brownian motion, formula_2, where "D" is the diffusion coefficient and "n" is the dimension of the space. Some other properties can also be recovered from long trajectories, such as the radius of confinement for a confined motion. The MSD has been widely used in early applications of long but not necessarily redundant single-particle trajectories in a biological context. However, the MSD applied to long trajectories suffers from several issues. First, it is not precise, in part because the measured points could be correlated. Second, it cannot be used to compute any physical diffusion coefficient when trajectories consist of switching episodes, for example alternating between free and confined diffusion. At low spatiotemporal resolution of the observed trajectories, the MSD behaves sublinearly with time, a process known as anomalous diffusion, which is due in part to the averaging of the different phases of the particle motion. 
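A minimal sketch of how the ensemble MSD defined above can be estimated from recorded trajectories. It assumes all trajectories are sampled with the same time step; the function name and array layout are illustrative.

```python
import numpy as np

def ensemble_msd(trajectories, max_lag):
    """Mean-square displacement averaged over trajectories and time origins.

    Each trajectory is an array of shape (T_i, d) sampled at a fixed time step.
    """
    msd = np.zeros(max_lag)
    counts = np.zeros(max_lag)
    for traj in trajectories:
        n = len(traj)
        for lag in range(1, min(max_lag, n - 1) + 1):
            disp = traj[lag:] - traj[:-lag]        # displacements at this time lag
            msd[lag - 1] += np.sum(disp ** 2)
            counts[lag - 1] += len(disp)
    return msd / np.maximum(counts, 1)

# For free Brownian motion in dimension n, msd[k] ~ 2 * n * D * (k + 1) * dt,
# so D can be read off a linear fit of the first few lags.
```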
In the context of cellular transport (amoeboid), high-resolution motion analysis of long SPTs in micro-fluidic chambers containing obstacles revealed different types of cell motion. Depending on the obstacle density, crawling was found at a low density of obstacles, and directed motion and random phases can even be differentiated. Physical model to recover spatial properties from redundant SPTs. Langevin and Smoluchowski equations as a model of motion. Statistical methods to extract information from SPTs are based on stochastic models, such as the Langevin equation or its Smoluchowski limit and associated models that account for additional localization point identification noise or a memory kernel. The Langevin equation describes a stochastic particle driven by a Brownian force formula_3 and a field of force (e.g., electrostatic, mechanical, etc.) with an expression formula_4: formula_5 where m is the mass of the particle, formula_6 is the friction coefficient of a diffusing particle and formula_7 the viscosity. Here formula_3 is the formula_8-correlated Gaussian white noise. The force can be derived from a potential well U so that formula_9 and in that case, the equation takes the form formula_10 where formula_11 is the thermal energy, formula_12 the Boltzmann constant and "T" the temperature. Langevin's equation is used to describe trajectories where inertia or acceleration matters. For example, at very short timescales, when a molecule unbinds from a binding site or escapes from a potential well, the inertia term allows the particle to move away from the attractor and thus prevents immediate rebinding, which could plague numerical simulations. In the large friction limit formula_13 the trajectories formula_14 of the Langevin equation converge in probability to those of the Smoluchowski equation formula_15 where formula_16 is formula_8-correlated. This equation is obtained when the diffusion coefficient is constant in space. When this is not the case, coarse-grained equations (at a coarse spatial resolution) should be derived from molecular considerations. The interpretation of the physical forces is not resolved by Itô versus Stratonovich integral representations or any others. General model equations. For a timescale much longer than that of an elementary molecular collision, the position of a tracked particle is described by a more general overdamped limit of the Langevin stochastic model. Indeed, if the acquisition timescale of empirically recorded trajectories is much coarser than that of the thermal fluctuations, rapid events are not resolved in the data. Thus, at this coarser spatiotemporal scale, the motion description is replaced by an effective stochastic equation formula_17 where formula_18 is the drift field and formula_19 the diffusion matrix. The effective diffusion tensor can vary in space, formula_20 (formula_21 denotes the transpose of formula_22). This equation is not derived but assumed. However, the diffusion coefficient should be smooth enough, as any discontinuity in D should be resolved by a spatial scaling to analyse the source of the discontinuity (usually inert obstacles or transitions between two media). The observed effective diffusion tensor is not necessarily isotropic and can be state-dependent, whereas the friction coefficient formula_23 remains constant as long as the medium stays the same, and the microscopic diffusion coefficient (or tensor) could remain isotropic. Statistical analysis of these trajectories. 
The development of statistical methods is based on stochastic models and a possible deconvolution procedure applied to the trajectories. Numerical simulations could also be used to identify specific features that could be extracted from single-particle trajectory data. The goal of building a statistical ensemble from SPT data is to observe local physical properties of the particles, such as velocity, diffusion, confinement or attracting forces reflecting the interactions of the particles with their local nanometer environments. It is possible to use stochastic modeling to construct, from the diffusion coefficient (or tensor), the confinement or local density of obstacles reflecting the presence of biological objects of different sizes. Empirical estimators for the drift and diffusion tensor of a stochastic process. Several empirical estimators have been proposed to recover the local diffusion coefficient, vector field and even organized patterns in the drift, such as potential wells. The construction of empirical estimators serves to recover physical properties from parametric and non-parametric statistics. Retrieving statistical parameters of a diffusion process from one-dimensional time series uses the first moment estimator or Bayesian inference. The models and the analysis assume that processes are stationary, so that the statistical properties of trajectories do not change over time. In practice, this assumption is satisfied when trajectories are acquired for less than a minute, where only a few slow changes may occur on the surface of a neuron, for example. Non-stationary behavior is observed using a time-lapse analysis, with a delay of tens of minutes between successive acquisitions. The coarse-grained model Eq. 1 is recovered from the conditional moments of the trajectory by computing the increments formula_24: formula_25 formula_26 Here the notation formula_27 means averaging over all trajectories that are at point "x" at time "t". The coefficients of the Smoluchowski equation can be statistically estimated at each point "x" from an infinitely large sample of its trajectories in the neighborhood of the point "x" at time "t". Empirical estimation. In practice, the expectations for a and D are estimated by finite sample averages, and formula_28 is the time resolution of the recorded trajectories. The formulas for a and D are approximated at the time step formula_29, with tens to hundreds of points falling in any bin, which is usually enough for the estimation. To estimate the local drift and diffusion coefficients, trajectories are first grouped within a small neighbourhood. The field of observation is partitioned into square bins formula_30 of side r and centre formula_31, and the local drift and diffusion are estimated for each square. Considering a sample with formula_32 trajectories formula_33 where formula_34 are the sampling times, the discretization of the equation for the drift formula_35 at position formula_31 is given for each spatial projection on the x and y axes by formula_36 formula_37 where formula_38 is the number of points of the trajectories that fall in the square formula_30. 
Similarly, the components of the effective diffusion tensor formula_39 are approximated by the empirical sums formula_40 formula_41 formula_42 The moment estimation requires a large number of trajectories passing through each point, which agrees precisely with the massive data generated by certain types of super-resolution experiments, such as those acquired by the sptPALM technique on biological samples. The exact inversion of Langevin's equation demands, in theory, an infinite number of trajectories passing through any point x of interest. In practice, the recovery of the drift and diffusion tensor is obtained after a region is subdivided by a square grid of radius "r" or by moving sliding windows (of the order of 50 to 100 nm). Automated recovery of the boundary of a nanodomain. Algorithms based on mapping the density of points extracted from trajectories make it possible to reveal local binding and trafficking interactions and the organization of dynamic subcellular sites. The algorithms can be applied to study regions of high density, revealed by SPTs. Examples are organelles such as the endoplasmic reticulum, or cell membranes. The method is based on spatiotemporal segmentation to detect the local architecture and boundaries of high-density regions for domains measuring hundreds of nanometers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
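The moment estimators above can be checked on synthetic data. The sketch below (all parameter values and the single bin are arbitrary, illustrative choices) simulates two-dimensional overdamped Langevin trajectories with a known harmonic drift and constant diffusion coefficient, and then applies the empirical sums for the local drift and the diagonal diffusion components in one square bin.

```python
import numpy as np

rng = np.random.default_rng(0)
D_true, k_spring, dt = 0.05, 1.0, 0.01       # illustrative model parameters
n_traj, n_steps = 2000, 50

# Simulate short 2-D overdamped Langevin trajectories: dX = -k X dt + sqrt(2 D) dW
trajs = np.zeros((n_traj, n_steps, 2))
trajs[:, 0] = rng.uniform(-1, 1, size=(n_traj, 2))
for i in range(1, n_steps):
    noise = rng.normal(size=(n_traj, 2))
    trajs[:, i] = trajs[:, i - 1] - k_spring * trajs[:, i - 1] * dt \
                  + np.sqrt(2 * D_true * dt) * noise

# Binned estimators: average the increments of all trajectory points falling in one bin
r = 0.25                                     # bin side
centre = np.array([0.5, 0.0])                # one square bin centred here
points = trajs[:, :-1].reshape(-1, 2)
increments = (trajs[:, 1:] - trajs[:, :-1]).reshape(-1, 2)
in_bin = np.all(np.abs(points - centre) < r / 2, axis=1)

a_hat = increments[in_bin].mean(axis=0) / dt               # local drift estimate
D_hat = (increments[in_bin] ** 2).mean(axis=0) / (2 * dt)  # diagonal diffusion estimate
print(a_hat, "vs", -k_spring * centre)
print(D_hat, "vs", D_true)
```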
[ { "math_id": 0, "text": "\\langle|X(t+\\Delta t)- X(t)|^2\\rangle \\sim t^\\alpha" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "\\langle|X(t+\\Delta t)- X(t)|^2\\rangle=2 n Dt" }, { "math_id": 3, "text": "\\Xi" }, { "math_id": 4, "text": "F(x,t)" }, { "math_id": 5, "text": "m\\ddot x+\\Gamma \\dot x-F(x,t)=\\Xi," }, { "math_id": 6, "text": "\\Gamma= 6\\pi a \\rho" }, { "math_id": 7, "text": "\\rho" }, { "math_id": 8, "text": "\\delta" }, { "math_id": 9, "text": "F(x,t)=- U'(x)" }, { "math_id": 10, "text": "m\\frac{d^2 x}{dt^2} +\\Gamma \\frac{d x}{dt} +\\nabla U(x)=\\sqrt{2\\varepsilon\\gamma}\\,\\frac{d\\eta}{dt}," }, { "math_id": 11, "text": "\\varepsilon=k_\\text{B} T," }, { "math_id": 12, "text": "k_\\text{B}" }, { "math_id": 13, "text": "\\gamma\\to\\infty" }, { "math_id": 14, "text": "x(t)" }, { "math_id": 15, "text": "\\gamma \\dot{x}+U^\\prime (x)=\\sqrt{2\\varepsilon\\gamma}\\,\\dot{w}," }, { "math_id": 16, "text": "\\dot w(t) " }, { "math_id": 17, "text": "\\dot{X}(t)={b}(X(t)) +\\sqrt{2}{B}_e(X(t))\\dot{w}(t), \\qquad\\qquad (1) " }, { "math_id": 18, "text": "{b}(X) " }, { "math_id": 19, "text": "{B}_e \n" }, { "math_id": 20, "text": "D(X)=\\frac{1}{2} B(X) B^T X^T" }, { "math_id": 21, "text": "X^T " }, { "math_id": 22, "text": " X " }, { "math_id": 23, "text": "\\gamma" }, { "math_id": 24, "text": "\\Delta X= X(t+\\Delta t)- X(t)" }, { "math_id": 25, "text": "a( x)=\\lim_{\\Delta t \\rightarrow 0} \\frac{E[\\Delta X(t)\\mid X(t)= x]}{\\Delta t}," }, { "math_id": 26, "text": "D( x)=\\lim_{\\Delta t \\rightarrow 0} \\frac{E[\\Delta X(t)^T\\,\\Delta X(t)\\mid X(t)= x]}{2\\,\\Delta t}." }, { "math_id": 27, "text": "E[\\cdot\\,|\\, X(t)= x]" }, { "math_id": 28, "text": "\\Delta t" }, { "math_id": 29, "text": "\\Delta\nt" }, { "math_id": 30, "text": "S( x_k,r)" }, { "math_id": 31, "text": "x_k" }, { "math_id": 32, "text": "N_t" }, { "math_id": 33, "text": "\\{x^i(t_1),\\dots, x^i(t_{N_s}) \\}," }, { "math_id": 34, "text": "t_j" }, { "math_id": 35, "text": "a(x_k)=(a_x(x_k),a_y(x_k))" }, { "math_id": 36, "text": "a_x(x_k) \\approx \\frac{1}{N_k}\\sum_{j=1}^{N_t} \\sum_{i=0, \\tilde x^j_i\\in S(x_k,r)}^{N_s-1}\\left(\\frac{ x^j_{i+1}- x^j_i}{\\Delta t} \\right)" }, { "math_id": 37, "text": "a_y(x_k) \\approx \\frac{1}{N_k}\\sum_{j=1}^{N_t}\\sum_{i=0, \\tilde x^j_i\\in S(x_k,r)}^{N_s-1} \\left(\\frac{ y^j_{i+1}- y^j_i}{\\Delta t}\\right)," }, { "math_id": 38, "text": "N_k" }, { "math_id": 39, "text": "D( x_k)" }, { "math_id": 40, "text": "D_{xx}(x_k) \\approx \\frac{1}{N_k} \\sum_{j=1}^{N_t} \\sum_{i=0, x_i\\in S(x_k,r)}^{N_s-1} \\frac{(x^j_{i+1}-x^j_i)^2} {2\\,\\Delta t}," }, { "math_id": 41, "text": "D_{yy}(x_k) \\approx \\frac{1}{N_k} \\sum_{j=1}^{N_t} \\sum_{i=0,x_i\\in S(x_k,r)}^{N_s-1} \\frac{(y^j_{i+1}-y^j_i)^2} {2\\,\\Delta t}," }, { "math_id": 42, "text": "D_{xy}(x_k) \\approx \\frac{1}{N_k}\\sum_{j=1}^{N_t}\\sum_{i=0,x_i\\in S(x_k,r)}^{N_s-1}\\frac{(x^j_{i+1}-x^j_i)(y^j_{i+1}-y^j_i)}{2\\,\\Delta t}." } ]
https://en.wikipedia.org/wiki?curid=66535076
66540523
Schamel equation
The Schamel equation (S-equation) is a nonlinear partial differential equation of first order in time and third order in space. Similar to a Korteweg–de Vries equation (KdV), it describes the development of a localized, coherent wave structure that propagates in a nonlinear dispersive medium. It was first derived in 1973 by Hans Schamel to describe the effects of electron trapping in the trough of the potential of a solitary electrostatic wave structure travelling with ion acoustic speed in a two-component plasma. It now applies to various localized pulse dynamics such as: The equation. The Schamel equation is formula_0 where formula_1 stands for formula_2. In the case of ion-acoustic solitary waves, the parameter formula_3 reflects the effect of electrons trapped in the trough of the electrostatic potential formula_4. It is given by formula_5, where formula_6, the trapping parameter, reflects the status of the trapped electrons, formula_7 representing a flat-topped stationary trapped electron distribution, formula_8 a dip or depression. Here formula_9 holds, where formula_10 is the wave amplitude. All quantities are normalized: the potential energy by the electron thermal energy, the velocity by the ion sound speed, time by the inverse ion plasma frequency and space by the electron Debye length. Note that for a KdV equation formula_11 is replaced by formula_4 such that the nonlinearity becomes bilinear (see later). Solitary wave solution. The steady-state solitary wave solution, formula_12, is given in the comoving frame by: formula_13 formula_14 The speed of the structure is supersonic, formula_15, since formula_16 has to be positive, formula_17, which corresponds in the ion acoustic case to a depressed trapped electron distribution formula_18. Proof by pseudo-potential method. The proof of this solution uses the analogy to classical mechanics via formula_19 with formula_20 being the corresponding pseudo-potential. From this we get by an integration: formula_21, which represents the pseudo-energy, and from the Schamel equation: formula_22. Demanding that at the potential maximum, formula_23, the slope formula_24 of formula_4 vanishes, we get: formula_25. This is a nonlinear dispersion relation (NDR) because it determines the phase velocity formula_26 given by the second expression. The canonical form of formula_27 is obtained by replacing formula_26 with the NDR. It becomes: formula_28 The use of this expression in formula_29, which follows from the pseudo-energy law, yields by integration: formula_30 This is the inverse function of formula_31 as given in the first equation. Note that the integral in the denominator of formula_32 exists and can be expressed by known mathematical functions. Hence formula_33 is a mathematically disclosed function. However, the structure often remains mathematically undisclosed, i.e. it cannot be expressed by known functions (see for instance Sect. Logarithmic Schamel equation). This generally happens if more than one trapping scenario is involved, as e.g. in driven intermittent plasma turbulence. Non-integrability. In contrast to the KdV equation, the Schamel equation is an example of a non-integrable evolution equation. It only has a finite number of (polynomial) constants of motion and does not pass a Painlevé test. Since a so-called Lax pair ("L","P") does not exist, it is not integrable by the inverse scattering transform. Generalizations. Schamel–Korteweg–de Vries equation. 
Taking into account the next order in the expression for the expanded electron density, we get formula_34, from which we obtain the pseudo-potential -formula_35. The corresponding evolution equation then becomes: formula_36 which is the Schamel–Korteweg–de Vries equation. Its solitary wave solution reads formula_37 with formula_38 and formula_39. Depending on Q it has two limiting solitary wave solutions: For formula_40 we find formula_41, the Schamel solitary wave. For formula_42 we get formula_43, which represents the ordinary ion acoustic soliton. The latter is fluid-like and is achieved for formula_44 or formula_45, representing an isothermal electron equation of state. Note that the absence of a trapping effect ("b" = 0) does not imply the absence of trapping, a statement that is usually misrepresented in the literature, especially in textbooks. As long as formula_10 is nonzero, there is always a nonzero trapping width formula_46 in velocity space for the electron distribution function. Logarithmic Schamel equation. Another generalization of the S-equation is obtained in the case of ion acoustic waves by admitting a second trapping channel. By considering an additional, non-perturbative trapping scenario, Schamel obtained: formula_47, a generalization called the logarithmic S-equation. In the absence of the square root nonlinearity, formula_44, it is solved by a Gaussian-shaped hole solution: formula_48 with formula_49, and it has a supersonic phase velocity formula_50. The corresponding pseudo-potential is given by formula_51. From this follows formula_52, which is the inverse function of the Gaussian mentioned. For a non-zero b, keeping formula_53, the integral to get formula_54 can no longer be solved analytically, i.e. by known mathematical functions. A solitary wave structure still exists, but cannot be reached in a disclosed form. Schamel equation with random coefficients. The fact that electrostatic trapping involves stochastic processes at resonance caused by chaotic particle trajectories has led to considering b in the S-equation as a stochastic quantity. This results in a Wick-type stochastic S-equation. Time-fractional Schamel equation. A further generalization is obtained by replacing the first time derivative by a Riesz fractional derivative, yielding a time-fractional S-equation. It has applications e.g. for the broadband electrostatic noise observed by the Viking satellite. Schamel–Schrödinger equation. A connection between the Schamel equation and the nonlinear Schrödinger equation can be made within the context of a Madelung fluid. It results in the Schamel–Schrödinger equation formula_55 which has applications in fiber optics and laser physics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
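The sech⁴ solitary wave given in the "Solitary wave solution" section above can be checked numerically through the pseudo-energy relation, i.e. by verifying that half the squared slope of the profile equals the canonical pseudo-potential along the structure. A minimal sketch (the values of "b" and ψ are arbitrary illustrative choices):

```python
import numpy as np

b, psi = 1.0, 0.01                       # trapping parameter and wave amplitude (illustrative)
x = np.linspace(-40, 40, 4001)
phi = psi / np.cosh(np.sqrt(b * np.sqrt(psi) / 30) * x) ** 4   # sech^4 profile

# Canonical pseudo-potential: -V(phi) = (4 b / 15) phi^2 (sqrt(psi) - sqrt(phi))
minus_V = (4 * b / 15) * phi ** 2 * (np.sqrt(psi) - np.sqrt(phi))
phi_x = np.gradient(phi, x)

# Pseudo-energy phi_x^2 / 2 + V(phi) should vanish along the profile
residual = 0.5 * phi_x ** 2 - minus_V
print(np.max(np.abs(residual)))          # near zero, limited only by the finite-difference derivative
```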
[ { "math_id": 0, "text": " \\phi_t + (1 + b \\sqrt \\phi ) \\phi_x + \\phi_{xxx} = 0, " }, { "math_id": 1, "text": "\\phi_{(t,x)}" }, { "math_id": 2, "text": "\\partial_{(t,x)}\\phi" }, { "math_id": 3, "text": "b " }, { "math_id": 4, "text": "\\phi" }, { "math_id": 5, "text": "b=\\frac{1-\\beta}{\\sqrt \\pi}" }, { "math_id": 6, "text": "\\beta" }, { "math_id": 7, "text": "\\beta=0" }, { "math_id": 8, "text": "\\beta<0" }, { "math_id": 9, "text": "0\\le\\phi \\le\\psi\\ll 1" }, { "math_id": 10, "text": "\\psi" }, { "math_id": 11, "text": "b\\sqrt \\phi" }, { "math_id": 12, "text": "\\phi(x-v_0t)" }, { "math_id": 13, "text": " \\phi(x)=\\psi \\operatorname{sech}^4 \\left( \\sqrt{\\frac{b\\sqrt \\psi}{30}} x \\right) " }, { "math_id": 14, "text": " v_0 = 1 + \\frac{8}{15} b \\sqrt \\psi." }, { "math_id": 15, "text": "v_0>1" }, { "math_id": 16, "text": "b" }, { "math_id": 17, "text": "0<b" }, { "math_id": 18, "text": "\\beta <1" }, { "math_id": 19, "text": "\\phi_{xx} =: - \\mathcal {V} '(\\phi) " }, { "math_id": 20, "text": "\\mathcal {V}(\\phi)" }, { "math_id": 21, "text": "\\frac{\\phi_x^2}{2} + \\mathcal{V(\\phi)} = 0" }, { "math_id": 22, "text": " - \\mathcal{V}(\\phi) = \\frac{(v_0 - 1)}{2} \\phi^2 - \\frac{4b}{15} \\phi^{5/2}" }, { "math_id": 23, "text": "\\phi=\\psi" }, { "math_id": 24, "text": "\\phi_x" }, { "math_id": 25, "text": "\\mathcal{V}(\\psi)=0" }, { "math_id": 26, "text": "v_0 " }, { "math_id": 27, "text": "\\mathcal {V}(\\phi) " }, { "math_id": 28, "text": " - \\mathcal {V}(\\phi) =\\frac{4}{15} b \\phi^2 (\\sqrt \\psi -\\sqrt \\phi). " }, { "math_id": 29, "text": "x(\\phi)= \\int_\\phi^\\psi \\frac{d\\xi}{\\sqrt{-2\\mathcal{V}(\\xi)}} " }, { "math_id": 30, "text": " x(\\phi) =\\sqrt{ \\frac{30}{b\\sqrt\\psi}} \\tanh^{-1} \\left( \\sqrt{1-\\sqrt\\frac{\\phi}{\\psi}}\\right). " }, { "math_id": 31, "text": "\\phi(x)" }, { "math_id": 32, "text": "x(\\phi)" }, { "math_id": 33, "text": "\\phi (x) " }, { "math_id": 34, "text": "n_e= 1 + \\phi - \\frac{4 b}{3} \\phi^{3/2} + \\frac{1}{2}\\phi^2 + \\cdots " }, { "math_id": 35, "text": "\\mathcal{V}(\\phi)=\\frac{8b}{15}\\phi^2 (\\sqrt\\psi -\\sqrt\\phi) +\\frac{1}{3}\\phi^2 (\\psi -\\phi)" }, { "math_id": 36, "text": " \\phi_t + (1 + b \\sqrt \\phi + \\phi) \\phi_x + \\phi_{xxx} = 0," }, { "math_id": 37, "text": "\\phi(x)=\\psi \\operatorname{sech}^4 (y)\\left[1 + \\frac{1}{1+Q}\\tanh^2(y) \\right]^{-2} " }, { "math_id": 38, "text": "y=\\frac{x}{2}\\sqrt{\\frac{\\psi(1+Q)}{12}}" }, { "math_id": 39, "text": "Q=\\frac{8b}{5\\sqrt{ \\psi}}" }, { "math_id": 40, "text": "1\\ll Q" }, { "math_id": 41, "text": "\\phi(x)=\\psi \\operatorname{sech}^4(\\sqrt{\\frac{b\\sqrt \\psi}{30}} x)" }, { "math_id": 42, "text": "1\\gg Q" }, { "math_id": 43, "text": "\\phi(x)=\\psi \\operatorname{sech}^2(\\sqrt{\\frac{\\psi}{12}} x)" }, { "math_id": 44, "text": "b=0" }, { "math_id": 45, "text": "\\beta=1" }, { "math_id": 46, "text": "2\\sqrt{2\\phi}" }, { "math_id": 47, "text": "\\qquad \\qquad \\phi_t + (1 + b \\sqrt \\phi - D \\ln\\phi) \\phi_x + \\phi_{xxx} = 0" }, { "math_id": 48, "text": "\\phi(x)=\\psi e^{Dx^2/4}" }, { "math_id": 49, "text": "D<0" }, { "math_id": 50, "text": "v_0=1 + D(\\ln \\psi - 3/2) > 1" }, { "math_id": 51, "text": "-\\mathcal {V}(\\phi)= D \\frac{\\phi^2}{2}\\ln \\frac{\\phi}{\\psi}" }, { "math_id": 52, "text": "x(\\phi)= 2 \\sqrt{-D \\ln \\frac{\\psi}{\\phi}}" }, { "math_id": 53, "text": "D" }, { "math_id": 54, "text": "x (\\phi) " }, { "math_id": 55, "text": " i\\phi_t + |\\phi|^{1/2}\\phi + \\phi_{xx} = 0" } ]
https://en.wikipedia.org/wiki?curid=66540523
66541277
Sack–Schamel equation
Mathematical concept The Sack–Schamel equation describes the nonlinear evolution of the cold ion fluid in a two-component plasma under the influence of a self-organized electric field. It is a partial differential equation of second order in time and space formulated in Lagrangian coordinates. The dynamics described by the equation take place on an ionic time scale, which allows electrons to be treated as if they were in equilibrium and described by an isothermal Boltzmann distribution. Supplemented by suitable boundary conditions, it describes the entire configuration space of possible events the ion fluid is capable of, both globally and locally. The equation. The Sack–Schamel equation is in its simplest form, namely for isothermal electrons, given by formula_0 Therein formula_1 is the specific volume of the ion fluid, formula_2 the Lagrangian mass variable and t the time (see the following text). Derivation and application. We treat, as an example, the plasma expansion into vacuum, i.e. a plasma that is confined initially in a half-space and is released at t=0 to occupy in the course of time the second half. The dynamics of such a two-component plasma, consisting of isothermal Boltzmann-like electrons and a cold ion fluid, is governed by the ion equations of continuity and momentum, formula_3 and formula_4, respectively. Both species are thereby coupled through the self-organized electric field formula_5, which satisfies Poisson's equation, formula_6. Supplemented by suitable initial and boundary conditions (b.c.s), they form a self-consistent, intrinsically closed set of equations that represents the laminar ion flow in its full pattern on the ion time scale. Figs. 1a, 1b show an example of a typical evolution. Fig. 1a shows the ion density in "x"-space for different discrete times, Fig. 1b a small section of the density front. Most notable is the appearance of a spiky ion front associated with the collapse of density at a certain point in space-time formula_7. Here, the quantity formula_8 becomes zero. This event is known as "wave breaking" by analogy with a similar phenomenon that occurs with water waves approaching a beach. This result is obtained by a Lagrange numerical scheme, in which the Euler coordinates formula_9 are replaced by Lagrange coordinates formula_10, and by so-called open b.c.s, which are formulated by differential equations of the first order. This transformation is provided by formula_11, formula_12, where formula_13 is the Lagrangian mass variable. The inverse transformation is given by formula_14 and the identity formula_15 holds. With this identity we get, by differentiation with respect to "x", formula_16 or formula_17. In the second step, the definition of the mass variable was used, which is constant along the trajectory of a fluid element: formula_18. This follows from the definition of formula_19, from the continuity equation and from the replacement of formula_20 by formula_21. Hence formula_22. The velocity of a fluid element coincides with the local fluid velocity. It immediately follows: formula_23 where the momentum equation has been used as well as formula_24, which follows from the definition of formula_19 and from formula_25. Replacing formula_26 by formula_27 we get from Poisson's equation: formula_28. Hence formula_29. Finally, replacing formula_30 in the formula_31 expression we get the desired equation: formula_32. Here formula_33 is a function of formula_34: formula_35 and for convenience we may replace formula_36 by formula_37. 
Further details on this transition from one to the other coordinate system can be found in. Note its unusual character because of the implicit occurrence of formula_31. Physically V represents the specific volume. It is equivalent to the Jacobian J of the transformation from Eulerian to Lagrangian coordinates, since formula_38 Wave-breaking solution. An analytical, global solution of the Sack–Schamel equation is generally not available. The same holds for the plasma expansion problem. This means that the data formula_39 for the collapse cannot be predicted, but have to be taken from the numerical solution. Nonetheless, it is possible, locally in space and time, to obtain a solution to the equation. This is presented in detail in Sect. 6 "Theory of bunching and wave breaking in ion dynamics" of. The solution can be found in equation (6.37) and reads for small formula_19 and "t" formula_40 where formula_41 are constants and formula_42 stands for formula_43. The collapse is hence at formula_44. formula_45 is V-shaped in formula_46 and its minimum moves linearly with formula_47 towards the zero point (see Fig. 7 of ). This means that the density n diverges at formula_48 when we return to the original Lagrangian variables. It is easily seen that the slope of the velocity, formula_49, diverges as well when formula_50. In the final collapse phase, the Sack–Schamel equation transitions into the quasi-neutral scalar wave equation: formula_51 and the ion dynamics obeys Euler's simple wave equation: formula_52. Generalization. A generalization is achieved by allowing different equations of state for the electrons. Assuming a polytropic equation of state, formula_53 or with formula_54: formula_55 where formula_56 refers to isothermal electrons, we get (see again Sect. 6 of ): formula_57 The limitation of formula_58 results from the demand that at infinity the electron density should vanish (for the expansion into vacuum problem). For more details, see Sect. 2: "The plasma expansion model" of or more explicitly Sect. 2.2: "Constraints on the electron dynamics". Fast Ion Bunching. These results are remarkable in two respects. The collapse, which could be resolved analytically by the Sack–Schamel equation, signals through its singularity the absence of real physics. A real plasma can continue in at least two ways. Either it enters into the kinetic collisionless Vlasov regime and develops multi-streaming and folding effects in phase space, or it experiences dissipation (e.g. through Navier-Stokes viscosity in the momentum equation) which then controls the evolution in the subsequent phase. As a consequence, the ion density peak saturates and continues its acceleration into vacuum, maintaining its spiky nature. This phenomenon of fast ion bunching, recognized by its spiky fast ion front, has received immense attention in the recent past in several fields. High-energy ion jets are important and promising in applications such as laser-plasma interaction, laser irradiation of solid targets (also referred to as target normal sheath acceleration), future plasma-based particle accelerators and radiation sources (e.g. for tumor therapy) and space plasmas. Fast ion bunches are hence a relic of wave breaking that is analytically completely described by the Sack–Schamel equation. (For more details, especially about the spiky nature of the fast ion front in the case of dissipation, see http://www.hans-schamel.de or the original papers ). 
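The final collapse phase, in which the ion dynamics obeys Euler's simple wave equation formula_52, can be illustrated with the method of characteristics: v is constant along x(t) = x0 + v0(x0) t, and the profile breaks when two characteristics first cross, at t* = −1/min(dv0/dx0). A minimal sketch with an arbitrary illustrative initial profile:

```python
import numpy as np

# Characteristics of the simple wave equation v_t + v v_x = 0: x(t) = x0 + v0(x0) t,
# along which v stays constant.  Breaking (a multivalued profile / density collapse)
# occurs at t* = -1 / min(dv0/dx0).
x0 = np.linspace(-10, 10, 2001)
v0 = np.exp(-x0 ** 2)                      # illustrative initial velocity profile

dv0 = np.gradient(v0, x0)
t_break = -1.0 / dv0.min()                 # first time two characteristics cross
print("breaking time:", t_break)

# Positions of the fluid elements just before breaking; the spatial derivative of v
# (and hence the density) blows up where the characteristics converge.
t = 0.99 * t_break
x_t = x0 + v0 * t
```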
An article in which the Sack–Schamel wave-breaking mechanism is mentioned as the origin of a peaked ion front was published, for example, by Beck and Pantellini (2009). Finally, the relevance of the Sack–Schamel equation is further supported by a recently published molecular dynamics simulation. In the early phase of the plasma expansion, a distinct ion peak could be observed, emphasizing the importance of the wave-breaking scenario predicted by the equation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\ddot V + \\partial_\\eta \\left[\\frac{1}{1-\\ddot V} \\partial_\\eta \\left(\\frac{1-\\ddot V}{V}\\right) \\right] =0.\n" }, { "math_id": 1, "text": " V(\\eta,t)" }, { "math_id": 2, "text": " \\eta " }, { "math_id": 3, "text": "\\partial_t n + \\partial_x(nv)=0" }, { "math_id": 4, "text": "\\partial_t v + v\\,\\partial_xv=-\\partial_x \\varphi" }, { "math_id": 5, "text": "E (x, t) = - \\partial_x \\varphi (x, t)" }, { "math_id": 6, "text": "\\partial _x^2\\varphi= e^\\varphi - n" }, { "math_id": 7, "text": "(x_ *, t _ *)" }, { "math_id": 8, "text": "V:=1/n" }, { "math_id": 9, "text": "(x, t)" }, { "math_id": 10, "text": "(\\eta, \\tau)" }, { "math_id": 11, "text": "\\eta = \\eta (x, t)" }, { "math_id": 12, "text": "\\tau = t" }, { "math_id": 13, "text": "\\eta(x,t)=\\int_0^x n(\\tilde x,t) \\, d\\tilde x" }, { "math_id": 14, "text": "x=x(\\eta,\\tau), t=\\tau" }, { "math_id": 15, "text": "x(\\eta(x,t),\\tau)=x" }, { "math_id": 16, "text": "\\partial_x \\eta \\, \\partial_\\eta x = 1" }, { "math_id": 17, "text": "\\partial_\\eta x =\\frac{1}{\\partial_x \\eta} = \\frac{1}{n}=V" }, { "math_id": 18, "text": "(\\partial_t + v \\partial_x)\\eta(x,t)=0" }, { "math_id": 19, "text": "\\eta" }, { "math_id": 20, "text": "n" }, { "math_id": 21, "text": "\\partial_x \\eta" }, { "math_id": 22, "text": "\\partial_\\tau x(\\eta,\\tau) =: \\dot x(\\eta, \\tau) = v(\\eta,\\tau)" }, { "math_id": 23, "text": "\\ddot V=\\partial_\\eta \\ddot x=\\partial_\\eta \\dot v=\\partial_\\eta E =-\\partial_\\eta(\\frac{1}{V} \\, \\partial_\\eta \\varphi)" }, { "math_id": 24, "text": "\\partial_x\\varphi=\\frac{1}{V}\\partial_\\eta\\varphi" }, { "math_id": 25, "text": "\\partial_x \\eta=n=\\frac{1}{V}" }, { "math_id": 26, "text": "\\partial_x" }, { "math_id": 27, "text": "\\frac{1}{V}\\partial_\\eta" }, { "math_id": 28, "text": "\\partial_\\eta\\left( \\frac{1}{V} \\partial_\\eta\\varphi \\right)= V e^\\varphi-1 = -\\ddot V" }, { "math_id": 29, "text": "\\varphi=\\ln\\left(\\frac{1-\\ddot V}{V}\\right)" }, { "math_id": 30, "text": "\\varphi" }, { "math_id": 31, "text": "\\ddot V" }, { "math_id": 32, "text": "\\ddot V + \\partial_\\eta \\left[ \\frac{1}{1-\\ddot V} \\partial_\\eta \\left(\\frac{1-\\ddot V}{V}\\right)\\right]=0" }, { "math_id": 33, "text": "V" }, { "math_id": 34, "text": "(\\eta,\\tau)" }, { "math_id": 35, "text": "V(\\eta,\\tau)" }, { "math_id": 36, "text": "\\tau" }, { "math_id": 37, "text": "t" }, { "math_id": 38, "text": "dx= \\frac{dx}{d\\eta} \\, d\\eta= V \\, d\\eta= J \\,d\\eta." 
}, { "math_id": 39, "text": "(x _ *, t _ *) " }, { "math_id": 40, "text": "V(\\eta,t) = at \\left[ 1 + \\frac{t}{2a} - b\\eta + c(\\eta^2-\\Omega^2 t^2) + d(\\eta-\\Omega t)^2(\\eta + 2\\Omega t) +\\cdots\\right]" }, { "math_id": 41, "text": "a,b,c,d,\\Omega" }, { "math_id": 42, "text": "(\\eta,t)" }, { "math_id": 43, "text": "(\\eta_*-\\eta,t_*-t)" }, { "math_id": 44, "text": "(\\eta, t) = (0,0) " }, { "math_id": 45, "text": "V (\\eta, t) " }, { "math_id": 46, "text": "\\eta " }, { "math_id": 47, "text": "\\eta = \\Omega t" }, { "math_id": 48, "text": "( \\eta _ *, t _ *) " }, { "math_id": 49, "text": "\\partial_x v=\\frac{1}{V}\\partial_\\eta v" }, { "math_id": 50, "text": "V \\rightarrow 0" }, { "math_id": 51, "text": "\\ddot V + \\partial_\\eta^2 \\frac{1}{V}=0" }, { "math_id": 52, "text": "\\partial_t v + v\\,\\partial_xv=0" }, { "math_id": 53, "text": "p_e \\sim n_e^\\gamma" }, { "math_id": 54, "text": "p_e\\sim n_e T_e" }, { "math_id": 55, "text": "T_en_e^{1-\\gamma} = \\text{constant}," }, { "math_id": 56, "text": "\\gamma=1" }, { "math_id": 57, "text": "\\ddot V + \\partial_\\eta \\left[ \\frac{\\gamma}{V}\\left(\\frac{1-\\ddot V}{V}\\right)^{\\gamma-2} \\,\\partial_\\eta \\left(\\frac{1-\\ddot V}{V} \\right) \\right]=0,\\qquad1\\le\\gamma\\le2" }, { "math_id": 58, "text": "\\gamma" } ]
https://en.wikipedia.org/wiki?curid=66541277
665435
American Mathematics Competitions
Secondary school US competition The American Mathematics Competitions (AMCs) are the first of a series of competitions in secondary school mathematics sponsored by the Mathematical Association of America that determine the United States of America's team for the International Mathematical Olympiad (IMO). The selection process takes place over the course of roughly five stages. At the last stage, the US selects six members to form the IMO team. There are three AMC competitions held each year: Students who perform well on the AMC 10 or AMC 12 competitions are invited to participate in the American Invitational Mathematics Examination (AIME). Students who perform exceptionally well on the AMC 12 and AIME are invited to the United States of America Mathematical Olympiad (USAMO), while students who perform exceptionally well on the AMC 10 and AIME are invited to United States of America Junior Mathematical Olympiad (USAJMO). Students who do exceptionally well on the USAMO (typically around 45 students based on score and grade level) and USAJMO (typically around the top 15 students) are invited to attend the Mathematical Olympiad Program (MOP). History. The AMC contest series includes the American Mathematics Contest 8 (AMC 8) (formerly the American Junior High School Mathematics Examination) for students in grades 8 and below, begun in 1985; the American Mathematics Contest 10 (AMC 10), for students in grades 9 and 10, begun in 2000; the American Mathematics Contest 12 (AMC 12) (formerly the American High School Mathematics Examination) for students in grades 11 and 12, begun in 1950; the American Invitational Mathematics Examination (AIME), begun in 1983; and the USA Mathematical Olympiad (USAMO), begun in 1972. Rules and scoring. AMC 8. The AMC 8 is a 25 multiple-choice question, 40-minute competition designed for middle schoolers. No problems require the use of a calculator, and their use has been banned since 2008. The competition was previously held on a Thursday in November. However, after 2022, the competition has been held in January. The AMC 8 is a standalone competition; students cannot qualify for the AIME via their AMC 8 score alone. The AMC 8 is scored based on the number of questions answered correctly only. There is no penalty for getting a question wrong, and each question has equal value. Thus, a student who answers 23 questions correctly and 2 questions incorrectly receives a score of 23. Rankings and awards. Ranking Based on questions correct: Awards AMC 10 and AMC 12. The AMC 10 and AMC 12 are 25 question, 75-minute multiple choice competitions in secondary school mathematics containing problems which can be understood and solved with pre-calculus concepts. Calculators have not been allowed on the AMC 10/12 since 2008. High scores on the AMC 10 or 12 can qualify the participant for the American Invitational Mathematics Examination (AIME). The competitions are scored based on the number of questions answered correctly and the number of questions left blank. A student receives 6 points for each question answered correctly, 1.5 points for each question left blank, and 0 points for incorrect answers. Thus, a student who answers 24 correctly, leaves 1 blank, and misses 0 gets formula_0 points. The maximum possible score is formula_1 points; in 2020, the AMC 12 had a total of 18 perfect scores between its two administrations, and the AMC 10 also had 18. 
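The current AMC 10/12 scoring rule described above (6 points per correct answer, 1.5 points per blank, 0 per wrong answer on 25 questions) can be expressed as a small helper; the function name is illustrative.

```python
def amc_score(correct: int, blank: int, wrong: int) -> float:
    """Current AMC 10/12 scoring: 6 per correct, 1.5 per blank, 0 per wrong (25 questions)."""
    assert correct + blank + wrong == 25
    return 6.0 * correct + 1.5 * blank

print(amc_score(24, 1, 0))   # 145.5, matching the example above
print(amc_score(25, 0, 0))   # 150.0, a perfect score
```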
From 1974 until 1999, the competition (then known as the American High School Math Examination, or AHSME) had 30 questions and was 90 minutes long, scoring 5 points for correct answers. Originally during this time, 1 point was awarded for leaving an answer blank, however, it was changed in the late 1980s to 2 points. When the competition was shortened as part of the 2000 rebranding from AHSME to AMC, the value of a correct answer was increased to 6 points and the number of questions reduced to 25 (keeping 150 as a perfect score). In 2001, the score of a blank was increased to 2.5 to penalize guessing. The 2007 competitions were the first with only 1.5 points awarded for a blank, to discourage students from leaving a large number of questions blank in order to assure qualification for the AIME. For example, prior to this change, on the AMC 12, a student could advance with only 11 correct answers, presuming the remaining questions were left blank. After the change, a student must answer 14 questions correctly to reach 100 points. The competitions have historically overlapped to an extent, with the medium-hard AMC 10 questions usually being the same as the medium-easy ones on the AMC 12. However, this trend has diverged recently, and questions that are in both the AMC 10 and 12 are in increasingly similar positions. Problem 18 on the 2022 AMC 10A was the same as problem 18 on the 2022 AMC 12A. Since 2002, two administrations have been scheduled, so as to avoid conflicts with school breaks. Students are eligible to compete in an A competition and a B competition, and may even take the AMC 10-A and the AMC 12-B, though they may not take both the AMC 10 and AMC 12 from the same date. If a student participates in both competitions, they may use either score towards qualification to the AIME or USAMO/USAJMO. In 2021, the competition format was changed to occur in the Fall instead of the Spring. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "24 \\times 6 + 1.5 \\times 1 = 145.5" }, { "math_id": 1, "text": "25 \\times 6 = 150" } ]
https://en.wikipedia.org/wiki?curid=665435
66544639
Sequence analysis of synthetic polymers
The methods for sequence analysis of synthetic polymers differ from the sequence analysis of biopolymers (e.g. DNA or proteins). Synthetic polymers are produced by chain-growth or step-growth polymerization and thereby show polydispersity, whereas biopolymers are synthesized by complex template-based mechanisms and are sequence-defined and monodisperse. Synthetic polymers are a mixture of macromolecules of different length and sequence and are analysed via statistical measures (e.g. the degree of polymerization, comonomer composition or dyad and triad fractions). NMR-based sequencing. Nuclear magnetic resonance (NMR) spectroscopy is known as the most widely applied and “one of the most powerful techniques” for the sequence analysis of synthetic copolymers. NMR spectroscopy allows determination of the relative abundance of comonomer sequences at the level of dyads and, in cases of small repeat units, even triads or more. It also allows the detection and quantification of chain defects and chain end groups, cyclic oligomers and by-products. However, a limitation of NMR spectroscopy is that it cannot, so far, provide information about the sequence distribution along the chain, such as gradients, clusters or a long-range order. Example: Copolymer of PET and PEN. Monitoring the relative abundance of comonomer sequences is a common technique and is used, for example, to observe the progress of transesterification reactions between polyethylene terephthalate (PET) and polyethylene naphthalate (PEN) in their blends. During such a transesterification reaction, three resonances representing four diads can be distinguished via 1H NMR spectroscopy by different chemical shifts of the oxyethylene units: The diads -terephthalate-oxyethylene-terephthalate- (TET) and -naphthalate-oxyethylene-naphthalate- (NEN), which are also present in the homopolymers polyethylene naphthalate and polyethylene terephthalate, as well as the (indistinguishable) diads -terephthalate-oxyethylene-naphthalate- (TEN) and -naphthalate-oxyethylene-terephthalate- (NET), which are exclusively present in the copolymer. In the spectrum of a 1:1 physical PET/PEN mixture, only the resonances corresponding to the diads TET and NEN are present at 4.90 and 5.00 ppm, respectively. Once a transesterification reaction occurs, a new resonance at 4.95 ppm emerges that increases in intensity with the reaction time, corresponding to the TEN / NET sequences. The example of polyethylene naphthalate and polyethylene terephthalate is relatively simple, as only the aromatic part of the polymers differs (naphthalate "vs." terephthalate). In a blend of polyethylene naphthalate and polytrimethylene terephthalate, six resonances can already be distinguished, since both oxyethylene and oxypropylene form three resonances. The sequence patterns can become even more complex when triads can be distinguished spectroscopically. The extractable information is limited by the difference in chemical shift and the resonance width. In addition to 1H NMR spectroscopy, 13C NMR spectroscopy is also a common method for the sequencing shown above; it is characterized in particular by a very narrow resonance width. Deconvolution and assignment of these triad-based resonances allows a quantitative determination of the degree of randomness and the average block length via integration of the distinguishable resonances. 
In a 1:1 mixture of two linear two-component 1:1 polycondensates (A1B1)n and (A2B2)n (with molecular weight high enough to neglect chain ends), the following two equations are valid: [Ai] = [Bi], wherein i = 1, 2 (1) [A1B2] = [A2B1] (2) Equation 1 states that the molar ratio of all four repeat units is identical, and equation 2 states that both types of copolymer are of identical concentration. In this case, the degree of randomness "χ" is calculated as given by equation 3: formula_0, wherein i, j = 1, 2 (3) In the beginning of a transreaction process (e.g. transesterification or transamidation), the degree of randomness "χ" ≈ 0, as the system comprises a physical mixture of homopolymers or block copolymers. During the transreaction process "χ" increases up to "χ" = 1 for a fully random copolymer. If "χ" &gt; 1, it indicates a tendency of the monomers to form an alternating structure, up to "χ" = 2 for a completely alternating copolymer. The degree of randomness "χ" thereby gives statistical information about the polymer sequence. The calculation can be modified for three-component and four-component polycondensates. Application. NMR spectroscopy is used in industrially relevant systems to study the sequence distribution of copolymers or the occurrence of transesterification in polyester blends. A change in sequence distribution can affect the crystallinity, and transesterification can affect the compatibility of two otherwise incompatible polyesters. Depending on their degree of randomness, copolyesters can show different thermal transitions and behaviours. Other sequencing. Other options besides traditional NMR spectroscopy for sequence analysis are listed here; these include the Kerr effect for the characterization of polymer microstructures, MALDI-TOF mass spectrometry, depolymerization (controlled chemical degradation of macromolecules) via chain-end depolymerization (i.e., unzipping) and nanopore analysis (most of such reported studies, however, have focused on poly(ethylene glycol), PEG). References. "This article incorporates text by Marcus Knappert available under the license."
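As a rough numerical illustration of the degree-of-randomness idea for a two-component 1:1 copolyester, the sketch below divides the measured heterodiad fraction by its value for a fully random chain, which is one common convention and reproduces the limits quoted above (χ ≈ 0 for a physical blend or block copolymer, 1 for a random copolymer, 2 for an alternating one). The exact normalization of equation 3 should be taken from the cited source, so this is only a consistency check, not the article's formula; the function name and test values are illustrative.

```python
def degree_of_randomness(f_TET: float, f_NEN: float, f_hetero: float) -> float:
    """Heterodiad fraction divided by its fully-random value (one common convention)."""
    total = f_TET + f_NEN + f_hetero          # normalize the integrated diad intensities
    f_TET, f_NEN, f_hetero = f_TET / total, f_NEN / total, f_hetero / total
    F_T = f_TET + f_hetero / 2                # mole fraction of terephthalate units
    F_N = f_NEN + f_hetero / 2                # mole fraction of naphthalate units
    return f_hetero / (2 * F_T * F_N)

print(degree_of_randomness(0.50, 0.50, 0.00))  # ~0 : physical blend / block copolymer
print(degree_of_randomness(0.25, 0.25, 0.50))  # ~1 : fully random 1:1 copolymer
print(degree_of_randomness(0.00, 0.00, 1.00))  # ~2 : fully alternating chain
```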
[ { "math_id": 0, "text": "\\chi = [\\frac{A_i B_j}{A_1 A_2 }]" } ]
https://en.wikipedia.org/wiki?curid=66544639
66545098
1 Chronicles 5
First Book of Chronicles, chapter 5 1 Chronicles 5 is the fifth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter focuses on the Transjordanian tribes, geographically from south to north: Reuben (verses 1–10), Gad (verses 11–17) and the half tribe of Manasseh (verses 23–24), as well as the account of the war against the Hagrites (verses 10, 18–22) and the reasoning why Transjordanian tribes were taken away into exile (verses 25–26). It belongs to the section focusing on the list of genealogies from Adam to the lists of the people returning from exile in Babylon ( to 9:34). Text. This chapter was originally written in the Hebrew language. It is divided into 26 verses in English Bibles, but counted to 41 verses in Hebrew Bible using a different verse numbering (see below). Verse numbering. There are some differences in verse numbering of this chapter in English Bibles and Hebrew texts as follows: This article generally follows the common numbering in Christian English Bible versions, with notes to the numbering in Hebrew Bible versions. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Structure. The whole chapter belongs to an arrangement comprising 1 Chronicles 2:3–8:40 with the king-producing tribes of Judah (David; 2:3–4:43) and Benjamin (Saul; 8:1–40) bracketing the series of lists as the priestly tribe of Levi (6:1–81) anchors the center, in the following order: A David's royal tribe of Judah (2:3–4:43) B Northern tribes east of Jordan (5:1–26) X The priestly tribe of Levi (6:1–81) B' Northern tribes west of Jordan (7:1–40) A' Saul's royal tribe of Benjamin (8:1–40) Descendants of Reuben (5:1–10). This section begins with explanation (a kind of midrash) that Reuben did not receive the rights of a firstborn son of Jacob because he slept with Bilhah, his father's concubine (; cf. ). The firstborn rights were passed on to the two sons of Joseph, whereas the leadership was given to Judah (underlined in verse 2 and reflected in its prominence in the lists of tribes themselves) with an unnamed "chief ruler" (certainly pointing to David). Reuben's four sons are only named in verse 4. "Now the sons of Reuben the firstborn of Israel—he was indeed the firstborn, but because he defiled his father's bed, his birthright was given to the sons of Joseph, the son of Israel, so that the genealogy is not listed according to the birthright;" "For Judah prevailed above his brethren, and of him came the chief ruler; but the birthright was Joseph's:)." "Beerah his son, whom Tilgathpilneser king of Assyria carried away captive: he was prince of the Reubenites." Descendants of Gad (5:11–17). This section focuses on the tribe of Gad, which settled in the area east of the Jordan river ("Transjordan"), along with the tribes of Reuben and Manasseh (half of the tribe). 
The close relationship among these tribes is noted in ; ; . The sources of the genealogies of the descendants of Gad are the documents compiled during the reign of Jotham, King of Judah (c. 750–735 BCE), and Jeroboam, King of Israel (c. 793–753 BCE), that bear no resemblance to other parts of the Bible (cf. ; ). "And they dwelt in Gilead in Bashan, and in her towns, and in all the suburbs of Sharon, upon their borders." The war against the Hagrites (5:18–22). This section elaborates the conflict against the Hagrites (descendants of Hagar) during the reign of Saul, as briefly mentioned in verse 10 (also in , where the group was mentioned along with Edom, Ishmael, and Moab), over pastureland. Descendants of Manasseh (5:23–24). This section focuses on the half-tribe of Manasseh, which settled in the area east of the Jordan river ("Transjordan"), along with the tribes of Reuben and Gad. The close relationship among these tribes is noted in ; ; , . The exile of Transjordanian tribes (5:25–26). This passage combines the two-phases of the northern Israel kingdom ( and ; ) into a single exile of the Transjordanian tribes, by taking the name of the king from the first, whilst using the deportation place-names of the second phase. Historical documents only record that Tiglath-pileser conquered Gilead in the east of Jordan. "And the God of Israel stirred up the spirit of Pul king of Assyria, and the spirit of Tilgathpilneser king of Assyria, and he carried them away, even the Reubenites, and the Gadites, and the half tribe of Manasseh, and brought them unto Halah, and Habor, and Hara, and to the river Gozan, unto this day." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=66545098
66547082
JPL sequence
JPL sequences or JPL codes consist of two linear feedback shift registers (LFSRs) whose code sequence lengths "L"a and "L"b must be relatively prime (coprime). In this case the code sequence length of the generated overall sequence "L"c is equal to: formula_0 It is also possible for more than two LFSRs to be interconnected through multiple XORs at the output, as long as the code sequence lengths of all the individual LFSRs are relatively prime to one another. JPL sequences were originally developed at the Jet Propulsion Laboratory, from which the name of these code sequences is derived. Areas of application include distance measurements utilizing spread-spectrum signals for satellites and in space technology. They are also utilized in the more precise military P/Y code used in the Global Positioning System (GPS). However, they are currently being replaced by the new M-code. Due to the relatively long spreading sequences, they can be used to measure relatively long ranges without ambiguities, as required for deep space missions. With a rough synchronization between receiver and transmitter, this can also be achieved with shorter sequences. Their major advantage is that they produce relatively long sequences with only two LFSRs, which makes them energy-efficient and very hard to detect due to the huge spreading factor. The same structure can be used to realize a dither generator, used as an additive noise source to remove a numerical bias in digital computations (due to fixed-point arithmetic, which has one more negative than positive number, i.e. the mean value is slightly negative). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
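A minimal sketch of the construction described above: two maximal-length LFSRs with coprime periods are run in parallel and their output bits are XORed, giving a combined sequence whose period is the product of the two component periods. The feedback taps below are standard illustrative choices of primitive polynomials, not taken from any particular JPL ranging code.

```python
def lfsr(taps, state, length):
    """Fibonacci LFSR: 'taps' are 1-indexed register positions XORed into the feedback."""
    out = []
    for _ in range(length):
        out.append(state[-1])                      # output bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]                  # shift right, insert feedback bit
    return out

# Two maximal-length LFSRs with coprime periods 2**3 - 1 = 7 and 2**5 - 1 = 31
a = lfsr(taps=[3, 2], state=[1, 0, 0], length=2 * 217)
b = lfsr(taps=[5, 3], state=[1, 0, 0, 0, 0], length=2 * 217)
jpl = [x ^ y for x, y in zip(a, b)]                # JPL sequence, period 7 * 31 = 217

assert jpl[:217] == jpl[217:]                      # the period divides 7 * 31 = 217 ...
assert all(jpl[:217 - s] != jpl[s:217] for s in (7, 31))   # ... but is neither 7 nor 31
```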
[ { "math_id": 0, "text": "L_c = L_a \\cdot L_b = (2^n - 1)(2^m - 1)" } ]
https://en.wikipedia.org/wiki?curid=66547082
66564318
Maximum energy product
In magnetics, the maximum energy product is an important figure-of-merit for the strength of a permanent magnet material. It is often denoted ("BH")max and is typically given in units of either kJ/m3 (kilojoules per cubic meter, in SI units) or MGOe (mega-gauss-oersted, in Gaussian units). 1 MGOe is equivalent to 100/(4π) kJ/m3 ≈ 7.96 kJ/m3. During the 20th century, the maximum energy product of commercially available magnetic materials rose from around 1 MGOe (e.g. in KS Steel) to over 50 MGOe (in neodymium magnets). Other important permanent magnet properties include the remanence ("B"r) and coercivity ("H"c); these quantities are also determined from the saturation loop and are related to the maximum energy product, though not directly. Definition and significance. The maximum energy product is defined based on the magnetic hysteresis saturation loop (B-H curve), in the demagnetizing portion where the B and H fields are in opposition. It is defined as the maximal value of the product of B and H along this curve (actually, the maximum of the negative of the product, −"BH", since they have opposing signs): formula_0 Equivalently, it can be graphically defined as the area of the largest rectangle that can be drawn between the origin and the saturation demagnetization B-H curve (see figure). The significance of ("BH")max is that the volume of magnet necessary for any given application tends to be inversely proportional to ("BH")max. This is illustrated by considering a simple magnetic circuit containing a permanent magnet of volume Volmag and an air gap of volume Volgap, connected to each other by a magnetic core. Suppose the goal is to reach a certain field strength "B"gap in the gap. In such a situation, the total magnetic energy in the gap (volume-integrated magnetic energy density) is directly equal to half the volume-integrated −"BH" in the magnet: formula_1 thus, in order to achieve the desired magnetic field in the gap, the required volume of magnet can be minimized by maximizing −"BH" in the magnet. By choosing a magnetic material with a high ("BH")max, and also choosing the aspect ratio of the magnet so that its −"BH" is equal to ("BH")max, the required volume of magnet to achieve a target flux density in the air gap is minimized. This expression assumes that the permeability of the core connecting the magnetic material to the air gap is infinite, so, contrary to what the equation might suggest, one cannot obtain an arbitrarily large flux density in the air gap by decreasing the gap distance: a real core will eventually saturate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
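As a hedged numerical illustration (not part of the article), the sketch below evaluates (BH)max from a sampled demagnetization curve. It assumes an idealised straight-line curve B(H) = Br + mu0*H with a made-up, NdFeB-like remanence of 1.4 T, for which the maximum of -BH is Br^2/(4*mu0); for a real material the curve would come from measured second-quadrant data instead.

```python
import numpy as np

mu0 = 4e-7 * np.pi          # vacuum permeability, T*m/A
Br = 1.4                    # remanence in tesla (assumed example value)

# Second-quadrant (demagnetizing) portion: H opposes B, from -Hc to 0.
H = np.linspace(-Br / mu0, 0.0, 100001)
B = Br + mu0 * H            # idealised linear demagnetization curve

energy_product = -(B * H)   # -BH in J/m^3 along the curve
BHmax = energy_product.max()

print("numerical (BH)max  : %.1f kJ/m^3" % (BHmax / 1e3))
print("analytic Br^2/4mu0 : %.1f kJ/m^3" % (Br**2 / (4 * mu0) / 1e3))
print("in MGOe            : %.1f" % (BHmax / 7.9577e3))   # 1 MGOe ~ 7.96 kJ/m^3
```

For the assumed remanence this gives roughly 390 kJ/m3, i.e. about 49 MGOe, consistent with the order of magnitude quoted above for neodymium magnets.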
[ { "math_id": 0, "text": "(BH)_{\\rm max} \\equiv \\operatorname{max}(-B \\cdot H)." }, { "math_id": 1, "text": "E_{\\rm gap} = \\frac{1}{2\\mu_0}(B_{\\rm gap})^2 {\\rm Vol}_{\\rm gap} = -\\frac{1}{2} B_{\\rm mag} H_{\\rm mag}{\\rm Vol}_{\\rm mag} = -E_{\\rm mag}, " } ]
https://en.wikipedia.org/wiki?curid=66564318
66570
Dalton's law
Empirical law of partial pressures Dalton's law (also called Dalton's law of partial pressures) states that in a mixture of non-reacting gases, the total pressure exerted is equal to the sum of the partial pressures of the individual gases. This empirical law was observed by John Dalton in 1801 and published in 1802. Dalton's law is related to the ideal gas laws. Formula. Mathematically, the pressure of a mixture of non-reactive gases can be defined as the summation: formula_0 where "p"1, "p"2, ..., "pn" represent the partial pressures of each component. The partial pressure of each individual component can in turn be expressed through its mole fraction: formula_1 where "xi" is the mole fraction of the "i"th component in the total mixture of "n" components. Volume-based concentration. The relationship below provides a way to determine the volume-based concentration of any individual gaseous component: formula_2 where "ci" is the concentration of component "i". Dalton's law is not strictly followed by real gases, with the deviation increasing with pressure. Under such conditions the volume occupied by the molecules becomes significant compared to the free space between them. In particular, the short average distances between molecules increase intermolecular forces between gas molecules enough to substantially change the pressure exerted by them, an effect not included in the ideal gas model. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
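A small illustrative sketch (not from the article): computing partial pressures from mole fractions with Dalton's law. The gas composition and total pressure are assumed example values (roughly dry air at one atmosphere).

```python
# Dalton's law: p_i = x_i * p_total, and sum(p_i) = p_total.

moles = {"N2": 78.09, "O2": 20.95, "Ar": 0.93, "CO2": 0.04}  # relative amounts
p_total = 101.325   # total pressure in kPa (about 1 atm)

n_total = sum(moles.values())
partial = {gas: p_total * n / n_total for gas, n in moles.items()}

for gas, p in partial.items():
    print(f"{gas}: x = {moles[gas] / n_total:.4f}, p = {p:.2f} kPa")

print("sum of partial pressures:", round(sum(partial.values()), 3), "kPa")
```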
[ { "math_id": 0, "text": "p_\\text{total} = \\sum_{i=1}^n p_i = p_1+p_2+p_3+\\cdots+p_n" }, { "math_id": 1, "text": "p_{i} = p_\\text{total} x_i" }, { "math_id": 2, "text": "p_i = p_\\text{total} c_i" } ]
https://en.wikipedia.org/wiki?curid=66570
66570719
Kapteyn series
A Kapteyn series is a series expansion of analytic functions on a domain in terms of the Bessel function of the first kind. Kapteyn series are named after Willem Kapteyn, who first studied such series in 1893. Let formula_0 be a function analytic on the domain formula_1 with formula_2. Then formula_0 can be expanded in the form formula_3 where formula_4 The path of integration is the boundary of formula_5. Here formula_6, and for formula_7, formula_8 is defined by formula_9 Kapteyn's series are important in physical problems. Among other applications, the solution formula_10 of Kepler's equation formula_11 can be expressed via a Kapteyn series: formula_12 Relation between the Taylor coefficients and the "αn" coefficients of a function. Let us suppose that the Taylor series of formula_0 reads as formula_13 Then the formula_14 coefficients in the Kapteyn expansion of formula_0 can be determined as follows. formula_15 Examples. The Kapteyn series of the powers of formula_16 were found by Kapteyn himself: formula_17 For formula_18 it follows (see also ) formula_19 and for formula_20 formula_21 Furthermore, inside the region formula_22, formula_23 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
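As a hedged numerical illustration (not part of the article), the sketch below sums the truncated Kapteyn series for Kepler's equation quoted above and checks the result against the defining relation. It assumes SciPy is available for the Bessel functions J_n; the values of M, e and the number of terms are arbitrary example choices (the series converges for eccentricities below the Laplace limit, roughly 0.66).

```python
import math
from scipy.special import jv   # J_n(x), Bessel function of the first kind

def eccentric_anomaly(M, e, terms=60):
    """Truncated Kapteyn series E = M + 2*sum_{n>=1} sin(n M)/n * J_n(n e)."""
    E = M
    for n in range(1, terms + 1):
        E += 2.0 * math.sin(n * M) / n * jv(n, n * e)
    return E

M = 1.0      # mean anomaly in radians (example value)
e = 0.3      # orbital eccentricity (example value)

E = eccentric_anomaly(M, e)
print("E =", E)
# Plugging back into Kepler's equation M = E - e*sin(E) should give ~0 residual.
print("residual M - (E - e*sin E) =", M - (E - e * math.sin(E)))
```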
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "D_a = \\left\\{z\\in\\mathbb{C}:\\Omega(z)=\\left|\\frac{z\\exp\\sqrt{1-z^2}}{1+\\sqrt{1-z^2}}\\right|\\le a\\right\\}" }, { "math_id": 2, "text": "a<1" }, { "math_id": 3, "text": "f(z) = \\alpha_0 + 2\\sum_{n=1}^\\infty \\alpha_n J_n(nz)\\quad(z\\in D_a)," }, { "math_id": 4, "text": "\n\\alpha_n = \\frac{1}{2\\pi i}\\oint\\Theta_n(z)f(z)dz.\n" }, { "math_id": 5, "text": "D_a" }, { "math_id": 6, "text": "\\Theta_0(z)=1/z" }, { "math_id": 7, "text": "n>0" }, { "math_id": 8, "text": "\\Theta_n(z)" }, { "math_id": 9, "text": "\n\\Theta_n(z) = \\frac14\\sum_{k=0}^{\\left[\\frac{n}2\\right]}\\frac{(n-2k)^2(n-k-1)!}{k!}\\left(\\frac{nz}{2}\\right)^{2k-n}\n" }, { "math_id": 10, "text": "E" }, { "math_id": 11, "text": "M=E-e\\sin E" }, { "math_id": 12, "text": "\nE=M+2\\sum_{n=1}^\\infty\\frac{\\sin(nM)}{n}J_n(ne).\n" }, { "math_id": 13, "text": "\nf(z)=\\sum_{n=0}^\\infty a_nz^n.\n" }, { "math_id": 14, "text": "\\alpha_n" }, { "math_id": 15, "text": "\n\\begin{align}\n\\alpha_0 &= a_0,\\\\\n\\alpha_n &= \\frac14\\sum_{k=0}^{\\left\\lfloor\\frac{n}2 \\right\\rfloor}\\frac{(n-2k)^2(n-k-1)!}{k!(n/2)^{(n-2k+1)}}a_{n-2k}\\quad(n\\ge1).\n\\end{align}\n" }, { "math_id": 16, "text": "z" }, { "math_id": 17, "text": "\n\\left(\\frac{z}{2}\\right)^{n}=n^{2} \\sum_{m=0}^\\infty \\frac{(n+m-1)!}{(n+2 m)^{n+1}\\, m!} J_{n+2 m}\\{(n+2 m) z\\}\\quad(z\\in D_1).\n" }, { "math_id": 18, "text": "n = 1" }, { "math_id": 19, "text": "\nz = 2 \\sum_{k=0}^\\infty \\frac{J_{2k+1}((2k+1)z)}{(2k+1)^2},\n" }, { "math_id": 20, "text": "n = 2" }, { "math_id": 21, "text": "\nz^2 = 2 \\sum_{k=1}^\\infty \\frac{J_{2k}(2kz)}{k^2}.\n" }, { "math_id": 22, "text": "D_1" }, { "math_id": 23, "text": "\n\\frac{1}{1-z} = 1 + 2 \\sum_{k=1}^\\infty J_k(kz).\n" } ]
https://en.wikipedia.org/wiki?curid=66570719
66573106
Gaussian distribution on a locally compact Abelian group
A Gaussian distribution on a locally compact Abelian group is a distribution formula_0 on a second countable locally compact Abelian group formula_1 which satisfies the conditions: (i) formula_0 is an infinitely divisible distribution; (ii) if formula_2, where formula_3 is the generalized Poisson distribution associated with a finite measure formula_4, and formula_5 is an infinitely divisible distribution, then the measure formula_4 is degenerate at zero. This definition of the Gaussian distribution for the group formula_6 coincides with the classical one. The support of a Gaussian distribution is a coset of a connected subgroup of formula_1. Let formula_7 be the character group of the group formula_1. A distribution formula_0 on formula_1 is Gaussian () if and only if its characteristic function can be represented in the form formula_8, where formula_9 is the value of a character formula_10 at an element formula_11, and formula_12 is a continuous nonnegative function on formula_7 satisfying the equation formula_13. A Gaussian distribution is called symmetric if formula_14. Denote by formula_15 the set of Gaussian distributions on the group formula_1, and by formula_16 the set of symmetric Gaussian distributions on formula_1. If formula_17, then formula_0 is a continuous homomorphic image of a Gaussian distribution in a real linear space. This space is either finite dimensional or infinite dimensional (the space of all sequences of real numbers in the product topology) (). If a distribution formula_0 can be embedded in a continuous one-parameter semigroup formula_18 of distributions on formula_1, then formula_19 if and only if formula_20 for any neighbourhood of zero formula_21 in the group formula_1 (). Let formula_1 be a connected group, and formula_19. If formula_1 is not locally connected, then formula_0 is singular (with respect to a Haar distribution on formula_1) (). If formula_1 is locally connected and has finite dimension, then formula_0 is either absolutely continuous or singular. The question of the validity of a similar statement on locally connected groups of infinite dimension is open, although on such groups it is possible to construct both absolutely continuous and singular Gaussian distributions. It is well known that two Gaussian distributions in a linear space are either mutually absolutely continuous or mutually singular. This alternative is true for Gaussian distributions on connected groups of finite dimension (). The following theorem is valid (), which can be considered as an analogue of Cramer's theorem on the decomposition of the normal distribution for locally compact Abelian groups. Cramer's theorem on the decomposition of the Gaussian distribution for locally compact Abelian groups. Let formula_22 be a random variable with values in a locally compact Abelian group formula_1 with a Gaussian distribution, and let formula_23, where formula_24 and formula_25 are independent random variables with values in formula_1. The random variables formula_24 and formula_25 are Gaussian if and only if the group formula_1 contains no subgroup topologically isomorphic to the circle group, i.e. the multiplicative group of complex numbers whose modulus is equal to 1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\gamma=e(F)*\\nu" }, { "math_id": 3, "text": "e(F)" }, { "math_id": 4, "text": "F" }, { "math_id": 5, "text": "\\nu" }, { "math_id": 6, "text": "X=\\mathbb{R}^n" }, { "math_id": 7, "text": "Y" }, { "math_id": 8, "text": "\\hat{\\gamma}(y)=(x,y)\\exp\\{-\\varphi(y)\\}" }, { "math_id": 9, "text": "(x,y) " }, { "math_id": 10, "text": "y\\in Y " }, { "math_id": 11, "text": "x\\in X " }, { "math_id": 12, "text": "\\varphi(y) " }, { "math_id": 13, "text": "\\varphi(u+v)+\\varphi(u-v)=2[\\varphi(u)+\\varphi(v)], u, v\\in Y " }, { "math_id": 14, "text": "x=0 " }, { "math_id": 15, "text": "\\Gamma(X) " }, { "math_id": 16, "text": "\\Gamma^s(X) " }, { "math_id": 17, "text": "\\gamma\\in\\Gamma^s(X) " }, { "math_id": 18, "text": "\\gamma_t, t\\ge 0" }, { "math_id": 19, "text": "\\gamma\\in\\Gamma(X) " }, { "math_id": 20, "text": "\\lim_{t\\rightarrow 0} {\\gamma_t(X\\backslash U)\\over t}=0" }, { "math_id": 21, "text": "U" }, { "math_id": 22, "text": "\\xi" }, { "math_id": 23, "text": "\\xi=\\xi_1+\\xi_2" }, { "math_id": 24, "text": "\\xi_1" }, { "math_id": 25, "text": "\\xi_2" } ]
https://en.wikipedia.org/wiki?curid=66573106
665818
Breidbart Index
Most significant cancel index in Usenet The Breidbart Index, developed by Seth Breidbart, is the most significant "cancel index" in Usenet. A cancel index measures the dissemination intensity of substantively identical articles. If the index exceeds a threshold, the articles are called newsgroup spam. They can then be removed using third-party cancel controls. Cancel Index. The principal idea of the "Breidbart-Index" is to give the two posting methods (crossposting and multiposting) different weight. With a crossposted message, less data needs to be transferred and stored, and excessive crossposts (ECP) are also a likely beginner's error, while excessive multiposts (EMP) suggest deliberate usage of special software. The crucial issue is categorizing multiple articles as "substantively identical". Breidbart Index (BI). The Breidbart Index of a set of articles is defined as the sum of the square root of "n", where "n" is the number of newsgroups to which an article is crossposted. formula_0 Two copies of a posting are made, one to 9 groups, and one to 16. formula_1 Breidbart-Index, Version 2 (BI2). A more aggressive criterion, Breidbart Index Version 2, has been proposed. The BI2 is defined as the sum of the square root of "n", plus the sum of "n", divided by two. A single message would only need to be crossposted to 35 newsgroups to breach the threshold of 20. formula_2 Two copies of a posting are made, one to 9 groups, and one to 16. formula_3 Skirvin-Breidbart Index (SBI, BI3). The name "Skirvin-Breidbart Index" and the abbreviation SBI are mentioned in the "Spam Thresholds FAQ". However, in hierarchy nl.* this index is called BI3. The SBI is calculated similarly to the BI2 but adds up the number of groups in Followup-to: (if present) instead of the number of groups in Newsgroups:. This encourages the use of Followup-to:. Two posts contain the same text. One is crossposted to 9 groups. The other is crossposted to 16, with four groups in Followup-to:. formula_4 BI7 and BI30. In hierarchy de.* the Breidbart index is used with a time range of seven days instead of 45. This is denoted by the abbreviation "BI7". In hierarchy codice_0 the Breidbart index is used with a time range of 30 days instead of 45. This is denoted by the abbreviation "BI30". Cancel Index in at.*. This is defined in the FAQ of the group at.usenet.cancel-reports. The term used in the Call for Votes and in the FAQ is "Cancel-Index". Unofficial abbreviations are "CI" and "ACI". The ACI of a single post equals 3 plus the number of groups that the post was sent to. The index of multiple posts is the sum of the indices of the individual posts. Thresholds. In fact, a cancel message is just a non-binding request to remove a certain article. News server operators can freely decide on how to implement the conflicting policies.
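For illustration (not part of the article), here is a small Python sketch of the three indices defined above, reproducing the worked examples from the text; the 45-day time window is not modelled, and each post is described only by its Newsgroups count and optional Followup-to count.

```python
from math import sqrt

# Each post: (groups_in_Newsgroups, groups_in_Followup_To_or_None)

def breidbart(posts):
    """Classic Breidbart Index: sum of sqrt(n) over all copies."""
    return sum(sqrt(n) for n, _ in posts)

def breidbart2(posts):
    """BI2: (sum of sqrt(n) + sum of n) / 2."""
    return sum(sqrt(n) + n for n, _ in posts) / 2.0

def skirvin_breidbart(posts):
    """SBI/BI3: like BI2, but the linear term counts Followup-to when present."""
    return sum(sqrt(n) + (f if f is not None else n) for n, f in posts) / 2.0

two_copies = [(9, None), (16, None)]       # one copy to 9 groups, one to 16
with_followup = [(9, None), (16, 4)]       # second copy narrowed by Followup-to

print("BI  =", breidbart(two_copies))            # 3 + 4 = 7
print("BI2 =", breidbart2(two_copies))           # (3 + 4 + 9 + 16) / 2 = 16
print("SBI =", skirvin_breidbart(with_followup)) # (3 + 4 + 9 + 4) / 2 = 10
```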
[ { "math_id": 0, "text": "\\mbox{BI} = \\sum_{k=1}^m \\sqrt{n_k}" }, { "math_id": 1, "text": "\\sqrt{9} + \\sqrt{16} = 3 + 4 = 7" }, { "math_id": 2, "text": "\\mbox{BI2} = \\sum_{k=1}^m \\frac{n_k + \\sqrt{n_k}}{2}" }, { "math_id": 3, "text": "\\frac{\\sqrt{9} + \\sqrt{16} + 9 + 16}{2} =\n\\frac{3 + 4 + 9 + 16}{2} = \\frac{32}{2} = 16" }, { "math_id": 4, "text": "\\frac{\\sqrt{9} + \\sqrt{16} + 9 + 4}{2} =\n\\frac{3 + 4 + 9 + 4}{2} = \\frac{20}{2} = 10" } ]
https://en.wikipedia.org/wiki?curid=665818
665821
Injective hull
Notion in abstract algebra In mathematics, particularly in algebra, the injective hull (or injective envelope) of a module is both the smallest injective module containing it and the largest essential extension of it. Injective hulls were first described in . Definition. A module "E" is called the injective hull of a module "M", if "E" is an essential extension of "M", and "E" is injective. Here, the base ring is a ring with unity, though possibly non-commutative. Properties. Ring structure. In some cases, for "R" a subring of a self-injective ring "S", the injective hull of "R" will also have a ring structure. For instance, taking "S" to be a full matrix ring over a field, and taking "R" to be any ring containing every matrix which is zero in all but the last column, the injective hull of the right "R"-module "R" is "S". For instance, one can take "R" to be the ring of all upper triangular matrices. However, it is not always the case that the injective hull of a ring has a ring structure, as an example in shows. A large class of rings which do have ring structures on their injective hulls are the nonsingular rings. In particular, for an integral domain, the injective hull of the ring (considered as a module over itself) is the field of fractions. The injective hulls of nonsingular rings provide an analogue of the ring of quotients for non-commutative rings, where the absence of the Ore condition may impede the formation of the classical ring of quotients. This type of "ring of quotients" (as these more general "fields of fractions" are called) was pioneered in , and the connection to injective hulls was recognized in . Uniform dimension and injective modules. An "R" module "M" has finite uniform dimension (="finite rank") "n" if and only if the injective hull of "M" is a finite direct sum of "n" indecomposable submodules. Generalization. More generally, let C be an abelian category. An object "E" is an injective hull of an object "M" if "M" → "E" is an essential extension and "E" is an injective object. If C is locally small, satisfies Grothendieck's axiom AB5 and has enough injectives, then every object in C has an injective hull (these three conditions are satisfied by the category of modules over a ring). Every object in a Grothendieck category has an injective hull.
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\mathbb Q \\otimes_{\\mathbb Z} A" }, { "math_id": 2, "text": "(R,\\mathfrak{m},k)" }, { "math_id": 3, "text": "\\mathfrak{m} = x\\cdot R" }, { "math_id": 4, "text": "R_x/R" }, { "math_id": 5, "text": "\\mathbb{C}" }, { "math_id": 6, "text": "(\\mathbb{C}[[t]],(t),\\mathbb{C})" }, { "math_id": 7, "text": "\\mathbb{C}((t))/\\mathbb{C}[[t]]" } ]
https://en.wikipedia.org/wiki?curid=665821
66583954
1 Chronicles 6
First Book of Chronicles, chapter 6 1 Chronicles 6 is the sixth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter focuses on the tribe of Levi, divided into the line of the high priests (verses 1–15); the three lines of the families Gershom, Kohath, and Merari (verses 16–30); the lines of the musicians/singers (verses 31–47); duties of Levites and priests (verses 48–49); list of high priests (verses 50–53) and the Aaronites' and Levites' settlements (verses 54–81). It belongs to the section focusing on the list of genealogies from Adam to the lists of the people returning from exile in Babylon ( to 9:34). Text. This chapter was originally written in the Hebrew language. It is divided into 81 verses in English Bibles, but only 66 verses in the Hebrew Bible using a different verse numbering (see below). Verse numbering. There are some differences in verse numbering of this chapter in English Bibles and Hebrew texts as follows: This article generally follows the common numbering in Christian English Bible versions, with notes to the numbering in Hebrew Bible versions. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Descendants of Levi (6:1–30; Hebrew: 5:27–6:15). The genealogy of the priestly tribe of Levi is, apart from that of Judah, longer than that of any other tribe, showing the focus of the Chronicler on the temple and temple workers, preserved by David's line. The list first names Levi and his three sons, apparently taken from Genesis 46:11 (also Exodus 6:16; Numbers 26:57). Subsequently, three generations of the Kohathites follow, continuing with only the branches leading to the famous siblings Moses, Aaron and Miriam, then to the Aaronite high priests. Miriam's name is in this list because of her significance in history, which has parallels in the Torah (cf. for instance Exodus 6:16–25). Verses 4–15 contain twenty-two successors of Aaron from the time of his death to the Babylonian exile, but the abridged version of the same list in Ezra 7:1–7 only has 15 names instead of 22. The list apparently serves a legitimizing role, showing that the high priests in office during the Chronicler's time could genealogically be traced back to Zadok and even further to Aaron, while omitting some names mentioned in other documents (such as Jehoiada, cf. 2 Chr 22:11–24:17). Omissions could be attributed to the confusion of the same names within the priestly families, such as recurrences of Amariah, Azariah and Zadok, leading to copyist errors. For example, three Azariahs are listed here, but one from the reign of Uzziah (2 Chronicles 26:20) and another from the reign of Hezekiah (2 Chronicles 31:10) are apparently overlooked. However, the narrative of the histories in the book and the writings of Josephus, who provides a longer list (Antiquities 10:152-153), help to reconstruct a fairly complete genealogy. 
Two high-priests are given bits of narrative: Azariah son of Johanan "who served as priest in the house that Solomon built in Jerusalem" (verse 10) and Jehozadak son of Seraiah "who went into exile when the Lord sent Judah and Jerusalem into exile by the hand of Nebuchadnezzar" (verse 15), a witness to the destruction of Solomon's temple; these two priests therefore bracket the entire First Temple period. The high-priestly lineage here ends with Jehozadak, but Nehemiah 12:10-11 continues where the list leaves off, with Joshua son of Jehozadak (cf. Haggai 1:1; 2:2, 4) and his line down to Jaddua II (born c. 420 BCE). Verses 16–30 list the Levites' genealogy (cf. Numbers 3:17–35; cf. Exodus 6:16–25); verses 16–19 for the genealogy of Levi's sons (up to his grandchildren), whereas verses 20–30 contain the lines of Gershom, Kohath, and Merari, starting with their eldest sons and continuing vertically for seven generations. "The sons of Levi; Gershon, Kohath, and Merari." "And Jehozadak went into captivity, when the Lord carried away Judah and Jerusalem by the hand of Nebuchadnezzar." Verse 15. This is the most explicit mention of Judah's exile; ; 2 Chronicles 36 only mention the exile of Jerusalem. Temple Musicians (6:31–48; Hebrew: 6:16–33). This section focuses on the genealogy of the temple singers whose roles are explained extensively in 1 Chronicles 15–16. Until the construction of the temple, they performed their duty before the tent of meeting. There was no relevant law of Moses for these roles. David appointed them (verse 31) and from Solomon's time onwards they sang in the temple. They are entrusted with "service of song in the house of the Lord" (verse 31) after the ark is installed inside there. Three main singers are mentioned, representing three Levitical families, and familiar from the Psalms they contribute: In addition, Psalms 42, 44–49, 84, 85, 87 and 88 are associated with the Korahites, a subgroup of the Kohathites to which Heman belonged (cf. the title of Psalm 88, Exodus 6:24; 2 Chronicles 20:19). Heman is significantly noted as the leader among the three "with his brothers, Asaph and Ethan, standing to his right and left" (cf. Numbers 4:1–4 for Kohathites' preeminence). "And their brothers the Levites were appointed for all the service of the tabernacle of the house of God." Verse 48. This depicts a "traditional view of priestly institution" that the Levites have responsibilities for everything related to the temple, except for three tasks assigned to priests descended from Aaron (verse 49). Descendants of Aaron (6:49–53; Hebrew: 6:34–38). This section lists only the Aaronid priests until Zadok and his son, Ahimaaz, in the time of David. "But Aaron and his sons offered upon the altar of the burnt offering, and on the altar of incense, and were appointed for all the work of the place most holy, and to make an atonement for Israel, according to all that Moses the servant of God had commanded." Verse 49. Three tasks are specifically assigned to the priests descended from Aaron: Genetic studies on descendants of Aaron. A present-day Jewish priestly caste known as "Kohanim" (singular "Kohen", also spelled "Cohen") claims to be the direct descendants of Aaron. Genetic studies on the members of this group reveal that a majority of them share a pattern of values for six Y-STR markers, which researchers named the "Cohen Modal Haplotype" (CMH). 
Subsequent research using twelve Y-STR markers indicated that about half of contemporary Jewish Kohanim shared Y-chromosomal J1 M267, also called J1c3. Molecular phylogenetics research published in 2013, 2016, and 2020 for haplogroup J1 (J-M267) yields a hypothetical most recent common ancestor of the Kohanim, named Y-chromosomal Aaron, with an age estimate of 2,638–3,280 years Before Present (yBP) within subhaplogroup Z18271. Following the findings, a similar investigation was made of men who identify as Levites, because Aaron is recorded in the Hebrew Bible as a descendant of Levi, son of Jacob. The 2003 Behar et al. investigation of Levites found high frequencies of multiple distinct markers, suggestive of multiple origins for the majority of non-Aaronid Levite families, although one marker is present in more than 50% of Eastern European (Ashkenazi) Jewish Levites, indicating a common male ancestor or very few male ancestors within the last 2000 years for many Levites of the Ashkenazi community. Subsequent publication by Rootsi, Behar, et al. in "Nature Communications" in December 2013 determined that among a set of 19 unique nucleotide substitutions defining the Ashkenazi R1a lineage, the M582 mutation is not found among Eastern Europeans, but the marker was present "in all sampled R1a Ashkenazi Levites, as well as in 33.8% of other R1a Ashkenazi Jewish males, and 5.9% of 303 R1a Near Eastern males, where it shows considerably higher diversity." Therefore, Rootsi, Behar, et al., concluded that this marker most likely originates in the pre-Diasporic Hebrews in the Near East. The Samaritan community in the Middle East maintains that the priests within the group, called "Samaritan Kohanim", are also of the line of Aaron/Levi. Samaritans claim that the southern tribes of the House of Judah left the original worship as set forth by Joshua, and the schism took place in the twelfth century BCE at the time of Eli. A 2004 Y-Chromosome study concluded that the Samaritan Kohanim belong to haplogroup E-M35, indicating a different patrilineal family line than the Jewish Kohanim. Dwelling Places of the Levites (6:54–81; Hebrew: 6:39–66). This section contains the list of living and grazing areas for the Levites, corresponding to that in Joshua 21:9–42, with some differences in the arrangement of its elements. The purpose is to show the areas where Levites actually settled among those designated in Joshua 21. The tribe of Levi was not given an allotment of land because they were dedicated to God (Joshua 14:4), so the Chronicler clearly lists the cities where they were to settle. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=66583954
66590304
Su–Schrieffer–Heeger model
Simple model of topological insulator In condensed matter physics, the Su–Schrieffer–Heeger (SSH) model is a one-dimensional lattice model that exhibits topological features. It was devised by Wu-Pei Su, John Robert Schrieffer, and Alan J. Heeger in 1979 to describe the increase of electrical conductivity of a polyacetylene polymer chain when doped, based on the existence of solitonic defects. It is a quantum-mechanical tight-binding approach that describes the hopping of spinless electrons in a chain with two alternating types of bonds. Electrons at a given site can only hop to adjacent sites. Depending on the ratio between the hopping energies of the two possible bonds, the system can be either in a metallic (conductive) phase or in an insulating phase. The finite SSH chain can behave as a topological insulator, depending on the boundary conditions at the edges of the chain. For the finite chain, there exists an insulating phase that is topologically non-trivial and allows for the existence of edge states that are localized at the boundaries. Description. The model describes a half-filled one-dimensional lattice, with two sites per unit cell, "A" and "B", corresponding to a single electron per unit cell. In this configuration each electron can either hop inside the unit cell or hop to an adjacent cell through nearest neighbor sites. As with any 1D model with two sites per cell, there will be two bands in the dispersion relation (usually called optical and acoustic bands). If the bands do not touch, there is a band gap. If the gap lies at the Fermi level, then the system is considered to be an insulator. The tight-binding Hamiltonian in a chain with "N" unit cells can be written as formula_0 where h.c. denotes the Hermitian conjugate, "v" is the energy required to hop from a site A to B inside the unit cell, and "w" is the energy required to hop between unit cells. Here the Fermi energy is fixed to zero. Bulk solution. The dispersion relation for the bulk can be obtained through a Fourier transform. Taking periodic boundary conditions formula_1, where formula_2, we pass to "k"-space by doing formula_3, which results in the following Hamiltonian: formula_4 formula_5 where the eigenenergies are easily calculated as formula_6 and the corresponding eigenstates are formula_7 where formula_8 The eigenenergies are symmetric under the swap formula_9, and the dispersion relation is gapped (insulator) except when formula_10 (metal). Judging by the energies alone, the problem appears symmetric about formula_10: the case formula_11 has the same dispersion as formula_12. Nevertheless, not all properties of the system are symmetric; for example, the eigenvectors are very different under the swap formula_9. It can be shown, for example, that the Berry connection formula_13, integrated over the Brillouin zone formula_14, produces different winding numbers: formula_15 showing that the two insulating phases, formula_11 and formula_12, are topologically different (small changes in "v" and "w" change formula_16 but not formula_17 over the Brillouin zone). The winding number remains undefined for the metallic case formula_10. This difference in topology means that one cannot pass from one insulating phase to the other without closing the gap (passing through the metallic phase). This phenomenon is called a topological phase transition. Finite chain solution and edge states. 
The physical consequences of having different winding numbers become more apparent for a finite chain with an even number formula_18 of lattice sites. It is much harder to diagonalize the Hamiltonian analytically in the finite case due to the lack of translational symmetry. Dimerized cases. There exist two limiting cases for the finite chain, either formula_19 or formula_20. In both of these cases the chain is clearly an insulator, as the chain is broken into dimers (dimerized). However, one of the two cases consists of formula_21 dimers, while the other consists of formula_22 dimers and two unpaired sites at the edges of the chain. In the latter case, as there is no on-site energy, an electron sitting on either of the two edge sites has zero energy. So either the case formula_19 or the case formula_20 necessarily has two eigenstates with zero energy, while the other case has no zero-energy eigenstates. In contrast to the bulk case, the two limiting cases do not have symmetric spectra. Intermediate values. By plotting the eigenstates of the finite chain as a function of position, one can show that there are two distinct kinds of states. For non-zero eigenenergies, the corresponding wavefunctions are delocalized all along the chain, while the zero-energy eigenstates display amplitudes localized at the edge sites. The latter are called edge states. Even though their eigenenergies lie in the gap, the edge states are localized and correspond to an insulating phase. Plotting the spectrum as a function of formula_23 for a fixed value of formula_24 shows that it is divided into two insulating regions separated by the metallic point at formula_25. The spectrum is gapped in both insulating regions, but one of the regions shows zero-energy eigenstates and the other does not, corresponding to the dimerized cases. The existence of edge states in one region and not in the other demonstrates the difference between the insulating phases, and it is this sharp transition at formula_25 that corresponds to a topological phase transition. Correspondence between finite and bulk solutions. The bulk solution allows one to predict which insulating region presents edge states, depending on the value of the winding number in the bulk case. For the region where the winding number is formula_26 in the bulk, the corresponding finite chain with an even number of sites presents edge states, while for the region where the winding number is formula_27 in the bulk case, the corresponding finite chain does not. This relation between winding numbers in the bulk and edge states in the finite chain is called the bulk-edge correspondence.
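The following Python sketch (not from the article) illustrates the bulk-edge correspondence numerically under the conventions stated above: it builds the real-space Hamiltonian of an open chain, counts near-zero-energy eigenstates in the two insulating phases, and evaluates the winding number from the phase of the off-diagonal Bloch matrix element v + w*exp(ik), which is equivalent to the Berry-connection integral quoted above. The chain length, hopping values and tolerance are arbitrary example choices.

```python
import numpy as np

def ssh_chain(N, v, w):
    """Real-space Hamiltonian of an open SSH chain with N unit cells (2N sites)."""
    H = np.zeros((2 * N, 2 * N))
    for n in range(N):
        a, b = 2 * n, 2 * n + 1
        H[a, b] = H[b, a] = v                    # intracell hopping A_n <-> B_n
        if n < N - 1:
            H[b, 2 * n + 2] = H[2 * n + 2, b] = w  # intercell hopping B_n <-> A_{n+1}
    return H

def winding_number(v, w, nk=2001):
    """Bulk winding number from the phase winding of h(k) = v + w*exp(ik)."""
    k = np.linspace(-np.pi, np.pi, nk)
    phase = np.unwrap(np.angle(v + w * np.exp(1j * k)))
    return int(round((phase[-1] - phase[0]) / (2 * np.pi)))

for v, w in [(1.0, 0.4), (0.4, 1.0)]:            # trivial vs topological phase
    E = np.linalg.eigvalsh(ssh_chain(40, v, w))
    zero_modes = int(np.sum(np.abs(E) < 1e-3))   # states inside the gap, near E = 0
    print(f"v={v}, w={w}: winding g={winding_number(v, w)}, "
          f"near-zero finite-chain states: {zero_modes}")
```

Run as written, the g = 0 phase shows no in-gap states while the g = 1 phase shows the two exponentially localized edge states discussed above.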
[ { "math_id": 0, "text": "H=v\\sum_{n=1}^{N}|n,B\\rangle\\langle n,A|+w\\sum_{n=1}^{N}|n+1,A\\rangle\\langle n,B|+\\mathrm{h.c.}" }, { "math_id": 1, "text": "|N+1,X\\rangle=|1,X\\rangle" }, { "math_id": 2, "text": "X=A,B" }, { "math_id": 3, "text": "|n,X\\rangle=\\frac{1}{\\sqrt{N}}\\sum_{k}e^{ikn}|k,X\\rangle" }, { "math_id": 4, "text": "H=\\sum_k (v + e^{ik}w )|k,A\\rangle\\langle k,B| +\\text{h.c.}=\\sum_k H(k)|k\\rangle\\langle k|, " }, { "math_id": 5, "text": "H(k)=\\begin{pmatrix}0 & v+e^{ik}w \\\\ v+e^{-ik}w & 0\\end{pmatrix}" }, { "math_id": 6, "text": "E_{v w}=\\pm\\sqrt{v^2+w^2+2vw\\cos k}," }, { "math_id": 7, "text": "|k,\\pm\\rangle=\\frac{1}{\\sqrt{2}}\\left(\\pm e^{i\\phi_k}|k,A\\rangle + |k,B\\rangle\\right)," }, { "math_id": 8, "text": "\\tan\\phi_k=\\frac{w\\sin k}{v+w\\cos k}." }, { "math_id": 9, "text": "v\\leftrightarrow w" }, { "math_id": 10, "text": "v=w" }, { "math_id": 11, "text": "v>w" }, { "math_id": 12, "text": "v<w" }, { "math_id": 13, "text": "A_-(k)=i\\left\\langle k,-\\left|\\frac{\\mathrm{d}}{\\mathrm{d}k}\\right|k,-\\right\\rangle=-\\frac{1}{2}\\frac{\\mathrm{d}\\phi_k}{\\mathrm{d}k}," }, { "math_id": 14, "text": "k\\in\\{-\\pi,\\pi\\}" }, { "math_id": 15, "text": "g=-\\frac{1}{\\pi}\\int_{\\rm BZ}A_-(k)\\mathrm{d}k=\\left\\{\\begin{matrix} 0 & v>w\\\\ \\text{undefined} & v=w \\\\ 1 & v<w\\end{matrix}\\right.," }, { "math_id": 16, "text": "A_-(k)" }, { "math_id": 17, "text": "g" }, { "math_id": 18, "text": "N" }, { "math_id": 19, "text": "v=0" }, { "math_id": 20, "text": "w=0" }, { "math_id": 21, "text": "N/2" }, { "math_id": 22, "text": "(N-2)/2" }, { "math_id": 23, "text": "v" }, { "math_id": 24, "text": "w" }, { "math_id": 25, "text": "w=v" }, { "math_id": 26, "text": "g=1" }, { "math_id": 27, "text": "g=0" } ]
https://en.wikipedia.org/wiki?curid=66590304
6659305
Highest response ratio next
Highest response ratio next (HRRN) scheduling is a non-preemptive discipline. It was developed by Brinch Hansen as a modification of shortest job next or shortest job first (SJN or SJF) to mitigate the problem of process starvation. In HRRN, the next job is not the one with the shortest estimated run time, but the one with the highest response ratio, defined as formula_0 This means that jobs which have spent a long time waiting compete against those estimated to have short run times. As the equation shows, the response ratio of a process increases as its waiting time grows, so a long-waiting process will eventually be selected to execute next. The algorithm thereby addresses the starvation problem of the SJN scheduling algorithm. Algorithm. Given a linked list Q, iterate through Q to find the element with the highest response ratio, comparing the ratio of each element with the highest found so far: whenever the ratio of an element N exceeds that of the current best element M, N becomes the new best element. Once the end of the list is reached, dequeue the highest-ratio element. If that element is at the start of the list, dequeue it, set the head of the list to its next element and return the element. Otherwise, its neighbours are reassigned to identify each other as their next and previous neighbours, and the element is returned.
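A minimal Python sketch of the selection rule (not the linked-list procedure above, and not part of the original text): at each scheduling decision the arrived process with the highest response ratio is dispatched and runs to completion. The workload is a made-up example.

```python
def hrrn(processes):
    """Non-preemptive HRRN. processes: list of (name, arrival_time, burst_time).
    Returns the dispatch order with each process's response ratio at selection."""
    remaining = list(processes)
    time = 0
    order = []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                       # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue

        def ratio(p):
            _, arrival, burst = p
            # (waiting time so far + estimated run time) / estimated run time
            return (time - arrival + burst) / burst

        chosen = max(ready, key=ratio)
        remaining.remove(chosen)
        order.append((chosen[0], time, ratio(chosen)))
        time += chosen[2]                   # run to completion (non-preemptive)
    return order

jobs = [("A", 0, 8), ("B", 1, 4), ("C", 2, 9), ("D", 3, 5)]
for name, start, r in hrrn(jobs):
    print(f"{name}: dispatched at t={start}, response ratio {r:.2f}")
```

In this example the long job C is initially passed over in favour of shorter jobs, but its response ratio keeps growing while it waits, so it is eventually dispatched rather than starved.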
[ { "math_id": 0, "text": "response\\ ratio = \\frac{waiting\\ time\\ of\\ a\\ process\\ so\\ far + estimated\\ run\\ time}{estimated\\ run\\ time} = 1 + \\frac{waiting\\ time\\ of\\ a\\ process\\ so\\ far}{estimated\\ run\\ time}" } ]
https://en.wikipedia.org/wiki?curid=6659305
66599
Noise pollution
Excessive, displeasing environmental noise Noise pollution, or sound pollution, is the propagation of noise or sound with ranging impacts on the activity of human or animal life, most of which are harmful to a degree. Outdoor noise worldwide is caused mainly by machines, transport and propagation systems. Poor urban planning may give rise to noise disintegration or pollution; side-by-side industrial and residential buildings can result in noise pollution in the residential areas. Some of the main sources of noise in residential areas include loud music, transportation (traffic, rail, airplanes, etc.), lawn care maintenance, construction, electrical generators, wind turbines, explosions and people. Documented problems associated with noise in urban environments go back as far as ancient Rome. Research suggests that noise pollution in the United States is the highest in low-income and racial minority neighborhoods, and noise pollution associated with household electricity generators is an emerging environmental degradation in many developing nations. High noise levels can contribute to cardiovascular effects in humans and an increased incidence of coronary artery disease. In animals, noise can increase the risk of death by altering predator or prey detection and avoidance, interfere with reproduction and navigation, and contribute to permanent hearing loss. A substantial amount of the noise that humans produce occurs in the ocean. Up until recently, most research on noise impacts has been focused on marine mammals, and to a lesser degree, fish. In the past few years, scientists have shifted to conducting studies on invertebrates and their responses to anthropogenic sounds in the marine environment. This research is essential, especially considering that invertebrates make up 75% of marine species, and thus compose a large percentage of ocean food webs. Of the studies that have been conducted, a sizable variety of invertebrate families has been represented in the research. A variation in the complexity of their sensory systems exists, which allows scientists to study a range of characteristics and develop a better understanding of anthropogenic noise impacts on living organisms. Because the local civic noise environment can impact the perceived value of real estate, often the largest equity held by a home owner, personal stakes in the noise environment and the civic politics surrounding the noise environment can run extremely high. Noise assessment. Metrics of noise. Researchers measure noise in terms of pressure, intensity, and frequency. Sound pressure level (SPL) represents the amount of pressure relative to atmospheric pressure during sound wave propagation that can vary with time; this is also known as the sum of the amplitudes of a wave. Sound intensity, measured in watts per square meter, represents the flow of sound over a particular area. Although sound pressure and intensity differ, both can describe the level of loudness by comparing the current state to the threshold of hearing; this results in decibel units on the logarithmic scale. The logarithmic scale accommodates the vast range of sound heard by the human ear. Frequency, or pitch, is measured in hertz (Hz) and reflects the number of sound waves propagated through the air per second. The range of frequencies heard by the human ear extends from 20 Hz to 20,000 Hz; however, sensitivity to higher frequencies decreases with age. 
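As a rough, hedged illustration of the logarithmic decibel scale just described (not part of the original article), the following Python sketch converts a sound pressure and a sound intensity into decibel levels using the standard reference values of 20 micropascals and 1 pW/m2, which are not stated in the text; the input values are made-up examples.

```python
import math

P_REF = 20e-6      # reference sound pressure, 20 micropascals (threshold of hearing)
I_REF = 1e-12      # reference sound intensity, 1 pW/m^2

def spl_db(p_rms):
    """Sound pressure level in dB for an RMS pressure given in pascals."""
    return 20.0 * math.log10(p_rms / P_REF)

def sil_db(intensity):
    """Sound intensity level in dB for an intensity given in W/m^2."""
    return 10.0 * math.log10(intensity / I_REF)

p = 0.1            # example RMS pressure in Pa (roughly busy-traffic loudness)
i = 1e-5           # example intensity in W/m^2

print("SPL = %.1f dB" % spl_db(p))
print("SIL = %.1f dB" % sil_db(i))
print("doubling pressure adds %.1f dB" % (spl_db(2 * p) - spl_db(p)))    # ~6 dB
print("doubling intensity adds %.1f dB" % (sil_db(2 * i) - sil_db(i)))   # ~3 dB
```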
Some organisms, such as elephants, can register frequencies between 0 and 20 Hz (infrasound), and others, such as bats, can recognize frequencies above 20,000 Hz (ultrasound) to echolocate. Researchers use different frequency weightings to account for how loudness perception varies with frequency, as humans do not perceive all frequencies as equally loud. The most commonly used weighted levels are A-weighting, C-weighting, and Z-weighting. A-weighting mirrors the range of hearing, with frequencies of 20 Hz to 20,000 Hz. This gives more weight to higher frequencies and less weight to lower frequencies. C-weighting has been used to measure peak sound pressure or impulse noise, similar to loud short-lived noises from machinery in occupational settings. Z-weighting, also known as zero-weighting, represents noise levels without any frequency weights. Understanding sound pressure levels is key to assessing measurements of noise pollution. Several metrics are used to describe noise exposure. Researchers with the US National Park Service found that human activity doubles the background-noise levels in 63 percent of protected spaces like national parks, and increases them tenfold in 21 percent. In the latter places, "if you could have heard something 100 feet away, now you can only hear it 10 feet away". Instrumentation. Sound level meters. Sound can be measured in the air using a sound level meter, a device consisting of a microphone, an amplifier, and a time meter. Sound level meters can measure noise at different frequencies (usually A- and C-weighted levels). There are two settings for response time constants, "fast" (time constant = 0.125 seconds, similar to human hearing) or "slow" (1 second, used for calculating averages over widely varying sound levels). Sound level meters meet the required standards set by the International Electrotechnical Commission (IEC) and, in the United States, the American National Standards Institute, as type 0, 1, or 2 instruments. Type 0 devices are not required to meet the same criteria expected of types 1 and 2 since scientists use these as laboratory reference standards. Type 1 (precision) instruments are intended for precise sound measurements, while type 2 instruments are for general field use. Type 1 devices acceptable by the standards have a margin of error of ±1.5 dB, while type 2 instruments meet a margin of error of ±2.3 dB. Dosimeters. Sound can also be measured using a noise dosimeter, a device similar to a sound level meter. Individuals have used dosimeters to measure personal exposure levels in occupational settings given their smaller, more portable size. Unlike many sound level meters, a dosimeter microphone attaches to the worker and monitors levels throughout a work shift. Additionally, dosimeters can calculate the percent dose or time-weighted average (TWA). Smartphone applications. In recent years, scientists and audio engineers have been developing smartphone apps to conduct sound measurements, similar to the standalone sound level meters and dosimeters. In 2014, the National Institute for Occupational Safety and Health (NIOSH) within the Centers for Disease Control and Prevention (CDC) published a study examining the efficacy of 192 sound measurement apps on Apple and Android smartphones. The authors found that only 10 apps, all of which were on the App Store, met all acceptability criteria. Of these 10 apps, only 4 met the accuracy criterion of being within 2 dB(A) of the reference standard. 
As a result of this study, they created the NIOSH Sound Level Meter App to increase accessibility and decrease costs of monitoring noise using crowdsourcing data with a tested and highly accurate application. The app is compliant with ANSI S1.4 and IEC 61672 requirements. The app calculates the following measures: total run time, instantaneous sound level, A-weighted equivalent sound level (LAeq), maximum level (LAmax), C-weighted peak sound level, time-weighted average (TWA), dose, and projected dose. Dose and projected dose are based on sound level and duration of noise exposure in relation to the NIOSH recommended exposure limit of 85 dB(A) for an eight-hour work shift. Using the phone's internal microphone (or an attached external microphone), the NIOSH Sound Level Meter measures instantaneous sound levels in real time and converts sound into electrical energy to calculate measurements in A-, C-, or Z-weighted decibels. App users are able to generate, save, and e-mail measurement reports. The NIOSH Sound Level Meter is currently only available on Apple iOS devices. Impacts. Human health. Noise pollution affects both health and behavior. Unwanted sound (noise) can damage physiological health and mental health. Noise pollution is associated with several health conditions, including cardiovascular disorders, hypertension, high stress levels, tinnitus, hearing loss, sleep disturbances, and other harmful and disturbing effects. According to a 2019 review of the existing literature, noise pollution was associated with faster cognitive decline. Across Europe, according to the European Environment Agency, it estimated 113 million people are affected by road traffic noise levels above 55 decibels, the threshold at which noise becomes harmful to human health by the WHO's definition. Sound becomes unwanted when it either interferes with normal activities such as sleep or conversation, or disrupts or diminishes one's quality of life. Noise-induced hearing loss can be caused by prolonged exposure to noise levels above 85 A-weighted decibels. A comparison of Maaban tribesmen, who were insignificantly exposed to transportation or industrial noise, to a typical U.S. population showed that chronic exposure to moderately high levels of environmental noise contributes to hearing loss. Noise exposure in the workplace can also contribute to noise-induced hearing loss and other health issues. Occupational hearing loss is one of the most common work-related illnesses in the U.S. and worldwide. It is less clear how humans adapt to noise subjectively. Tolerance for noise is frequently independent of decibel levels. Murray Schafer's soundscape research was groundbreaking in this regard. In his work, he makes compelling arguments about how humans relate to noise on a subjective level, and how such subjectivity is conditioned by culture. Schafer notes that sound is an expression of power in material culture. As such, fast cars or Harley Davidson motorcycles with aftermarket pipes tend to have louder engines not only for safety reasons, but for expressions of power by dominating the soundscape with a particular sound. Other key research in this area can be seen in Fong's comparative analysis of soundscape differences between Bangkok, Thailand, and Los Angeles, California, US. Based on Schafer's research, Fong's study showed how soundscapes differ based on the level of urban development in the area. He found that cities in the periphery have different soundscapes than inner city areas. 
Fong's findings tie not only soundscape appreciation to subjective views of sound, but also demonstrate how different sounds of the soundscape are indicative of class differences in urban environments. Noise pollution can have negative effects on adults and children on the autistic spectrum. Those with Autism Spectrum Disorder (ASD) can have hyperacusis, which is an abnormal sensitivity to sound. People with ASD who experience hyperacusis may have unpleasant emotions, such as fear and anxiety, and uncomfortable physical sensations in noisy environments with loud sounds. This can cause individuals with ASD to avoid environments with noise pollution, which in turn can result in isolation and negatively affect their quality of life. Sudden explosive noises typical of high-performance car exhausts and car alarms are types of noise pollution that can affect people with ASD. While the elderly may have cardiac problems due to noise, according to the World Health Organization, children are especially vulnerable to noise, and the effects that noise has on children may be permanent. Noise poses a serious threat to a child's physical and psychological health, and may negatively interfere with a child's learning and behavior. Exposure to persistent noise pollution shows how important maintaining environmental health is in keeping children and the elderly healthy. Wildlife. Noise generated by traffic, ships, vehicles, and aircraft can affect the survivability of wildlife species and can reach undisturbed habitats. Although sounds are commonly present in the environment, anthropogenic noises are distinguishable due to differences in frequency and amplitude. Many animals use sounds to communicate with others of their species, whether that is for reproduction purposes, navigation, or to notify others of prey or predators. However, anthropogenic noises inhibit species from detecting these sounds, affecting overall communication within the population. Species such as birds, amphibians, reptiles, fishes, mammals, and invertebrates are examples of biological groups that are impacted by noise pollution. If animals cannot communicate with one another, this can cause reproduction to decline (inability to find mates) and mortality to increase (failure to detect predators). European robins living in urban environments are more likely to sing at night in places with high levels of noise pollution during the day, suggesting that they sing at night because it is quieter, and their message can propagate through the environment more clearly. The same study showed that daytime noise was a stronger predictor of nocturnal singing than night-time light pollution, to which the phenomenon is often attributed. Anthropogenic noise reduced the species richness of birds found in Neotropical urban parks. Zebra finches become less faithful to their partners when exposed to traffic noise. This could alter a population's evolutionary trajectory by selecting traits, sapping resources normally devoted to other activities and thus leading to profound genetic and evolutionary consequences. Why invertebrates are affected. Several reasons have been identified relating to hypersensitivity in invertebrates when exposed to anthropogenic noise. Invertebrates have evolved to pick up sound, and a large portion of their physiology is adapted for the purpose of detecting environmental vibrations. Antennae or hairs on the organism pick up particle motion. 
Anthropogenic noise created in the marine environment, such as that from pile driving and shipping, is picked up through particle motion; these activities exemplify near-field stimuli. The ability to detect vibration through mechanosensory structures is most important in invertebrates and fish. Mammals also depend on pressure-detecting ears to perceive the noise around them. Therefore, it is suggested that marine invertebrates likely perceive the effects of noise differently than marine mammals do. It is reported that invertebrates can detect a large range of sounds, but noise sensitivity varies substantially between species. Generally, however, invertebrates depend on frequencies under 10 kHz. This is the frequency range at which a great deal of ocean noise occurs. Therefore, not only does anthropogenic noise often mask invertebrate communication, but it also negatively impacts other biological system functions through noise-induced stress. Another of the leading causes of noise effects in invertebrates is that sound is used in multiple behavioral contexts by many groups. This regularly includes sound produced or perceived in the context of aggression or predator avoidance. Invertebrates also utilize sound to attract or locate mates, and often employ sound in the courtship process. Stress recorded in physiological and behavioral responses. Many of the studies that were conducted on invertebrate exposure to noise found that a physiological or behavioral response was triggered. Most of the time, this related to stress, and provided concrete evidence that marine invertebrates detect and respond to noise. Some of the most informative studies in this category focus on hermit crabs. In one study, it was found that the behavior of the hermit crab "Pagurus bernhardus", when attempting to choose a shell, was modified when subjected to noise. Proper selection of hermit crab shells strongly contributes to their ability to survive. Shells offer protection against predators, high salinity and desiccation. However, researchers determined that approach to shell, investigation of shell, and habitation of shell occurred over a shorter time duration with anthropogenic noise as a factor. This indicated that assessment and decision-making processes of the hermit crab were both altered, even though hermit crabs are not known to evaluate shells using any auditory or mechanoreception mechanisms. In another study that focused on "Pagurus bernhardus" and the blue mussel ("Mytilus edulis"), physical behaviors exhibited a stress response to noise. When the hermit crab and mussel were exposed to different types of noise, significant variation in the valve gape occurred in the blue mussel. The hermit crab responded to the noise by lifting the shell off of the ground multiple times, then vacating the shell to examine it before returning inside. The results from the hermit crab trials were ambiguous with respect to causation; more studies must be conducted in order to determine whether the behavior of the hermit crab can be attributed to the noise produced. Another study that demonstrates a stress response in invertebrates was conducted on the squid species "Doryteuthis pealeii". The squid was exposed to sounds of construction known as pile driving, which impacts the sea bed directly and produces intense substrate-borne and water-borne vibrations. The squid reacted by jetting, inking, pattern change and other startle responses. 
Since the responses recorded are similar to those identified when faced with a predator, it is implied that the squid initially viewed the sounds as a threat. However, it was also noted that the alarm responses decreased over a period of time, signifying that the squid had likely acclimated to the noise. Regardless, it is apparent that stress occurred in the squid, and although further investigation has not been pursued, researchers suspect that other implications exist that may alter the squid's survival habits. An additional study examined the impact noise exposure had on the Indo-Pacific humpbacked dolphin ("Sousa chinensis"). The dolphins were exposed to elevated noise levels due to construction in the Pearl River Estuary in China, specifically caused by the world's largest vibration hammer—the OCTA-KONG. The study suggested that while the dolphin's clicks were not affected, their whistles were because of susceptibility to auditory masking. The noise from the OCTA-KONG was found to have been detectable by the dolphins up to 3.5 km away from the original source, and while the noise was not found to be life-threatening it was indicated that prolonged exposure to this noise could be responsible for auditory damage. Marine life. Noise pollution is common in marine ecosystems, affecting at least 55 marine species. For many marine populations, sound is their primary sense used for their survival; able to detect sound hundreds to thousands of kilometers away from a source, while vision is limited to tens of meters underwater. As anthropogenic noises continue to increase, doubling every decade, this compromises the survivability of marine species. One study discovered that as seismic noises and naval sonar increases in marine ecosystems, cetacean, such as whales and dolphins, diversity decreases. Noise pollution has also impaired fish hearing, killed and isolated whale populations, intensified stress response in marine species, and changed species' physiology. Because marine species are sensitive to noise, most marine wildlife are located in undisturbed habitats or areas not exposed to significant anthropogenic noise, limiting suitable habitats to forage and mate. Whales have changed their migration route to avoid anthropogenic noise, as well as altering their calls. For many marine organisms, sound is the primary means of learning about their environments. For example, many species of marine mammals and fish use sound as their primary means of navigating, communicating, and foraging. Anthropogenic noise can have a detrimental effect on animals, increasing the risk of death by changing the delicate balance in predator or prey detection and avoidance, and interfering with the use of the sounds in communication, especially in relation to reproduction, and in navigation and echolocation. These effects then may alter more interactions within a community through indirect ("domino") effects. Acoustic overexposure can lead to temporary or permanent loss of hearing. Noise pollution may have caused the death of certain species of whales that beached themselves after being exposed to the loud sound of military sonar. (see also Marine mammals and sonar) Even marine invertebrates, such as crabs ("Carcinus maenas"), have been shown to be negatively affected by ship noise. Larger crabs were noted to be negatively affected more by the sounds than smaller crabs. Repeated exposure to the sounds did lead to acclimatization. 
Underwater noise pollution due to human activities is also prevalent in the sea, and given that sound travels faster through water than through air, it is a major source of disruption of marine ecosystems and does significant harm to sea life, including marine mammals, fish and invertebrates. The once-calm sea environment is now noisy and chaotic due to ships, oil drilling, sonar equipment, and seismic testing. The principal anthropogenic noise sources are merchant ships, naval sonar operations, underwater explosions (nuclear), and seismic exploration by the oil and gas industries. Cargo ships generate high levels of noise due to propellers and diesel engines. This noise pollution significantly raises the low-frequency ambient noise levels above those caused by wind. Animals such as whales that depend on sound for communication can be affected by this noise in various ways. Higher ambient noise levels also cause animals to vocalize more loudly, which is called the Lombard effect. Researchers have found that humpback whales' songs were longer when low-frequency sonar was active nearby. Underwater noise pollution is not limited to oceans and can occur in freshwater environments as well. Noise pollution has been detected in the Yangtze River and has resulted in the endangerment of Yangtze finless porpoises. A study conducted on noise pollution in the Yangtze River suggested that the elevated levels of noise pollution altered the temporal hearing threshold of the finless porpoises and posed a significant threat to their survival. Coral Reefs. Noise pollution has emerged as a prominent stressor on coral reef ecosystems. Coral reefs are among the most important ecosystems on earth, and they are of great importance to several communities and cultures around the world that depend on the reefs for the services they provide, such as fishing and tourism. The reefs contribute substantially to global biodiversity and productivity, and are a critical part of the earth's support systems. Anthropogenic noise, originating from human activities, has increased underwater noise in the natural sound environment of the reefs. The preeminent sources of noise pollution on coral reefs are boat and ship activities. The sound created by passing boats and ships overlaps with the natural sounds of coral reef organisms. This pollution impacts the various organisms inhabiting the coral reefs in different ways, and ultimately damages the capabilities of the reef and may cause permanent deterioration. Healthy coral reefs are naturally noisy, with the sounds of breaking waves and tumbling rocks as well as the sounds produced by fish and other organisms. Marine organisms use sound for purposes such as navigation, foraging, communication, and reproductive activities. The sensitivity and range of hearing vary across different organisms within the coral reef ecosystem. Among coral reef fish, sound detection and generation can span from 1 Hz to 200 kHz, while their hearing abilities encompass frequencies in the range of 100 Hz to 1 kHz. Several types of anthropogenic noise occur at the same frequencies that marine organisms in coral reefs use for navigation, communication, and other purposes, which disturbs the natural sound environment of the coral reefs. Anthropogenic noise is generated by a range of different human activities, such as shipping, oil and gas exploration and fishing. 
The principal cause of noise pollution on coral reefs is boat and ship activity. The use of smaller motorboats, for purposes such as fishing or tourism within coral reef areas, and of larger vessels, such as cargo ships transporting goods, significantly amplifies disturbances to the natural marine soundscape. Noise from shipping and small boats is at the same frequencies as the sounds generated by marine organisms, and therefore acts as a disruptive element in the sound environment of coral reefs. Both longer-term and acute effects have been documented in coral reef organisms after exposure to noise pollution. Anthropogenic noise is essentially a persistent stressor on coral reefs and their inhabitants. Both temporary and permanent noise pollution has been found to induce changes in the distributional, physiological, and behavioral patterns of coral reef organisms. Some of the observed changes have been compromised hearing, increased heart rates in coral reef fish and a reduction in the number of larvae reaching their settlement areas. Ultimately, such changes result in reduced survival rates and altered patterns which potentially alter the entire reef ecosystem. The white damselfish, a coral reef fish, has been found to have compromised anti-predator behavior as a result of ship noise. Anthropogenic noise possibly distracts the fish, thereby affecting the escape response and routine swimming of the coral fish. A study conducted on species of coral larvae, which are crucial for the expansion of coral reefs, discovered that the larvae oriented towards the sound of healthy reefs. The noise created by anthropogenic activities could mask this soundscape, hindering the larvae from swimming towards the reef. Noise pollution ultimately poses a threat to the behavioral patterns of several coral organisms. Impacts on communication. Terrestrial anthropogenic noise affects the acoustic communication of grasshoppers as they produce sound to attract a mate. The fitness and reproductive success of a grasshopper depend on its ability to attract a mating partner. Male "Chorthippus biguttulus" grasshoppers attract females by using stridulation to produce courtship songs. The females produce acoustic signals that are shorter and primarily of low frequency and amplitude, in response to the male's song. Research has found that this species of grasshopper changes its mating call in response to loud traffic noise. Lampe and Schmoll (2012) found that male grasshoppers from quiet habitats have a local frequency maximum of about 7319 Hz. In contrast, male grasshoppers exposed to loud traffic noise can produce signals with a higher local frequency maximum of 7622 Hz. The higher frequencies are produced by the grasshoppers to prevent background noise from drowning out their signals. This reveals that anthropogenic noise disturbs the acoustic signals produced by insects for communication. Similar processes of behavior perturbation, behavioral plasticity, and population-level shifts in response to noise likely occur in sound-producing marine invertebrates, but more experimental research is needed. Impacts on development. Boat noise has been shown to affect the embryonic development and fitness of the sea hare "Stylocheilus striatus". Anthropogenic noise can alter conditions in the environment that have a negative effect on invertebrate survival. 
Although embryos can adapt to normal changes in their environment, evidence suggests they are not well adapted to endure the negative effects of noise pollution. Studies have been conducted on the sea hare to determine the effects of boat noise on the early stages of life and the development of embryos. Researchers have studied sea hares from the lagoon of Moorea Island, French Polynesia. In the study, recordings of boat noise were made using a hydrophone. In addition, recordings of ambient noise were made that did not contain boat noise. In contrast to ambient noise playbacks, mollusks exposed to boat noise playbacks had a 21% reduction in embryonic development. Additionally, newly hatched larvae experienced an increased mortality rate of 22% when exposed to boat noise playbacks. Impacts on ecosystem. Anthropogenic noise can have negative effects on invertebrates that aid in controlling environmental processes that are crucial to the ecosystem. There are a variety of natural underwater sounds produced by waves in coastal and shelf habitats, and biotic communication signals that do not negatively impact the ecosystem. The changes in the behavior of invertebrates vary depending on the type of anthropogenic noise and on how similar it is to the natural soundscape. Experiments have examined the behavior and physiology of the clam ("Ruditapes philippinarum"), the decapod ("Nephrops norvegicus"), and the brittlestar ("Amphiura filiformis"), which are affected by sounds resembling shipping and building noise. The three invertebrates in the experiment were exposed to continuous broadband noise and impulsive broadband noise. The anthropogenic noise impeded the bioirrigation and burying behavior of "Nephrops norvegicus". In addition, the decapod exhibited a reduction in movement. "Ruditapes philippinarum" experienced stress, which caused a reduction in surface relocation. The anthropogenic noise caused the clams to close their valves and relocate to an area above the sediment-water interface. This response inhibits the clam from mixing the top layer of the sediment profile and hinders suspension feeding. Sound causes "Amphiura filiformis" to experience changes in physiological processes, which results in irregular bioturbation behavior. These invertebrates play an important role in transporting substances for benthic nutrient cycling. As a result, ecosystems are negatively impacted when species cannot perform natural behaviors in their environment. Locations with shipping lanes, dredging, or commercial harbors are exposed to continuous broadband sound, while pile driving and construction are sources of impulsive broadband noise. The different types of broadband noise have different effects on the various species of invertebrates and how they behave in their environment. Another study found that valve closures in the Pacific oyster "Magallana gigas" were a behavioral response to varying degrees of acoustic amplitude levels and noise frequencies. Oysters perceive near-field sound vibrations by using statocysts. In addition, they have superficial receptors that detect variations in water pressure. Sound pressure waves from shipping can be produced below 200 Hz, pile driving generates noise between 20 and 1,000 Hz, and large explosions can create frequencies ranging from 10 to 200 Hz. "M. gigas" can detect these noise sources because its sensory system can detect sound in the 10 to <1,000 Hz range. 
The anthropogenic noise produced by human activity has been shown to negatively impact oysters. Studies have revealed that wide and relaxed valves are indicative of healthy oysters. The oysters are stressed when they do not open their valves as frequently in response to environmental noise. This provides support that the oysters detect noise at low acoustic energy levels. While it is generally understood that marine noise pollution influences charismatic megafauna like whales and dolphins, understanding how invertebrates like oysters perceive and respond to human-generated sound can provide further insight into the effects of anthropogenic noise on the larger ecosystem. Animals in aquatic ecosystems are known to use sound to navigate, find food, and protect themselves. In 2020, one of the worst mass strandings of whales occurred in Australia. Experts suggest that noise pollution plays a major role in the mass stranding of whales. Noise pollution has also altered avian communities and diversity. Anthropogenic noise has a similar effect on bird populations as that seen in marine ecosystems: noise reduces reproductive success, interferes with the detection of predators, minimizes nesting areas, increases stress responses, and reduces species abundance and richness. Certain avian species are more sensitive to noise than others, resulting in highly sensitive birds migrating to less disturbed habitats. There has also been evidence of indirect positive effects of anthropogenic noise on avian populations. It was found that nesting bird predators, such as the western scrub-jay ("Aphelocoma californica"), were uncommon in noisy environments (western scrub-jays are sensitive to noise). Therefore, reproductive success for nesting prey communities was higher due to the lack of predators. Noise pollution can alter the distribution and abundance of prey species, which can then impact predator populations. Noise control. The Hierarchy of Controls concept is often used to reduce noise in the environment or the workplace. Engineering noise controls can be used to reduce noise propagation and protect individuals from overexposure. When noise controls are not feasible or adequate, individuals can also take steps to protect themselves from the harmful effects of noise pollution. If people must be around loud sounds, they can protect their ears with hearing protection (e.g., ear plugs or ear muffs). Buy Quiet programs and initiatives have arisen in an effort to combat occupational noise exposures. These programs promote the purchase of quieter tools and equipment and encourage manufacturers to design quieter equipment. Noise from roadways and other urban factors can be mitigated by urban planning and better design of roads. Roadway noise can be reduced by the use of noise barriers, limitation of vehicle speeds, alteration of roadway surface texture, limitation of heavy vehicles, use of traffic controls that smooth vehicle flow to reduce braking and acceleration, and tyre design. An important factor in applying these strategies is a computer model for roadway noise that is capable of addressing local topography, meteorology, traffic operations, and hypothetical mitigation. Costs of building-in mitigation can be modest, provided these solutions are sought in the planning stage of a roadway project. Aircraft noise can be reduced by using quieter jet engines. Altering flight paths and the time of day that runways are used has benefited residents near airports. Legal status and regulation. 
Country-specific regulations. Up until the 1970s, governments tended to view noise as a "nuisance" rather than an environmental problem. Many conflicts over noise pollution are handled by negotiation between the emitter and the receiver. Escalation procedures vary by country, and may include action in conjunction with local authorities, in particular the police. Egypt. In 2007, the Egyptian National Research Center found that the average noise level in central Cairo was 90 decibels and that the noise never fell below 70 decibels. Noise limits set by law in 1994 are not enforced. In 2018, the World Hearing Index declared Cairo to be the world's second-noisiest city. India. Noise pollution is a major problem in India. The government of India has rules and regulations against firecrackers and loudspeakers, but enforcement is extremely lax. Awaaz Foundation is a non-governmental organization in India that has worked to control noise pollution from various sources through advocacy, public interest litigation, awareness, and educational campaigns since 2003. Despite increased enforcement and stringency of laws now being practiced in urban areas, rural areas are still affected. The Supreme Court of India has banned the playing of music on loudspeakers after 10 pm. In 2015, the National Green Tribunal directed authorities in Delhi to ensure strict adherence to guidelines on noise pollution, saying noise is more than just a nuisance as it can produce serious psychological stress. However, implementation of the law remains poor. Sweden. How noise emissions should be reduced, without industry being hit too hard, is a major problem in environmental care in Sweden today. The Swedish Work Environment Authority has set an action value of 80 dB for maximum sound exposure over eight hours. In workplaces where there is a need to be able to converse comfortably, the background noise level should not exceed 40 dB. The government of Sweden has taken soundproofing and acoustic absorbing actions, such as noise barriers and active noise control. United Kingdom. Figures compiled by Rockwool, the mineral wool insulation manufacturer, based on responses from local authorities to a Freedom of Information Act (FOI) request, reveal that in the period April 2008 – 2009 UK councils received 315,838 complaints about noise pollution from private residences. This resulted in environmental health officers across the UK serving 8,069 noise abatement notices or citations under the terms of the Antisocial Behaviour (Scotland) Act. In the last 12 months, 524 confiscations of equipment were authorized, involving the removal of powerful speakers, stereos and televisions. Westminster City Council received more complaints per head of population than any other district in the UK, with 9,814 grievances about noise, which equates to 42.32 complaints per thousand residents. Eight of the top 10 councils ranked by complaints per 1,000 residents are located in London. United States. The Noise Control Act of 1972 established a U.S. national policy to promote an environment for all Americans free from noise that jeopardizes their health and welfare. In the past, the Environmental Protection Agency coordinated all federal noise control activities through its Office of Noise Abatement and Control. The EPA phased out the office's funding in 1982 as part of a shift in federal noise control policy to transfer the primary responsibility for regulating noise to state and local governments. 
However, the Noise Control Act of 1972 and the Quiet Communities Act of 1978 were never rescinded by Congress and remain in effect today, although they are essentially unfunded. The National Institute for Occupational Safety and Health (NIOSH) at the Centers for Disease Control and Prevention (CDC) researches noise exposure in occupational settings and recommends a Recommended Exposure Limit (REL) of 85 dB(A) for an 8-hour time-weighted average (TWA) or work shift, and of 140 dB(A) for impulse noise (instant events such as bangs or crashes). The agency published this recommendation, along with its origin, noise measurement devices, hearing loss prevention programs, and research needs, in 1972 (later revised in June 1998) as an approach to preventing occupational noise-related hearing loss. The Occupational Safety and Health Administration (OSHA) within the Department of Labor issues enforceable standards to protect workers from occupational noise hazards. The permissible exposure limit (PEL) for noise is a TWA of 90 dB(A) for an eight-hour work day. However, in manufacturing and service industries, if the TWA is greater than 85 dB(A), employers must implement a Hearing Conservation Program. The Federal Aviation Administration (FAA) regulates aircraft noise by specifying the maximum noise level that individual civil aircraft can emit, requiring aircraft to meet certain noise certification standards. These standards designate changes in maximum noise level requirements by "stage" designation. The U.S. noise standards are defined in the Code of Federal Regulations (CFR) Title 14 Part 36 – Noise Standards: Aircraft Type and Airworthiness Certification (14 CFR Part 36). The FAA also pursues a program of aircraft noise control in cooperation with the aviation community. The FAA has set up a process through which anyone who may be impacted by aircraft noise can file a report. The Federal Highway Administration (FHWA) developed noise regulations to control highway noise as required by the Federal-Aid Highway Act of 1970. The regulations require the promulgation of traffic noise-level criteria for various land use activities, and describe procedures for the abatement of highway traffic noise and construction noise. The Department of Housing and Urban Development (HUD) noise standards, as described in 24 CFR part 51, Subpart B, provide minimum national standards applicable to HUD programs to protect citizens against excessive noise in their communities and places of residence. For instance, all sites whose environmental or community noise exposure exceeds a day-night average sound level (DNL) of 65 dB are considered noise-impacted areas. The standards define "Normally Unacceptable" noise zones, where community noise levels are between 65 and 75 dB; for such locations, noise abatement and noise attenuation features must be implemented. Locations where the DNL is above 75 dB are considered "Unacceptable" and require approval by the Assistant Secretary for Community Planning and Development. The Department of Transportation's Bureau of Transportation Statistics has created a map to provide access to comprehensive aircraft and road noise data at national and county levels. The map aims to assist city planners, elected officials, scholars, and residents in gaining access to up-to-date aviation and Interstate highway noise information. States and local governments typically have very specific statutes on building codes, urban planning, and roadway development. 
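To make the DNL figures above concrete, the following is a minimal sketch of how a day-night average sound level can be computed from average daytime and nighttime levels, using the standard L_dn definition (15 daytime hours and 9 nighttime hours, with a 10 dB nighttime penalty); the function name and the example levels are illustrative, not taken from any regulation.

```python
import math

def day_night_level(l_day_db, l_night_db):
    """Day-night average sound level (DNL / L_dn) in dB.

    L_dn = 10*log10((1/24) * (15*10^(L_day/10) + 9*10^((L_night+10)/10)));
    the extra 10 dB term penalizes nighttime noise.
    """
    day_term = 15.0 * 10.0 ** (l_day_db / 10.0)
    night_term = 9.0 * 10.0 ** ((l_night_db + 10.0) / 10.0)
    return 10.0 * math.log10((day_term + night_term) / 24.0)

# Hypothetical example: 68 dB average by day, 60 dB by night
print(round(day_night_level(68.0, 60.0), 1))   # about 68.9 dB
```

With these example inputs the result falls in HUD's 65 to 75 dB "Normally Unacceptable" band, illustrating how a site assessment against the thresholds above might be automated.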
Noise laws and ordinances vary widely among municipalities and indeed do not even exist in some cities. An ordinance may contain a general prohibition against making noise that is a nuisance, or it may set out specific guidelines for the level of noise allowable at certain times of the day and for certain activities. Noise laws classify sound into three categories. The first is ambient noise, which refers to the sound pressure level of the all-encompassing noise associated with a given environment. The second is continuous noise, which may be steady or fluctuating but continues for more than an hour. The third is cyclically varying noise, which may be steady or fluctuating but occurs repetitively at reasonably uniform intervals of time. New York City instituted the first comprehensive noise code in 1985. The Portland Noise Code includes potential fines of up to $5000 per infraction and is the basis for other major U.S. and Canadian city noise ordinances. World Health Organization. European Region. In 1995, the World Health Organization (WHO) European Region released guidelines on regulating community noise. The WHO European Region subsequently released other versions of the guidelines, with the most recent version circulated in 2018. The guidelines provide the most up-to-date evidence from research conducted in Europe and other parts of the world on non-occupational noise exposure and its relationship to physical and mental health outcomes. The guidelines provide recommendations for limits and preventive measures regarding various noise sources (road traffic, railway, aircraft, wind turbine) for day-evening-night average and nighttime average levels. Recommendations for leisure noise in 2018 were conditional and based on the equivalent sound pressure level during an average 24-hour period in a year, without weights for nighttime noise (LAeq, 24 hrs); WHO set the recommended limit to 70 dB(A). See also. References.
[ { "math_id": 0, "text": "L_{dn}= 10 \\cdot \\log_{10} \\frac{1}{24} \\left ( 15 \\cdot 10^\\frac{L_{day}}{10} + 9 \\cdot 10^\\frac{L_{night} + 10}{10} \\right ) " }, { "math_id": 1, "text": "L_{den}= 10 \\cdot \\log_{10} \\frac{1}{24} \\left ( 12 \\cdot 10^\\frac{L_{day}}{10} + 4 \\cdot 10^\\frac{L_{evening} + 5}{10} + 8 \\cdot 10^\\frac{L_{night} + 10}{10} \\right ) " } ]
https://en.wikipedia.org/wiki?curid=66599
6660265
Equilibrium unfolding
Biochemistry process In biochemistry, equilibrium unfolding is the process of unfolding a protein or RNA molecule by gradually changing its environment, such as by changing the temperature or pressure, changing the pH, adding chemical denaturants, or applying force as with an atomic force microscope tip. If equilibrium is maintained at all steps, the process should in theory be reversible; the reverse process is known as equilibrium folding. Equilibrium unfolding can be used to determine the thermodynamic stability of the protein or RNA structure, i.e., the free energy difference between the folded and unfolded states. Theoretical background. In its simplest form, equilibrium unfolding assumes that the molecule may belong to only two thermodynamic states, the "folded state" (typically denoted "N" for "native" state) and the unfolded state (typically denoted "U"). This "all-or-none" model of protein folding was first proposed by Tim Anson in 1945, but is believed to hold only for small, single structural domains of proteins (Jackson, 1998); larger domains and multi-domain proteins often exhibit intermediate states. As usual in statistical mechanics, these states correspond to ensembles of molecular conformations, not just one conformation. The molecule may transition between the native and unfolded states according to a simple kinetic model N ⇌ U with rate constants formula_0 and formula_1 for the folding (U → N) and unfolding (N → U) reactions, respectively. The dimensionless equilibrium constant formula_2 can be used to determine the conformational stability formula_3 by the equation formula_4 where formula_5 is the gas constant and formula_6 is the absolute temperature in kelvin. Thus, formula_3 is positive if the unfolded state is less stable (i.e., disfavored) relative to the native state. The most direct way to measure the conformational stability formula_3 of a molecule with two-state folding is to measure its kinetic rate constants formula_0 and formula_1 under the solution conditions of interest. However, since protein folding is typically completed in milliseconds, such measurements can be difficult to perform, usually requiring expensive stopped-flow or (more recently) continuous-flow mixers to provoke folding with a high time resolution. Dual polarisation interferometry is an emerging technique to directly measure conformational change and formula_3. Chemical denaturation. In the less extensive technique of equilibrium unfolding, the fractions of folded and unfolded molecules (denoted as formula_7 and formula_8, respectively) are measured as the solution conditions are gradually changed from those favoring the native state to those favoring the unfolded state, e.g., by adding a denaturant such as guanidinium hydrochloride or urea. (In equilibrium folding, the reverse process is carried out.) Given that the fractions must sum to one and their ratio must be given by the Boltzmann factor, we have formula_9 formula_10 Protein stabilities are typically found to vary linearly with the denaturant concentration. A number of models have been proposed to explain this observation, prominent among them being the denaturant binding model, the solvent-exchange model (both by John Schellman) and the Linear Extrapolation Model (LEM; by Nick Pace). All of the models assume that only two thermodynamic states are populated/de-populated upon denaturation. They could be extended to interpret more complicated reaction schemes. 
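As a small numerical illustration of the two-state relations above (not a substitute for the formulas themselves), the sketch below converts an equilibrium constant into a conformational stability and a stability into a fraction folded; the gas constant is given in kJ/(mol·K) and the example numbers are arbitrary.

```python
import math

R = 8.314462618e-3   # gas constant in kJ/(mol*K)

def fraction_folded(delta_g_kj_per_mol, temperature_k=298.15):
    """Fraction of molecules in the native state for a two-state system.

    p_N = 1 / (1 + exp(-dG/RT)), where dG is the conformational stability
    (positive when the native state is favored over the unfolded state).
    """
    return 1.0 / (1.0 + math.exp(-delta_g_kj_per_mol / (R * temperature_k)))

def stability_from_keq(k_eq, temperature_k=298.15):
    """Conformational stability dG = -RT ln(Keq), with Keq = [U]/[N]."""
    return -R * temperature_k * math.log(k_eq)

# Example: a protein with dG = 20 kJ/mol at 25 C is almost fully folded
print(fraction_folded(20.0))          # ~0.9997
print(stability_from_keq(1.0))        # 0.0 kJ/mol: the midpoint, where p_N = 0.5
```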
The denaturant binding model assumes that there are specific but independent sites on the protein molecule (folded or unfolded) to which the denaturant binds with an effective (average) binding constant "k". The equilibrium shifts towards the unfolded state at high denaturant concentrations, as the unfolded state has more binding sites for the denaturant relative to the folded state (formula_11). In other words, the increased number of potential sites exposed in the unfolded state is seen as the reason for denaturation transitions. An elementary treatment results in the following functional form: formula_12 where formula_13 is the stability of the protein in water and [D] is the denaturant concentration. Thus the analysis of denaturation data with this model requires 7 parameters: formula_13, formula_11, "k", and the slopes and intercepts of the folded and unfolded state baselines. The solvent exchange model (also called the 'weak binding model' or 'selective solvation') of Schellman invokes the idea of an equilibrium between the water molecules bound to independent sites on the protein and the denaturant molecules in solution. It has the form: formula_14 where formula_15 is the equilibrium constant for the exchange reaction and formula_16 is the mole fraction of the denaturant in solution. This model tries to answer the question of whether the denaturant molecules actually bind to the protein or only "seem" to be bound because denaturants occupy about 20-30% of the total solution volume at the high concentrations used in experiments, i.e. non-specific effects – hence the term 'weak binding'. As in the denaturant-binding model, fitting to this model also requires 7 parameters. One common theme obtained from both these models is that the binding constants (on the molar scale) for urea and guanidinium hydrochloride are small: ~0.2 formula_17 for urea and 0.6 formula_17 for GuHCl. Intuitively, the difference in the number of binding sites between the folded and unfolded states is directly proportional to the difference in the accessible surface area. This forms the basis for the LEM, which assumes a simple linear dependence of stability on the denaturant concentration. The resulting slope of the plot of stability versus the denaturant concentration is called the m-value. In pure mathematical terms, the m-value is the derivative of the change in stabilization free energy upon the addition of denaturant. However, a strong correlation between the accessible surface area (ASA) exposed upon unfolding, i.e. the difference in the ASA between the unfolded and folded state of the studied protein (dASA), and the m-value has been documented by Pace and co-workers. In view of this observation, the m-values are typically interpreted as being proportional to the dASA. There is no physical basis for the LEM and it is purely empirical, though it is widely used in interpreting solvent-denaturation data. It has the general form: formula_18 where the slope formula_19 is called the "m"-value (> 0 for the above definition) and formula_20 (also called "Cm") represents the denaturant concentration at which 50% of the molecules are folded (the "denaturation midpoint" of the transition, where formula_21). In practice, the observed experimental data at different denaturant concentrations are fit to a two-state model with this functional form for formula_22, together with linear baselines for the folded and unfolded states. 
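To show what such a fit might look like in practice, here is a hedged sketch that generates synthetic denaturation data and fits the two-state LEM expression with linear folded and unfolded baselines; the specific parameter values, the synthetic noise and the choice of scipy's curve_fit are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

R, T = 8.314462618e-3, 298.15   # kJ/(mol*K), K

def two_state_lem(d, m, d_half, a_n, b_n, a_u, b_u):
    """Observed signal for a two-state unfolding curve under the LEM.

    dG = m*(d_half - d); the folded and unfolded baselines are linear in the
    denaturant concentration d (intercepts a_n, a_u; slopes b_n, b_u).
    """
    dg = m * (d_half - d)
    p_n = 1.0 / (1.0 + np.exp(-dg / (R * T)))
    return (a_n + b_n * d) * p_n + (a_u + b_u * d) * (1.0 - p_n)

# Hypothetical denaturation data: signal vs. urea concentration (M)
conc = np.linspace(0.0, 8.0, 25)
signal = two_state_lem(conc, 5.0, 4.0, 1.0, -0.01, 0.2, 0.005)
signal += np.random.default_rng(0).normal(0.0, 0.01, conc.size)

popt, _ = curve_fit(two_state_lem, conc, signal,
                    p0=[4.0, 3.5, 1.0, 0.0, 0.2, 0.0])
print(f"m = {popt[0]:.2f} kJ/(mol*M), [D]_1/2 = {popt[1]:.2f} M")
```

The six fitted parameters mirror the description above: the "m"-value and the midpoint, plus a slope and an intercept for each baseline.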
formula_19 and formula_20 are two fitting parameters, along with four others for the linear baselines (a slope and an intercept for each line); in some cases, the slopes are assumed to be zero, giving four fitting parameters in total. The conformational stability formula_22 can be calculated for any denaturant concentration (including the stability at zero denaturant) from the fitted parameters formula_19 and formula_20. When combined with kinetic data on folding, the "m"-value can be used to roughly estimate the amount of buried hydrophobic surface in the folding transition state. Structural probes. Unfortunately, the probabilities formula_7 and formula_8 cannot be measured directly. Instead, we assay the relative population of folded molecules using various structural probes, e.g., absorbance at 287 nm (which reports on the solvent exposure of tryptophan and tyrosine), far-ultraviolet circular dichroism (180-250 nm, which reports on the secondary structure of the protein backbone), dual polarisation interferometry (which reports the molecular size and fold density) and near-ultraviolet fluorescence (which reports on changes in the environment of tryptophan and tyrosine). However, nearly any probe of folded structure will work; since the measurement is taken at equilibrium, there is no need for high time resolution. Thus, measurements can be made of NMR chemical shifts, intrinsic viscosity, solvent exposure (chemical reactivity) of side chains such as cysteine, backbone exposure to proteases, and various hydrodynamic measurements. To convert these observations into the probabilities formula_7 and formula_8, one generally assumes that the observable formula_23 adopts one of two values, formula_24 or formula_25, corresponding to the native or unfolded state, respectively. Hence, the observed value equals the linear sum formula_26 By fitting the observations of formula_23 under various solution conditions to this functional form, one can estimate formula_24 and formula_25, as well as the parameters of formula_22. The fitting variables formula_24 and formula_25 are sometimes allowed to vary linearly with the solution conditions, e.g., temperature or denaturant concentration, when the asymptotes of formula_23 are observed to vary linearly under strongly folding or strongly unfolding conditions. Thermal denaturation. Assuming a two-state denaturation as stated above, one can derive the fundamental thermodynamic parameters, namely formula_27, formula_28 and formula_22, provided one has knowledge of the formula_29 of the system under investigation. The thermodynamic observables of denaturation can be described by the following equations: formula_30 where formula_31, formula_32 and formula_33 indicate the enthalpy, entropy and Gibbs free energy of unfolding at constant pH and pressure. The temperature formula_34 is varied to probe the thermal stability of the system, and formula_35 is the temperature at which half of the molecules in the system are unfolded. The last equation is known as the Gibbs–Helmholtz equation. Determining the heat capacity of proteins. In principle one can calculate all the above thermodynamic observables from a single differential scanning calorimetry thermogram of the system, assuming that formula_36 is independent of the temperature. However, it is difficult to obtain accurate values for formula_36 this way. More accurately, formula_36 can be derived from the variations in formula_37 vs. 
formula_38, which can be obtained from measurements with slight variations in pH or protein concentration. The slope of the linear fit is equal to formula_36. Note that any non-linearity of the data points indicates that formula_29 is probably not independent of the temperature. Alternatively, formula_36 can also be estimated from the calculation of the accessible surface area (ASA) of a protein before and after thermal denaturation, as follows: formula_39 For proteins that have a known 3D structure, formula_40 can be calculated with computer programs such as DeepView (also known as Swiss-PdbViewer). formula_41 can be calculated from tabulated values for each amino acid through the semi-empirical equation: formula_42 where the subscripts polar, non-polar and aromatic indicate the corresponding parts of the 20 naturally occurring amino acids. Finally, for proteins there is a linear correlation between formula_43 and formula_36 through the following equation: formula_44 Assessing two-state unfolding. Furthermore, one can assess whether the folding proceeds according to a two-state unfolding as described above. This can be done with differential scanning calorimetry by comparing the calorimetric enthalpy of denaturation, i.e. the area under the peak, formula_45, to the van 't Hoff enthalpy, described as follows: formula_46 At formula_47, formula_48 can be described as: formula_49 When a two-state unfolding is observed, formula_50. formula_51 is the height of the heat capacity peak. Generalization to protein complexes and multi-domain proteins. Using the above principles, equations that relate a global protein signal, corresponding to the folding states in equilibrium, and the variable value of a denaturing agent, either temperature or a chemical molecule, have been derived for homomeric and heteromeric proteins, from monomers to trimers and potentially tetramers. These equations provide a robust theoretical basis for measuring the stability of complex proteins, and for comparing the stabilities of wild type and mutant proteins. Such equations cannot be derived for pentamers or higher oligomers because of mathematical limitations (Abel–Ruffini theorem). References.
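As a quick numerical check of the Gibbs–Helmholtz expression used in the thermal denaturation section above, the sketch below evaluates ΔG(T) for invented values of ΔH(T_d), T_d and ΔC_p; it is only an illustration of the formula, not an analysis protocol.

```python
import math

def delta_g_thermal(t_k, t_d_k, dh_at_td_kj, dcp_kj_per_k):
    """Gibbs free energy of unfolding at temperature t_k (Gibbs-Helmholtz form).

    dG(T) = dH(Td)*(1 - T/Td) - dCp*[Td - T + T*ln(T/Td)]
    """
    return (dh_at_td_kj * (1.0 - t_k / t_d_k)
            - dcp_kj_per_k * (t_d_k - t_k + t_k * math.log(t_k / t_d_k)))

# Invented example values: dH(Td) = 300 kJ/mol, Td = 330 K, dCp = 6 kJ/(mol*K)
for t in (280.0, 300.0, 320.0, 330.0, 340.0):
    print(t, round(delta_g_thermal(t, 330.0, 300.0, 6.0), 1))
# dG vanishes at T = Td and becomes negative above it (unfolded state favored);
# with a positive dCp the curve also turns down again at sufficiently low
# temperatures, which is the origin of cold denaturation.
```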
[ { "math_id": 0, "text": "k_{f}" }, { "math_id": 1, "text": "k_{u}" }, { "math_id": 2, "text": "K_{eq} \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{k_{u}}{k_{f}} = \\frac{\\left[\\ce U \\right]_{eq}}{\\left[\\ce N \\right]_{eq}}" }, { "math_id": 3, "text": "\\Delta G^o" }, { "math_id": 4, "text": "\n\\Delta G^ o = -RT \\ln K_{eq}\n" }, { "math_id": 5, "text": "R" }, { "math_id": 6, "text": "T" }, { "math_id": 7, "text": "p_{N}" }, { "math_id": 8, "text": "p_{U}" }, { "math_id": 9, "text": "\np_{N} = \\frac{1}{1 + e^{-\\Delta G/RT}}\n" }, { "math_id": 10, "text": "\np_{U} = 1 - p_{N} = \\frac{e^{-\\Delta G/RT}}{1 + e^{-\\Delta G/RT}} = \\frac{1}{1 + e^{\\Delta G/RT}}\n" }, { "math_id": 11, "text": "\\Delta n" }, { "math_id": 12, "text": "\n\\Delta G = \\Delta G_{w} - RT \\Delta n \\ln \\left(1 + k [D] \\right)\n" }, { "math_id": 13, "text": "\\Delta G_{w}" }, { "math_id": 14, "text": "\n\\Delta G = \\Delta G_{w} - RT \\Delta n \\ln \\left(1 + (K-1) X_{D} \\right)\n" }, { "math_id": 15, "text": "K" }, { "math_id": 16, "text": "X_{d}" }, { "math_id": 17, "text": "M^{-1}" }, { "math_id": 18, "text": "\n\\Delta G = m \\left( [D]_{1/2} - [D] \\right)\n" }, { "math_id": 19, "text": "m" }, { "math_id": 20, "text": "\\left[ D \\right]_{1/2}" }, { "math_id": 21, "text": "p_{N} = p_{U} = 1/2" }, { "math_id": 22, "text": "\\Delta G" }, { "math_id": 23, "text": "A" }, { "math_id": 24, "text": "A_{N}" }, { "math_id": 25, "text": "A_{U}" }, { "math_id": 26, "text": "\nA = A_{N} p_{N} + A_{U} p_{U}\n" }, { "math_id": 27, "text": "\\Delta H" }, { "math_id": 28, "text": "\\Delta S" }, { "math_id": 29, "text": "\\Delta C_p" }, { "math_id": 30, "text": "\\begin{align} \n\\Delta H(T)&=\\Delta H(T_d)+ \\int_{T_d}^T \\Delta C_p dT\n\\\\\n&=\\Delta H(T_d)+ \\Delta C_p[T-T_d]\n\\\\\n\\Delta S(T)&=\\frac{\\Delta H(T_d)}{T_d}+ \\int_{T_d}^T \\Delta C_p d \\ln T\n\\\\\n&=\\frac{\\Delta H(T_d)}{T_d}+ \\Delta C_p \\ln \\frac{T}{T_d}\n\\\\\n\\Delta G(T)&=\\Delta H -T \\Delta S\n\\\\\n&=\\Delta H(T_d) \\frac{T_d-T}{T_d}+ \\int_{T_d}^T \\Delta C_p dT - T\\int_{T_d}^T \\Delta C_p d \\ln T\n\\\\\n&=\\Delta H(T_d)\\left(1-\\frac{T}{T_d}\\right) - \\Delta C_p\\left[T_d -T +T \\ln \\left(\\frac{T}{T_d}\\right)\\right]\n\n\\end{align}" }, { "math_id": 31, "text": "\\ \\Delta H" }, { "math_id": 32, "text": "\\ \\Delta S" }, { "math_id": 33, "text": "\\ \\Delta G" }, { "math_id": 34, "text": "\\ T" }, { "math_id": 35, "text": "\\ T_d" }, { "math_id": 36, "text": "\\ce{\\Delta C_p}" }, { "math_id": 37, "text": "\\ce{\\Delta H(T_d)}" }, { "math_id": 38, "text": "\\ce{T_d}" }, { "math_id": 39, "text": "\\ce{\\Delta ASA} = \\ce{ASA_{unfolded}} - \\ce{ASA_{native}} " }, { "math_id": 40, "text": "\\ce{ASA_{native}}" }, { "math_id": 41, "text": "\\ce{ASA_{unfolded}}" }, { "math_id": 42, "text": "\\ce{ASA_{unfolded}} = \\left( a_\\ce{polar} \\times \\ce{ASA_{polar}}\\right) + \\left( a_\\ce{aromatic} \\times \\ce{ASA_{aromatic}} \\right) + \\left( a_\\ce{nonpolar} \\times \\ce{ASA_{nonpolar}}\\right)" }, { "math_id": 43, "text": "\\ce{\\Delta ASA}" }, { "math_id": 44, "text": "\\ce{\\Delta C_p} = 0.61 \\times \\ce{\\Delta ASA}" }, { "math_id": 45, "text": "A_\\text{peak}" }, { "math_id": 46, "text": "\\Delta H_{vH}(T)= -R\\frac{d\\ln K}{dT^{-1}}" }, { "math_id": 47, "text": "T=T_d" }, { "math_id": 48, "text": "\\Delta H_{vH}(T_d)" }, { "math_id": 49, "text": "\\Delta H_{vH}(T_d)= \\frac{RT_d^2 \\Delta C_p^{\\max}}{A_\\text{peak}}" }, { "math_id": 50, "text": "A_\\text{peak}=\\Delta H_{vH}(T_d)" }, { "math_id": 51, "text": "\\Delta 
C_p^{\\max}" } ]
https://en.wikipedia.org/wiki?curid=6660265
666107
Quantum operation
Class of transformations that quantum systems and processes can undergo In quantum mechanics, a quantum operation (also known as a quantum dynamical map or quantum process) is a mathematical formalism used to describe a broad class of transformations that a quantum mechanical system can undergo. This was first discussed as a general stochastic transformation for a density matrix by George Sudarshan. The quantum operation formalism describes not only unitary time evolution or symmetry transformations of isolated systems, but also the effects of measurement and transient interactions with an environment. In the context of quantum computation, a quantum operation is called a quantum channel. Note that some authors use the term "quantum operation" to refer specifically to completely positive (CP) and non-trace-increasing maps on the space of density matrices, and the term "quantum channel" to refer to the subset of those that are strictly trace-preserving. Quantum operations are formulated in terms of the density operator description of a quantum mechanical system. Rigorously, a quantum operation is a linear, completely positive map from the set of density operators into itself. In the context of quantum information, one often imposes the further restriction that a quantum operation formula_0 must be "physical", that is, satisfy formula_1 for any state formula_2. Some quantum processes cannot be captured within the quantum operation formalism; in principle, the density matrix of a quantum system can undergo completely arbitrary time evolution. Quantum operations are generalized by quantum instruments, which capture the classical information obtained during measurements, in addition to the quantum information. Background. The Schrödinger picture provides a satisfactory account of the time evolution of state for a quantum mechanical system under certain assumptions. These assumptions include that the system is non-relativistic and that it is isolated. The Schrödinger picture for time evolution has several mathematically equivalent formulations. One such formulation expresses the time rate of change of the state via the Schrödinger equation. A more suitable formulation for this exposition is expressed as follows: The effect of the passage of "t" units of time on the state of an isolated system S is given by a unitary operator "U""t" on the Hilbert space "H" associated to S. This means that if the system is in a state corresponding to "v" ∈ "H" at an instant of time "s", then the state after "t" units of time will be "U""t" "v". For relativistic systems, there is no universal time parameter, but we can still formulate the effect of certain reversible transformations on the quantum mechanical system. For instance, state transformations relating observers in different frames of reference are given by unitary transformations. In any case, these state transformations carry pure states into pure states; this is often formulated by saying that in this idealized framework, there is no decoherence. For interacting (or open) systems, such as those undergoing measurement, the situation is entirely different. To begin with, the state changes experienced by such systems cannot be accounted for exclusively by a transformation on the set of pure states (that is, those associated to vectors of norm 1 in "H"). After such an interaction, a system in a pure state φ may no longer be in the pure state φ. 
In general it will be in a statistical mixture of a sequence of pure states φ1, ..., φ"k" with respective probabilities λ1, ..., λ"k". The transition from a pure state to a mixed state is known as decoherence. Numerous mathematical formalisms have been established to handle the case of an interacting system. The quantum operation formalism emerged around 1983 from work of Karl Kraus, who relied on the earlier mathematical work of Man-Duen Choi. It has the advantage that it expresses operations such as measurement as a mapping from density states to density states. In particular, the effect of quantum operations stays within the set of density states. Definition. Recall that a density operator is a non-negative operator on a Hilbert space with unit trace. Mathematically, a quantum operation is a linear map Φ between spaces of trace class operators on Hilbert spaces "H" and "G" such that two conditions hold: first, if "S" is a density operator, then Tr(Φ("S")) ≤ 1; second, Φ is completely positive, that is, for any natural number "n" and any non-negative formula_7 matrix of trace class operators formula_3 the matrix formula_4 is also non-negative; equivalently, formula_5 is positive, where formula_6 denotes the identity map on the formula_7 matrices. Note that, by the first condition, quantum operations may not preserve the normalization property of statistical ensembles. In probabilistic terms, quantum operations may be sub-Markovian. In order that a quantum operation preserve the set of density matrices, we need the additional assumption that it is trace-preserving. In the context of quantum information, the quantum operations defined here, i.e. completely positive maps that do not increase the trace, are also called quantum channels or "stochastic maps". The formulation here is confined to channels between quantum states; however, it can be extended to include classical states as well, therefore allowing quantum and classical information to be handled simultaneously. Kraus operators. Kraus' theorem (named after Karl Kraus) characterizes completely positive maps, which model quantum operations between quantum states. Informally, the theorem ensures that the action of any such quantum operation formula_8 on a state formula_2 can always be written as formula_9, for some set of operators formula_10 satisfying formula_11, where formula_12 is the identity operator. Statement of the theorem. Theorem. Let formula_13 and formula_14 be Hilbert spaces of dimension formula_15 and formula_16 respectively, and formula_8 be a quantum operation between formula_13 and formula_14. Then, there are matrices formula_17 mapping formula_13 to formula_14 such that, for any state formula_18, formula_19 Conversely, any map formula_20 of this form is a quantum operation provided formula_11. The matrices formula_21 are called "Kraus operators". (Sometimes they are known as "noise operators" or "error operators", especially in the context of quantum information processing, where the quantum operation represents the noisy, error-producing effects of the environment.) The Stinespring factorization theorem extends the above result to arbitrary separable Hilbert spaces "H" and "G". There, "S" is replaced by a trace class operator and formula_21 by a sequence of bounded operators. Unitary equivalence. Kraus matrices are not uniquely determined by the quantum operation formula_8 in general. For example, different Cholesky factorizations of the Choi matrix might give different sets of Kraus operators. The following theorem states that all systems of Kraus matrices representing the same quantum operation are related by a unitary transformation: Theorem. Let formula_8 be a (not necessarily trace-preserving) quantum operation on a finite-dimensional Hilbert space "H" with two representing sequences of Kraus matrices formula_22 and formula_23. 
Then there is a unitary operator matrix formula_24 such that formula_25 In the infinite-dimensional case, this generalizes to a relationship between two minimal Stinespring representations. It is a consequence of Stinespring's theorem that all quantum operations can be implemented by unitary evolution after coupling a suitable ancilla to the original system. Remarks. These results can also be derived from Choi's theorem on completely positive maps, which characterizes a completely positive finite-dimensional map by a unique Hermitian-positive density operator (Choi matrix) with respect to the trace. Among all possible Kraus representations of a given channel, there exists a canonical form distinguished by the orthogonality relation of Kraus operators, formula_26. Such a canonical set of orthogonal Kraus operators can be obtained by diagonalising the corresponding Choi matrix and reshaping its eigenvectors into square matrices. There also exists an infinite-dimensional algebraic generalization of Choi's theorem, known as "Belavkin's Radon-Nikodym theorem for completely positive maps", which defines a density operator as a "Radon–Nikodym derivative" of a quantum channel with respect to a dominating completely positive map (reference channel). It is used for defining the relative fidelities and mutual informations of quantum channels. Dynamics. For a non-relativistic quantum mechanical system, its time evolution is described by a one-parameter group of automorphisms {α"t"}"t" of "Q". This can be narrowed to unitary transformations: under certain weak technical conditions (see the article on quantum logic and the Varadarajan reference), there is a strongly continuous one-parameter group {"U""t"}"t" of unitary transformations of the underlying Hilbert space such that the elements "E" of "Q" evolve according to the formula formula_27 The system time evolution can also be regarded dually as time evolution of the statistical state space. The evolution of the statistical state is given by a family of operators {β"t"}"t" such that formula_28 Clearly, for each value of "t", "S" → "U"*"t" "S" "U""t" is a quantum operation. Moreover, this operation is "reversible". This can be easily generalized: If "G" is a connected Lie group of symmetries of "Q" satisfying the same weak continuity conditions, then the action of any element "g" of "G" is given by a unitary operator "U": formula_29 This mapping "g" → "U""g" is known as a projective representation of "G". The mappings "S" → "U"*"g" "S" "U""g" are reversible quantum operations. Quantum measurement. Quantum operations can be used to describe the process of quantum measurement. The presentation below describes measurement in terms of self-adjoint projections on a separable complex Hilbert space "H", that is, in terms of a PVM (projection-valued measure). In the general case, measurements can be made using non-orthogonal operators, via the notion of a POVM. The non-orthogonal case is interesting, as it can improve the overall efficiency of the quantum instrument. Binary measurements. Quantum systems may be measured by applying a series of "yes–no questions". This set of questions can be understood to be chosen from an orthocomplemented lattice "Q" of propositions in quantum logic. The lattice is equivalent to the space of self-adjoint projections on a separable complex Hilbert space "H". Consider a system in some state "S", with the goal of determining whether it has some property "E", where "E" is an element of the lattice of quantum "yes-no" questions. 
Measurement, in this context, means submitting the system to some procedure to determine whether the state satisfies the property. The reference to the system state, in this discussion, can be given an operational meaning by considering a statistical ensemble of systems. Each measurement yields some definite value 0 or 1; moreover, application of the measurement process to the ensemble results in a predictable change of the statistical state. This transformation of the statistical state is given by the quantum operation formula_30 Here "E" can be understood to be a projection operator. General case. In the general case, measurements are made on observables taking on more than two values. When an observable "A" has a pure point spectrum, it can be written in terms of an orthonormal basis of eigenvectors. That is, "A" has a spectral decomposition formula_31 where E"A"(λ) is a family of pairwise orthogonal projections, each onto the respective eigenspace of "A" associated with the measurement value λ. Measurement of the observable "A" yields an eigenvalue of "A". Repeated measurements, made on a statistical ensemble "S" of systems, result in a probability distribution over the eigenvalue spectrum of "A". It is a discrete probability distribution, and is given by formula_32 Measurement of the statistical state "S" is given by the map formula_33 That is, immediately after measurement, the statistical state is a classical distribution over the eigenspaces associated with the possible values λ of the observable: "S" is a mixed state. Non-completely positive maps. Shaji and Sudarshan argued in a Physical Review Letters paper that, upon close examination, complete positivity is not a requirement for a good representation of open quantum evolution. Their calculations show that, when starting with some fixed initial correlations between the observed system and the environment, the map restricted to the system itself is not necessarily even positive. However, it is not positive only for those states that do not satisfy the assumption about the form of initial correlations. Thus, they show that, to get a full understanding of quantum evolution, non-completely positive maps should be considered as well.
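As a concrete, hedged illustration of the Kraus form Φ(ρ) = Σ_k B_k ρ B_k^* and of the yes-no measurement map S → E S E + (I − E) S (I − E) discussed above, the sketch below applies a standard amplitude-damping channel (a textbook example chosen purely for illustration; the damping parameter and the input state are arbitrary) and a rank-one projector to a single-qubit density matrix.

```python
import numpy as np

# Amplitude-damping channel: a standard example of a Kraus map
gamma = 0.3
B0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]], dtype=complex)
B1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]], dtype=complex)
kraus = [B0, B1]

# Completeness check: sum_k B_k^dagger B_k = I (trace-preserving case)
completeness = sum(B.conj().T @ B for B in kraus)
assert np.allclose(completeness, np.eye(2))

def apply_channel(rho, operators):
    """Phi(rho) = sum_k B_k rho B_k^dagger."""
    return sum(B @ rho @ B.conj().T for B in operators)

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
rho_out = apply_channel(rho, kraus)
print(np.trace(rho_out).real)         # 1.0: the channel is trace-preserving

# Projective (yes-no) measurement update S -> E S E + (I - E) S (I - E)
E = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)     # projector onto |0>
I2 = np.eye(2, dtype=complex)
post = E @ rho @ E + (I2 - E) @ rho @ (I2 - E)
print(post.real)                       # off-diagonal coherences are removed
```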
[ { "math_id": 0, "text": "\\mathcal E" }, { "math_id": 1, "text": "0 \\le \\operatorname{Tr}[\\mathcal E(\\rho)] \\le 1" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": " \\begin{bmatrix} S_{11} & \\cdots & S_{1 n}\\\\ \\vdots & \\ddots & \\vdots \\\\ S_{n 1} & \\cdots & S_{n n}\\end{bmatrix} " }, { "math_id": 4, "text": " \\begin{bmatrix} \\Phi(S_{11}) & \\cdots & \\Phi(S_{1 n})\\\\ \\vdots & \\ddots & \\vdots \\\\ \\Phi(S_{n 1}) & \\cdots & \\Phi(S_{n n})\\end{bmatrix} " }, { "math_id": 5, "text": "\\Phi \\otimes I_n" }, { "math_id": 6, "text": "I_n" }, { "math_id": 7, "text": "n \\times n" }, { "math_id": 8, "text": "\\Phi" }, { "math_id": 9, "text": "\\Phi(\\rho) = \\sum_k B_k\\rho B_k^*" }, { "math_id": 10, "text": "\\{B_k\\}_k" }, { "math_id": 11, "text": "\\sum_k B_k^* B_k \\leq \\mathbf{1}" }, { "math_id": 12, "text": "\\mathbf{1}" }, { "math_id": 13, "text": "\\mathcal H" }, { "math_id": 14, "text": "\\mathcal G" }, { "math_id": 15, "text": "n" }, { "math_id": 16, "text": "m" }, { "math_id": 17, "text": "\\{ B_i \\}_{1 \\leq i \\leq nm}" }, { "math_id": 18, "text": " \\rho " }, { "math_id": 19, "text": " \\Phi(\\rho) = \\sum_i B_i \\rho B_i^*." }, { "math_id": 20, "text": " \\Phi " }, { "math_id": 21, "text": "\\{ B_i \\}" }, { "math_id": 22, "text": "\\{ B_i \\}_{i\\leq N}" }, { "math_id": 23, "text": "\\{ C_i \\}_{i\\leq N}" }, { "math_id": 24, "text": "(u_{ij})_{ij}" }, { "math_id": 25, "text": " C_i = \\sum_j u_{ij} B_j. " }, { "math_id": 26, "text": "\\operatorname{Tr} A^\\dagger_i A_j \\sim \\delta_{ij} " }, { "math_id": 27, "text": " \\alpha_t(E) = U^*_t E U_t. " }, { "math_id": 28, "text": " \\operatorname{Tr}(\\beta_t(S) E) = \\operatorname{Tr}(S \\alpha_{-t}(E)) = \\operatorname{Tr}(S U _t E U^*_t ) = \\operatorname{Tr}( U^*_t S U _t E )." }, { "math_id": 29, "text": " g \\cdot E = U_g E U_g^*. " }, { "math_id": 30, "text": " S \\mapsto E S E + (I - E) S (I - E). " }, { "math_id": 31, "text": " A = \\sum_\\lambda \\lambda \\operatorname{E}_A(\\lambda)" }, { "math_id": 32, "text": " \\operatorname{Pr}(\\lambda) = \\operatorname{Tr}(S \\operatorname{E}_A(\\lambda))." }, { "math_id": 33, "text": " S \\mapsto \\sum_\\lambda \\operatorname{E}_A(\\lambda) S \\operatorname{E}_A(\\lambda)\\ ." } ]
https://en.wikipedia.org/wiki?curid=666107
66615202
Robert F. Tichy
Austrian mathematician (born 1957) Robert Franz Tichy (born 30 September 1957 in Vienna) is an Austrian mathematician and professor at Graz University of Technology. He studied mathematics at the University of Vienna and finished in 1979 with a Ph.D. thesis on uniform distribution under the supervision of Edmund Hlawka. He received his habilitation at TU Wien in 1983. Currently he is a professor at the Institute for Analysis and Number Theory at TU Graz. Previous positions include head of the Department of Mathematics and Dean of the Faculty of Mathematics, Physics and Geodesy at TU Graz, President of the Austrian Mathematical Society, and Member of the Board (Kuratorium) of the FWF, the Austrian Science Foundation. His research deals with number theory, analysis and actuarial mathematics, in particular with number-theoretic algorithms, digital expansions, diophantine problems, combinatorial and asymptotic analysis, quasi-Monte Carlo methods and actuarial risk models. Among his contributions are results in discrepancy theory, a criterion (joint with Yuri Bilu) for the finiteness of the solution set of a separable diophantine equation, as well as investigations of graph-theoretic indices and of combinatorial algorithms with analytic methods. He also investigated (with Istvan Berkes and Walter Philipp) pseudorandom properties of lacunary sequences. In the theory of equidistribution he solved (with Harald Niederreiter) an open problem from Donald Knuth's book The Art of Computer Programming, by showing that for any sequence formula_0 of distinct natural numbers the sequence formula_1 is completely uniformly distributed for almost all real numbers formula_2; as a corollary, for almost all real numbers formula_3 the sequence formula_4 is random in the sense of Knuth's definition R4. Tichy is interested in the history of alpinism and is also an avid climber. In 1985 he received the Prize of the Austrian Mathematical Society. Since 2004 he has been a Corresponding Member of the Austrian Academy of Sciences. In 2017 he received an honorary doctorate from the University of Debrecen. He taught as a visiting professor at the University of Illinois at Urbana–Champaign and the Tata Institute of Fundamental Research. In 2017 he was a guest professor at Paris 7; in the winter semester 2020/21 he held the Morlet chair at the Centre International de Rencontres Mathématiques in Luminy. References.
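The equidistribution result mentioned above can be illustrated numerically (this is only a rough empirical check, not part of the cited work); the sketch below looks at the fractional parts of α, α², α³, ... for one arbitrarily chosen α > 1, using high-precision arithmetic because ordinary floating point would lose the fractional part as the powers grow.

```python
from decimal import Decimal, getcontext

# alpha = 1.7 is an arbitrary choice; the theorem is an "almost all alpha > 1"
# statement, not a claim about any particular fixed alpha.
getcontext().prec = 400
alpha = Decimal("1.7")

fracs = []
x = Decimal(1)
for n in range(1, 301):
    x *= alpha                       # x = alpha**n, kept to 400 significant digits
    fracs.append(float(x - int(x)))  # fractional part of alpha**n

bins = [0] * 10                      # histogram over [0, 1) in ten equal bins
for f in fracs:
    bins[min(int(f * 10), 9)] += 1
print(bins)                          # roughly 30 counts per bin if near-uniform
```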
[ { "math_id": 0, "text": "(n_0,n_1,\\dots)" }, { "math_id": 1, "text": "(\\alpha^{n_0},\\alpha^{n_1},\\alpha^{n_2},\\dots)" }, { "math_id": 2, "text": "\\alpha > 1" }, { "math_id": 3, "text": "\\alpha > 1 " }, { "math_id": 4, "text": "(\\alpha,\\alpha^2,\\alpha^3,\\dots)" } ]
https://en.wikipedia.org/wiki?curid=66615202
66617145
Geometrical Product Specification and Verification
Mechanical engineering tolerancing (ISO) Geometrical Product Specification and Verification (GPS&amp;V) is a set of ISO standards developed by ISO Technical Committee 213. The aim of those standards is to develop a common language to specify the macro-geometry (size, form, orientation, location) and micro-geometry (surface texture) of products or parts of products so that the language can be used consistently worldwide. Background. GPS&amp;V standards cover: Other ISO technical committees are strongly related to ISO TC 213. ISO Technical Committee 10 is in charge of the standardization and coordination of technical product documentation (TPD). The GPS&amp;V standards describe the rules to define geometrical specifications, which are further included in the TPD. The TPD is defined as the "means of conveying all or part of a design definition or specification of a product". The TPD can be either conventional documentation made of two-dimensional engineering drawings or documentation based on computer-aided design (CAD) models with 3D annotations. The ISO rules to write the documentation are mainly described in the ISO 128 and ISO 129 series, while the rules for 3D annotations are described in ISO 16792. ISO Technical Committee 184 develops standards that are closely related to GPS&amp;V standards. In particular, ISO TC 184/SC4 develops the ISO 10303 standard, known as the STEP standard. GPS&amp;V is not to be confused with the use of ASME Y14.5, which is often referred to as Geometric Dimensioning and Tolerancing (GD&amp;T). History and concepts. History. ISO TC 213 was created in 1996 by merging three previous committees: Operation. GPS&amp;V standards are built on several basic operations defined in ISO 17450-1:2011: Those operations are supposed to completely describe the process of tolerancing from the point of view of the design and from the point of view of the measurement. They are presented in the ISO 17450 series of standards. Some of them are further described in other standards, e.g. the ISO 16610 series for filtration. Those concepts are based on academic works. The key idea is to start from the real part with its imperfect geometry (skin model) and then to apply a sequence of well-defined operations to completely describe the tolerancing process. The operations are used in the GPS&amp;V standards to define the meaning of dimensional, geometrical or surface texture specifications. Skin model. The skin model is a representation of the surface of the real part. The model in CAD systems describes the nominal geometry of the parts of a product. The nominal geometry is perfect. However, geometrical tolerancing has to take into account the geometrical deviations that arise inevitably from the manufacturing process in order to limit them to what is considered acceptable by the designer for the part and the complete product to be functional. This is why a representation of the real part with geometrical deviations (skin model) is introduced as the starting point in the tolerancing process. Partition. The skin model is a representation of a whole real part. However, the designer very often, if not always, needs to identify some specific geometrical features of the part to apply well-suited specifications. The process of identifying geometrical features from the skin model or the nominal model is called a partition. The standardization of this operation is a work in progress in ISO TC 213 (ISO 18183 series). &lt;br&gt; Several methods can be used to obtain a partition from a skin model as described in Extraction.
The skin model and the partitioned geometrical features are usually considered as continuous; however, when measuring the part it is often necessary to consider only points extracted from a line or a surface. The process of, e.g., selecting the number of points, their distribution over the real geometrical feature and the way to obtain them is part of the extraction operation. This operation is described in ISO 14406:2011. Filtration. Filtration is an operation that is useful for selecting features of interest from other features in the data. This operation is heavily used for surface texture specifications; however, it is a general operation that can be applied to define other specifications. This operation is well known in signal processing, where it can be used for example to isolate specific wavelengths in a raw signal. Filtration is standardized in the ISO 16610 series, where many different filters are described. Association. Association is useful when we need to fit an ideal (perfect) geometrical feature to a real geometrical feature, e.g. to find a perfect cylinder that approximates a cloud of points that have been extracted from a real (imperfect) cylindrical geometrical feature. This can be viewed as a mathematical optimization process. A criterion for optimization has to be defined. This criterion can be, for example, the minimisation of a quantity such as the sum of the squared distances from the points to the ideal surface. Constraints can also be added, such as a condition for the ideal geometrical feature to lie outside the material of the part or to have a specific orientation or location with respect to another geometrical feature. Different criteria and constraints are used as defaults throughout the GPS&amp;V standards for different purposes, such as geometrical specifications on geometrical features or datum establishment. However, standardization of association as a whole is a work in progress in ISO TC 213. Collection. Collection is a grouping operation. The designer can define a group of geometrical features that contribute to the same function. It could be used to group two or more holes because they constitute one datum used for the assembly of a part. It could also be used to group nominally planar geometrical features that are constrained to lie inside the same flatness tolerance zone. This operation is described throughout several GPS&amp;V standards. It is heavily used in ISO 5458:2018 for grouping planar geometrical features and cylindrical geometrical features (holes or pins). The collection operation can be viewed as applying constraints of orientation and/or constraints of location among the geometrical features of the considered group. Construction. Construction is described as an operation used to build ideal geometrical features with perfect geometry from other geometrical features. An example, given in ISO 17450-1:2011, is the construction of a straight line resulting from the intersection of two perfect planes. No specific standard addresses this operation; however, it is used and defined throughout many standards in the GPS&amp;V system. Reconstruction. Reconstruction is an operation allowing a continuous geometrical feature to be built from a discrete geometrical feature. It is useful, for example, when there is a need to obtain a point between two extracted points, as can be the case when identifying a dimension between two opposite points in a particular section in the process of obtaining the linear size of a cylinder.
The reconstruction operation is not yet standardized in the GPS&amp;V system; however, the operation has been described in academic papers. Reduction. Reduction is an operation that allows a new geometrical feature to be computed from an existing one. The new geometrical feature is a derived geometrical feature. Dimensional specification. Dimensional tolerances are dealt with in the ISO 14405 series. The linear size is indicated above a dimension line ending with arrows, together with numerical values for the nominal size and the tolerance. The linear size of a geometrical feature of size is defined, by default, as the distances between opposite points taken from the surface of the real part. The process to build both the sections and the directions needed to identify the opposite points is defined in the ISO 14405-1 standard. This process includes the definition of an associated perfect geometrical feature of the same type as the nominal geometrical feature. By default, a least-squares criterion is used. This process is defined only for geometrical features where opposite points exist. ISO 14405-2 illustrates cases where dimensional specifications are often misused because opposite points don't exist. In these cases, the use of linear dimensions is considered ambiguous (see example). The recommendation is to replace dimensional specifications with geometrical specifications to properly specify the location of a geometrical feature with respect to another geometrical feature, the datum feature (see examples). Angular sizes are useful for cones, wedges or opposite straight lines. They are defined in ISO 14405-3. The definition implies associating perfect geometrical features, e.g. planes for a wedge, and measuring the angle between lines of those perfect geometrical features in different sections. The angular sizes are indicated with an arrow and numerical values for the nominal size and the tolerance. It is to be noted that angular size specification is different from angularity specification: angularity specification controls the shape of the toleranced feature, which is not the case for angular size specification. Size of a cylinder. We consider here the specification of a size of a cylinder to illustrate the definition of a size according to ISO 14405-1. The nominal model is assumed to be a perfect cylinder with a dimensional specification of the diameter, without any modifiers changing the default definition of size. According to ISO 14405-1:2016 annex D, the process to establish a dimension between two opposite points starting from the real surface of the manufactured part, which is nominally a cylinder, is as follows: See example hereafter for an illustration. Dimension with envelope requirement Ⓔ. The envelope requirement is specified by adding the symbol Ⓔ after the tolerance value of a dimensional specification. The symbol Ⓔ modifies the definition of the dimensional specification in the following way (ISO 14405-1 3.8): The maximum inscribed dimension for a nominally cylindrical hole is defined as the maximum diameter of a perfect cylinder associated to the real surface, with a constraint applied to the associated cylinder to stay outside the material of the part. The minimum circumscribed dimension for a nominally cylindrical pin is defined as the minimum diameter of a perfect cylinder associated to the real surface, with a constraint applied to the associated cylinder to stay outside the material of the part. See example hereafter for an illustration.
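The association criteria just mentioned (a least-squares association for the default two-point size procedure, a minimum circumscribed association for a pin under the envelope requirement) can be illustrated numerically in a single cross-section. The following Python sketch is only a schematic illustration under stated assumptions, not the ISO 14405-1 procedure: the simulated measurement points, the algebraic (Kåsa) least-squares circle fit and the direct search for the minimum circumscribed circle are choices made for the example, and NumPy and SciPy are assumed to be available.
```python
# Schematic 2D illustration (one cross-section of a nominally cylindrical pin) of three ways
# to evaluate a diameter: two-point sizes, a least-squares associated circle and the minimum
# circumscribed circle.  This is NOT the ISO 14405-1 procedure; points, fit method and solver
# are assumptions made for the example.  Requires NumPy and SciPy.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
r_nom = 10.0                                              # illustrative nominal radius
r_meas = r_nom + 0.02 * np.sin(3 * theta) + rng.normal(0.0, 0.005, theta.size)  # form error + noise
pts = np.column_stack((r_meas * np.cos(theta), r_meas * np.sin(theta)))

def least_squares_circle(pts):
    """Algebraic (Kasa) least-squares circle: fit x^2 + y^2 + D x + E y + F = 0."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack((x, y, np.ones_like(x)))
    D, E, F = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    return (cx, cy), np.sqrt(cx**2 + cy**2 - F)

def minimum_circumscribed_circle(pts):
    """Smallest circle containing all points (the 'outside the material' constraint for a pin)."""
    worst = lambda c: np.max(np.hypot(pts[:, 0] - c[0], pts[:, 1] - c[1]))
    centre = minimize(worst, x0=pts.mean(axis=0), method="Nelder-Mead").x
    return centre, worst(centre)

half = theta.size // 2
two_point = r_meas[:half] + r_meas[half:]   # roughly opposite points through the nominal centre
(_, r_ls) = least_squares_circle(pts)
(_, r_mc) = minimum_circumscribed_circle(pts)
print(f"two-point diameters  : {two_point.min():.4f} .. {two_point.max():.4f}")
print(f"least-squares diam.  : {2 * r_ls:.4f}")
print(f"min. circumscribed   : {2 * r_mc:.4f}")
```
For a pin, the minimum circumscribed diameter is never smaller than any two-point diameter, which is one way to see why the envelope requirement adds a worst-case boundary on top of the two-point sizes.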
Use of the envelope requirement. The use of the envelope symbol Ⓔ is closely related to the very common function of fitting parts together. A dimensional specification without envelope on the two parts to be fitted is not sufficient to ensure the fitting, because the shape deviation of the parts is not limited by the dimensional specifications. The fitting of a cylindrical pin inside a cylindrical hole, for example, requires limiting not only the sizes of both geometrical features but also their straightness deviations, as it is the combination of the size specification and the geometrical specification (straightness) that will allow the fitting of the two parts. The cylindrical pin and the cylindrical hole will then fit even in the worst conditions, without over-constraining the parts with specific form specifications. It is to be noted that the use of a dimensional size with envelope constrains neither the orientation nor the location of the parts. The use of geometrical specifications together with the maximum material requirement (symbol Ⓜ) makes it possible to ensure the fitting of parts when additional constraints on orientation or location are required. ISO 2692:2021 describes the use of the maximum material modifier. Form, orientation, location and run-out specifications. GPS&amp;V standards dealing with geometrical specifications are listed below: The word geometry, as used in this paragraph, is to be understood as macro-geometry, as opposed to surface texture specifications, which are dealt with in other standards. The main source for geometrical specifications in GPS&amp;V standards is ISO 1101. ISO 5459 can be considered a companion standard to ISO 1101, as it defines datums, which are heavily used in ISO 1101. ISO 5458 and ISO 1660 focus only on subsets of ISO 1101. However, those standards are very useful for the user of GPS&amp;V systems, as they cover very common aspects of geometrical tolerancing, namely groups of cylinders or planes and profile specifications (lines and surfaces). &lt;br&gt; A geometrical specification allows the following three objects to be defined: The steps to read a geometrical specification can be summarised as follows: Toleranced feature. Toleranced features are defined in ISO 1101. The toleranced feature is a real geometrical feature with imperfect geometry identified either directly from the skin model (integral feature) or by a process starting from the skin model (derived feature). Whether the toleranced feature is an integral feature or a derived feature depends upon the precise writing of the corresponding specification: if the arrow of the leader line of the specification is in the prolongation of a dimension line, the toleranced feature is a derived feature; otherwise it is an integral feature. A Ⓐ modifier can also be used in the specification to designate a derived feature. The nominal toleranced feature is a geometrical feature with perfect geometry defined in the TPD corresponding to the toleranced feature. Datum. Datums are defined in ISO 5459. A datum simulates a contact partner (a mating feature of another part) in a single-part specification where the actual contact partner is missing; the contact situations "planar contact" and "fit of linear size" are covered by default rules. Because the contact is only simulated, the specification can deviate from the actual function, which only appears in the assembly constraints. In essence, the datum is used to link the toleranced feature (imperfect real geometry) to the tolerance zone (perfect geometry).
As such, the datum is a threefold object: The link between the orientation, location or run-out specification and the datums is specified in the geometrical specification frame as follows: Some geometrical specifications may not have any datum section at all (e.g. form specifications). The content of each cell can be either: The process to build a datum system is first described and the process for building a common datum follows. Datum system. A datum system is identified by at most three cells in the geometrical specification frame, corresponding to the primary, secondary and tertiary datums. For the primary, secondary and tertiary datums, a perfect-geometry feature of the same kind as the nominal feature is associated with the real feature, as described hereafter: The result is a set of associated features. Finally, this set of associated features is used to build a situation feature, which is the specified datum. Common datum. The datum features are identified on the skin model from the datum components in the dash-separated list of nominal datums appearing in a particular cell of an orientation or location specification. The common datum can be used as a primary, secondary or tertiary datum. In all cases, the process to build a common datum is the same; however, additional orientation constraints shall be added when the common datum is used as a secondary or tertiary datum, as is done for datum systems and explained hereafter. The criterion for association of a common datum is applied to all the associated features together with the following constraints: The result is a set of associated features. Finally, this set of associated features is used to build a situation feature, which is the specified datum. Situation feature. The final step in the datum establishment process is to combine the associated features to obtain a final object defined as a situation feature, which is identified with the specified datum (ISO 5459:2011 Table B.1). It is a member of the following set: How to build the situation features, and therefore the specified datum, is currently mainly defined through examples in ISO 5459:2011. More specific rules are under development. The specified datum concept is closely related to classes of surfaces invariant through displacements. It has been shown that surfaces can be classified according to the displacements that leave them invariant. The number of classes is seven. If a displacement leaves a surface invariant, then this displacement cannot be locked by the corresponding specified datum. So the displacements that do not leave the surface invariant are used to lock specific degrees of freedom of the tolerance zone. For example, a set of associated datums made of three mutually perpendicular planes corresponds to the following situation feature: a plane containing a straight line containing a point. The plane is the first associated plane obtained, the line is the intersection between the second associated plane and the first one, and the point is the intersection between the line and the third associated plane. The specified datum therefore belongs to the complex invariance class (formula_0), and all the degrees of freedom of a tolerance zone can be locked with this specified datum. The invariance class graphic symbols are not defined in ISO standards but only used in the literature as a useful reminder. A helicoidal class (formula_1) can also be defined; however, it is generally replaced with a cylindrical class in real-world applications. Tolerance zone. Tolerance zones are defined in ISO 1101.
The tolerance zone is a surface or a volume with perfect geometry. It is a surface when it is intended to contain a toleranced feature which is a line. It is a volume when it is intended to contain a toleranced feature which is a surface. It can often be described as a rigid body with the following attributes: Theoretical Exact Dimension (TED). TEDs are identified on a nominal model by dimensions with a framed nominal value and no tolerance. Those dimensions are not specifications by themselves but are needed when applying constraints to build datums or to determine the orientation or location of the tolerance zone. TEDs can also be used for other purposes, e.g. to define the nominal shape or dimensions of a profile.&lt;br&gt; When applying constraints, generally two types of TED are to be taken into account: Geometrical specification families. The geometrical specifications are divided into three categories: Run-out specification is another family that involves both form and location. Examples. Presentation. This paragraph contains examples of dimensional and geometrical specifications to illustrate the definition and use of dimensional and positional specifications. The dimensions and tolerance values (displayed in blue in the figures) shall be numerical values on actual drawings. d, l1, l2 are used for length values. Δd is used for a dimensional tolerance value and t, t1, t2 for positional tolerance values. For each example we present: The deviations are enlarged compared to actual parts in order to show as clearly as possible the steps necessary to build the GPS&amp;V operators. First-angle projection is used in the technical drawings. Dimensional specifications. Diameter of a cylindrical part with envelope Ⓔ. The verification is twofold: Ambiguous dimension. This example is often surprising for new practitioners of GPS&amp;V. However, it is a direct consequence of the definition of a linear dimension in ISO 14405-1. The function targeted here is probably to locate the two planes; therefore, a location specification of one surface with respect to the other surface, or of the two surfaces with respect to one another, is considered the right way to achieve the function. See examples. Positional specifications. Location of a plane with respect to another plane (case 1). This specification could be useful when one surface (the datum plane in this case) has a higher priority in the assembly process. For example, a second part could be required to fit inside the slot, being guided by the plane where the datum has been indicated. The specification is not met for this particular real part, as the toleranced feature (orange line segment) is not included in the tolerance zone (green). Location of a plane with respect to another plane (case 2). Case 2 is similar to case 1 above; however, the toleranced feature and the datum are switched, so the result is totally different, as explained above. This specification could be useful when one surface (the datum plane) has a higher priority than the other surface in the assembly process. For example, a second part could be required to fit inside the slot, being guided by the plane where the datum has been indicated. The specification is not met for this particular real part, as the toleranced feature (orange line segment) is not included in the tolerance zone (green). Location of planes with respect to one another (case 3).
This specification could be useful when the two surfaces (planes in this case) have the same priority in the assembly process. For example, a second part could be required to fit inside the slot, being guided by the two planes. The specification is met for this particular real part, as the toleranced feature (two orange line segments) is included in the tolerance zone (green). Location of a hole with respect to the edges of a plate. This specification could be useful when the hole is actually located from the edges of the plate in an assembly process and where surface A has a higher priority than surface B. If the assembly process is modified, then the datum specification shall be adapted accordingly. The order of the datums is important in a datum system, as the resulting specified datum can be very different. The specification is met for this particular real part, as the toleranced feature (purple line on the left, purple dot on the right) is included in the tolerance zone (green). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{C}_\\mathbf{X}" }, { "math_id": 1, "text": "\\mathbf{C}_\\mathbf{H}" } ]
https://en.wikipedia.org/wiki?curid=66617145
666177
Picard–Lindelöf theorem
Existence and uniqueness of solutions to initial value problems In mathematics, specifically the study of differential equations, the Picard–Lindelöf theorem gives a set of conditions under which an initial value problem has a unique solution. It is also known as Picard's existence theorem, the Cauchy–Lipschitz theorem, or the existence and uniqueness theorem. The theorem is named after Émile Picard, Ernst Lindelöf, Rudolf Lipschitz and Augustin-Louis Cauchy. Theorem. Let formula_0 be a closed rectangle with formula_1, the interior of formula_2. Let formula_3 be a function that is continuous in formula_4 and Lipschitz continuous in formula_5 (with Lipschitz constant independent from formula_4). Then, there exists some "ε" &gt; 0 such that the initial value problem formula_6 has a unique solution formula_7 on the interval formula_8. Proof sketch. The proof relies on transforming the differential equation, and applying the Banach fixed-point theorem. By integrating both sides, any function satisfying the differential equation must also satisfy the integral equation formula_9 A simple proof of existence of the solution is obtained by successive approximations. In this context, the method is known as Picard iteration. Set formula_10 and formula_11 It can then be shown, by using the Banach fixed-point theorem, that the sequence of "Picard iterates" "φk" is convergent and that the limit is a solution to the problem. An application of Grönwall's lemma to |"φ"("t") − "ψ"("t")|, where φ and ψ are two solutions, shows that "φ"("t") = "ψ"("t"), thus proving the global uniqueness (the local uniqueness is a consequence of the uniqueness of the Banach fixed point). See Newton's method of successive approximation for instruction. Example of Picard iteration. Let formula_12 the solution to the equation formula_13 with initial condition formula_14 Starting with formula_15 we iterate formula_16 so that formula_17: formula_18 formula_19 formula_20 and so on. Evidently, the functions are computing the Taylor series expansion of our known solution formula_21 Since formula_22 has poles at formula_23 this converges toward a local solution only for formula_24 not on all of formula_25. Example of non-uniqueness. To understand uniqueness of solutions, consider the following examples. A differential equation can possess a stationary point. For example, for the equation "y"′ = "ay" (formula_26), the stationary solution is "y"("t") = 0, which is obtained for the initial condition "y"(0) = 0. Beginning with another initial condition "y"(0) = "y"0 ≠ 0, the solution "y"("t") tends toward the stationary point, but reaches it only at the limit of infinite time, so the uniqueness of solutions (over all finite times) is guaranteed. However, for an equation in which the stationary solution is reached after a "finite" time, the uniqueness fails. This happens for example for the equation "y"′ = "ay"^(2/3), which has at least two solutions corresponding to the initial condition "y"(0) = 0, such as "y"("t") = 0 or formula_27 so the previous state of the system is not uniquely determined by its state after "t" = 0. The uniqueness theorem does not apply because the function "f"("y") = "ay"^(2/3) has an infinite slope at "y" = 0 and therefore is not Lipschitz continuous, violating the hypothesis of the theorem. Detailed proof. Let formula_28 where: formula_29 This is the compact cylinder where  "f"  is defined. Let formula_30 that is, the supremum of (the absolute values of) the slopes of the function.
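The Picard iterates of the example above can also be computed numerically. The following sketch is a minimal illustration only, not part of the proof: Python with NumPy is an assumed tool, and the grid spacing and number of iterations are arbitrary choices. It applies the iteration formula_11 to "f"("t", "y") = 1 + "y"^2 with "y"(0) = 0 and reports how close each iterate is to tan("t") on a subinterval of (−π/2, π/2).
```python
# Minimal numerical sketch of Picard iteration for y' = f(t, y) = 1 + y^2, y(0) = 0.
# Assumes NumPy is available; grid resolution and iteration count are illustrative choices.
import numpy as np

def f(t, y):
    return 1.0 + y**2

t = np.linspace(0.0, 1.0, 1001)   # subinterval of (-pi/2, pi/2), where tan(t) is defined
phi = np.zeros_like(t)            # phi_0(t) = y_0 = 0

for k in range(1, 9):
    integrand = f(t, phi)
    # phi_{k+1}(t) = y_0 + integral_0^t f(s, phi_k(s)) ds, via a cumulative trapezoidal rule
    increments = (integrand[1:] + integrand[:-1]) / 2.0 * np.diff(t)
    phi = np.concatenate(([0.0], np.cumsum(increments)))
    print(f"iterate {k}: max |phi_k(t) - tan(t)| on [0, 1] = {np.max(np.abs(phi - np.tan(t))):.2e}")
```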
Finally, let "L" be the Lipschitz constant of  "f"  with respect to the second variable. We will proceed to apply the Banach fixed-point theorem using the metric on formula_31 induced by the uniform norm formula_32 We define an operator between two function spaces of continuous functions, Picard's operator, as follows: formula_33 defined by: formula_34 We must show that this operator maps a complete non-empty metric space "X" into itself and also is a contraction mapping. We first show that, given certain restrictions on formula_35, formula_36 takes formula_37 into itself in the space of continuous functions with the uniform norm. Here, formula_37 is a closed ball in the space of continuous (and bounded) functions "centered" at the constant function formula_38. Hence we need to show that formula_39 implies formula_40 where formula_41 is some number in formula_42 where the maximum is achieved. The last inequality in the chain is true if we impose the requirement formula_43. Now let's prove that this operator is a contraction mapping. Given two functions formula_44, in order to apply the Banach fixed-point theorem we require formula_45 for some formula_46. So let formula_4 be such that formula_47 Then using the definition of formula_36, formula_48 This is a contraction if formula_49 We have established that the Picard's operator is a contraction on the Banach spaces with the metric induced by the uniform norm. This allows us to apply the Banach fixed-point theorem to conclude that the operator has a unique fixed point. In particular, there is a unique function formula_50 such that Γ"φ" "φ". This function is the unique solution of the initial value problem, valid on the interval "Ia" where "a" satisfies the condition formula_51 Optimization of the solution's interval. We wish to remove the dependence of the interval "Ia" on "L". To this end, there is a corollary of the Banach fixed-point theorem: if an operator "T""n" is a contraction for some "n" in N, then "T" has a unique fixed point. Before applying this theorem to the Picard operator, recall the following: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Lemma — formula_52 for all formula_53 "Proof." Induction on "m". For the base of the induction ("m" = 1) we have already seen this, so suppose the inequality holds for "m" − 1, then we have: formula_54 By taking a supremum over formula_55 we see that formula_56. This inequality assures that for some large "m", formula_57 and hence Γ"m" will be a contraction. So by the previous corollary Γ will have a unique fixed point. Finally, we have been able to optimize the interval of the solution by taking "α" = min{"a", }. In the end, this result shows the interval of definition of the solution does not depend on the Lipschitz constant of the field, but only on the interval of definition of the field and its maximum absolute value. Other existence theorems. The Picard–Lindelöf theorem shows that the solution exists and that it is unique. The Peano existence theorem shows only existence, not uniqueness, but it assumes only that  "f"  is continuous in y, instead of Lipschitz continuous. For example, the right-hand side of the equation "y"  with initial condition "y"(0) = 0 is continuous but not Lipschitz continuous. Indeed, rather than being unique, this equation has at least three solutions: formula_58. Even more general is Carathéodory's existence theorem, which proves existence (in a more general sense) under weaker conditions on  "f" . 
Even more general is Carathéodory's existence theorem, which proves existence (in a more general sense) under weaker conditions on  "f" . Although these conditions are only sufficient, there also exist necessary and sufficient conditions for the solution of an initial value problem to be unique, such as Okamura's theorem. Global existence of solution. The Picard–Lindelöf theorem ensures that solutions to initial value problems exist uniquely within a local interval formula_8, possibly dependent on each solution. The behavior of solutions beyond this local interval can vary depending on the properties of  "f"  and the domain over which  "f"  is defined. For instance, if  "f"  is globally Lipschitz, then the local interval of existence of each solution can be extended to the entire real line and all the solutions are defined over the entire R. If  "f"  is only locally Lipschitz, some solutions may not be defined for certain values of "t", even if  "f"  is smooth. For instance, the differential equation "y"′ = "y"^2 with initial condition "y"(0) = 1 has the solution "y"("t") = 1/(1-"t"), which is not defined at "t" = 1. Nevertheless, if  "f"  is a differentiable function defined over a compact subset of Rn, then the initial value problem has a unique solution defined over the entire R. A similar result exists in differential geometry: if  "f"  is a differentiable vector field defined over a domain which is a compact smooth manifold, then all its trajectories (integral curves) exist for all time. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt; External links. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "D \\subseteq \\R \\times \\R^n" }, { "math_id": 1, "text": "(t_0, y_0) \\in \\operatorname{int} D" }, { "math_id": 2, "text": "D" }, { "math_id": 3, "text": "f: D \\to \\R^n" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "y" }, { "math_id": 6, "text": "y'(t)=f(t,y(t)),\\qquad y(t_0)=y_0." }, { "math_id": 7, "text": "y(t)" }, { "math_id": 8, "text": "[t_0-\\varepsilon, t_0+\\varepsilon]" }, { "math_id": 9, "text": "y(t) - y(t_0) = \\int_{t_0}^t f(s,y(s)) \\, ds." }, { "math_id": 10, "text": "\\varphi_0(t)=y_0" }, { "math_id": 11, "text": "\\varphi_{k+1}(t)=y_0+\\int_{t_0}^t f(s,\\varphi_k(s))\\,ds." }, { "math_id": 12, "text": "y(t)=\\tan(t)," }, { "math_id": 13, "text": "y'(t)=1+y(t)^2" }, { "math_id": 14, "text": "y(t_0)=y_0=0,t_0=0." }, { "math_id": 15, "text": "\\varphi_0(t)=0," }, { "math_id": 16, "text": "\\varphi_{k+1}(t)=\\int_0^t (1+(\\varphi_k(s))^2)\\,ds" }, { "math_id": 17, "text": " \\varphi_n(t) \\to y(t)" }, { "math_id": 18, "text": "\\varphi_1(t)=\\int_0^t (1+0^2)\\,ds = t" }, { "math_id": 19, "text": "\\varphi_2(t)=\\int_0^t (1+s^2)\\,ds = t + \\frac{t^3}{3}" }, { "math_id": 20, "text": "\\varphi_3(t)=\\int_0^t \\left(1+\\left(s + \\frac{s^3}{3}\\right)^2\\right)\\,ds = t + \\frac{t^3}{3} + \\frac{2t^5}{15} + \\frac{t^7}{63}" }, { "math_id": 21, "text": "y=\\tan(t)." }, { "math_id": 22, "text": "\\tan" }, { "math_id": 23, "text": "\\pm\\tfrac{\\pi}{2}," }, { "math_id": 24, "text": "|t|<\\tfrac{\\pi}{ 2}," }, { "math_id": 25, "text": "\\R" }, { "math_id": 26, "text": "a<0" }, { "math_id": 27, "text": "y(t)=\\begin{cases} \\left (\\tfrac{at}{3} \\right )^{3} & t<0\\\\ \\ \\ \\ \\ 0 & t \\ge 0, \\end{cases}" }, { "math_id": 28, "text": "C_{a,b}=\\overline{I_a(t_0)}\\times\\overline{B_b(y_0)}" }, { "math_id": 29, "text": "\\begin{align}\n\\overline{I_a(t_0)}&=[t_0-a,t_0+a] \\\\\n\\overline{B_b(y_0)}&=[y_0-b,y_0+b].\n\\end{align}" }, { "math_id": 30, "text": "M = \\sup_{C_{a,b}}\\|f\\|," }, { "math_id": 31, "text": "\\mathcal{C}(I_{a}(t_0),B_b(y_0))" }, { "math_id": 32, "text": "\\| \\varphi \\|_\\infty = \\sup_{t \\in I_a} | \\varphi(t)|." }, { "math_id": 33, "text": "\\Gamma:\\mathcal{C}(I_{a}(t_0),B_b(y_0)) \\longrightarrow \\mathcal{C}(I_{a}(t_0),B_b(y_0))" }, { "math_id": 34, "text": "\\Gamma \\varphi(t) = y_0 + \\int_{t_0}^{t} f(s,\\varphi(s)) \\, ds." }, { "math_id": 35, "text": "a" }, { "math_id": 36, "text": "\\Gamma" }, { "math_id": 37, "text": "\\overline{B_b(y_0)}" }, { "math_id": 38, "text": "y_0" }, { "math_id": 39, "text": "\\| \\varphi -y_0 \\|_\\infty \\le b" }, { "math_id": 40, "text": "\\left\\| \\Gamma\\varphi(t)-y_0 \\right\\| = \\left\\|\\int_{t_0}^t f(s,\\varphi(s))\\, ds \\right\\| \\leq \\int_{t_0}^{t'} \\left\\|f(s,\\varphi(s))\\right\\| ds \\leq \\int_{t_0}^{t'} M\\, ds = M \\left|t'-t_0 \\right| \\leq M a \\leq b" }, { "math_id": 41, "text": "t'" }, { "math_id": 42, "text": "[t_0-a, t_0 +a]" }, { "math_id": 43, "text": "a < \\frac{b}{M}" }, { "math_id": 44, "text": "\\varphi_1,\\varphi_2\\in\\mathcal{C}(I_{a}(t_0),B_b(y_0))" }, { "math_id": 45, "text": " \\left \\| \\Gamma \\varphi_1 - \\Gamma \\varphi_2 \\right\\|_\\infty \\le q \\left\\| \\varphi_1 - \\varphi_2 \\right\\|_\\infty," }, { "math_id": 46, "text": "0 \\leq q < 1" }, { "math_id": 47, "text": "\\| \\Gamma \\varphi_1 - \\Gamma \\varphi_2 \\|_\\infty = \\left\\| \\left(\\Gamma\\varphi_1 - \\Gamma\\varphi_2 \\right)(t) \\right\\|." 
}, { "math_id": 48, "text": "\\begin{align}\n\\left\\|\\left(\\Gamma\\varphi_1 - \\Gamma\\varphi_2 \\right)(t) \\right\\| &= \\left\\|\\int_{t_0}^t \\left( f(s,\\varphi_1(s))-f(s,\\varphi_2(s)) \\right)ds \\right\\|\\\\\n&\\leq \\int_{t_0}^t \\left\\|f \\left(s,\\varphi_1(s)\\right)-f\\left(s,\\varphi_2(s) \\right) \\right\\| ds \\\\\n&\\leq L \\int_{t_0}^t \\left\\|\\varphi_1(s)-\\varphi_2(s) \\right\\|ds && \\text{since } f \\text{ is Lipschitz-continuous} \\\\\n&\\leq L \\int_{t_0}^t \\left\\|\\varphi_1-\\varphi_2 \\right\\|_\\infty \\,ds \\\\\n&\\leq La \\left\\|\\varphi_1-\\varphi_2 \\right\\|_\\infty\n\\end{align}" }, { "math_id": 49, "text": "a < \\tfrac{1}{L}." }, { "math_id": 50, "text": "\\varphi\\in \\mathcal{C}(I_a (t_0),B_b(y_0))" }, { "math_id": 51, "text": "a < \\min \\left \\{ \\tfrac{b}{M}, \\tfrac{1}{L} \\right \\}." }, { "math_id": 52, "text": "\\left\\| \\Gamma^m \\varphi_1(t) - \\Gamma^m\\varphi_2(t) \\right\\| \\leq \\frac{L^m|t-t_0|^m}{m!}\\left\\|\\varphi_1-\\varphi_2\\right\\|" }, { "math_id": 53, "text": "t \\in [t_0 - \\alpha, t_0 + \\alpha]" }, { "math_id": 54, "text": "\\begin{align}\n\\left \\| \\Gamma^m \\varphi_1(t) - \\Gamma^m\\varphi_2(t) \\right \\| &= \\left \\|\\Gamma\\Gamma^{m-1} \\varphi_1(t) - \\Gamma\\Gamma^{m-1}\\varphi_2(t) \\right \\| \\\\\n&\\leq \\left| \\int_{t_0}^t \\left \\| f \\left (s,\\Gamma^{m-1}\\varphi_1(s) \\right )-f \\left (s,\\Gamma^{m-1}\\varphi_2(s) \\right )\\right \\| ds \\right| \\\\\n&\\leq L \\left| \\int_{t_0}^t \\left \\|\\Gamma^{m-1}\\varphi_1(s)-\\Gamma^{m-1}\\varphi_2(s)\\right \\| ds\\right| \\\\\n&\\leq L \\left| \\int_{t_0}^t \\frac{L^{m-1}|s-t_0|^{m-1}}{(m-1)!} \\left \\| \\varphi_1-\\varphi_2\\right \\| ds\\right| \\\\\n&\\leq \\frac{L^m |t-t_0|^m }{m!} \\left \\|\\varphi_1 - \\varphi_2 \\right \\|.\n\\end{align}" }, { "math_id": 55, "text": " t \\in [t_0 - \\alpha, t_0 + \\alpha] " }, { "math_id": 56, "text": "\\left \\| \\Gamma^m \\varphi_1 - \\Gamma^m\\varphi_2 \\right \\| \\leq \\frac{L^m\\alpha^m}{m!}\\left \\|\\varphi_1-\\varphi_2\\right \\|" }, { "math_id": 57, "text": "\\frac{L^m\\alpha^m}{m!}<1," }, { "math_id": 58, "text": "y(t) = 0, \\qquad y(t) = \\pm\\left (\\tfrac23 t\\right)^{\\frac{3}{2}}" } ]
https://en.wikipedia.org/wiki?curid=666177
66618
The Limits to Growth
1972 book on economic and population growth The Limits to Growth (often abbreviated LTG) is a 1972 report that discussed the possibility of exponential economic and population growth with a finite supply of resources, studied by computer simulation. The study used the World3 computer model to simulate the consequences of interactions between the Earth and human systems. The model was based on the work of Jay Forrester of MIT, as described in his book "World Dynamics". Commissioned by the Club of Rome, the study saw its findings first presented at international gatherings in Moscow and Rio de Janeiro in the summer of 1971. The report's authors are Donella H. Meadows, Dennis L. Meadows, Jørgen Randers, and William W. Behrens III, representing a team of 17 researchers. The report's findings suggest that, in the absence of significant alterations in resource utilization, it is highly likely that there will be an abrupt and unmanageable decrease in both population and industrial capacity. Although it faced severe criticism and scrutiny upon its release, the report influenced environmental reforms for decades. Subsequent research concurred that global use of natural resources has been inadequately reformed to alter its expected outcome. Yet price predictions based on resource scarcity failed to materialize in the years since publication. Since its publication, some 30 million copies of the book in 30 languages have been purchased. It continues to generate debate and has been the subject of several subsequent publications. "Beyond the Limits" and "The Limits to Growth: The 30-Year Update" were published in 1992 and 2004 respectively; in 2012, a 40-year forecast from Jørgen Randers, one of the book's original authors, was published as "2052: A Global Forecast for the Next Forty Years"; and in 2022 two of the original "Limits to Growth" authors, Dennis Meadows and Jørgen Randers, joined 19 other contributors to produce "Limits and Beyond". Purpose. In commissioning the MIT team to undertake the project that resulted in "LTG", the Club of Rome had three objectives: Method. The World3 model is based on five variables: "population, food production, industrialization, pollution, and consumption of nonrenewable natural resources". At the time of the study, all these variables were increasing and were assumed to continue to grow exponentially, while the ability of technology to increase resources grew only linearly. The authors intended to explore the possibility of a sustainable feedback pattern that would be achieved by altering growth trends among the five variables under three scenarios. They noted that their projections for the values of the variables in each scenario were predictions "only in the most limited sense of the word", and were only indications of the system's behavioral tendencies. Two of the scenarios saw "overshoot and collapse" of the global system by the mid- to latter part of the 21st century, while a third scenario resulted in a "stabilized world". Exponential reserve index. A key idea in "The Limits to Growth" is the notion that if the rate of resource use is increasing, the number of years that reserves will last cannot be calculated by simply taking the current known reserves and dividing them by the current yearly usage, as is typically done to obtain a static index. For example, in 1972, the amount of chromium reserves was 775 million metric tons, of which 1.85 million metric tons were mined annually.
The static index is 775/1.85=418 years, but the rate of chromium consumption was growing at 2.6 percent annually, or exponentially. If, instead of a constant rate of usage, a constant growth rate of 2.6 percent annually is assumed, the resource will instead last formula_0 In general, the formula for calculating the amount of time left for a resource with constant consumption growth is: formula_1 where: "y" = years left; "r" = the continuous compounding growth rate; "s" = R/C or static reserve; "R" = reserve; "C" = (annual) consumption. Commodity reserve extrapolation. The chapter contains a large table that spans five pages in total, based on actual geological reserves data for a total of 19 non-renewable resources, and models, starting from their 1972 reserves, the time to their exhaustion under three scenarios: static (constant consumption), exponential (constant consumption growth), and exponential with reserves multiplied by 5 to account for possible discoveries. A short excerpt from the table is presented below: The chapter also contains a detailed computer model of chromium availability with current (as of 1972) and double the known reserves as well as numerous statements on the current increasing price trends for discussed metals: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Given present resources consumption rates and the projected increase in the rates, the great majority of the currently important nonrenewable resources will be extremely costly 100 years from now. (...) The prices of those resources with the shortest static reserve indices have already begun to increase. The price of mercury, for example, has gone up 500 percent in the last 20 years; the price of lead has increased 300 percent in the last 30 years. Interpretations of the exhaustion model. Due to the detailed nature and use of actual resources and their real-world price trends, the indexes have been interpreted as a prediction of the number of years until the world would "run out" of them, both by environmentalist groups calling for greater conservation and restrictions on use and by skeptics criticizing the accuracy of the predictions. This interpretation has been widely propagated by media and environmental organizations, and authors who, apart from a note about the possibility of the future flows being "more complicated", did not clearly constrain or deny this interpretation. While environmental organizations used it to support their arguments, a number of economists used it to criticize "LTG" as a whole shortly after publication in the 1970s (Peter Passell, Marc Roberts, and Leonard Ross), with similar criticism recurring from Ronald Bailey, George Goodman and others in the 1990s. In 2011 Ugo Bardi in "The Limits to Growth Revisited" argued that "nowhere in the book was it stated that the numbers were supposed to be read as predictions", nonetheless, as they were the only tangible numbers referring to actual resources, they were promptly picked as such by both supporters and opponents. While Chapter 2 serves as an introduction to the concept of exponential growth modeling, the actual World3 model uses an abstract "non-renewable resources" component based on static coefficients rather than the actual physical commodities described above.
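The gap between the static and the exponential index can be reproduced directly from the formula above. The following Python sketch is a minimal illustration, not code from the book; it assumes the continuous-compounding form "y" = ln("r"·"s" + 1)/"r", which matches the variable definitions given above, and uses the chromium figures quoted in this section.
```python
# Exponential reserve index vs. static index, using the chromium figures quoted above
# (775 million tonnes of reserves, 1.85 million tonnes mined per year, 2.6% annual growth).
# Assumes the continuous-compounding form y = ln(r*s + 1) / r for the exponential index.
from math import log

def static_index(reserve, consumption):
    """Years left if consumption stays constant."""
    return reserve / consumption

def exponential_index(reserve, consumption, growth_rate):
    """Years left if consumption grows at a constant continuous rate."""
    s = reserve / consumption          # static reserve, in years
    return log(growth_rate * s + 1) / growth_rate

R, C, r = 775.0, 1.85, 0.026           # chromium, 1972 figures from the chapter
print(f"static index      : {static_index(R, C):.0f} years")        # about 419 years
print(f"exponential index : {exponential_index(R, C, r):.0f} years")  # about 95 years
```
With the 1972 chromium figures this gives roughly 419 years for the static index (the chapter rounds to 418) but only about 95 years once 2.6 percent annual growth in consumption is assumed.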
Conclusions. After reviewing their computer simulations, the research team came to the following conclusions: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; The introduction goes on to say: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;These conclusions are so far-reaching and raise so many questions for further study that we are quite frankly overwhelmed by the enormity of the job that must be done. We hope that this book will serve to interest other people, in many fields of study and in many countries of the world, to raise the space and time horizons of their concerns, and to join us in understanding and preparing for a period of great transition – the transition from growth to global equilibrium. Criticism. "LTG" provoked a wide range of responses, including immediate criticisms almost as soon as it was published. Peter Passell and two co-authors published a 2 April 1972 article in the "New York Times" describing "LTG" as "an empty and misleading work ... best summarized ... as a rediscovery of the oldest maxim of computer science: Garbage In, Garbage Out". Passell found the study's simulation to be simplistic while assigning little value to the role of technological progress in solving the problems of resource depletion, pollution, and food production. They charged that all "LTG" simulations ended in collapse, predicting the imminent end of irreplaceable resources. Passell also charged that the entire endeavour was motivated by a hidden agenda: to halt growth in its tracks. In 1973, a group of researchers at the Science Policy Research Unit at the University of Sussex concluded that simulations in "Limits to Growth" were very sensitive to a few key assumptions and suggested that the MIT assumptions were unduly pessimistic, and the MIT methodology, data, and projections were faulty. However, the "LTG" team, in a paper entitled "A Response to Sussex", described and analyzed five major areas of disagreement between themselves and the Sussex authors. The team asserted that the Sussex critics applied "micro reasoning to macro problems", and suggested that their own arguments had been either misunderstood or wilfully misrepresented. They pointed out that the critics had failed to suggest any alternative model for the interaction of growth processes and resource availability, and "nor had they described in precise terms the sort of social change and technological advances that they believe would accommodate current growth processes." During that period, the very idea of any worldwide constraint, as indicated in the study, was met with scepticism and opposition by both businesses and the majority of economists. Critics declared that history proved the projections to be incorrect, such as the predicted resource depletion and associated economic collapse by the end of the 20th century. The methodology, the computer, the conclusions, the rhetoric and the people behind the project were criticised. Yale economist Henry C. Wallich agreed that growth could not continue indefinitely, but that a natural end to growth was preferable to intervention. Wallich stated that technology could solve all the problems the report was concerned about, but only if growth continued apace. According to Wallich's cautionary statement, prematurely halting progress would result in the perpetual impoverishment of billions.
Julian Simon, a professor at the Universities of Illinois and, later, Maryland, argued that the fundamental underlying concepts of the LTG scenarios were faulty because the very idea of what constitutes a "resource" varies over time. For instance, wood was the primary shipbuilding resource until the 1800s, and there were concerns about prospective wood shortages from the 1500s on. But then boats began to be made of iron, later steel, and the shortage issue disappeared. Simon argued in his book "The Ultimate Resource" that human ingenuity creates new resources as required from the raw materials of the universe. For instance, copper will never "run out". History demonstrates that as it becomes scarcer its price will rise and more will be found, more will be recycled, new techniques will use less of it, and at some point a better substitute will be found for it altogether. His book was revised and reissued in 1996 as "The Ultimate Resource 2". To the US Congress in 1973, Allen V. Kneese and Ronald Riker of Resources for the Future (RFF) testified that in their view, "The authors load their case by letting some things grow exponentially and others not. Population, capital and pollution grow exponentially in all models, but technologies for expanding resources and controlling pollution are permitted to grow, if at all, only in discrete increments." However, their testimony also noted the possibility of "relatively firm long-term limits" associated with carbon dioxide emissions, that humanity might "loose upon itself, or the ecosystem services on which it depends, a disastrously virulent substance", and (implying that population growth in "developing countries" is problematic) that "we don't know what to do about it". In 1997, the Italian economist Giorgio Nebbia observed that the negative reaction to the "LTG" study came from at least four sources: those who saw the book as a threat to their business or industry; professional economists, who saw "LTG" as an uncredentialed encroachment on their professional perquisites; the Catholic church, which bridled at the suggestion that overpopulation was one of mankind's major problems; finally, the political left, which saw the "LTG" study as a scam by the elites designed to trick workers into believing that a proletarian paradise was a pipe dream. A UK government report found that "In the 1990s, criticism tended to focus on the misconception that "Limits to Growth" predicted global resource depletion and social collapse by the end of the year 2000". Peter Taylor and Frederick Buttle’s interpretation of the "LTG" study and the associated system dynamics (SD) models found that the original SD was created for firms and set the pattern for urban, global, and other SD models. These firm-based SDs relied on superintending managers to prevent undesirable cycling and feedback loops caused by separate common-sense decisions made by individual sectors. However, the later global model lacked superintending managers that enforce interrelated world-level changes, making undesirable cycles and exponential growth and collapse happen in nearly all models no matter the parameter settings. There was no way for a few individuals in the model to override the structure of the system even if they understood the system as a whole. 
This meant there were only two solutions: convincing everyone in the system to change the basic structure of population growth and collapse (moral response) and/or having a superintending agency analyzing the system as a whole and directing changes (technocratic response). The "LTG" report combined these two approaches multiple times. System dynamists constructed interventions into the world model to demonstrate how their proposed interventions improved the system to prevent collapse. The SD model also aggregated the world’s population and resources which meant that it did not demonstrate how crises emerge at different times and in different ways without any strictly global logic or form because of the unequal distributions of populations and resources. These issues indicate that the local, national, and regional differentiation in politics and economics surrounding socioenvironmental change was excluded from the SD used by "LTG", making it unable to accurately demonstrate real-world dynamics. Positive reviews. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; With few exceptions, economics as a discipline has been dominated by a perception of living in an unlimited world, where resource and pollution problems in one area were solved by moving resources or people to other parts. The very hint of any global limitation as suggested in the report "The Limits to Growth" was met with disbelief and rejection by businesses and most economists. However, this conclusion was mostly based on false premises. In 1980, the Global 2000 Report to the President arrived at similar conclusions regarding expected global resource scarcity, and the need for multilateral coordination to prepare for this situation. In a 2008 blog post, Ugo Bardi commented that "Although, by the 1990s "LTG" had become everyone's laughing stock, among some the "LTG" ideas are becoming again popular". Reading "LTG" for the first time in 2000, Matthew Simmons concluded his views on the report by saying, "In hindsight, The Club of Rome turned out to be right. We simply wasted 30 important years ignoring this work." Robert Solow, who had been a vocal critic of LTG, said in 2009 that "thirty years later, the situation may have changed... it will probably be more important in the future to deal intellectually, quantitatively, as well as practically, with the mutual interdependence of economic growth, natural resource availability, and environmental constraints". In a study conducted in 2008, Graham Turner from CSIRO discovered a significant correlation between the observed historical data spanning from 1970 to 2000 and the simulated outcomes derived from the "standard run" limits of the growth model. This correlation was apparent across nearly all the reported outputs. The comparison falls comfortably within the range of uncertainty for almost all the available data, both in terms of magnitude and the patterns observed over time. Turner conducted an analysis of many studies, with a special focus on those authored by economists, that have consistently aimed to discredit the limits-to-growth concept over the course of several years. According to Turner, the aforementioned studies exhibit flaws and demonstrate a lack of comprehension regarding the model. Turner reprised these observations in another opinion piece in "The Guardian" on 2 September 2014. Turner used data from the UN to claim that the graphs almost exactly matched the 'Standard Run' from 1972 (i.e. 
the worst-case scenario, assuming that a 'business as usual' attitude was adopted, and there were no modifications of human behaviour in response to the warnings in the report). Birth rates and death rates were both slightly lower than projected, but these two effects cancelled each other out, leaving the growth in world population almost exactly as forecast. In 2010, Nørgård, Peet and Ragnarsdóttir called the book a "pioneering report", and said that it "has withstood the test of time and, indeed, has only become more relevant." In 2012, Christian Parenti drew comparisons between the reception of "The Limits to Growth" and the ongoing global warming controversy. Parenti further remarked that despite its scientific rigour and credibility, the intellectual guardians of influential economic interests actively dismissed LTG as a warning. A parallel narrative is currently unfolding within the realm of climate research. In 2012, John Scales Avery, a member of the Nobel Prize (1995) winning group associated with the Pugwash Conferences on Science and World Affairs, supported the basic thesis of "LTG" by stating, Although the specific predictions of resource availability in "Limits to Growth" lacked accuracy, its basic thesis – that unlimited economic growth on a finite planet is impossible – was indisputably correct. Legacy. Updates and symposia. The Club of Rome has persisted after "The Limits to Growth" and has generally provided comprehensive updates to the book every five years. An independent retrospective on the public debate over "The Limits to Growth" concluded in 1978 that optimistic attitudes had won out, causing a general loss of momentum in the environmental movement. While summarizing a large number of opposing arguments, the article concluded that "scientific arguments for and against each position ... have, it would seem, played only a small part in the general acceptance of alternative perspectives." In 1989, a symposium was held in Hanover, entitled "Beyond the Limits to Growth: Global Industrial Society, Vision or Nightmare?" and in 1992, "Beyond the Limits" (BTL) was published as a 20-year update on the original material. It "concluded that two decades of history mainly supported the conclusions we had advanced 20 years earlier. But the 1992 book did offer one major new finding. We suggested in BTL that humanity had already overshot the limits of Earth's support capacity." "Limits to Growth: The 30-Year Update" was published in 2004. The authors observed that "It is a sad fact that humanity has largely squandered the past 30 years in futile debates and well-intentioned, but halfhearted, responses to the global ecological challenge. We do not have another 30 years to dither. Much will have to change if the ongoing overshoot is not to be followed by collapse during the twenty-first century." In 2012, the Smithsonian Institution held a symposium entitled "Perspectives on "Limits to Growth"". Another symposium was held in the same year by the Volkswagen Foundation, entitled "Already Beyond?" "Limits to Growth" did not receive an official update in 2012, but one of its coauthors, Jørgen Randers, published a book, "". Comparisons and updated models. In 2008, physicist Graham Turner at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia published a paper called "A Comparison of 'The Limits to Growth' with Thirty Years of Reality". 
It compared the past thirty years of data with the scenarios laid out in the 1972 book and found that changes in industrial production, food production, and pollution are all congruent with one of the book's three scenarios—that of "business as usual". This scenario in "Limits" points to economic and societal collapse in the 21st century. In 2010, Nørgård, Peet, and Ragnarsdóttir called the book a "pioneering report". They said that, "its approach remains useful and that its conclusions are still surprisingly valid ... unfortunately the report has been largely dismissed by critics as a doomsday prophecy that has not held up to scrutiny." Also in 2008, researcher Peter A. Victor wrote that even though the "Limits" team probably underestimated price mechanism's role in adjusting outcomes, their critics have overestimated it. He states that "Limits to Growth" has had a significant impact on the conception of environmental issues and notes that (in his view) the models in the book were meant to be taken as predictions "only in the most limited sense of the word". In a 2009 article published in "American Scientist" entitled "Revisiting the Limits to Growth After Peak Oil," Hall and Day noted that "the values predicted by the limits-to-growth model and actual data for 2008 are very close." These findings are consistent with the 2008 CSIRO study which concluded: "The analysis shows that 30 years of historical data compares favorably with key features ... [of the "Limits to Growth"] "standard run" scenario, which results in collapse of the global system midway through the 21st Century." In 2011, Ugo Bardi published a book-length academic study of "The Limits to Growth", its methods, and historical reception and concluded that "The warnings that we received in 1972 ... are becoming increasingly more worrisome as reality seems to be following closely the curves that the ... scenario had generated." A popular analysis of the accuracy of the report by science writer Richard Heinberg was also published. In 2012, writing in "American Scientist", Brian Hayes stated that the model is "more a polemical tool than a scientific instrument". He went on to say that the graphs generated by the computer program should not, as the authors note, be used as predictions. In 2014, Turner concluded that "preparing for a collapsing global system could be even more important than trying to avoid collapse." Another 2014 study from the University of Melbourne confirmed that data closely tracked the World3 BAU model. In 2015, a calibration of the updated World3-03 model using historical data from 1995 to 2012 to better understand the dynamics of today's economic and resource system was undertaken. The results showed that human society has invested more to abate persistent pollution, increase food productivity and have a more productive service sector however the broad trends within "Limits to Growth" still held true. In 2016, the UK government established an All-party parliamentary group on Limits to Growth. Its initial report concluded that "there is unsettling evidence that society is still following the 'standard run' of the original study – in which overshoot leads to an eventual collapse of production and living standards". The report also points out that some issues not fully addressed in the original 1972 report, such as climate change, present additional challenges for human development. 
In 2020, an analysis by Gaya Herrington, then Director of Sustainability Services of KPMG US, was published in Yale University's "Journal of Industrial Ecology". The study assessed whether, given key data known in 2020 about factors important for the "Limits to Growth" report, the original report's conclusions are supported. In particular, the 2020 study examined updated quantitative information about ten factors, namely population, fertility rates, mortality rates, industrial output, food production, services, non-renewable resources, persistent pollution, human welfare, and ecological footprint, and concluded that the "Limits to Growth" prediction is essentially correct in that continued economic growth is unsustainable under a "business as usual" model. The study found that current empirical data is broadly consistent with the 1972 projections and that if major changes to the consumption of resources are not undertaken, economic growth will peak and then rapidly decline by around 2040. In 2023, the parameters of the World3 model were recalibrated using empirical data up to 2022. This improved parameter set results in a World3 simulation that shows the same overshoot and collapse mode in the coming decade as the original business-as-usual scenario of the Limits to Growth standard run. The main effect of the recalibration update is to raise the peaks of most variables and move them a few years into the future. Related books. Books about humanity's uncertain future have appeared regularly over the years. A few of them, including the books mentioned above for reference, include: See also. &lt;templatestyles src="Div col/styles.css"/&gt; Books: Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{\\ln(1+0.026\\times 418)}{0.026} \\approx \\text{95 years}" }, { "math_id": 1, "text": "y = \\frac{\\ln((rs) + 1)}{r}" } ]
https://en.wikipedia.org/wiki?curid=66618
6662091
Fluorescence anisotropy
Fluorescence anisotropy or fluorescence polarization is the phenomenon where the light emitted by a fluorophore has unequal intensities along different axes of polarization. Early pioneers in the field include Aleksander Jablonski, Gregorio Weber, and Andreas Albrecht. The principles of fluorescence polarization and some applications of the method are presented in Lakowicz's book. Definition of fluorescence anisotropy. The anisotropy (r) of a light source is defined as the ratio of the polarized component to the total intensity (formula_0): formula_1 When the excitation is polarized along the z-axis, emission from the fluorophore is symmetric around the z-axis(Figure). Hence statistically we have formula_2. As formula_3, and formula_4, we have formula_5. Principle – Brownian motion and photoselection. In fluorescence, a molecule absorbs a photon and gets excited to a higher energy state. After a short delay (the average represented as the fluorescence lifetime formula_6), it comes down to a lower state by losing some of the energy as heat and emitting the rest of the energy as another photon. The excitation and de-excitation involve the redistribution of electrons about the molecule. Hence, excitation by a photon can occur only if the electric field of the light is oriented in a particular axis about the molecule. Also, the emitted photon will have a specific polarization with respect to the molecule. The first concept to understand for anisotropy measurements is the concept of Brownian motion. Although water at room temperature contained in a glass to the eye may look very still, on the molecular level each water molecule has kinetic energy and thus there are many collisions between water molecules in any amount of time. A nanoparticle (yellow dot in the figure) suspended in solution will undergo a random walk due to the summation of these underlying collisions. The rotational correlation time ("Φr"), the time it takes for the molecule to rotate 1 radian, is dependent on the viscosity ("η"), temperature (T), Boltzmann constant ("kB") and volume ("V") of the nanoparticle: formula_7 The second concept is photoselection by use of a polarized light. When polarized light is applied to a group of randomly oriented fluorophores, most of the excited molecules will be those oriented within a particular range of angles to the applied polarization. If they do not move, the emitted light will also be polarized within a particular range of angles to the applied light. For single-photon excitation the intrinsic anisotropy "r"0 has a maximum theoretical value of 0.4 when the excitation and emission dipoles are parallel and a minimum value of -0.2 when the excitation and emission dipoles are perpendicular. formula_8 where β is the angle between the excitation and emission dipoles. For steady-state fluorescence measurements it is usually measured by embedding the fluorophore in a frozen polyol. Taking the idealistic simplest case a subset of dye molecules suspended in solution that have a mono-exponential fluorescence lifetime formula_6 and r0=0.4 (rhodamine 6g in ethylene glycol made to have an absorbance of ~0.05 is a good test sample). If the excitation is unpolarized then the measured fluorescence emission should likewise be unpolarized. If however the excitation source is vertically polarized using an excitation polarizer then polarization effects will be picked up in the measured fluorescence. These polarization artifacts can be combated by placing an emission polarizer at the magic angle of 54.7º. 
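To make the two quantities defined above concrete, the short Python sketch below evaluates the rotational correlation time for a small spherical particle and the fundamental anisotropy for a given angle between the excitation and emission dipoles. The particle radius, viscosity and temperature are illustrative assumptions, not values taken from the text:
from math import pi, cos, radians

# Rotational correlation time of a sphere: phi_r = eta * V / (kB * T)
kB = 1.380649e-23      # Boltzmann constant, J/K
eta = 1.0e-3           # viscosity of water near room temperature, Pa*s (assumed)
T = 293.0              # temperature, K (assumed)
radius = 2.5e-9        # hydrodynamic radius of the particle, m (assumed)
V = 4.0 / 3.0 * pi * radius**3
phi_r = eta * V / (kB * T)
print(f"rotational correlation time: {phi_r*1e9:.1f} ns")   # roughly 16 ns

# Fundamental anisotropy r0 = (2/5) * (3*cos^2(beta) - 1) / 2
def r0(beta_degrees):
    b = radians(beta_degrees)
    return 0.4 * (3 * cos(b)**2 - 1) / 2

print(r0(0.0))    # 0.4  (parallel excitation and emission dipoles)
print(r0(90.0))   # -0.2 (perpendicular dipoles)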
If the emission polarizer is vertically polarized there will be an additional loss of fluorescence as Brownian motion results in dye molecules moving from an initial vertical polarized configuration to an unpolarized configuration. On the other hand, if the emission polarizer is horizontally polarized there will be an additional introduction of excited molecules that were initially vertically polarized and became depolarized via Brownian motion. The fluorescence sum and difference can be constructed by addition of the intensities and subtraction of the fluorescence intensities respectively: formula_9 formula_10 Dividing the difference by the sum gives the anisotropy decay: formula_11 The grating factor "G" is an instrumental preference of the emission optics for the horizontal orientation to the vertical orientation. It can be measured by moving the excitation polarizer to the horizontal orientation and comparing the intensities when the emission polarizer is vertically and horizontally polarized respectively. formula_12 "G" is emission wavelength dependent. Note "G" in literature is defined as the inverse shown. The degree of decorrelation in the polarization of the incident and emitted light depends on how quickly the fluorophore orientation gets scrambled (the rotational lifetime formula_13 ) compared to the fluorescence lifetime (formula_6). The scrambling of orientations can occur by the whole molecule tumbling or by the rotation of only the fluorescent part. The rate of tumbling is related to the measured anisotropy by the Perrin equation: formula_14 Where r is the observed anisotropy, r0 is the intrinsic anisotropy of the molecule, formula_6 is the fluorescence lifetime and formula_15 is the rotational correlation time. This analysis is valid only if the fluorophores are relatively far apart. If they are very close to another, they can exchange energy by FRET and because the emission can occur from one of many independently moving (or oriented) molecules this results in a lower than expected anisotropy or a greater decorrelation. This type of homotransfer Förster resonance energy transfer is called energy migration FRET or emFRET. Steady-state fluorescence anisotropy only give an "average" anisotropy. Much more information can be obtained with time-resolved fluorescence anisotropy where the decay time, residual anisotropy and rotational correlation time can all be determined from fitting the anisotropy decay. Typically a vertically pulsed laser source is used for excitation and timing electronics are added between the start pulses of the laser (start) and the measurement of the fluorescence photons (stop). The technique Time-Correlated Single Photon Counting (TCSPC) is typically employed. Again using the idealistic simplest case a subset of dye molecules suspended in solution that have a mono-exponential fluorescence lifetime formula_6 and an initial anisotropy r0=0.4. If the sample is excited with a pulsed vertically orientated excitation source then a single decay time formula_6 should be measured when the emission polarizer is at the magic angle. If the emission polarizer is vertically polarized instead two decay times will be measured both with positive pre-exponential factors, the first decay time should be equivalent to formula_6 measured with the unpolarized emission set-up and the second decay time will be due to the loss of fluorescence as Brownian motion results in dye molecules moving from an initial vertical polarized configuration to an unpolarized configuration. 
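As a minimal numeric sketch of the steady-state quantities just described, the Python fragment below combines four polarized intensity readings into the grating factor, the sum and difference signals, and the anisotropy, and then inverts the Perrin equation to obtain the rotational correlation time. All of the intensity values, the lifetime and the fundamental anisotropy are illustrative assumptions rather than measured data:
# Steady-state anisotropy from polarized intensity measurements
I_VV = 1000.0   # vertical excitation, vertical emission (assumed)
I_VH = 650.0    # vertical excitation, horizontal emission (assumed)
I_HV = 480.0    # horizontal excitation, vertical emission (assumed)
I_HH = 500.0    # horizontal excitation, horizontal emission (assumed)

G = I_HV / I_HH                 # grating factor
S = I_VV + 2 * G * I_VH         # fluorescence sum
D = I_VV - G * I_VH             # fluorescence difference
r = D / S                       # steady-state anisotropy
print(f"G = {G:.3f}, r = {r:.3f}")

# Perrin equation r = r0 / (1 + tau/phi), solved here for the rotational
# correlation time phi given an assumed lifetime tau
r0 = 0.4        # fundamental anisotropy (assumed)
tau = 4.0e-9    # fluorescence lifetime, s (assumed)
phi = tau / (r0 / r - 1)
print(f"rotational correlation time: {phi*1e9:.2f} ns")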
On the other hand, if the emission polarizer is horizontally polarized, two decay times will again be recovered the first one with a positive pre-exponential factor and will be equivalent to formula_6 but the second one will have a negative pre-exponential factor resulting from the introduction of excited molecules that were initially vertically polarized and became depolarized via Brownian motion. The fluorescence sum and difference can be constructed by addition of the decays and subtraction of the fluorescence decays respectively: formula_16 formula_17 Dividing the difference by the sum gives the anisotropy decay: formula_18 In the simplest case for only one species of spherical dye: formula_19 Applications. Fluorescence anisotropy can be used to measure the binding constants and kinetics of reactions that cause a change in the rotational time of the molecules. If the fluorophore is a small molecule, the rate at which it tumbles can decrease significantly when it is bound to a large protein. If the fluorophore is attached to the larger protein in a binding pair, the difference in polarization between bound and unbound states will be smaller (because the unbound protein will already be fairly stable and tumble slowly to begin with) and the measurement will be less accurate. The degree of binding is calculated by using the difference in anisotropy of the partially bound, free and fully bound (large excess of protein) states measured by titrating the two binding partners. If the fluorophore is bound to a relatively large molecule like a protein or an RNA, the change in the mobility accompanying folding can be used to study the dynamics of folding. This provides a measure of the dynamics of how the protein achieves its final, stable 3D shape. In combination with fluorophores which interact via Förster resonance energy transfer(FRET), fluorescence anisotropy can be used to detect the oligomeric state of complex-forming molecules ("How many of the molecules are interacting?"). Fluorescence anisotropy is also applied to microscopy, with use of polarizers in the path of the illuminating light and also before the camera. This can be used to study the local viscosity of the cytosol or membranes, with the latter giving information about the membrane microstructure and the relative concentrations of various lipids. This technique has also been used to detect the binding of molecules to their partners in signaling cascades in response to certain cues. The phenomenon of emFRET and the associated decrease in anisotropy when close interactions occur between fluorophores has been used to study the aggregation of proteins in response to signaling. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
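The following sketch illustrates the time-resolved case for the idealised single spherical species discussed above: polarized decays are simulated from an assumed lifetime and rotational correlation time, the grating factor is set to one for simplicity, and dividing the difference by the sum recovers the single-exponential anisotropy decay:
import numpy as np

# Simulated polarized decays for one spherical species (all parameters assumed)
tau = 4.0e-9      # fluorescence lifetime, s
phi = 2.0e-9      # rotational correlation time, s
r0 = 0.4          # initial anisotropy
G = 1.0           # ideal detection, no instrumental correction

t = np.linspace(0, 20e-9, 400)
total = np.exp(-t / tau)            # magic-angle (total) decay
r_t = r0 * np.exp(-t / phi)         # anisotropy decay to be recovered

# Polarized components consistent with S = I_VV + 2*I_VH and D = I_VV - I_VH when G = 1
I_VV = total * (1 + 2 * r_t) / 3
I_VH = total * (1 - r_t) / 3

S = G * I_VV + 2 * I_VH
D = G * I_VV - I_VH
print(np.allclose(D / S, r_t))      # True: D(t)/S(t) reproduces r0*exp(-t/phi)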
[ { "math_id": 0, "text": "I_T" }, { "math_id": 1, "text": "r=\\frac{I_z-I_y}{I_x+I_y+I_z}" }, { "math_id": 2, "text": "I_x=I_y" }, { "math_id": 3, "text": "I_y=I_{\\perp}" }, { "math_id": 4, "text": "I_z=I_{\\parallel}" }, { "math_id": 5, "text": "r=\\frac{I_{\\parallel}-I_{\\perp}}{I_{\\parallel}+2I_{\\perp}}=\\frac{I_{\\parallel}-I_{\\perp}}{I_{T}}" }, { "math_id": 6, "text": "\\tau" }, { "math_id": 7, "text": "\\phi _r = {{\\eta V} \\over {k{_B}T}}" }, { "math_id": 8, "text": "{r_0} = {2 \\over 5}\\left( {{{3{{\\cos }^2}\\beta - 1} \\over 2}} \\right)" }, { "math_id": 9, "text": "S = {I_{VV}} + 2G{I_{VH}}" }, { "math_id": 10, "text": "D = {I_{VV}} - G{I_{VH}}" }, { "math_id": 11, "text": "r = {D \\over S}" }, { "math_id": 12, "text": "G = {{{I_{HV}}} \\over {{I_{HH}}}}" }, { "math_id": 13, "text": "\\phi" }, { "math_id": 14, "text": "r(\\tau)=\\frac{r_0}{1+\\tau/\\tau_c}" }, { "math_id": 15, "text": "\\tau_c" }, { "math_id": 16, "text": "S(t) = G{I_{VV}}(t) + 2{I_{VH}}(t)" }, { "math_id": 17, "text": "D(t) = G{I_{VV}}(t) - {I_{VH}}(t)" }, { "math_id": 18, "text": "r(t) = {D(t) \\over S(t)}" }, { "math_id": 19, "text": "r(t) = {r_0}\\exp \\left( { - {t \\over {{\\phi _r}}}} \\right)" } ]
https://en.wikipedia.org/wiki?curid=6662091
66623591
1 Chronicles 7
First Book of Chronicles, chapter 7 1 Chronicles 7 is the seventh chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter contains the genealogies of tribes settled north of Judah: Issachar, Benjamin, Naphtali, Manasseh, Ephraim and Asher. It belongs to the section focusing on the list of genealogies from Adam to the lists of the people returning from exile in Babylon ( to 9:34). Text. This chapter was originally written in the Hebrew language. It is divided into 40 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Structure. The whole chapter belongs to an arrangement comprising 1 Chronicles 2:3–8:40 with the king-producing tribes of Judah (David; 2:3–4:43) and Benjamin (Saul; 8:1–40) bracketing the series of lists as the priestly tribe of Levi (6:1–81) anchors the center, in the following order: A David’s royal tribe of Judah (2:3–4:43) B Northern tribes east of Jordan (5:1–26) X The priestly tribe of Levi (6:1–81) B' Northern tribes west of Jordan (7:1–40) A' Saul’s royal tribe of Benjamin (8:1–40) Descendants of Issachar (7:1–5). The list parallels and , but with additional information about the tribe of Issachar whose allotted land was located southwest of the Sea of Galilee. "Now the sons of Issachar were Tola, Puah, Jashub, and Shimron, four in all." Verse 1. "Puah" (Hebrew: ): written as "Puvvah" () in Genesis 46:13, and "Puvah" () in Numbers 26:23. "And their brethren among all the families of Issachar were valiant men of might, reckoned in all by their genealogies fourscore and seven thousand." Descendants of Benjamin (7:6–12). This is one of varying Benjamin's genealogies in Chronicles and other Old Testament documents, with one uniting element: Bela is Benjamin's firstborn son (cf. Genesis 46:21; Numbers 26:38; 1 Chronicles 8:1). A longer genealogy is listed in 1 Chronicles 8:1–28. "The sons of Benjamin; Bela, and Becher, and Jediael, three." "And Shuppim and Huppim were the sons of Ir, Hushim the son of Aher." Descendants of Naphtali (7:13). The genealogy consists of only one verse, paralleling Genesis 46:24 and Numbers 26:48–50. Descendants of Manasseh (7:14–19). The list is difficult to understand because of possible transmission corruption in some places since it differs from older source (Numbers 26:29–34). It also parallels Joshua 17:1–6. "And Machir took to wife the sister of Huppim and Shuppim, whose sister's name was Maachah; and the name of the second was Zelophehad: and Zelophehad had daughters." Descendants of Ephraim (7:20–29). This section consists of 3 parts: Joshua's genealogy resembles David's in . "And Ephraim their father mourned many days, and his brothers came to comfort him." Verse 22. 
This verse recalls the opening of the story of Job (Job 2:11), suggesting that the Chronicler wished to draw a parallel between Job and Ephraim. Descendants of Asher (7:30–40). The first part of Asher's genealogy parallels Genesis 46:17 and Numbers 26:44–47, but the rest has no other parallel and contains far more non-Hebrew names than other biblical documents. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=66623591
66637094
Subsective modifier
In linguistics, a subsective modifier is an expression which modifies another by delivering a subset of its denotation. For instance, the English adjective "skilled" is subsective since being a skilled surgeon entails being a surgeon. By contrast, the English adjective "alleged" is non-subsective since an "alleged spy" need not be an actual spy. A modifier can be subsective without being intersective. For instance, calling someone an "old friend" entails that they are a friend but does not entail that they are elderly. The term "subsective" is most often applied to modifiers which are not intersective, and non-intersectivity is sometimes treated as part of the definition. There is no standard analysis for the semantics of (non-intersective) subsective modifiers. Early work such as Montague (1970) took subsective adjectives as evidence that adjectives do not denote properties which compose intersectively but rather functions which take and return a property which may or may not make an intersective semantic contribution. However, subsequent work has shown that variants of the property-based analysis can in fact account for the data. For instance, vague predicates often pass standard tests for non-intersectivity (e.g. "Neutrons are "big" subatomic particles" does not entail that neutrons are big in absolute terms), yet they have in fact been analyzed as intersective using degree semantics. Current work tends to assume that the phenomenon of subsectivity is not a natural class. Adverbial readings. Subsectivity can arise when an adjective receives an adverbial reading. For instance, the subsective modifiers in the examples below do not express intrinsic qualities of the subject but rather the manner in which the subject typically performs a particular action. (Without the parenthetical, these examples would be ambiguous between an adverbial reading and a garden-variety intersective reading.) Examples of this sort have been analyzed within a Davidsonian semantics as modifying an event variable introduced by the noun. In this analysis, an agentive noun such as "dancer" is formed by applying a generic quantifier gen to a predicate which is true of dancing events. The quantifier gen provides a habitual-like meaning, taking a predicate of events and returning a predicate which is true of an individual if they are the agent of the typical such event. In this analysis, adjectives such as "beautiful", "meticulous", and "fierce" can denote properties either of events or of individuals. When the adjective takes scope above gen it must be interpreted as a predicate of individuals; when it scopes below gen it must be interpreted as a predicate of events. In this latter case, the denotation of the adjective can still compose intersectively. Thus, on this analysis, to say that Oleg is a beautiful dancer is to say that he is the typical agent of typical beautiful dancing events. This is technically an intersective reading since it is derived by intersecting the modifier with the noun. However, it does not look like a typical intersective meaning since it does not require that Oleg himself be an element of that intersection—rather that he be the agent of certain events in that intersection. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
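As a rough, purely extensional illustration of the event-based analysis described above, the toy Python model below treats denotations as finite sets. The events, the agents, and the simplified gen operator (which here merely collects the agents of the listed events, a crude stand-in for the generic/habitual quantifier) are invented for the example and are not part of the formal proposal:
# Toy extensional model of the adverbial reading (all data invented)
dancing_events = {"e1", "e2", "e3"}
beautiful_events = {"e1", "e2"}          # "beautiful" as a predicate of events
beautiful_individuals = {"mia"}          # "beautiful" as a predicate of individuals
agent = {"e1": "oleg", "e2": "oleg", "e3": "mia"}

def gen(event_predicate):
    # Crude stand-in for the generic quantifier: the agents of such events
    return {agent[e] for e in event_predicate}

dancer = gen(dancing_events)                                    # {'oleg', 'mia'}
beautiful_dancer_adverbial = gen(dancing_events & beautiful_events)
beautiful_dancer_intersective = dancer & beautiful_individuals

print(beautiful_dancer_adverbial)       # {'oleg'}: someone who dances beautifully
print(beautiful_dancer_intersective)    # {'mia'}: a dancer who is beautiful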
[ { "math_id": 0, "text": "[\\![ \\text{skilled surgeon} ]\\!] \\subseteq [\\![\\text{surgeon}]\\!]" }, { "math_id": 1, "text": "[\\![ \\text{dance} ]\\!] = \\{e \\, | \\, e \\text{ is a dancing event }\\}" }, { "math_id": 2, "text": "[\\![" }, { "math_id": 3, "text": "\\!\\text{ dance} ]\\!] = \\{ x \\, | \\, x \\text{ is the agent of all contextually relevant dancing events } \\}" }, { "math_id": 4, "text": "[\\![ \\text{beautiful}_1 ]\\!] = \\{e \\, | \\, e \\text{ is beautiful }\\}" }, { "math_id": 5, "text": "[\\![ \\text{beautiful}_2 ]\\!] = \\{x \\, | \\, x \\text{ is beautiful }\\}" }, { "math_id": 6, "text": "[\\![ \\text{beautiful}_1 \\text{dance} ]\\!] = \\{e \\, | \\, e \\text{ is beautiful } \\} \\cap \\{ e \\, | \\, e \\text{ is a dancing event }\\} " } ]
https://en.wikipedia.org/wiki?curid=66637094
66638383
Sequence covering map
In mathematics, specifically topology, a sequence covering map is any of a class of maps between topological spaces whose definitions all somehow relate sequences in the codomain with sequences in the domain. Examples include sequentially quotient maps, sequence coverings, 1-sequence coverings, and 2-sequence coverings. These classes of maps are closely related to sequential spaces. If the domain and/or codomain have certain additional topological properties (often, the spaces being Hausdorff and first-countable is more than enough) then these definitions become equivalent to other well-known classes of maps, such as open maps or quotient maps, for example. In these situations, characterizations of such properties in terms of convergent sequences might provide benefits similar to those provided by, say for instance, the characterization of continuity in terms of sequential continuity or the characterization of compactness in terms of sequential compactness (whenever such characterizations hold). Definitions. Preliminaries. A subset formula_0 of formula_1 is said to be sequentially open in formula_1 if whenever a sequence in formula_2 converges (in formula_1) to some point that belongs to formula_3 then that sequence is necessarily eventually in formula_0 (i.e. at most finitely many points in the sequence do not belong to formula_0). The set formula_4 of all sequentially open subsets of formula_1 forms a topology on formula_2 that is finer than formula_2's given topology formula_5 By definition, formula_1 is called a sequential space if formula_6 Given a sequence formula_7 in formula_2 and a point formula_8 formula_9 in formula_1 if and only if formula_9 in formula_10 Moreover, formula_4 is the finest topology on formula_2 for which this characterization of sequence convergence in formula_1 holds. A map formula_11 is called sequentially continuous if formula_12 is continuous, which happens if and only if for every sequence formula_13 in formula_2 and every formula_8 if formula_9 in formula_1 then necessarily formula_14 in formula_15 Every continuous map is sequentially continuous although in general, the converse may fail to hold. In fact, a space formula_1 is a sequential space if and only if it has the following universal property for sequential spaces: for every topological space formula_16 and every map formula_17 the map formula_11 is continuous if and only if it is sequentially continuous. The sequential closure in formula_1 of a subset formula_18 is the set formula_19 consisting of all formula_20 for which there exists a sequence in formula_0 that converges to formula_21 in formula_22 A subset formula_18 is called sequentially closed in formula_1 if formula_23 which happens if and only if whenever a sequence in formula_0 converges in formula_1 to some point formula_20 then necessarily formula_24 The space formula_1 is called a Fréchet–Urysohn space if formula_25 for every subset formula_26 which happens if and only if every subspace of formula_1 is a sequential space. Every first-countable space is a Fréchet–Urysohn space and thus also a sequential space. All pseudometrizable spaces, metrizable spaces, and second-countable spaces are first-countable. Sequence coverings. A sequence formula_27 in a set formula_2 is by definition a function formula_28 whose value at formula_29 is denoted by formula_30 (although the usual notation used with functions, such as parentheses formula_31 or composition formula_32 might be used in certain situations to improve readability). 
Statements such as "the sequence formula_7 is injective" or "the image (i.e. range) formula_33 of a sequence formula_7 is infinite" as well as other terminology and notation that is defined for functions can thus be applied to sequences. A sequence formula_34 is said to be a subsequence of another sequence formula_7 if there exists a strictly increasing map formula_35 (possibly denoted by formula_36 instead) such that formula_37 for every formula_38 where this condition can be expressed in terms of function composition formula_39 as: formula_40 As usual, if formula_41 is declared to be (such as by definition) a subsequence of formula_7 then it should immediately be assumed that formula_35 is strictly increasing. The notation formula_42 and formula_43 mean that the sequence formula_7 is valued in the set formula_44 The function formula_45 is called a sequence covering if for every convergent sequence formula_46 in formula_47 there exists a sequence formula_48 such that formula_49 It is called a 1-sequence covering if for every formula_50 there exists some formula_51 such that for every sequence formula_52 that converges to formula_53 in formula_54 there exists a sequence formula_48 such that formula_55 and formula_7 converges to formula_21 in formula_22 It is a 2-sequence covering if formula_45 is surjective and also for every formula_50 and every formula_56 for every sequence formula_52 that converges to formula_53 in formula_54 there exists a sequence formula_48 such that formula_55 and formula_7 converges to formula_21 in formula_22 A map formula_45 is a compact covering if for every compact formula_57 there exists some compact subset formula_58 such that formula_59 Sequentially quotient mappings. In analogy with the definition of sequential continuity, a map formula_11 is called a sequentially quotient map if formula_12 is a quotient map, which happens if and only if for any subset formula_60 formula_0 is sequentially open in formula_16 if and only if this is true of formula_61 in formula_22 Sequentially quotient maps were introduced in who defined them as above. Every sequentially quotient map is necessarily surjective and sequentially continuous although it may fail to be continuous. If formula_11 is a sequentially continuous surjection whose domain formula_1 is a sequential space, then formula_11 is a quotient map if and only if formula_16 is a sequential space and formula_11 is a sequentially quotient map. Call a space formula_16 sequentially Hausdorff if formula_62 is a Hausdorff space. In an analogous manner, a "sequential version" of every other separation axiom can be defined in terms of whether or not the space formula_62 possesses it. Every Hausdorff space is necessarily sequentially Hausdorff. A sequential space is Hausdorff if and only if it is sequentially Hausdorff. If formula_11 is a sequentially continuous surjection then assuming that formula_16 is sequentially Hausdorff, the following are equivalent: If the assumption that formula_64 is sequentially Hausdorff were to be removed, then statement (2) would still imply the other two statements but the above characterization would no longer be guaranteed to hold (however, if points in the codomain were required to be sequentially closed then any sequentially quotient map would necessarily satisfy condition (3)). This remains true even if the sequential continuity requirement on formula_45 was strengthened to require (ordinary) continuity. Instead of using the original definition, some authors define "sequentially quotient map" to mean a continuous surjection that satisfies condition (2) or alternatively, condition (3). 
If the codomain is sequentially Hausdorff then these definitions differs from the original only in the added requirement of continuity (rather than merely requiring sequential continuity). The map formula_11 is called if for every convergent sequence formula_63 in formula_16 such that formula_46 is not eventually equal to formula_65 the set formula_66 is not sequentially closed in formula_67 where this set may also be described as: formula_68 Equivalently, formula_11 is presequential if and only if for every convergent sequence formula_63 in formula_16 such that formula_69 the set formula_70 is not sequentially closed in formula_22 A surjective map formula_11 between Hausdorff spaces is sequentially quotient if and only if it is sequentially continuous and a presequential map. Characterizations. If formula_11 is a continuous surjection between two first-countable Hausdorff spaces then the following statements are true: Properties. The following is a sufficient condition for a continuous surjection to be sequentially open, which with additional assumptions, results in a characterization of open maps. Assume that formula_45 is a continuous surjection from a regular space formula_2 onto a Hausdorff space formula_74 If the restriction formula_76 is sequentially quotient for every open subset formula_72 of formula_2 then formula_45 maps open subsets of formula_2 to sequentially open subsets of formula_74 Consequently, if formula_2 and formula_64 are also sequential spaces, then formula_45 is an open map if and only if formula_76 is sequentially quotient (or equivalently, quotient) for every open subset formula_72 of formula_77 Given an element formula_50 in the codomain of a (not necessarily surjective) continuous function formula_17 the following gives a sufficient condition for formula_53 to belong to formula_71's image: formula_78 A family formula_79 of subsets of a topological space formula_1 is said to be locally finite at a point formula_20 if there exists some open neighborhood formula_72 of formula_21 such that the set formula_80 is finite. Assume that formula_45 is a continuous map between two Hausdorff first-countable spaces and let formula_81 If there exists a sequence formula_82 in formula_64 such that (1) formula_63 and (2) there exists some formula_20 such that formula_83 is not locally finite at formula_73 then formula_84 The converse is true if there is no point at which formula_71 is locally constant; that is, if there does not exist any non-empty open subset of formula_2 on which formula_71 restricts to a constant map. Sufficient conditions. 
Suppose formula_45 is a continuous open surjection from a first-countable space formula_2 onto a Hausdorff space formula_47 let formula_85 be any non-empty subset, and let formula_86 where formula_87 denotes the closure of formula_88 in formula_74 Then given any formula_89 and any sequence formula_7 in formula_90 that converges to formula_73 there exists a sequence formula_91 in formula_90 that converges to formula_92 as well as a subsequence formula_93 of formula_7 such that formula_94 for all formula_95 In short, this states that given a convergent sequence formula_96 such that formula_9 then for any other formula_97 belonging to the same fiber as formula_73 it is always possible to find a subsequence formula_98 such that formula_99 can be "lifted" by formula_71 to a sequence that converges to formula_100 The following shows that under certain conditions, a map's fiber being a countable set is enough to guarantee the existence of a point of openness. If formula_45 is a sequence covering from a Hausdorff sequential space formula_2 onto a Hausdorff first-countable space formula_64 and if formula_50 is such that the fiber formula_75 is a countable set, then there exists some formula_51 such that formula_21 is a point of openness for formula_101 Consequently, if formula_45 is quotient map between two Hausdorff first-countable spaces and if every fiber of formula_71 is countable, then formula_45 is an almost open map and consequently, also a 1-sequence covering. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "(X, \\tau)" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "S," }, { "math_id": 4, "text": "\\operatorname{SeqOpen}(X, \\tau)" }, { "math_id": 5, "text": "\\tau." }, { "math_id": 6, "text": "\\tau = \\operatorname{SeqOpen}(X, \\tau)." }, { "math_id": 7, "text": "x_{\\bull}" }, { "math_id": 8, "text": "x \\in X," }, { "math_id": 9, "text": "x_{\\bull} \\to x" }, { "math_id": 10, "text": "(X, \\operatorname{SeqOpen}(X, \\tau))." }, { "math_id": 11, "text": "f : (X, \\tau) \\to (Y, \\sigma)" }, { "math_id": 12, "text": "f : (X, \\operatorname{SeqOpen}(X, \\tau)) \\to (Y, \\operatorname{SeqOpen}(Y, \\sigma))" }, { "math_id": 13, "text": "x_{\\bull} = \\left(x_i\\right)_{i=1}^{\\infty}" }, { "math_id": 14, "text": "f\\left(x_{\\bull}\\right) \\to f(x)" }, { "math_id": 15, "text": "(Y, \\sigma)." }, { "math_id": 16, "text": "(Y, \\sigma)" }, { "math_id": 17, "text": "f : X \\to Y," }, { "math_id": 18, "text": "S \\subseteq X" }, { "math_id": 19, "text": "\\operatorname{scl}_{(X, \\tau)} S" }, { "math_id": 20, "text": "x \\in X" }, { "math_id": 21, "text": "x" }, { "math_id": 22, "text": "(X, \\tau)." }, { "math_id": 23, "text": "S = \\operatorname{scl}_{(X, \\tau)} S," }, { "math_id": 24, "text": "x \\in S." }, { "math_id": 25, "text": "\\operatorname{scl}_X S ~=~ \\operatorname{cl}_X S" }, { "math_id": 26, "text": "S \\subseteq X," }, { "math_id": 27, "text": "x_{\\bull} = \\left(x_i\\right)_{i=1}^\\infty" }, { "math_id": 28, "text": "x_{\\bull} : \\N \\to X" }, { "math_id": 29, "text": "i \\in \\N" }, { "math_id": 30, "text": "x_i" }, { "math_id": 31, "text": "x_{\\bull}(i)" }, { "math_id": 32, "text": "f \\circ x_{\\bull}," }, { "math_id": 33, "text": "\\operatorname{Im} x_{\\bull}" }, { "math_id": 34, "text": "s_{\\bull}" }, { "math_id": 35, "text": "l_{\\bull} : \\N \\to \\N" }, { "math_id": 36, "text": "l_{\\bull} = \\left(l_k\\right)_{k=1}^{\\infty}" }, { "math_id": 37, "text": "s_k = x_{l_k}" }, { "math_id": 38, "text": "k \\in \\N," }, { "math_id": 39, "text": "\\circ" }, { "math_id": 40, "text": "s_{\\bull} = x_{\\bull} \\circ l_{\\bull}." }, { "math_id": 41, "text": "x_{l_{\\bull}} = \\left(x_{l_k}\\right)_{k=1}^{\\infty}" }, { "math_id": 42, "text": "x_{\\bull} \\subseteq S" }, { "math_id": 43, "text": "\\operatorname{Im} x_{\\bull} \\subseteq S" }, { "math_id": 44, "text": "S." }, { "math_id": 45, "text": "f : X \\to Y" }, { "math_id": 46, "text": "y_{\\bull}" }, { "math_id": 47, "text": "Y," }, { "math_id": 48, "text": "x_{\\bull} \\subseteq X" }, { "math_id": 49, "text": "y_{\\bull} = f \\circ x_{\\bull}." }, { "math_id": 50, "text": "y \\in Y" }, { "math_id": 51, "text": "x \\in f^{-1}(y)" }, { "math_id": 52, "text": "y_{\\bull} \\subseteq Y" }, { "math_id": 53, "text": "y" }, { "math_id": 54, "text": "(Y, \\sigma)," }, { "math_id": 55, "text": "y_{\\bull} = f \\circ x_{\\bull}" }, { "math_id": 56, "text": "x \\in f^{-1}(y)," }, { "math_id": 57, "text": "K \\subseteq Y" }, { "math_id": 58, "text": "C \\subseteq X" }, { "math_id": 59, "text": "f(C) = K." 
}, { "math_id": 60, "text": "S \\subseteq Y," }, { "math_id": 61, "text": "f^{-1}(S)" }, { "math_id": 62, "text": "(Y, \\operatorname{SeqOpen}(Y, \\sigma))" }, { "math_id": 63, "text": "y_{\\bull} \\to y" }, { "math_id": 64, "text": "Y" }, { "math_id": 65, "text": "y," }, { "math_id": 66, "text": "\\bigcup_{\\stackrel{i \\in \\N,}{y_i \\neq y}} f^{-1}\\left(y_i\\right)" }, { "math_id": 67, "text": "(X, \\tau)," }, { "math_id": 68, "text": "\\bigcup_{\\stackrel{i \\in \\N,}{y_i \\neq y}} f^{-1}\\left(y_i\\right) \n~=~ f^{-1} \\left(\\left(\\operatorname{Im} y_{\\bull}\\right) \\setminus \\{ y \\}\\right) \n~=~ f^{-1} \\left(\\operatorname{Im} y_{\\bull}\\right) \\setminus f^{-1}(y)\n" }, { "math_id": 69, "text": "y_{\\bull} \\subseteq Y \\setminus \\{ y \\}," }, { "math_id": 70, "text": "f^{-1} \\left(\\operatorname{Im} y_{\\bull}\\right)" }, { "math_id": 71, "text": "f" }, { "math_id": 72, "text": "U" }, { "math_id": 73, "text": "x," }, { "math_id": 74, "text": "Y." }, { "math_id": 75, "text": "f^{-1}(y)" }, { "math_id": 76, "text": "f\\big\\vert_U : U \\to f(U)" }, { "math_id": 77, "text": "X." }, { "math_id": 78, "text": "y \\in \\operatorname{Im} f := f(X)." }, { "math_id": 79, "text": "\\mathcal{B}" }, { "math_id": 80, "text": "\\left\\{ B \\in \\mathcal{B} ~:~ U \\cap B \\neq \\varnothing \\right\\}" }, { "math_id": 81, "text": "y \\in Y." }, { "math_id": 82, "text": "y_{\\bull} = \\left(y_i\\right)_{i=1}^{\\infty}" }, { "math_id": 83, "text": "\\left\\{ f^{-1}\\left(y_i\\right) ~:~ i \\in \\N \\right\\}" }, { "math_id": 84, "text": "y \\in \\operatorname{Im} f = f(X)." }, { "math_id": 85, "text": "D \\subseteq Y" }, { "math_id": 86, "text": "y \\in \\operatorname{cl}_Y D" }, { "math_id": 87, "text": "\\operatorname{cl}_Y D" }, { "math_id": 88, "text": "D" }, { "math_id": 89, "text": "x, z \\in f^{-1}(y)" }, { "math_id": 90, "text": "f^{-1}(D)" }, { "math_id": 91, "text": "z_{\\bull}" }, { "math_id": 92, "text": "z" }, { "math_id": 93, "text": "\\left(x_{l_k}\\right)_{k=1}^\\infty" }, { "math_id": 94, "text": "f(z_k) = f\\left(x_{l_k}\\right)" }, { "math_id": 95, "text": "k \\in \\N." }, { "math_id": 96, "text": "x_{\\bull} \\subseteq f^{-1}(D)" }, { "math_id": 97, "text": "z \\in f^{-1}(f(x))" }, { "math_id": 98, "text": "x_{l_{\\bull}} = \\left(x_{l_k}\\right)_{k=1}^\\infty" }, { "math_id": 99, "text": "f \\circ x_{l_{\\bull}} = \\left(f\\left(x_{l_k}\\right)\\right)_{k=1}^\\infty" }, { "math_id": 100, "text": "z." }, { "math_id": 101, "text": "f : X \\to Y." } ]
https://en.wikipedia.org/wiki?curid=66638383
66640115
1 Chronicles 8
First Book of Chronicles, chapter 8 1 Chronicles 8 is the eighth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter focuses on the tribe of Benjamin, especially the family of King Saul. It belongs to the section focusing on the list of genealogies from Adam to the lists of the people returning from exile in Babylon ( to 9:34). Text. This chapter was originally written in the Hebrew language. It is divided into 40 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Structure. The whole chapter belongs to an arrangement comprising 1 Chronicles 2:3–8:40 with the king-producing tribes of Judah (David; 2:3–4:43) and Benjamin (Saul; 8:1–40) bracketing the series of lists as the priestly tribe of Levi (6:1–81) anchors the center, in the following order: A David’s royal tribe of Judah (2:3–4:43) B Northern tribes east of Jordan (5:1–26) X The priestly tribe of Levi (6:1–81) B' Northern tribes west of Jordan (7:1–40) A' Saul’s royal tribe of Benjamin (8:1–40) Descendants of Benjamin (8:1–32). This section contains a second genealogy of Benjamin (after 1 Chronicles 7:6-12) and is considered as a later addition to the Chronicles, documenting family trees of individuals in the tribe of Benjamin, with dwelling places and historical notes, in four sections that end in verses 7, 12, 28 and 32, respectively. The first section is the family of Bela (verses 1–7), then followed by the family of Shaharaim (verses 8–12). Several families who lived in Aijalon and Jerusalem are listed in verses 13–28, continues with the forefathers of Saul in verses 29–32, to be followed with the genealogy of Saul in the subsequent section. "1 Now Benjamin begat Bela his firstborn, Ashbel the second, and Aharah the third," "2 Nohah the fourth, and Rapha the fifth." "And their brethren among all the families of Issachar were valiant men of might, reckoned in all by their genealogies fourscore and seven thousand." Family of Saul (8:33–40). This section focuses on the genealogy of Saul, nearly identical to the list in 1 Chronicles 9:35–44. Although the royal throne was occupied by David's line, the descendants of Saul was apparently still considered important, as the list continues to the ten generation after Saul's death (1 Chronicles 10) into the 8th century BCE. "And Ner begat Kish, and Kish begat Saul, and Saul begat Jonathan, and Malchishua, and Abinadab, and Eshbaal." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=66640115
66647913
Privative adjective
In linguistics, a privative adjective is an adjective which seems to exclude members of the extension of the noun which it modifies. For instance, "fake" is privative since a "fake nose" is not an actual nose. Other examples in English include "pretend", "fictitious", and "artificial". The defining feature of privative adjectives is shown below in set theoretic notation. Privative adjectives are non-subsective, but behave differently from ordinary non-subsectives in important respects, at least in English. While ordinary non-subsectives such as the modal adjective "alleged" can only be used in attributive position, privative adjectives can be used either in attributive or predicative position. In this regard, privative adjectives pattern more like intersective adjectives such as "blue". In part because of this pattern, Partee (1997) argued that privative adjectives are in fact intersective adjectives which coerce a broader interpretation of the nouns they modify. On this analysis, listeners treat fake noses as falling within the extension of the noun "nose" because refusing to do so would render the expression "fake nose" self-contradictory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{Adj}" }, { "math_id": 1, "text": "\\text{N}" }, { "math_id": 2, "text": "[\\![ \\text{Adj N} ]\\!] \\cap [\\![\\text{N}]\\!] = \\emptyset" } ]
https://en.wikipedia.org/wiki?curid=66647913
6664825
Clamping (graphics)
Limiting a position to an area In computer science, clamping, or clipping, is the process of limiting a value to a range between a minimum and a maximum value. Unlike wrapping, clamping merely moves the point to the nearest available value. In Python, clamping can be defined as follows:
def clamp(x, minimum, maximum):
    if x < minimum:
        return minimum
    if x > maximum:
        return maximum
    return x
This is equivalent to max(minimum, min(x, maximum)) for languages that support the functions min and max. Uses. Several programming languages and libraries provide functions for fast and vectorized clamping. In Python, the pandas library offers the codice_0 and codice_1 methods. The NumPy library offers the codice_2 function. In the Wolfram Language, it is implemented as Clip. In OpenGL, the codice_3 function takes four codice_4 values which are then 'clamped' to the range formula_0. One of the many uses of clamping in computer graphics is the placing of a detail inside a polygon—for example, a bullet hole on a wall. It can also be used with wrapping to create a variety of effects. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
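As a brief illustration of the vectorized routines mentioned above, the sketch below applies NumPy's clip function to an array and to a scalar; the sample values are arbitrary:
import numpy as np

# Vectorized clamping with NumPy's clip function
values = np.array([-2.0, 0.25, 0.5, 1.7])
print(np.clip(values, 0.0, 1.0))   # each element limited to the range [0, 1]
print(np.clip(5, 0, 3))            # scalars work as well: prints 3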
[ { "math_id": 0, "text": "[0,1]" } ]
https://en.wikipedia.org/wiki?curid=6664825
66649570
1 Chronicles 9
First Book of Chronicles, chapter 9 1 Chronicles 9 is the ninth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter contains list of Jerusalem's inhabitants in the post-exilic period (verses 1–34), and closes with the family of Saul (verses 35–44), an almost literal repetition of the list of descendants in 1 Chronicles 8:29–38. The first part of the chapter (verses 1–34) belongs to the section focusing on the list of genealogies from Adam to the lists of the people returning from exile in Babylon ( to 9:34), whereas the second part (verses 35–44) belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30). Text. This chapter was originally written in the Hebrew language. It is divided into 44 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century). Returned exiles in Jerusalem (9:1–16). This section contains a list of people returning from Babylonian exile to Jerusalem, in following order: Israel (non-clerics, naming four tribes: Judah, Benjamin, Ephraim, Manasseh; verses 1–9), priests (verses 10–13), and Levites (verses 14–16). Verses 2–17 were probably adapted from Nehemiah 11:3-19. "So all Israel were reckoned by genealogies; and, behold, they were written in the book of the kings of Israel and Judah, who were carried away to Babylon for their transgression." "And the first inhabitants who dwelt in their possessions in their cities were Israelites, priests, Levites, and the Nethinim." "And in Jerusalem dwelt of the children of Judah, and of the children of Benjamin, and of the children of Ephraim, and Manasseh;" "And of the priests; Jedaiah, and Jehoiarib, and Jachin," "The beginning of the se[cond] month is [on the si]xth [day] of the course of Jedaiah. On the second of the month is the Sabbath of the course of Harim..." The gatekeepers (9:17–34). The gatekeepers (or 'porters') are described at length as members of the Levite families (cf. ff; they are listed separately from other 'Levites'), with specific duties (verses 18–19) to guard 'thresholds of the tent' as well as the entrances. These duties were established during the desert-dwelling period and had not changed since that time. These gatekeepers are different from the singers, who only began to hold their office when their job as bearers of the ark became unnecessary (cf. ). Apart from guard duties, the gatekeepers were also in charge of utensils, furniture, materials for service, and baking the flat cakes and "rows of bread" (cf. ). also give special attention to gatekeepers. The family of King Saul (9:35–44). This section focuses on the genealogy of Saul, the first ruler of Israel, nearly identical to the list in 1 Chronicles 8:29–38, to conclude the genealogies of the tribes of Israel. 
"And Ner begat Kish, and Kish begat Saul, and Saul begat Jonathan, and Malchishua, and Abinadab, and Eshbaal." "And the son of Jonathan was Meribbaal: and Meribbaal begat Micah." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=66649570
666496
Unified Thread Standard
Standard thread form The Unified Thread Standard (UTS) defines a standard thread form and series—along with allowances, tolerances, and designations—for screw threads commonly used in the United States and Canada. It is the main standard for bolts, nuts, and a wide variety of other threaded fasteners used in these countries. It has the same 60° profile as the ISO metric screw thread, but the characteristic dimensions of each UTS thread (outer diameter and pitch) were chosen as an inch fraction rather than a millimeter value. The UTS is currently controlled by ASME/ANSI in the United States. Basic profile. Each thread in the series is characterized by its major diameter "D"maj and its pitch, "P". UTS threads consist of a symmetric V-shaped thread. In any plane containing the thread axis, the flanks of the V have an angle of 60° to each other. The outermost &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄8 and the innermost &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄4 of the height "H" of the V-shape are cut off from the profile. The major diameter "D"maj is the diameter of the screw measured from the outer edge of the threads. The minor diameter "D"min (also known as the root diameter) is the diameter of the screw measured from the inner edge of the threads. The major diameter may be slightly different from the shank diameter, which is the diameter of the unthreaded part of the screw. The diameters are sometimes given approximately in fractions of an inch (e.g. the major diameter of a #6 screw is 0.1380 in, approximately &lt;templatestyles src="Fraction/styles.css" /&gt;9⁄64 in = 0.140625 in). The pitch "P" is the distance between thread peaks. For UTS threads, which are single-start threads, it is equal to the lead, the axial distance that the screw advances during a 360° rotation. UTS threads do not usually use the pitch parameter; instead a parameter known as threads per inch (TPI) is used, which is the reciprocal of the pitch. The relationship between the height "H" and the pitch "P" is found using the following equation where formula_0 is half the included angle of the thread, in this case 30 degrees: formula_1 or formula_2 In an external (male) thread (e.g., on a bolt), the major diameter "D"maj and the minor diameter "D"min define "maximum" dimensions of the thread. This means that the external thread must end flat at "D"maj, but can be rounded out below the minor diameter "D"min. Conversely, in an internal (female) thread (e.g., in a nut), the major and minor diameters are "minimum" dimensions, therefore the thread profile must end flat at "D"min but may be rounded out beyond "D"maj. These provisions are to prevent any interferences. The minor diameter "D"min and effective pitch diameter "D"p are derived from the major diameter and pitch as: formula_3 Designation. The standard designation for a UTS thread is a number indicating the nominal (major) diameter of the thread, followed by the pitch measured in threads per inch. For diameters smaller than inch, the diameter is indicated by an integer number defined in the standard; for all other diameters, the inch figure is given. This number pair is optionally followed by the letters UNC, UNF or UNEF (Unified) if the diameter-pitch combination is from the "coarse", "fine", or "extra fine" series, and may also be followed by a tolerance class. 
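As a sketch of how the basic-profile relations above are used, the Python fragment below derives the pitch, thread height, minor diameter and pitch diameter from a major diameter and a threads-per-inch count, taking the common 1/4-20 UNC combination as the example input:
from math import sqrt

# Basic UTS thread geometry from major diameter and threads per inch (inches)
def uts_basic_profile(d_major, tpi):
    p = 1.0 / tpi                           # pitch
    h = sqrt(3) / 2 * p                     # height H of the fundamental triangle
    d_minor = d_major - 2 * (5.0 / 8.0) * h # thread truncated by H/8 and H/4
    d_pitch = d_major - 2 * (3.0 / 8.0) * h
    return p, h, d_minor, d_pitch

# 1/4-20 UNC: nominal major diameter 0.250 in, 20 threads per inch
p, h, d_minor, d_pitch = uts_basic_profile(0.250, 20)
print(f"P = {p:.4f} in, H = {h:.4f} in")
print(f"minor diameter ~ {d_minor:.4f} in, pitch diameter ~ {d_pitch:.4f} in")
# minor diameter ~ 0.1959 in, pitch diameter ~ 0.2175 in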
Example: #6-32 UNC 2B (major diameter: 0.1380 inch, pitch: 32 tpi) &lt;templatestyles src="Reflist/styles.css" /&gt; The following formula is used to calculate the major diameter of a numbered screw greater than or equal to 0: "Major diameter" = "Screw #" × 0.013 in + 0.060 in. For example, the major diameter of a #10 screw is 10 × 0.013 in + 0.060 in = 0.190 in. To calculate the major diameter of "aught" size screws count the number of extra zeroes and multiply this number by 0.013 in and subtract from 0.060 in. For example, the major diameter of a #0000 screw is 0.060 in − (3 × 0.013 in) = 0.060 in − 0.039 in = 0.021 in. The number series of machine screws has been extended downward to include #00-90 (0.047 in = 0.060 in − 0.013 in) and #000-120 (0.034 in = 0.060 in − 2 × 0.013 in) screws; however, the main standard for screws smaller than #0 is ANSI/ASME standard B1.10 Unified Miniature Screw Threads. This defines a series of metric screws named after their major diameters in millimetres, from 0.30 UNM to 1.40 UNM. Preferred sizes are 0.3, 0.4, 0.5, 0.6, 0.8, 1.0 and 1.2 mm, with additional defined sizes halfway between. The standard thread pitch is approximately of the major diameter. The thread form is slightly modified to increase the minor diameter, and thus the strength of screws and taps. The major diameter still extends to within "H" of the theoretical sharp "V", but the total depth of the thread is reduced 4% from "H" =  cos(30°) "P" ≈ 0.541"P" to 0.52"P". This increases the amount of the theoretical sharp "V" which is cut off at the minor diameter by 10% from 0.25"H" to − ≈ 0.27456"H". The number series of machine screws once included more odd numbers and went up to #16 or more. Standardization efforts in the late 19th and the early part of the 20th century reduced the range of sizes considerably. Now, it is less common to see machine screws larger than #14, or odd number sizes other than #1, #3 and #5. Even though #14 and #16 screws are still available, they are not as common as sizes #0 through #12. Sometimes "special" diameter and pitch combinations (UNS) are used, for example a major diameter with 20 threads per inch. UNS threads are rarely used for bolts, but rather on nuts, tapped holes, and threaded ODs. Because of this UNS taps are readily available. Most UNS threads have more threads per inch than the correlating UNF or UNEF standard; therefore they are often the strongest thread available. Because of this they are often used in applications where high stresses are encountered, such as machine tool spindles or automotive spindles. Gauging. A screw thread gauging system comprises a list of screw thread characteristics that must be inspected to establish the dimensional acceptability of the screw threads on a threaded product and the gauge(s) which shall be used when inspecting those characteristics. Currently this gauging for UTS is controlled by: These standards provide essential specifications and dimensions for the gauges used on Unified inch screw threads (UN, UNR, UNJ thread form) on externally and internally threaded products. It also covers the specifications and dimensions for the thread gauges and measuring equipment. The basic purpose and use of each gauge are also described. It also establishes the criteria for screw thread acceptance when a gauging system is used. Tolerance classes. A classification system exists for ease of manufacture and interchangeability of fabricated threaded items. 
Most (but certainly not all) threaded items are made to a classification standard called the Unified Screw Thread Standard Series. This system is analogous to the fits used with assembled parts. The letter suffix "A" or "B" denotes whether the threads are external or internal, respectively. Classes 1A, 2A, 3A apply to external threads; Classes 1B, 2B, 3B apply to internal threads. Thread class refers to the acceptable range of pitch diameter for any given thread. The pitch diameter is indicated as Dp in the figure shown above. There are several methods that are used to measure the pitch diameter. The most common method used in production is by way of a go/no-go gauge. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
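The profile and numbered-screw relations above translate directly into a short calculation. The following Python sketch is illustrative only (the function names are mine, not part of the ASME/ANSI standard); it reproduces the article's formulas for the fundamental triangle height H, the basic minor and pitch diameters, and the major diameter of a numbered screw:

 import math

 def numbered_screw_major_diameter(number):
     """Major diameter in inches of a numbered machine screw (#0 and up)."""
     return 0.060 + 0.013 * number

 def uts_dimensions(d_major, tpi):
     """Basic UTS profile dimensions from major diameter (in) and threads per inch."""
     p = 1.0 / tpi                        # pitch
     h = math.sqrt(3) / 2 * p             # height of the fundamental triangle
     d_minor = d_major - 2 * (5 / 8) * h  # basic minor diameter
     d_pitch = d_major - 2 * (3 / 8) * h  # basic pitch (effective) diameter
     return p, h, d_minor, d_pitch

 # Example: #6-32 UNC
 d_maj = numbered_screw_major_diameter(6)       # 0.138 in
 p, h, d_min, d_p = uts_dimensions(d_maj, 32)
 print(round(d_maj, 4), round(d_min, 4), round(d_p, 4))

For #6-32 UNC this gives a major diameter of 0.138 in and a basic pitch diameter of about 0.1177 in, consistent with the designation example above.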
[ { "math_id": 0, "text": "\\theta" }, { "math_id": 1, "text": "H = \\frac{ {1}}{2\\tan \\theta} \\cdot P = \\frac{ \\sqrt 3 }{2} \\cdot P \\approx 0.866025 \\cdot P" }, { "math_id": 2, "text": "P = 2\\tan\\theta\\cdot H = \\frac{2}{\\sqrt 3} \\cdot H \\approx 1.154701 \\cdot H." }, { "math_id": 3, "text": "\\begin{align}\n D_\\text{min} &= D_\\text{maj} - 2\\cdot\\frac58\\cdot H = D_\\text{maj} - \\frac{ 5 {\\sqrt 3}}{8}\\cdot P \\approx D_\\text{maj} - 1.082532 \\cdot P \\\\\n D_\\text{p} &= D_\\text{maj} - 2\\cdot\\frac38\\cdot H = D_\\text{maj} - \\frac{ 3 {\\sqrt 3}}{8}\\cdot P \\approx D_\\text{maj} - 0.649519 \\cdot P.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=666496
6665119
Wrapping (graphics)
In computer graphics, wrapping is the process of limiting a position to an area. A good example of wrapping is wallpaper, a single pattern repeated indefinitely over a wall. Wrapping is used in 3D computer graphics to repeat a texture over a polygon, eliminating the need for large textures or multiple polygons. To wrap a position "x" to an area of width "w", calculate the value formula_0. Implementation. For computational purposes the wrapped value "x"' of "x" can be expressed as formula_1 where formula_2 is the highest value in the range, and formula_3 is the lowest value in the range. Pseudocode for wrapping of a value to a range other than 0–1 is
 function wrap(X, Min, Max: Real): Real;
   X := X - Int((X - Min) / (Max - Min)) * (Max - Min);
   if X < 0 then // This corrects the problem caused by using Int instead of Floor
     X := X + Max - Min;
   return X;
Pseudocode for wrapping of a value to a range of 0–1 is
 function wrap(X: Real): Real;
   X := X - Int(X);
   if X < 0 then
     X := X + 1;
   return X;
Pseudocode for wrapping of a value to a range of 0–1 without branching is
 function wrap(X: Real): Real;
   return ((X mod 1.0) + 1.0) mod 1.0;
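The pseudocode above can also be written as a small, runnable example. This is an illustrative Python version (names such as wrap_range are my own); using math.floor directly removes the need for the sign-correcting branch and matches the closed-form expression for the wrapped value:

 import math

 def wrap_range(x, lo, hi):
     """Wrap x into the half-open interval [lo, hi)."""
     width = hi - lo
     return x - math.floor((x - lo) / width) * width

 def wrap_unit(x):
     """Wrap x into [0, 1), e.g. for repeating texture coordinates."""
     return x - math.floor(x)

 # Example: a texture coordinate of 2.3 repeats to ~0.3, and -0.25 wraps to 0.75.
 print(wrap_unit(2.3), wrap_unit(-0.25))   # 0.2999... 0.75
 print(wrap_range(13.0, -5.0, 5.0))        # 3.0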
[ { "math_id": 0, "text": "x' = x \\text{ mod } w" }, { "math_id": 1, "text": "x' = x - \\lfloor (x - x_{\\text{min}}) / (x_{\\text{max}} - x_{\\text{min}}) \\rfloor \\cdot (x_{\\text{max}} - x_{\\text{min}})" }, { "math_id": 2, "text": "x_{\\text{max}}" }, { "math_id": 3, "text": "x_{\\text{min}}" } ]
https://en.wikipedia.org/wiki?curid=6665119
666526
Greedoid
Set system used in greedy optimization In combinatorics, a greedoid is a type of set system. It arises from the notion of the matroid, which was originally introduced by Whitney in 1935 to study planar graphs and was later used by Edmonds to characterize a class of optimization problems that can be solved by greedy algorithms. Around 1980, Korte and Lovász introduced the greedoid to further generalize this characterization of greedy algorithms; hence the name greedoid. Besides mathematical optimization, greedoids have also been connected to graph theory, language theory, order theory, and other areas of mathematics. Definitions. A set system ("F", "E") is a collection F of subsets of a ground set E (i.e. F is a subset of the power set of E). When considering a greedoid, a member of F is called a feasible set. When considering a matroid, a feasible set is also known as an "independent set". An accessible set system ("F", "E") is a set system in which every nonempty feasible set X contains an element x such that formula_0 is feasible. This implies that any nonempty, finite, accessible set system necessarily contains the empty set ∅. A greedoid ("F", "E") is an accessible set system that satisfies the "exchange property": A basis of a greedoid is a maximal feasible set, meaning it is a feasible set but not contained in any other one. A basis of a subset X of E is a maximal feasible set contained in X. The rank of a greedoid is the size of a basis. By the exchange property, all bases have the same size. Thus, the rank function is well defined. The rank of a subset X of E is the size of a basis of X. Just as with matroids, greedoids have a cryptomorphism in terms of rank functions. A function formula_5 is the rank function of a greedoid on the ground set E if and only if r is subcardinal, monotonic, and locally semimodular, that is, for any formula_6 and any formula_7 we have: Classes. Most classes of greedoids have many equivalent definitions in terms of set system, language, poset, simplicial complex, and so on. The following description takes the traditional route of listing only a couple of the more well-known characterizations. An interval greedoid ("F", "E") is a greedoid that satisfies the "Interval Property": formula_16 Equivalently, an interval greedoid is a greedoid such that the union of any two feasible sets is feasible if it is contained in another feasible set. An antimatroid ("F", "E") is a greedoid that satisfies the "Interval Property without Upper Bounds": Equivalently, an antimatroid is (i) a greedoid with a unique basis; or (ii) an accessible set system closed under union. It is easy to see that an antimatroid is also an interval greedoid. A matroid ("F", "E") is a non-empty greedoid that satisfies the "Interval Property without Lower Bounds": It is easy to see that a matroid is also an interval greedoid. Greedy algorithm. In general, a greedy algorithm is just an iterative process in which a "locally best choice", usually an input of maximum weight, is chosen each round until all available choices have been exhausted. In order to describe a greedoid-based condition in which a greedy algorithm is optimal (i.e., obtains a basis of maximum value), we need some more common terminologies in greedoid theory. Without loss of generality, we consider a greedoid "G" = ("F", "E") with E finite. A subset X of E is rank feasible if the largest intersection of X with any feasible set has size equal to the rank of X. In a matroid, every subset of E is rank feasible. 
But the equality does not hold for greedoids in general. A function formula_18 is "R"-compatible if formula_19 is rank feasible for all real numbers c. An objective function formula_20 is linear over a set formula_21 if, for all formula_22 we have formula_23 for some weight function formula_24 Proposition. A greedy algorithm is optimal for every R-compatible linear objective function over a greedoid. The intuition behind this proposition is that, during the iterative process, each optimal exchange of minimum weight is made possible by the exchange property, and optimal results are obtainable from the feasible sets in the underlying greedoid. This result guarantees the optimality of many well-known algorithms. For example, a minimum spanning tree of a weighted graph may be obtained using Kruskal's algorithm, which is a greedy algorithm for the cycle matroid. Prim's algorithm can be explained by taking the line search greedoid instead. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
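To make the definitions concrete, here is a minimal Python sketch (all names are mine, and the axioms are restated in the comments because the bulleted statements above are referenced only through formula placeholders). It checks the accessibility and exchange axioms for a small set system and runs the greedy algorithm that grows a feasible set by always adding a maximum-weight feasible element:

 def is_greedoid(F):
     """Check the greedoid axioms for a finite collection F of feasible sets."""
     F = {frozenset(X) for X in F}
     if frozenset() not in F:
         return False
     # Accessibility: every nonempty feasible X contains x with X \ {x} feasible.
     for X in F:
         if X and not any(X - {x} in F for x in X):
             return False
     # Exchange: |X| > |Y| implies some x in X \ Y with Y + {x} feasible.
     for X in F:
         for Y in F:
             if len(X) > len(Y) and not any(Y | {x} in F for x in X - Y):
                 return False
     return True

 def greedy_basis(E, F, w):
     """Greedily grow a feasible set, always adding a maximum-weight feasible element."""
     F = {frozenset(X) for X in F}
     S = frozenset()
     while True:
         candidates = [x for x in E if x not in S and S | {x} in F]
         if not candidates:
             return S
         S = S | {max(candidates, key=lambda x: w[x])}

 E = {"a", "b", "c"}
 U23 = [set(), {"a"}, {"b"}, {"c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]  # uniform matroid, a greedoid
 print(is_greedoid(U23))                                   # True
 print(is_greedoid([set(), {"a"}, {"c"}, {"a", "b"}]))     # False: exchange fails for X={a,b}, Y={c}
 print(greedy_basis(E, U23, {"a": 5, "b": 2, "c": 4}))     # the basis {'a', 'c'}, total weight 9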
[ { "math_id": 0, "text": "X \\setminus \\{x\\}" }, { "math_id": 1, "text": "X, Y \\in F" }, { "math_id": 2, "text": "|X|>|Y|," }, { "math_id": 3, "text": "x \\in X \\setminus Y" }, { "math_id": 4, "text": "Y \\cup \\{x\\} \\in F." }, { "math_id": 5, "text": "r:2^E \\to \\Z" }, { "math_id": 6, "text": "X,Y \\subseteq E" }, { "math_id": 7, "text": "e,f \\in E" }, { "math_id": 8, "text": "r(X)\\le|X|" }, { "math_id": 9, "text": "r(X)\\le r(Y)" }, { "math_id": 10, "text": "X \\subseteq Y \\subseteq E" }, { "math_id": 11, "text": "r(X) = r(X\\cup\\{e,f\\})" }, { "math_id": 12, "text": "r(X) = r(X \\cup \\{e\\}) = r(X \\cup \\{f\\})" }, { "math_id": 13, "text": "A,B,C \\in F" }, { "math_id": 14, "text": "A \\subseteq B \\subseteq C," }, { "math_id": 15, "text": "x \\in E \\setminus C:" }, { "math_id": 16, "text": "\n\\begin{matrix} A \\cup \\{x\\} \\in F \\\\ \nC \\cup \\{x\\} \\in F \\end{matrix}\n\\implies B \\cup \\{x\\} \\in F." }, { "math_id": 17, "text": "F = \\{X \\subseteq E: \\text{ submatrix } M_{\\{1,\\ldots,|X|\\},X} \\text{ is an invertible matrix}\\}." }, { "math_id": 18, "text": "w: E \\to \\R" }, { "math_id": 19, "text": "\\{x \\in E: w(x) \\geq c\\}" }, { "math_id": 20, "text": "f: 2^S \\to \\R" }, { "math_id": 21, "text": "S" }, { "math_id": 22, "text": "X \\subseteq S," }, { "math_id": 23, "text": "f(X) = \\sum_{x \\in X} w(x)" }, { "math_id": 24, "text": "w: S \\to \\Re." } ]
https://en.wikipedia.org/wiki?curid=666526
66659052
Simplification of disjunctive antecedents
In formal semantics and philosophical logic, simplification of disjunctive antecedents (SDA) is the phenomenon whereby a disjunction in the antecedent of a conditional appears to distribute over the conditional as a whole. This inference is shown schematically below: This inference has been argued to be valid on the basis of sentence pairs such as that below, since Sentence 1 seems to imply Sentence 2. The SDA inference was first discussed as a potential problem for the similarity analysis of counterfactuals. In these approaches, a counterfactual formula_1 is predicted to be true if formula_2 holds throughout the possible worlds where formula_3 holds that are most similar to the world of evaluation. On a Boolean semantics for disjunction, formula_3 can hold at a world simply in virtue of formula_4 being true there, meaning that the most similar formula_3-worlds could all be ones where formula_4 holds but formula_5 does not. If formula_2 is also true at these worlds but not at the closest worlds where formula_5 is true, then this approach will predict a failure of SDA: formula_6 will be true at the world of evaluation while formula_7 will be false. In more intuitive terms, imagine that Yde missed the most recent party because he happened to get a flat tire while Dani missed it because she hates parties and is also deceased. In all of the closest worlds where either Yde or Dani comes to the party, it will be Yde and not Dani who attends. If Yde is a fun person to have at parties, this will mean that Sentence 1 above is predicted to be true on the similarity approach. However, if Dani tends to have the opposite effect on parties she attends, then Sentence 2 is predicted false, in violation of SDA. SDA has been analyzed in a variety of ways. One is to derive it as a semantic entailment by positing a non-classical treatment of disjunction such as that of alternative semantics or inquisitive semantics. Another approach also derives it as a semantic entailment, but does so by adopting an alternative denotation for conditionals such as the strict conditional or any of the options made available in situation semantics. Finally, some researchers have suggested that it can be analyzed as a pragmatic implicature derived on the basis of classical disjunction and a standard semantics for conditionals. SDA is sometimes considered an embedded instance of the free choice inference. Notes.
[ { "math_id": 0, "text": " (A \\lor B) \\Rightarrow C \\models (A \\Rightarrow C) \\land (B \\Rightarrow C) " }, { "math_id": 1, "text": " (A \\lor B) > C" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "A \\lor B" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "B" }, { "math_id": 6, "text": "(A \\lor B) > C" }, { "math_id": 7, "text": "(B > C)" } ]
https://en.wikipedia.org/wiki?curid=66659052
66660615
Hurford disjunction
Disjunction in formal semantics In formal semantics, a Hurford disjunction is a disjunction in which one of the disjuncts entails the other. The concept was first identified by British linguist James Hurford. The sentence "Mary is in the Netherlands or she is in Amsterdam" is an example of a Hurford disjunction since one cannot be in Amsterdam without being in the Netherlands. Other examples are shown below: As indicated by the octothorps in the above examples, Hurford disjunctions are typically infelicitous. Their infelicity has been argued to arise from them being redundant, since simply uttering the stronger of the two disjuncts would have had the same semantic effect. Thus, they have been taken as motivation for a principle such as the following: Local Redundancy: An utterance is infelicitous if its logical form contains an instance of a binary operator formula_0 applied to arguments formula_1 or formula_2, whose semantic contribution is contextually equivalent to that of either formula_1 or formula_2 on its own. However, some particular instances of Hurford disjunctions are felicitous. Felicitous Hurford disjunctions have been analyzed by positing that the weaker disjunct is strengthened by an embedded scalar implicature which eliminates the entailment between the disjuncts. For instance, in the first of the felicitous examples above, the left disjunct's unenriched meaning is simply that Sofia ate a nonzero amount of pizza. This would result in a redundancy violation since eating all the pizza entails eating a nonzero amount of it. However, if an embedded scalar implicature enriches this disjunct so that it denotes the proposition that that Sofia ate some "but not all" of the pizza, this entailment no longer goes through. Eating all of the pizza does not entail eating some but not all of it. Thus, Local Redundancy will still be satisfied. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\oplus " }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "B" } ]
https://en.wikipedia.org/wiki?curid=66660615
66664550
Gnu code
In quantum information, the "gnu" code refers to a particular family of quantum error correcting codes, with the special property of being invariant under permutations of the qubits. Given integers "g" (the "gap"), "n" (the occupancy), and "m" (the length of the code), the two codewords are formula_0 formula_1 where formula_2 are the Dicke states consisting of a uniform superposition of all weight-"k" words on "m" qubits, e.g. formula_3 The real parameter formula_4 scales the density of the code. The length formula_5, hence the name of the code. For odd formula_6 and formula_7, the "gnu" code is capable of correcting formula_8 erasure errors, or deletion errors. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
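As a concrete illustration of the definitions above, the following NumPy sketch builds the two codewords for small parameters (the function names and the bit-ordering convention are my own choices, not part of the original construction):

 import numpy as np
 from itertools import combinations
 from math import comb

 def dicke_state(m, k):
     """|D^m_k>: uniform superposition of all weight-k computational basis states on m qubits."""
     v = np.zeros(2 ** m)
     for ones in combinations(range(m), k):
         v[sum(1 << i for i in ones)] = 1.0
     return v / np.linalg.norm(v)

 def gnu_codewords(g, n, m):
     """Logical |0_L> and |1_L> of the gnu code with gap g, occupancy n, length m = g*n*u."""
     zero, one = np.zeros(2 ** m), np.zeros(2 ** m)
     for ell in range(n + 1):
         amp = np.sqrt(comb(n, ell) / 2 ** (n - 1))
         if ell % 2 == 0:
             zero += amp * dicke_state(m, g * ell)
         else:
             one += amp * dicke_state(m, g * ell)
     return zero, one

 # Example: g = n = 3, u = 1, so m = 9; the codewords are normalized and orthogonal.
 zero, one = gnu_codewords(3, 3, 9)
 print(np.linalg.norm(zero), np.linalg.norm(one), np.dot(zero, one))   # 1.0 1.0 0.0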
[ { "math_id": 0, "text": "|0_{\\rm L}\\rangle = \\sum_{\\ell\\, \\textrm{even}\\atop 0\\le\\ell\\le n} \\sqrt{\\frac{{n\\choose \\ell}}{2^{n-1}}} |D^m_{g\\ell}\\rangle" }, { "math_id": 1, "text": "|1_{\\rm L}\\rangle = \\sum_{\\ell\\, \\textrm{odd}\\atop 0\\le\\ell\\le n} \\sqrt{\\frac{{n\\choose \\ell}}{2^{n-1}}} |D^m_{g\\ell}\\rangle" }, { "math_id": 2, "text": "|D^m_k\\rangle" }, { "math_id": 3, "text": "|D^4_2\\rangle = \\frac{|0011\\rangle + |0101\\rangle + |1001\\rangle + |0110\\rangle + |1010\\rangle + |1100\\rangle}{\\sqrt{6}}" }, { "math_id": 4, "text": "u = \\frac{m}{gn}" }, { "math_id": 5, "text": "m = gnu" }, { "math_id": 6, "text": "g = n" }, { "math_id": 7, "text": "u \\ge 1" }, { "math_id": 8, "text": "\\frac{g-1}{2}" } ]
https://en.wikipedia.org/wiki?curid=66664550
66664708
Galileo's law of odd numbers
Mechanical law describing falling objects In classical mechanics and kinematics, Galileo's law of odd numbers states that the distance covered by a falling object in successive equal time intervals is linearly proportional to the odd numbers. That is, if a body falling from rest covers a certain distance during an arbitrary time interval, it will cover 3, 5, 7, etc. times that distance in the subsequent time intervals of the same length. This mathematical model is accurate if the body is not subject to any forces besides uniform gravity (for example, it is falling in a vacuum in a uniform gravitational field). This law was established by Galileo Galilei who was the first to make quantitative studies of free fall. Explanation. Using a speed-time graph. The graph in the figure is a plot of speed versus time. Distance covered is the area under the line. Each time interval is coloured differently. The distance covered in the second and subsequent intervals is the area of its trapezium, which can be subdivided into triangles as shown. As each triangle has the same base and height, they have the same area as the triangle in the first interval. It can be observed that every interval has two more triangles than the previous one. Since the first interval has one triangle, this leads to the odd numbers. Using the sum of first "n" odd numbers. From the equation for uniform linear acceleration, the distance covered formula_0 for initial speed formula_1 constant acceleration formula_2 (acceleration due to gravity without air resistance), and time elapsed formula_3 it follows that the distance formula_4 is proportional to formula_5 (in symbols, formula_6), thus the distance from the starting point are consecutive squares for integer values of time elapsed. The middle figure in the diagram is a visual proof that the sum of the first formula_7 odd numbers is formula_8 In equations: That the pattern continues forever can also be proven algebraically: formula_9 To clarify this proof, since the formula_7th odd positive integer is formula_10 if formula_11 denotes the sum of the first formula_7 odd integers then formula_12 so that formula_13 Substituting formula_14 and formula_15 gives, respectively, the formulas formula_16 where the first formula expresses the sum entirely in terms of the odd integer formula_17 while the second expresses it entirely in terms of formula_18 which is formula_17's ordinal position in the list of odd integers formula_19 Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
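The law is easy to check numerically from the uniform-acceleration formula quoted above; the short Python sketch below (illustrative only) prints the ratios of the distances covered in successive one-second intervals and verifies that the sum of the first n odd numbers is n²:

 g = 9.81  # m/s^2, free fall without air resistance

 def distance(t):
     """Distance fallen from rest after time t (zero initial speed, uniform acceleration)."""
     return 0.5 * g * t ** 2

 # Distance covered in each successive 1-second interval, relative to the first interval:
 increments = [distance(k) - distance(k - 1) for k in range(1, 6)]
 print([d / increments[0] for d in increments])   # [1.0, 3.0, 5.0, 7.0, 9.0]

 # Sum of the first n odd numbers equals n^2:
 for n in range(1, 6):
     assert sum(2 * k - 1 for k in range(1, n + 1)) == n ** 2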
[ { "math_id": 0, "text": "s = u t + \\tfrac{1}{2} a t^2" }, { "math_id": 1, "text": "u = 0," }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "t," }, { "math_id": 4, "text": "s" }, { "math_id": 5, "text": "t^2" }, { "math_id": 6, "text": "s \\propto t^2" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "n^2." }, { "math_id": 9, "text": "\n\\begin{align}\n \\sum_{k=1}^n (2\\,k-1)&= \\frac{1}{2}\\,\\left( \\sum_{k=1}^n (2\\,k-1)+ \\sum_{k=1}^n (2\\,(n-k+1)-1) \\right)\\\\\n &= \\frac{1}{2}\\,\\sum_{k=1}^n (2\\,(n+1)-1-1)\\\\\n &= n^2 \n\\end{align}\n" }, { "math_id": 10, "text": "m \\,\\colon=\\, 2 n - 1," }, { "math_id": 11, "text": "S \\,\\colon=\\, \\sum_{k=1}^n (2\\,k-1) \\,=\\, 1 + 3 + \\cdots + (m-2) + m" }, { "math_id": 12, "text": "\\begin{alignat}{4}\nS + S\n&=\\;\\; 1 &&+\\;\\; 3 &&\\;+ \\cdots + (m-2) &&+\\;\\; m \\\\\n&+\\;\\; m &&+ (m-2) &&\\;+ \\cdots +\\;\\; 3 &&+\\;\\; 1 \\\\\n&=\\; (m+1) &&+ (m+1) &&\\;+ \\cdots + (m+1) &&+ (m+1) \\quad \\text{ (} n \\text{ terms)}\\\\\n&=\\; n \\, (m+1) && && && && \\\\\n\\end{alignat}" }, { "math_id": 13, "text": "S = \\tfrac{1}{2} \\, n \\, (m+1)." }, { "math_id": 14, "text": "n = \\tfrac{1}{2} (m + 1)" }, { "math_id": 15, "text": "m + 1 = 2 \\, n" }, { "math_id": 16, "text": "1 + 3 + \\cdots + m \\;=\\; \\tfrac{1}{4} (m+1)^2 \\quad \\text{ and } \\quad 1 + 3 + \\cdots + (2 \\, n - 1) \\;=\\; n^2" }, { "math_id": 17, "text": "m" }, { "math_id": 18, "text": "n," }, { "math_id": 19, "text": "1, 3, 5, \\ldots." } ]
https://en.wikipedia.org/wiki?curid=66664708
6666785
List of New York Americans head coaches
This is a list of New York Americans head coaches. The Americans had nine different head coaches during the team's existence. Tommy Gorman is the most successful head coach in the team's history, accumulating a .488 winning percentage during two stints as head coach. Red Dutton is the longest-tenured head coach for the team and led the Americans to the Stanley Cup playoffs in three seasons. References.
[ { "math_id": 0, "text": "\\frac{Wins+\\frac{1}{2}Ties}{Games}" } ]
https://en.wikipedia.org/wiki?curid=6666785
66672458
Type shifter
Interpretation rule in formal semantics In formal semantics, a type shifter is an interpretation rule that changes an expression's semantic type. For instance, the English expression "John" might ordinarily denote John himself, but a type shifting rule called Lift can raise its denotation to a function which takes a property and returns "true" if John himself has that property. Lift can be seen as mapping an individual onto the principal ultrafilter that it generates. Type shifters were proposed by Barbara Partee and Mats Rooth in 1983 to allow for systematic type ambiguity. Work of the period assumed that syntactic categories corresponded directly with semantic types, and researchers thus had to "generalize to the worst case" when particular uses of particular expressions from a given category required an especially high type. Moreover, Partee argued that evidence, in fact, supported expressions having different types in different contexts. Thus, she and Rooth proposed type shifting as a principled mechanism for generating the ambiguity. Type shifters remain a standard tool in formal semantic work, particularly in categorial grammar and related frameworks. Type shifters have also been used to interpret quantifiers in object position and to capture scope ambiguities. In that regard, they serve as an alternative to syntactic operations such as quantifier raising used in mainstream generative approaches to semantics. Type shifters have also been used to generate and compose alternative sets without the need to fully adopt an alternative-based semantics. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
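As a rough illustration of how Lift works, here is a toy Python sketch; modelling individuals as strings and properties as predicates is my own simplification and not part of the formal literature:

 def lift(x):
     """Lift: shift an individual (type e) to a generalized quantifier (type <<e,t>,t>)."""
     return lambda P: P(x)

 # Toy model: individuals are strings, properties are predicates over individuals.
 john = "John"
 sleeps = lambda individual: individual in {"John", "Mary"}

 # Ordinary denotation: [[John]] = j; the lifted denotation takes a property and applies it to j.
 print(lift(john)(sleeps))   # True, i.e. "John sleeps" holds in this toy model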
[ { "math_id": 0, "text": " \\, \\, [\\![John]\\!] = j " }, { "math_id": 1, "text": "[\\![John]\\!] = \\lambda P_{\\langle e,t \\rangle} . P(j) " } ]
https://en.wikipedia.org/wiki?curid=66672458
66676
Tropic of Cancer
Line of northernmost latitude at which the Sun can be directly overhead The Tropic of Cancer, also known as the Northern Tropic, is the Earth's northernmost circle of latitude where the Sun can be seen directly overhead. This occurs on the June solstice, when the Northern Hemisphere is tilted toward the Sun to its maximum extent. It also reaches 90 degrees below the horizon at solar midnight on the December Solstice. Using a continuously updated formula, the circle is currently °′″ (or °) north of the Equator. Its Southern Hemisphere counterpart, marking the most southerly position at which the Sun can be seen directly overhead, is the Tropic of Capricorn. These tropics are two of the five major circles of latitude that mark maps of Earth, the others being the Arctic and Antarctic circles and the Equator. The positions of these two circles of latitude (relative to the Equator) are dictated by the tilt of Earth's axis of rotation relative to the plane of its orbit, and since the tilt changes, the location of these two circles also changes. In geopolitics, it is known for being the southern limitation on the mutual defence obligation of NATO, as member states of NATO are not obligated to come to the defence of territory south of the Tropic of Cancer. Name. When this line of latitude was named in the last centuries BCE, the Sun was in the constellation Cancer (Latin: "Crab") at the June solstice (90° ecliptic longitude). Due to the precession of the equinoxes, this is no longer the case; today the Sun is in constellation Taurus at the June solstice. The word "tropic" itself comes from the Greek "trope (τροπή)", meaning turn (change of direction or circumstance), inclination, referring to the fact that the Sun appears to "turn back" at the solstices. Drift. The Tropic of Cancer's position is not fixed, but constantly changes because of a slight wobble in the Earth's longitudinal alignment relative to the ecliptic, the plane in which the Earth orbits around the Sun. Earth's axial tilt varies over a 41,000-year period from about 22.1 to 24.5 degrees, and as of 2000[ [update]] is about 23.4 degrees, which will continue to remain valid for about a millennium. This wobble means that the Tropic of Cancer is currently drifting southward at a rate of almost half an arcsecond (0.468″) of latitude, or , per year. The circle's position was at exactly 23° 27′N in 1917 and will be at 23° 26'N in 2045. The distance between the Antarctic Circle and the Tropic of Cancer is essentially constant as they move in tandem. This is based on an assumption of a constant equator, but the precise location of the equator is not truly fixed. See: equator, axial tilt and circles of latitude for additional details. Geography. North of the tropic are the subtropics and the North Temperate Zone. The equivalent line of latitude south of the Equator is called the Tropic of Capricorn, and the region between the two, centered on the Equator, is the tropics. In the year 2000, more than half of the world's population lived north of the Tropic of Cancer. On the Tropic of Cancer there are approximately 13 hours, 35 minutes of daylight during the summer solstice. During the winter solstice, there are 10 hours, 41 minutes of daylight. Using 23°26'N for the Tropic of Cancer, the tropic passes through the following 17 countries (including two disputed territories) and 8 water bodies, starting at the prime meridian and heading eastward: Climate. 
The climate at the Tropic of Cancer is generally hot and dry, except for cooler highland regions in China, marine environments such as Hawaii, and easterly coastal areas, where orographic rainfall can be very heavy, in some places reaching annually. Most regions on the Tropic of Cancer experience two distinct seasons: an extremely hot summer with temperatures often reaching and a warm winter with maxima around . Much land on or near the Tropic of Cancer is part of the Sahara Desert, while to the east, the climate is torrid monsoonal with a short wet season from June to September, and very little rainfall for the rest of the year. The highest mountain on or adjacent to the Tropic of Cancer is Yu Shan in Taiwan. It had glaciers descending as low as during the Last Glacial Maximum. At present glaciers still exist around the Tropic. The nearest currently surviving are the Minyong and Baishui in the Himalayas to the north and on Iztaccíhuatl in Mexico to the south. Circumnavigation. According to the rules of the Fédération Aéronautique Internationale, for a flight to compete for a round-the-world speed record, it must cover a distance no less than the length of the Tropic of Cancer, cross all meridians, and end on the same airfield where it started. Length of the Tropic of Cancer is : formula_0 where "φ" is the latitude of the Tropic of Cancer For an ordinary circumnavigation the rules are somewhat relaxed and the distance is set to a rounded value of at least . References. &lt;templatestyles src="Reflist/styles.css" /&gt;
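The length formula can be evaluated directly; the following Python sketch (illustrative, using an approximate present-day latitude of 23.4365° N) returns a value of roughly 36,788 km:

 import math

 def circle_of_latitude_length(lat_deg):
     """Length (m) of a circle of latitude using the ellipsoid formula given above."""
     phi = math.radians(lat_deg)
     return 2 * math.pi * math.cos(phi) * 6378137 * (1 - 0.00669438 * math.sin(phi) ** 2) ** -0.5

 print(circle_of_latitude_length(23.4365) / 1000)   # about 36,788 km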
[ { "math_id": 0, "text": "l=2\\pi \\cos(\\varphi) 6378137 (1-0.00669438(\\sin(\\varphi))^2)^{-0.5}" } ]
https://en.wikipedia.org/wiki?curid=66676
6668150
Clearing the neighbourhood
Criterion for a celestial body to be considered a planet "Clearing the neighbourhood" (or dynamical dominance) around a celestial body's orbit describes the body becoming gravitationally dominant such that there are no other bodies of comparable size other than its natural satellites or those otherwise under its gravitational influence. "Clearing the neighbourhood" is one of three necessary criteria for a celestial body to be considered a planet in the Solar System, according to the definition adopted in 2006 by the International Astronomical Union (IAU). In 2015, a proposal was made to extend the definition to exoplanets. In the end stages of planet formation, a planet, as so defined, will have "cleared the neighbourhood" of its own orbital zone, i.e. removed other bodies of comparable size. A large body that meets the other criteria for a planet but has not cleared its neighbourhood is classified as a dwarf planet. This includes Pluto, whose orbit intersects with Neptune's orbit and shares its orbital neighbourhood with many Kuiper belt objects. The IAU's definition does not attach specific numbers or equations to this term, but all IAU-recognised planets have cleared their neighbourhoods to a much greater extent (by orders of magnitude) than any dwarf planet or candidate for dwarf planet. The phrase stems from a paper presented to the 2000 IAU general assembly by the planetary scientists Alan Stern and Harold F. Levison. The authors used several similar phrases as they developed a theoretical basis for determining if an object orbiting a star is likely to "clear its neighboring region" of planetesimals based on the object's mass and its orbital period. Steven Soter prefers to use the term "dynamical dominance", and Jean-Luc Margot notes that such language "seems less prone to misinterpretation". Prior to 2006, the IAU had no specific rules for naming planets, as no new planets had been discovered for decades, whereas there were well-established rules for naming an abundance of newly discovered small bodies such as asteroids or comets. The naming process for Eris stalled after the announcement of its discovery in 2005, because its size was comparable to that of Pluto. The IAU sought to resolve the naming of Eris by seeking a taxonomical definition to distinguish planets from minor planets. Criteria. The phrase refers to an orbiting body (a planet or protoplanet) "sweeping out" its orbital region over time, by gravitationally interacting with smaller bodies nearby. Over many orbital cycles, a large body will tend to cause small bodies either to accrete with it, or to be disturbed to another orbit, or to be captured either as a satellite or into a resonant orbit. As a consequence it does not then share its orbital region with other bodies of significant size, except for its own satellites, or other bodies governed by its own gravitational influence. This latter restriction excludes objects whose orbits may cross but that will never collide with each other due to orbital resonance, such as Jupiter and its trojans, Earth and 3753 Cruithne, or Neptune and the plutinos. As to the extent of orbit clearing required, Jean-Luc Margot emphasises "a planet can never completely clear its orbital zone, because gravitational and radiative forces continually perturb the orbits of asteroids and comets into planet-crossing orbits" and states that the IAU did not intend the impossible standard of impeccable orbit clearing. Stern–Levison's Λ. 
In their paper, Stern and Levison sought an algorithm to determine which "planetary bodies control the region surrounding them". They defined Λ (lambda), a measure of a body's ability to scatter smaller masses out of its orbital region over a period of time equal to the age of the Universe (Hubble time). Λ is a dimensionless number defined as formula_0 where "m" is the mass of the body, "a" is the body's semi-major axis, and "k" is a function of the orbital elements of the small body being scattered and the degree to which it must be scattered. In the domain of the solar planetary disc, there is little variation in the average values of "k" for small bodies at a particular distance from the Sun. If Λ &gt; 1, then the body will likely clear out the small bodies in its orbital zone. Stern and Levison used this discriminant to separate the gravitationally rounded, Sun-orbiting bodies into "überplanets", which are "dynamically important enough to have cleared [their] neighboring planetesimals", and "unterplanets". The überplanets are the eight most massive solar orbiters (i.e. the IAU planets), and the unterplanets are the rest (i.e. the IAU dwarf planets). Soter's μ. Steven Soter proposed an observationally based measure μ (mu), which he called the "planetary discriminant", to separate bodies orbiting stars into planets and non-planets. He defines μ as formula_1 where μ is a dimensionless parameter, "M" is the mass of the candidate planet, and "m" is the mass of all other bodies that share an "orbital zone", that is all bodies whose orbits cross a common radial distance from the primary, and whose non-resonant periods differ by less than an order of magnitude. The order-of-magnitude similarity in period requirement excludes comets from the calculation, but the combined mass of the comets turns out to be negligible compared with the other small Solar System bodies, so their inclusion would have little impact on the results. μ is then calculated by dividing the mass of the candidate body by the total mass of the other objects that share its orbital zone. It is a measure of the actual degree of cleanliness of the orbital zone. Soter proposed that if μ &gt; 100, then the candidate body be regarded as a planet. Margot's Π. Astronomer Jean-Luc Margot has proposed a discriminant, Π (pi), that can categorise a body based only on its own mass, its semi-major axis, and its star's mass. Like Stern–Levison's Λ, Π is a measure of the ability of the body to clear its orbit, but unlike Λ, it is solely based on theory and does not use empirical data from the Solar System. Π is based on properties that are feasibly determinable even for exoplanetary bodies, unlike Soter's μ, which requires an accurate census of the orbital zone. formula_2 where "m" is the mass of the candidate body in Earth masses, "a" is its semi-major axis in AU, "M" is the mass of the parent star in solar masses, and "k" is a constant chosen so that Π &gt; 1 for a body that can clear its orbital zone. "k" depends on the extent of clearing desired and the time required to do so. Margot selected an extent of formula_3 times the Hill radius and a time limit of the parent star's lifetime on the main sequence (which is a function of the mass of the star). Then, in the mentioned units and a main-sequence lifetime of 10 billion years, "k" = 807. The body is a planet if Π &gt; 1. The minimum mass necessary to clear the given orbit is given when Π = 1. 
Π is based on a calculation of the number of orbits required for the candidate body to impart enough energy to a small body in a nearby orbit such that the smaller body is cleared out of the desired orbital extent. This is unlike Λ, which uses an average of the clearing times required for a sample of asteroids in the asteroid belt, and is thus biased to that region of the Solar System. Π's use of the main-sequence lifetime means that the body will eventually clear an orbit around the star; Λ's use of a Hubble time means that the star might disrupt its planetary system (e.g. by going nova) before the object is actually able to clear its orbit. The formula for Π assumes a circular orbit. Its adaptation to elliptical orbits is left for future work, but Margot expects it to be the same as that of a circular orbit to within an order of magnitude. To accommodate planets in orbit around brown dwarfs, an updated version of the criterion with a uniform clearing time scale of 10 billion years was published in 2024. The values of Π for Solar System bodies remain unchanged. Numerical values. Below is a list of planets and dwarf planets ranked by Margot's planetary discriminant Π, in decreasing order. For all eight planets defined by the IAU, Π is orders of magnitude greater than 1, whereas for all dwarf planets, Π is orders of magnitude less than 1. Also listed are Stern–Levison's Λ and Soter's μ; again, the planets are orders of magnitude greater than 1 for Λ and 100 for μ, and the dwarf planets are orders of magnitude less than 1 for Λ and 100 for μ. Also shown are the distances where Π = 1 and Λ = 1 (where the body would change from being a planet to being a dwarf planet). The mass of Sedna is not known; it is very roughly estimated here as , on the assumption of a density of about . Disagreement. Stern, the principal investigator of the "New Horizons" mission to Pluto, disagreed with the reclassification of Pluto on the basis of its inability to clear a neighbourhood. He argued that the IAU's wording is vague, and that — like Pluto — Earth, Mars, Jupiter and Neptune have not cleared their orbital neighbourhoods either. Earth co-orbits with 10,000 near-Earth asteroids (NEAs), and Jupiter has 100,000 trojans in its orbital path. "If Neptune had cleared its zone, Pluto wouldn't be there", he said. The IAU category of 'planets' is nearly identical to Stern's own proposed category of 'überplanets'. In the paper proposing Stern and Levison's Λ discriminant, they stated, "we define an "überplanet" as a planetary body in orbit about a star that is dynamically important enough to have cleared its neighboring planetesimals ..." and a few paragraphs later, "From a dynamical standpoint, our solar system clearly contains 8 überplanets" — including Earth, Mars, Jupiter, and Neptune. Although Stern proposed this to define dynamical subcategories of planets, he rejected it for defining what a planet is, advocating the use of intrinsic attributes over dynamical relationships. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
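As an illustration of how Π separates planets from dwarf planets, the sketch below evaluates Margot's formula with k = 807 for Earth and for Pluto; the input values are approximate and the function name is my own:

 def planetary_discriminant_pi(m_earth_masses, a_au, m_star_solar=1.0, k=807):
     """Margot's discriminant: Pi = k * m / (M**2.5 * a**1.125); Pi > 1 indicates orbit clearing."""
     return k * m_earth_masses / (m_star_solar ** 2.5 * a_au ** (9 / 8))

 # Approximate inputs: Earth (1 Earth mass at 1 AU), Pluto (~0.0022 Earth masses at ~39.5 AU).
 print(planetary_discriminant_pi(1.0, 1.0))       # about 807   -> well above 1, a planet
 print(planetary_discriminant_pi(0.0022, 39.5))   # about 0.03  -> well below 1, a dwarf planet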
[ { "math_id": 0, "text": "\\Lambda = \\frac{m^2}{a^{3/2}}\\,k" }, { "math_id": 1, "text": "\\mu = \\frac{M}{m}" }, { "math_id": 2, "text": "\\Pi = \\frac{m}{M^{5/2}a^{9/8}}\\,k" }, { "math_id": 3, "text": "2\\sqrt{3}" } ]
https://en.wikipedia.org/wiki?curid=6668150
66683669
1 Chronicles 10
First Book of Chronicles, chapter 10 1 Chronicles 10 is the tenth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter describes Saul's downfall and the reasons of his rejection by God. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30). Text. This chapter was originally written in the Hebrew language. It is divided into 14 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century). Death of Saul and his sons (10:1–10). This section marks the change of form in the Books of Chronicles from a list-based text to a more narrative description based on the historical documents such as the books of Samuel and books of Kings, and additional materials to provide information on the legitimate Davidic kingdom. It begins with Saul's downfall to theologically link the whole exposition with the Babylonian Exile at the end. "So Saul died, and his three sons, and all his house died together." "And they put his armour in the house of their gods, and fastened his head in the temple of Dagon." Verse 10. According to Saul's armour was placed in the temple of Ashtaroth (Astarte) and his body fastened to the walls of Beth-shan. The Chronicler avoids naming foreign gods, with few exception, such as Dagon. Burial of Saul (10:11–14). The narrative of Saul's burial is shorter than the account in , omitting details such as the all-night walk of the valiant men from Jabesh Gilead to fetch Saul's body and the hanging of the corpses on the city walls of Beth-shan. The Chronicler focuses more on Saul's rejection by God, giving no less than four reasons: "all the valiant men arose and took away the body of Saul and the bodies of his sons, and brought them to Jabesh. And they buried their bones under the oak in Jabesh and fasted seven days." Verse 12. The brave action of the men, marching from Jabesh-Gilead to Beth-Shan and back (about one way), recalls the high point of Saul's leadership at the beginning of his reign when he saved the people of Jabesh-Gilead from foreign attacks (1 Samuel 11). "But he did not inquire of the Lord; therefore He killed him, and turned the kingdom over to David the son of Jesse." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=66683669
66687082
Chessboard paradox
Mathematical paradox and logic puzzle The chessboard paradox or paradox of Loyd and Schlömilch is a falsidical paradox based on an optical illusion. A chessboard or a square with a side length of 8 units is cut into four pieces. Those four pieces are used to form a rectangle with side lengths of 13 and 5 units. Hence the combined area of all four pieces is 64 area units in the square but 65 area units in the rectangle. This seeming contradiction is due to an optical illusion: the four pieces don't fit exactly in the rectangle, but leave a small, barely visible gap around the rectangle's diagonal. The paradox is sometimes attributed to the American puzzle inventor Sam Loyd (1841–1911) and the German mathematician Oskar Schlömilch (1832–1901). Analysis. Upon close inspection one can see that the four pieces don't quite fit together but leave a small, barely visible gap around the diagonal of the rectangle. This gap formula_0 has the shape of a parallelogram, which can be checked by showing that the opposing angles are of equal size. formula_1 formula_2 An exact fit of the four pieces along the rectangle's diagonal requires the parallelogram to collapse into a line segment, which means its angles would need to have the following sizes: formula_3 formula_4 Since the actual angles deviate only slightly from those values, this creates the optical illusion of the parallelogram being just a line segment and of the pieces fitting exactly. Alternatively, one can verify the parallelism by placing the rectangle in a coordinate system and comparing slopes or vector representations of the sides. The side lengths and diagonals of the parallelogram are: formula_5 formula_6 formula_7 formula_8 Using Heron's formula one can compute the area of half of the parallelogram (formula_9). The semiperimeter is formula_10 which yields the area of the whole parallelogram: formula_11 So the area of the gap accounts exactly for the additional area of the rectangle. Generalization. The line segments occurring in the drawings of the previous sections are of lengths 2, 3, 5, 8 and 13. These are all sequential Fibonacci numbers, suggesting a generalization of the dissection scheme based on Fibonacci numbers. The properties of the Fibonacci numbers also provide some deeper insight into why the optical illusion works so well. A square whose side length is the Fibonacci number formula_12 can be dissected using line segments of lengths formula_13 in the same way the chessboard was dissected using line segments of lengths 8, 5, 3 (see graphic). Cassini's identity states: formula_14 From this it is immediately clear that the difference in area between square and rectangle must always be 1 area unit; in particular, for the original chessboard paradox one has: formula_15 Note that for an odd index formula_16 the area of the square is not smaller by one area unit but larger. In this case the four pieces don't create a small gap when assembled into the rectangle, but they overlap slightly instead. Since the difference in area is always 1 area unit, the optical illusion can be improved by using larger Fibonacci numbers, allowing the gap's percentage of the rectangle's area to become arbitrarily small and hence invisible for practical purposes. 
Since the ratio of neighboring Fibonacci numbers converges rather quickly to the golden ratio formula_17, the following ratios converge quickly as well: formula_18 For the four cut-outs of the square to fit together exactly to form a rectangle, the small parallelogram formula_0 needs to collapse into a line segment, namely the diagonal of the rectangle. In this case the following holds for the angles in the rectangle, since they are corresponding angles of parallels: formula_19, formula_20, formula_21, formula_22 As a consequence, the following right triangles formula_23, formula_24, formula_9 and formula_25 must be similar and the ratios of their legs must be the same. Due to the quick convergence stated above, the corresponding ratios of Fibonacci numbers in the assembled rectangle are almost the same: formula_26 Hence the pieces fit together almost exactly, which creates the optical illusion. One can also look at the angles of the parallelogram as in the original chessboard analysis. For those angles the following formulas can be derived: formula_27 formula_28 Hence the angles converge quickly towards the values needed for an exact fit. It is, however, possible to use the dissection scheme without creating an area mismatch, that is, the four cut-outs will assemble exactly into a rectangle of the same area as the square. Instead of using Fibonacci numbers, one bases the dissection directly on the golden ratio itself (see drawing). For a square of side length formula_29 this yields for the area of the rectangle formula_30 since formula_31 is a property of the golden ratio. History. Hooper's paradox can be seen as a precursor to the chessboard paradox. It uses the same figure of four pieces assembled into a rectangle; however, the dissected shape from which the four pieces originate is not a square, nor are the involved line segments based on Fibonacci numbers. Hooper published the paradox now named after him under the name "The geometric money" in his book "Rational Recreations". It was however not his invention, since his book was essentially a translation of the "Nouvelles récréations physiques et mathétiques" by Edmé Gilles Guyot (1706–1786), which had been published in France in 1769. The first known publication of the actual chessboard paradox is due to the German mathematician Oskar Schlömilch. He published it in 1868 under the title "Ein geometrisches Paradoxon" ("a geometrical paradox") in the German science journal "Zeitschrift für Mathematik und Physik". In the same journal, Victor Schlegel published in 1879 the article "Verallgemeinerung eines geometrischen Paradoxons" ("a generalisation of a geometrical paradox"), in which he generalised the construction and pointed out the connection to the Fibonacci numbers. The chessboard paradox was also a favorite of the British mathematician and author Lewis Carroll, who worked on a generalization as well but without publishing it; this was later discovered in his notes after his death. The American puzzle inventor Sam Loyd claimed to have presented the chessboard paradox at the world chess congress in 1858, and it was later contained in "Sam Loyd's Cyclopedia of 5,000 Puzzles, Tricks and Conundrums" (1914), which was posthumously published by his son of the same name. The son stated that the assembly of the four pieces into a figure of 63 area units (see graphic at the top) was his idea. It had, however, already been published in 1901 in the article "Some postcard puzzles" by Walter Dexter. References.
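Cassini's identity, and hence the constant one-unit area mismatch behind the paradox, is easy to verify numerically. The following Python sketch is illustrative only:

 def fib(n):
     """Fibonacci numbers with f(1) = f(2) = 1."""
     a, b = 1, 1
     for _ in range(n - 1):
         a, b = b, a + b
     return a

 # Cassini's identity: f(n+1) * f(n-1) - f(n)**2 == (-1)**n,
 # i.e. the rectangle and the square always differ by exactly one area unit.
 for n in range(3, 12):
     assert fib(n + 1) * fib(n - 1) - fib(n) ** 2 == (-1) ** n

 # The original paradox: an 8x8 square (f6 = 8) versus a 13x5 rectangle (f7 x f5).
 print(fib(7) * fib(5) - fib(6) ** 2)   # 1, the area of the thin parallelogram-shaped gap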
[ { "math_id": 0, "text": "DEBF" }, { "math_id": 1, "text": "\n\\begin{align}\n\\angle FDE &=90^\\circ-\\angle FDG -\\angle EDI\\\\[6pt]\n &=90^\\circ-\\arctan\\left( \\frac{|EI|}{|DI|}\\right)-\\arctan\\left( \\frac{|FG|}{|DG|}\\right)\\\\[6pt]\n &=90^\\circ-\\arctan\\left( \\frac{5}{2}\\right)-\\arctan\\left( \\frac{3}{8}\\right)\\\\[6pt]\n &=90^\\circ-\\arctan\\left( \\frac{|FJ|}{|BJ|}\\right)-\\arctan\\left( \\frac{|EH|}{|BH|}\\right)\\\\[6pt]\n &=90^\\circ-\\angle FBJ -\\angle EBH \\\\[6pt]\n &=\\angle FBE \\approx 1.24536^\\circ\n\\end{align}\n" }, { "math_id": 2, "text": "\n\\begin{align}\n\\angle DEB &=360^\\circ-\\angle DEI -\\angle IEH -\\angle EBH \\\\[6pt]\n &=360^\\circ-\\arctan\\left( \\frac{|DI|}{|EI|}\\right)-90^\\circ-\\arctan\\left( \\frac{|DG|}{|FG|}\\right) \\\\[6pt]\n &=360^\\circ-\\arctan\\left( \\frac{2}{5}\\right)-90^\\circ-\\arctan\\left( \\frac{8}{3}\\right) \\\\[6pt]\n &=360^\\circ-\\arctan\\left( \\frac{|BJ|}{|FJ|}\\right)-90^\\circ-\\arctan\\left( \\frac{|BH|}{|EH|}\\right)\\\\[6pt]\n &=360^\\circ-\\angle BFJ -\\angle JFG -\\angle GFD \\\\[6pt]\n &=\\angle DFB \\approx 178.75464^\\circ\n\\end{align}\n" }, { "math_id": 3, "text": "\\angle FBE = \\angle FDE = 0^\\circ" }, { "math_id": 4, "text": "\\angle DFB = \\angle DEB = 180^\\circ" }, { "math_id": 5, "text": "|DE|=|FB|=\\sqrt{2^2+5^2}=\\sqrt{29} " }, { "math_id": 6, "text": "|DF|=|EB|=\\sqrt{3^2+8^2}=\\sqrt{73} " }, { "math_id": 7, "text": "|EF|=\\sqrt{1^2+3^2}=\\sqrt{10} " }, { "math_id": 8, "text": "|DB|=\\sqrt{5^2+13^2}=\\sqrt{194} " }, { "math_id": 9, "text": "\\triangle DFG" }, { "math_id": 10, "text": "s=\\frac{|EF|+|DE|+|DF|}{2}=\\frac{\\sqrt{10}+\\sqrt{29}+\\sqrt{73}}{2}" }, { "math_id": 11, "text": "\n\\begin{align}\nF&=2\\cdot\\sqrt{s\\cdot(s-|EF|)\\cdot(s-|DE|)\\cdot(s-|DF|)}\\\\[5pt]\n &=2\\cdot \\frac{1}{4} \\cdot \\sqrt{(\\sqrt{10}+\\sqrt{29}+\\sqrt{73})\\cdot (-\\sqrt{10}+\\sqrt{29}+\\sqrt{73})\\cdot(\\sqrt{10}-\\sqrt{29}+\\sqrt{73})\\cdot(\\sqrt{10}+\\sqrt{29}-\\sqrt{73})}\\\\[5pt]\n&=2\\cdot \\frac{1}{4} \\cdot 2 \\\\[5pt]\n&=1\n\\end{align}\n" }, { "math_id": 12, "text": " f_n" }, { "math_id": 13, "text": " f_n, f_{n-1}, f_{n-2}" }, { "math_id": 14, "text": "f_{n+1} \\cdot f_{n-1} - f_n^2=(-1)^n" }, { "math_id": 15, "text": "13 \\cdot 5 -8^2= f_7 \\cdot f_5 - f_6^2=(-1)^6=1" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "\\varphi" }, { "math_id": 18, "text": "\\frac{f_n}{f_{n-2}}= \\frac{f_n}{f_{n-1}} \\cdot \\frac{f_{n-1}}{f_{n-2}}\\rightarrow \\varphi^2,\\, \\frac{f_{n-2}}{f_n}= \\frac{f_{n-2}}{f_{n-1}} \\cdot \\frac{f_{n-1}}{f_n}\\rightarrow \\varphi^{-2}" }, { "math_id": 19, "text": "\\angle IDE =\\angle HEB" }, { "math_id": 20, "text": " \\angle DEI =\\angle EBH" }, { "math_id": 21, "text": "\\angle FDG =\\angle BFJ" }, { "math_id": 22, "text": " \\angle GFD =\\angle JBF" }, { "math_id": 23, "text": "\\triangle IED" }, { "math_id": 24, "text": "\\triangle HBE" }, { "math_id": 25, "text": "\\triangle FBJ" }, { "math_id": 26, "text": "\\frac{f_n}{f_{n-2}} \\approx \\frac{f_{n-1}}{f_{n-3}},\\, \\frac{f_{n-2}}{f_n} \\approx \\frac{f_{n-3}}{f_{n-1}} " }, { "math_id": 27, "text": "n\\geq 4:\\quad\\angle FBE = \\angle FDE=\\arctan\\left( \\frac{1}{f_{2n-3}+2f_{n-3}f_{n-2}}\\right)\\rightarrow 0^\\circ " }, { "math_id": 28, "text": "n\\geq 4:\\quad\\angle DFB = \\angle DEB=180^\\circ - \\arctan\\left( \\frac{1}{f_{2n-3}+2f_{n-3}f_{n-2}}\\right)\\rightarrow 180^\\circ " }, { "math_id": 29, "text": "a" }, { "math_id": 30, "text": "(\\varphi-1)a \\cdot (\\varphi 
a)=(\\varphi^2-\\varphi)a^2=a^2," }, { "math_id": 31, "text": "\\varphi^2-\\varphi=1 " } ]
https://en.wikipedia.org/wiki?curid=66687082
666987
Frenet–Serret formulas
Formulas in differential geometry In differential geometry, the Frenet–Serret formulas describe the kinematic properties of a particle moving along a differentiable curve in three-dimensional Euclidean space formula_0, or the geometric properties of the curve itself irrespective of any motion. More specifically, the formulas describe the derivatives of the so-called tangent, normal, and binormal unit vectors in terms of each other. The formulas are named after the two French mathematicians who independently discovered them: Jean Frédéric Frenet, in his thesis of 1847, and Joseph Alfred Serret, in 1851. Vector notation and linear algebra currently used to write these formulas were not yet available at the time of their discovery. The tangent, normal, and binormal unit vectors, often called T, N, and B, or collectively the Frenet–Serret frame (TNB frame or TNB basis), together form an orthonormal basis spanning formula_0 and are defined as follows: The Frenet–Serret formulas are: formula_1 where "d"/"ds" is the derivative with respect to arclength, "κ" is the curvature, and "τ" is the torsion of the space curve. (Intuitively, curvature measures the failure of a curve to be a straight line, while torsion measures the failure of a curve to be planar.) The TNB basis combined with the two scalars, "κ" and "τ", is called collectively the Frenet–Serret apparatus. Definitions. Let r("t") be a curve in Euclidean space, representing the position vector of the particle as a function of time. The Frenet–Serret formulas apply to curves which are "non-degenerate", which roughly means that they have nonzero curvature. More formally, in this situation the velocity vector r′("t") and the acceleration vector r′′("t") are required not to be proportional. Let "s"("t") represent the arc length which the particle has moved along the curve in time "t". The quantity "s" is used to give the curve traced out by the trajectory of the particle a natural parametrization by arc length (i.e. arc-length parametrization), since many different particle paths may trace out the same geometrical curve by traversing it at different rates. In detail, "s" is given by formula_2 Moreover, since we have assumed that r′ ≠ 0, it follows that "s"("t") is a strictly monotonically increasing function. Therefore, it is possible to solve for "t" as a function of "s", and thus to write r("s") = r("t"("s")). The curve is thus parametrized in a preferred manner by its arc length. With a non-degenerate curve r("s"), parameterized by its arc length, it is now possible to define the Frenet–Serret frame (or TNB frame): from which it follows that B is always perpendicular to both T and N. Thus, the three unit vectors T, N, and B are all perpendicular to each other. The Frenet–Serret formulas are: formula_1 where formula_3 is the curvature and formula_4 is the torsion. The Frenet–Serret formulas are also known as "Frenet–Serret theorem", and can be stated more concisely using matrix notation: formula_5 This matrix is skew-symmetric. Formulas in "n" dimensions. The Frenet–Serret formulas were generalized to higher-dimensional Euclidean spaces by Camille Jordan in 1874. Suppose that r("s") is a smooth curve in formula_6, and that the first "n" derivatives of r are linearly independent. The vectors in the Frenet–Serret frame are an orthonormal basis constructed by applying the Gram-Schmidt process to the vectors (r′("s"), r′′("s"), ..., r("n")("s")). 
In detail, the unit tangent vector is the first Frenet vector "e"1("s") and is defined as formula_7 where formula_8 The normal vector, sometimes called the curvature vector, indicates the deviance of the curve from being a straight line. It is defined as formula_9 Its normalized form, the unit normal vector, is the second Frenet vector e2("s") and defined as formula_10 The tangent and the normal vector at point "s" define the "osculating plane" at point r("s"). The remaining vectors in the frame (the binormal, trinormal, etc.) are defined similarly by formula_11 formula_12 The last vector in the frame is defined by the cross-product of the first formula_13 vectors: formula_14 The real valued functions used below χ"i"("s") are called generalized curvature and are defined as formula_15 The Frenet–Serret formulas, stated in matrix language, are formula_16 Notice that as defined here, the generalized curvatures and the frame may differ slightly from the convention found in other sources. The top curvature formula_17 (also called the torsion, in this context) and the last vector in the frame formula_18, differ by a sign formula_19 (the orientation of the basis) from the usual torsion. The Frenet–Serret formulas are invariant under flipping the sign of both formula_17 and formula_18, and this change of sign makes the frame positively oriented. As defined above, the frame inherits its orientation from the jet of formula_20. Proof of the Frenet-Serret formulas. The first Frenet-Serret formula holds by the definition of the normal N and the curvature κ, and the third Frenet-Serret formula holds by the definition of the torsion τ. Thus what is needed is to show the second Frenet-Serret formula. Since T, N, and B are orthogonal unit vectors with B = T × N, one also has T = N × B and N = B × T. Differentiating the last equation with respect to s gives ∂N / ∂s = (∂B / ∂s) × T + B × (∂T / ∂s) Using that ∂B / ∂s = -τN and ∂T / ∂s = κN, this becomes ∂N / ∂s = -τ (N × T) + κ (B × N) = τB - κT This is exactly the second Frenet-Serret formula. Applications and interpretation. Kinematics of the frame. The Frenet–Serret frame consisting of the tangent T, normal N, and binormal B collectively forms an orthonormal basis of 3-space. At each point of the curve, this "attaches" a frame of reference or rectilinear coordinate system (see image). The Frenet–Serret formulas admit a kinematic interpretation. Imagine that an observer moves along the curve in time, using the attached frame at each point as their coordinate system. The Frenet–Serret formulas mean that this coordinate system is constantly rotating as an observer moves along the curve. Hence, this coordinate system is always non-inertial. The angular momentum of the observer's coordinate system is proportional to the Darboux vector of the frame. Concretely, suppose that the observer carries an (inertial) top (or gyroscope) with them along the curve. If the axis of the top points along the tangent to the curve, then it will be observed to rotate about its axis with angular velocity -τ relative to the observer's non-inertial coordinate system. If, on the other hand, the axis of the top points in the binormal direction, then it is observed to rotate with angular velocity -κ. This is easily visualized in the case when the curvature is a positive constant and the torsion vanishes. The observer is then in uniform circular motion. 
If the top points in the direction of the binormal, then by conservation of angular momentum it must rotate in the "opposite" direction of the circular motion. In the limiting case when the curvature vanishes, the observer's normal precesses about the tangent vector, and similarly the top will rotate in the opposite direction of this precession. The general case is illustrated below. Applications. The kinematics of the frame have many applications in the sciences. Frenet–Serret formulas in calculus. The Frenet–Serret formulas are frequently introduced in courses on multivariable calculus as a companion to the study of space curves such as the helix. A helix can be characterized by the height 2π"h" and radius "r" of a single turn. The curvature and torsion of a helix (with constant radius) are given by the formulas formula_21 formula_22 The sign of the torsion is determined by the right-handed or left-handed sense in which the helix twists around its central axis. Explicitly, the parametrization of a single turn of a right-handed helix with height 2π"h" and radius "r" is "x" = "r" cos "t" "y" = "r" sin "t" "z" = "h" "t" (0 ≤ t ≤ 2 π) and, for a left-handed helix, "x" = "r" cos "t" "y" = −"r" sin "t" "z" = "h" "t" (0 ≤ t ≤ 2 π). Note that these are not the arc length parametrizations (in which case, each of "x", "y", and "z" would need to be divided by formula_23). In his expository writings on the geometry of curves, Rudy Rucker employs the model of a slinky to explain the meaning of the torsion and curvature. The slinky, he says, is characterized by the property that the quantity formula_24 remains constant if the slinky is vertically stretched out along its central axis. (Here 2π"h" is the height of a single twist of the slinky, and "r" the radius.) In particular, curvature and torsion are complementary in the sense that the torsion can be increased at the expense of curvature by stretching out the slinky. Taylor expansion. Repeatedly differentiating the curve and applying the Frenet–Serret formulas gives the following Taylor approximation to the curve near "s" = 0 if the curve is parameterized by arclength: formula_25 For a generic curve with nonvanishing torsion, the projections of the curve onto the coordinate planes of the T, N, B coordinate system at "s" = 0 have the following interpretations: the projection onto the osculating plane (spanned by T and N) is formula_26, a parabola up to terms of order formula_27 whose curvature at 0 equals "κ"(0); the projection onto the normal plane (spanned by N and B) is formula_29, a cuspidal cubic up to terms of order formula_28; and the projection onto the rectifying plane (spanned by T and B) is formula_30, which up to terms of order formula_28 is the graph of a cubic function with an inflection point at "s" = 0. Ribbons and tubes. The Frenet–Serret apparatus allows one to define certain optimal "ribbons" and "tubes" centered around a curve. These have diverse applications in materials science and elasticity theory, as well as in computer graphics. The Frenet ribbon along a curve "C" is the surface traced out by sweeping the line segment [−N,N] generated by the unit normal along the curve. This surface is sometimes confused with the tangent developable, which is the envelope "E" of the osculating planes of "C". This is perhaps because both the Frenet ribbon and "E" exhibit similar properties along "C". Namely, the tangent planes of both sheets of "E", near the singular locus "C" where these sheets intersect, approach the osculating planes of "C"; the tangent planes of the Frenet ribbon along "C" are equal to these osculating planes. The Frenet ribbon is in general not developable. Congruence of curves. In classical Euclidean geometry, one is interested in studying the properties of figures in the plane which are "invariant" under congruence, so that if two figures are congruent then they must have the same properties. 
The Frenet–Serret apparatus presents the curvature and torsion as numerical invariants of a space curve. Roughly speaking, two curves "C" and "C"′ in space are "congruent" if one can be rigidly moved to the other. A rigid motion consists of a combination of a translation and a rotation. A translation moves one point of "C" to a point of "C"′. The rotation then adjusts the orientation of the curve "C" to line up with that of "C"′. Such a combination of translation and rotation is called a Euclidean motion. In terms of the parametrization r("t") defining the first curve "C", a general Euclidean motion of "C" is a composite of the following operations: a translation, which replaces r("t") by r("t") + v for some constant vector v, followed by a rotation, which replaces r("t") + v by "M"(r("t") + v) for some rotation matrix "M". The Frenet–Serret frame is particularly well-behaved with regard to Euclidean motions. First, since T, N, and B can all be given as successive derivatives of the parametrization of the curve, each of them is insensitive to the addition of a constant vector to r("t"). Intuitively, the TNB frame attached to r("t") is the same as the TNB frame attached to the new curve r("t") + v. This leaves only the rotations to consider. Intuitively, if we apply a rotation "M" to the curve, then the TNB frame also rotates. More precisely, the matrix "Q" whose rows are the TNB vectors of the Frenet–Serret frame changes by the matrix of a rotation formula_31 "A fortiori", the matrix (d"Q"/d"s")"Q"T is unaffected by a rotation: formula_32 since "MM"T = "I" for the matrix of a rotation. Hence the entries "κ" and "τ" of (d"Q"/d"s")"Q"T are "invariants" of the curve under Euclidean motions: if a Euclidean motion is applied to a curve, then the resulting curve has "the same" curvature and torsion. Moreover, using the Frenet–Serret frame, one can also prove the converse: any two curves having the same curvature and torsion functions must be congruent by a Euclidean motion. Roughly speaking, the Frenet–Serret formulas express the Darboux derivative of the TNB frame. If the Darboux derivatives of two frames are equal, then a version of the fundamental theorem of calculus asserts that the curves are congruent. In particular, the curvature and torsion are a "complete" set of invariants for a curve in three dimensions. Other expressions of the frame. The formulas given above for T, N, and B depend on the curve being given in terms of the arclength parameter. This is a natural assumption in Euclidean geometry, because the arclength is a Euclidean invariant of the curve. In the terminology of physics, the arclength parametrization is a natural choice of gauge. However, it may be awkward to work with in practice. A number of other equivalent expressions are available. Suppose that the curve is given by r("t"), where the parameter "t" need no longer be arclength. Then the unit tangent vector T may be written as formula_33 The normal vector N takes the form formula_34 The binormal B is then formula_35 An alternative way to arrive at the same expressions is to take the first three derivatives of the curve r′("t"), r′′("t"), r′′′("t"), and to apply the Gram-Schmidt process. The resulting ordered orthonormal basis is precisely the TNB frame. This procedure also generalizes to produce Frenet frames in higher dimensions. In terms of the parameter "t", the Frenet–Serret formulas pick up an additional factor of ||r′("t")|| because of the chain rule: formula_36 Explicit expressions for the curvature and torsion may be computed. For example, formula_37 The torsion may be expressed using a scalar triple product as follows, formula_38 Special cases. 
If the curvature is always zero then the curve will be a straight line. Here the vectors N, B and the torsion are not well defined. If the torsion is always zero then the curve will lie in a plane. A curve may have nonzero curvature and zero torsion. For example, the circle of radius "R" given by r("t")=("R" cos "t", "R" sin "t", 0) in the "z"=0 plane has zero torsion and curvature equal to 1/"R". The reverse, however, is impossible: a regular curve with nonzero torsion must have nonzero curvature. (This is the contrapositive of the fact that zero curvature implies zero torsion.) A helix has constant curvature and constant torsion. Plane curves. If a curve formula_39 is contained in the formula_40-plane, then its tangent vector formula_41 and principal unit normal vector formula_42 will also lie in the formula_40-plane. As a result, the unit binormal vector formula_43 is perpendicular to the formula_40-plane and thus must be either formula_44 or formula_45. By the right-hand rule, formula_46 will be formula_44 if, when viewed from above, the curve's trajectory is turning leftward, and will be formula_45 if it is turning rightward. As a result, the torsion formula_47 will always be zero, and the formula formula_48 for the curvature formula_49 becomes formula_50
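As a concrete illustration of the non-arclength expressions above for T, N, B, the curvature formula_37, and the torsion formula_38, the following Python sketch evaluates them for a right-handed helix and compares the results with the closed-form helix curvature and torsion formula_21 and formula_22. This sketch is an editorial addition rather than part of the original article; it assumes NumPy, and the function name and numerical values are arbitrary illustrative choices.

```python
# A minimal sketch of the Frenet-Serret apparatus from the first three
# derivatives of a parametrized curve, checked against the closed-form
# curvature r/(r^2+h^2) and torsion h/(r^2+h^2) of a right-handed helix.
import numpy as np

def frenet_apparatus(r1, r2, r3):
    """Frenet frame and invariants from r'(t), r''(t), r'''(t)."""
    T = r1 / np.linalg.norm(r1)                          # unit tangent
    c = np.cross(r1, r2)                                 # r' x r''
    B = c / np.linalg.norm(c)                            # unit binormal
    N = np.cross(B, T)                                   # unit normal (equals B x T)
    kappa = np.linalg.norm(c) / np.linalg.norm(r1) ** 3  # curvature
    tau = np.dot(c, r3) / np.linalg.norm(c) ** 2         # torsion (scalar triple product)
    return T, N, B, kappa, tau

# Right-handed helix x = r cos t, y = r sin t, z = h t, differentiated by hand.
r, h, t = 2.0, 0.5, 1.3
r1 = np.array([-r * np.sin(t),  r * np.cos(t), h])    # r'(t)
r2 = np.array([-r * np.cos(t), -r * np.sin(t), 0.0])  # r''(t)
r3 = np.array([ r * np.sin(t), -r * np.cos(t), 0.0])  # r'''(t)

T, N, B, kappa, tau = frenet_apparatus(r1, r2, r3)
print(kappa, r / (r**2 + h**2))   # both ~0.4706: matches r/(r^2+h^2)
print(tau,   h / (r**2 + h**2))   # both ~0.1176: matches h/(r^2+h^2)
```

Because the helix has constant curvature and constant torsion, the same numbers are obtained for any choice of the parameter value t in this sketch.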
[ { "math_id": 0, "text": "\\mathbb{R}^{3}" }, { "math_id": 1, "text": " \n\\begin{align}\n\\frac{\\mathrm{d} \\mathbf{T} }{ \\mathrm{d} s } &= \\kappa\\mathbf{N}, \\\\\n\\frac{\\mathrm{d} \\mathbf{N} }{ \\mathrm{d} s } &= -\\kappa\\mathbf{T}+\\tau\\mathbf{B},\\\\\n\\frac{\\mathrm{d} \\mathbf{B} }{ \\mathrm{d} s } &= -\\tau\\mathbf{N},\n\\end{align}\n" }, { "math_id": 2, "text": "s(t) = \\int_0^t \\left\\|\\mathbf{r}'(\\sigma)\\right\\|d\\sigma." }, { "math_id": 3, "text": "\\kappa" }, { "math_id": 4, "text": "\\tau" }, { "math_id": 5, "text": " \\begin{bmatrix} \\mathbf{T'} \\\\ \\mathbf{N'} \\\\ \\mathbf{B'} \\end{bmatrix} = \\begin{bmatrix}\n 0 & \\kappa & 0 \\\\\n -\\kappa & 0 & \\tau \\\\\n 0 & -\\tau & 0\n\\end{bmatrix}\n\\begin{bmatrix} \\mathbf{T} \\\\ \\mathbf{N} \\\\ \\mathbf{B} \\end{bmatrix}." }, { "math_id": 6, "text": "\\mathbb{R}^{n}" }, { "math_id": 7, "text": "\\mathbf{e}_1(s) = \\frac{\\overline{\\mathbf{e}_1}(s)} {\\| \\overline{\\mathbf{e}_1}(s) \\|}" }, { "math_id": 8, "text": "\\overline{\\mathbf{e}_1}(s) = \\mathbf{r}'(s)" }, { "math_id": 9, "text": "\\overline{\\mathbf{e}_2}(s) = \\mathbf{r}''(s) - \\langle \\mathbf{r}''(s), \\mathbf{e}_1(s) \\rangle \\, \\mathbf{e}_1(s)" }, { "math_id": 10, "text": "\\mathbf{e}_2(s) = \\frac{\\overline{\\mathbf{e}_2}(s)} {\\| \\overline{\\mathbf{e}_2}(s) \\|}\n" }, { "math_id": 11, "text": "\\begin{align}\n\\mathbf{e}_{j}(s) = \\frac{\\overline{\\mathbf{e}_{j}}(s)}{\\|\\overline{\\mathbf{e}_{j}}(s) \\|} \n\\mbox{, } \n\\end{align} " }, { "math_id": 12, "text": "\\begin{align}\n\\overline{\\mathbf{e}_{j}}(s) = \\mathbf{r}^{(j)}(s) - \\sum_{i=1}^{j-1} \\langle \\mathbf{r}^{(j)}(s), \\mathbf{e}_i(s) \\rangle \\, \\mathbf{e}_i(s).\n\\end{align} " }, { "math_id": 13, "text": "n-1" }, { "math_id": 14, "text": "{\\mathbf{e}_n}(s)={\\mathbf{e}_1}(s)\\times{\\mathbf{e}_2}(s)\\times\\dots\\times{\\mathbf{e}_{n-2}}(s)\\times{\\mathbf{e}_{n-1}}(s)" }, { "math_id": 15, "text": "\\chi_i(s) = \\frac{\\langle \\mathbf{e}_i'(s), \\mathbf{e}_{i+1}(s) \\rangle}{\\| \\mathbf{r}'(s) \\|} " }, { "math_id": 16, "text": "\\begin{align}\n\\begin{bmatrix}\n \\mathbf{e}_1'(s)\\\\\n \\vdots \\\\\n \\mathbf{e}_n'(s) \\\\\n\\end{bmatrix}\n\n= \\\\\n\\end{align}\n\n\\| \\mathbf{r}'(s) \\| \\cdot\n\n\\begin{align}\n\\begin{bmatrix}\n 0 & \\chi_1(s) & & 0 \\\\\n -\\chi_1(s) & \\ddots & \\ddots & \\\\\n & \\ddots & 0 & \\chi_{n-1}(s) \\\\\n 0 & & -\\chi_{n-1}(s) & 0 \\\\\n\\end{bmatrix}\n\n\\begin{bmatrix}\n \\mathbf{e}_1(s) \\\\\n \\vdots \\\\\n \\mathbf{e}_n(s) \\\\\n\\end{bmatrix} \n\\end{align} " }, { "math_id": 17, "text": "\\chi_{n-1}" }, { "math_id": 18, "text": " \\mathbf{e}_{n} " }, { "math_id": 19, "text": " \\operatorname{or}\\left(\\mathbf{r}^{(1)},\\dots,\\mathbf{r}^{(n)}\\right) " }, { "math_id": 20, "text": " \\mathbf{r} " }, { "math_id": 21, "text": " \\kappa = \\frac{r}{r^2+h^2} " }, { "math_id": 22, "text": " \\tau = \\pm\\frac{h}{r^2+h^2}. " }, { "math_id": 23, "text": "\\sqrt{h^2+r^2}" }, { "math_id": 24, "text": " A^2 = h^2+r^2" }, { "math_id": 25, "text": "\\mathbf r(s) = \\mathbf r(0) + \\left(s-\\frac{s^3\\kappa^2(0)}{6}\\right)\\mathbf T(0) + \\left(\\frac{s^2\\kappa(0)}{2}+\\frac{s^3\\kappa'(0)}{6}\\right)\\mathbf N(0) + \\left(\\frac{s^3\\kappa(0)\\tau(0)}{6}\\right)\\mathbf B(0) + o(s^3)." }, { "math_id": 26, "text": "\\mathbf r(0) + s\\mathbf T(0) + \\frac{s^2\\kappa(0)}{2} \\mathbf N(0) + o(s^2)." 
}, { "math_id": 27, "text": "O(s^2) " }, { "math_id": 28, "text": "O(s^3)" }, { "math_id": 29, "text": " \\mathbf r(0) + \\left(\\frac{s^2\\kappa(0)}{2}+\\frac{s^3\\kappa'(0)}{6}\\right)\\mathbf N(0) + \\left(\\frac{s^3\\kappa(0)\\tau(0)}{6}\\right)\\mathbf B(0)+ o(s^3)" }, { "math_id": 30, "text": "\\mathbf r(0) + \\left(s-\\frac{s^3\\kappa^2(0)}{6}\\right)\\mathbf T(0) + \\left(\\frac{s^3\\kappa(0)\\tau(0)}{6}\\right)\\mathbf B(0)+ o(s^3)" }, { "math_id": 31, "text": " Q \\rightarrow QM." }, { "math_id": 32, "text": "\\frac{ \\mathrm{d} (QM) }{ \\mathrm{d} s} (QM)^\\top = \\frac{ \\mathrm{d} Q}{ \\mathrm{d} s } MM^\\top Q^\\top = \\frac{ \\mathrm{d} Q}{ \\mathrm{d} s} Q^\\top" }, { "math_id": 33, "text": "\\mathbf{T}(t) = \\frac{\\mathbf{r}'(t)}{\\|\\mathbf{r}'(t)\\|}" }, { "math_id": 34, "text": "\\mathbf{N}(t) = \\frac{\\mathbf{T}'(t)}{\\|\\mathbf{T}'(t)\\|} = \\frac{\\mathbf{r}'(t) \\times \\left(\\mathbf{r}''(t) \\times \\mathbf{r}'(t) \\right)}{\\left\\|\\mathbf{r}'(t)\\right\\| \\, \\left\\|\\mathbf{r}''(t) \\times \\mathbf{r}'(t)\\right\\|}" }, { "math_id": 35, "text": "\\mathbf{B}(t) = \\mathbf{T}(t)\\times\\mathbf{N}(t) = \\frac{\\mathbf{r}'(t)\\times\\mathbf{r}''(t)}{\\|\\mathbf{r}'(t)\\times\\mathbf{r}''(t)\\|}" }, { "math_id": 36, "text": "\\frac{\\mathrm{d} }{\\mathrm{d} t} \\begin{bmatrix}\n\\mathbf{T}\\\\\n\\mathbf{N}\\\\\n\\mathbf{B}\n\\end{bmatrix}\n= \\|\\mathbf{r}'(t)\\|\n\\begin{bmatrix}\n0&\\kappa&0\\\\\n-\\kappa&0&\\tau\\\\\n0&-\\tau&0\n\\end{bmatrix}\n\\begin{bmatrix}\n\\mathbf{T}\\\\\n\\mathbf{N}\\\\\n\\mathbf{B}\n\\end{bmatrix}\n" }, { "math_id": 37, "text": "\\kappa = \\frac{\\|\\mathbf{r}'(t)\\times\\mathbf{r}''(t)\\|}{\\|\\mathbf{r}'(t)\\|^3}" }, { "math_id": 38, "text": "\\tau = \\frac{[\\mathbf{r}'(t),\\mathbf{r}''(t),\\mathbf{r}'''(t)]}{\\|\\mathbf{r}'(t)\\times\\mathbf{r}''(t)\\|^2}" }, { "math_id": 39, "text": "{\\bf r}(t) = \\langle x(t),y(t),0 \\rangle" }, { "math_id": 40, "text": "xy" }, { "math_id": 41, "text": "{\\displaystyle {\\bf T} = \\frac{{\\bf r}'(t)}{||{\\bf r}'(t)||}}" }, { "math_id": 42, "text": "{\\displaystyle {\\bf N} = \\frac{{\\bf T}'(t)}{||{\\bf T}'(t)||}}" }, { "math_id": 43, "text": "{\\bf B} = {\\bf T} \\times {\\bf N}" }, { "math_id": 44, "text": "\\langle 0,0,1 \\rangle" }, { "math_id": 45, "text": "\\langle 0,0,-1 \\rangle" }, { "math_id": 46, "text": "\\bf{B}" }, { "math_id": 47, "text": " \\tau " }, { "math_id": 48, "text": " \\frac{|| {\\bf r}'(t) \\times {\\bf r}''(t)||}{||{\\bf r}'(t)||^3} " }, { "math_id": 49, "text": " \\kappa " }, { "math_id": 50, "text": " \\kappa = \\frac{|x'(t)y''(t) - y'(t)x''(t)|}{((x'(t))^2 + (y'(t))^2)^{3/2}}" } ]
https://en.wikipedia.org/wiki?curid=666987
667063
Antimatroid
Mathematical system of orderings or sets In mathematics, an antimatroid is a formal system that describes processes in which a set is built up by including elements one at a time, and in which an element, once available for inclusion, remains available until it is included. Antimatroids are commonly axiomatized in two equivalent ways, either as a set system modeling the possible states of such a process, or as a formal language modeling the different sequences in which elements may be included. Dilworth (1940) was the first to study antimatroids, using yet another axiomatization based on lattice theory, and they have been frequently rediscovered in other contexts. The axioms defining antimatroids as set systems are very similar to those of matroids, but whereas matroids are defined by an "exchange axiom", antimatroids are defined instead by an "anti-exchange axiom", from which their name derives. Antimatroids can be viewed as a special case of greedoids and of semimodular lattices, and as a generalization of partial orders and of distributive lattices. Antimatroids are equivalent, by complementation, to convex geometries, a combinatorial abstraction of convex sets in geometry. Antimatroids have been applied to model precedence constraints in scheduling problems, potential event sequences in simulations, task planning in artificial intelligence, and the states of knowledge of human learners. Definitions. An antimatroid can be defined as a finite family formula_0 of finite sets, called "feasible sets", with the following two properties: (1) the union of any two feasible sets is also feasible, that is, formula_0 is closed under unions; and (2) every nonempty feasible set formula_1 contains an element formula_2 such that formula_3 (the set formed by removing formula_2 from formula_1) is also feasible, that is, formula_0 is accessible. Antimatroids also have an equivalent definition as a formal language, that is, as a set of strings defined from a finite alphabet of symbols. A string that belongs to this set is called a "word" of the language. A language formula_4 defining an antimatroid must satisfy the following properties: (1) every symbol of the alphabet occurs in at least one word of formula_4; (2) each word of formula_4 contains at most one copy of each symbol (a string with this property is called "normal"); (3) every prefix of a word of formula_4 is also a word of formula_4; and (4) if formula_1 and formula_5 are words of formula_4, and formula_1 contains at least one symbol that is not in formula_5, then there is a symbol formula_2 in formula_1 such that the concatenation formula_6 is another word of formula_4. The equivalence of these two forms of definition can be seen as follows. If formula_4 is an antimatroid defined as a formal language, then the sets of symbols in words of formula_4 form an accessible union-closed set system. It is accessible by the hereditary property of strings, and it can be shown to be union-closed by repeated application of the concatenation property of strings. In the other direction, from an accessible union-closed set system formula_0, the language of normal strings whose prefixes all have sets of symbols belonging to formula_0 meets the requirements for a formal language to be an antimatroid. These two transformations are the inverses of each other: transforming a formal language into a set family and back, or vice versa, produces the same system. Thus, these two definitions lead to mathematically equivalent classes of objects. Examples. The following systems provide examples of antimatroids: The prefixes of a single string, and the sets of symbols in these prefixes, form an antimatroid. For instance the chain antimatroid defined by the string formula_7 has as its formal language the set of strings formula_8 (where formula_9 denotes the empty string) and as its family of feasible sets the family formula_10 The lower sets of a finite partially ordered set form an antimatroid, with the full-length words of the antimatroid forming the linear extensions of the partial order. By Birkhoff's representation theorem for distributive lattices, the feasible sets in a poset antimatroid (ordered by set inclusion) form a distributive lattice, and all distributive lattices can be formed in this way. 
Thus, antimatroids can be seen as generalizations of distributive lattices. A chain antimatroid is the special case of a poset antimatroid for a total order. A "shelling sequence" of a finite set formula_11 of points in the Euclidean plane or a higher-dimensional Euclidean space is formed by repeatedly removing vertices of the convex hull. The feasible sets of the antimatroid formed by these sequences are the intersections of formula_11 with the complement of a convex set. A "perfect elimination ordering" of a chordal graph is an ordering of its vertices such that, for each vertex formula_12, the neighbors of formula_12 that occur later than formula_12 in the ordering form a clique. The prefixes of perfect elimination orderings of a chordal graph form an antimatroid. Chip-firing games such as the abelian sandpile model are defined by a directed graph together with a system of "chips" placed on its vertices. Whenever the number of chips on a vertex formula_12 is at least as large as the number of edges out of formula_12, it is possible to "fire" formula_12, moving one chip to each neighboring vertex. The event that formula_12 fires for the formula_13th time can only happen if it has already fired formula_14 times and accumulated formula_15 total chips. These conditions do not depend on the ordering of previous firings, and remain true until formula_12 fires, so any given graph and initial placement of chips for which the system terminates defines an antimatroid on the pairs formula_16. A consequence of the antimatroid property of these systems is that, for a given initial state, the number of times each vertex fires and the eventual stable state of the system do not depend on the firing order. Paths and basic words. In the set theoretic axiomatization of an antimatroid there are certain special sets called "paths" that determine the whole antimatroid, in the sense that the sets of the antimatroid are exactly the unions of paths. If formula_1 is any feasible set of the antimatroid, an element formula_2 that can be removed from formula_1 to form another feasible set is called an "endpoint" of formula_1, and a feasible set that has only one endpoint is called a "path" of the antimatroid. The family of paths can be partially ordered by set inclusion, forming the "path poset" of the antimatroid. For every feasible set formula_1 in the antimatroid, and every element formula_2 of formula_1, one may find a path subset of formula_1 for which formula_2 is an endpoint: to do so, remove one at a time elements other than formula_2 until no such removal leaves a feasible subset. Therefore, each feasible set in an antimatroid is the union of its path subsets. If formula_1 is not a path, each subset in this union is a proper subset of formula_1. But, if formula_1 is itself a path with endpoint formula_2, each proper subset of formula_1 that belongs to the antimatroid excludes formula_2. Therefore, the paths of an antimatroid are exactly the feasible sets that do not equal the unions of their proper feasible subsets. Equivalently, a given family of sets formula_17 forms the family of paths of an antimatroid if and only if, for each formula_1 in formula_17, the union of subsets of formula_1 in formula_17 has one fewer element than formula_1 itself. If so, formula_0 itself is the family of unions of subsets of formula_17. In the formal language formalization of an antimatroid, the longest strings are called "basic words". Each basic word forms a permutation of the whole alphabet. 
If formula_18 is the set of basic words, formula_4 can be defined from formula_18 as the set of prefixes of words in formula_18. Convex geometries. If formula_0 is the set system defining an antimatroid, with formula_11 equal to the union of the sets in formula_0, then the family of sets formula_19 complementary to the sets in formula_0 is sometimes called a convex geometry and the sets in formula_20 are called convex sets. For instance, in a shelling antimatroid, the convex sets are intersections of the given point set with convex subsets of Euclidean space. The set system defining a convex geometry must be closed under intersections. For any set formula_1 in formula_20 that is not equal to formula_11 there must be an element formula_2 not in formula_1 that can be added to formula_1 to form another set in formula_20. A convex geometry can also be defined in terms of a closure operator formula_21 that maps any subset of formula_11 to its minimal closed superset. To be a closure operator, formula_21 should have the following properties: the closure of the empty set is empty, formula_22; every set formula_1 is a subset of its closure formula_23; whenever formula_25, formula_23 must be a subset of formula_26; and the closure operation is idempotent, formula_24. The family of closed sets resulting from a closure operation of this type is necessarily closed under intersections, but might not be a convex geometry. The closure operators that define convex geometries also satisfy an additional anti-exchange axiom: if formula_27 and formula_28 are distinct elements of formula_11, neither of which belongs to formula_23, but formula_28 belongs to formula_29, then formula_27 does not belong to formula_30. A closure operation satisfying this axiom is called an anti-exchange closure. If formula_1 is a closed set in an anti-exchange closure, then the anti-exchange axiom determines a partial order on the elements not belonging to formula_1, where formula_31 in the partial order when formula_2 belongs to formula_29. If formula_2 is a minimal element of this partial order, then formula_32 is closed. That is, the family of closed sets of an anti-exchange closure has the property that for any set other than the universal set there is an element formula_2 that can be added to it to produce another closed set. This property is complementary to the accessibility property of antimatroids, and the fact that intersections of closed sets are closed is complementary to the property that unions of feasible sets in an antimatroid are feasible. Therefore, the complements of the closed sets of any anti-exchange closure form an antimatroid. The undirected graphs in which the convex sets (subsets of vertices that contain all shortest paths between vertices in the subset) form a convex geometry are exactly the Ptolemaic graphs. Join-distributive lattices. Every two feasible sets of an antimatroid have a unique least upper bound (their union) and a unique greatest lower bound (the union of the sets in the antimatroid that are contained in both of them). Therefore, the feasible sets of an antimatroid, partially ordered by set inclusion, form a lattice. Various important features of an antimatroid can be interpreted in lattice-theoretic terms; for instance the paths of an antimatroid are the join-irreducible elements of the corresponding lattice, and the basic words of the antimatroid correspond to maximal chains in the lattice. The lattices that arise from antimatroids in this way generalize the finite distributive lattices, and can be characterized in several different ways. 
Three such characterizations, via unique meet-irreducible decompositions, via boolean atomistic intervals, and via join-distributivity, are equivalent: any lattice with unique meet-irreducible decompositions has boolean atomistic intervals and is join-distributive, any lattice with boolean atomistic intervals has unique meet-irreducible decompositions and is join-distributive, and any join-distributive lattice has unique meet-irreducible decompositions and boolean atomistic intervals. Thus, we may refer to a lattice with any of these three properties as join-distributive. Any antimatroid gives rise to a finite join-distributive lattice, and any finite join-distributive lattice comes from an antimatroid in this way. Another equivalent characterization of finite join-distributive lattices is that they are graded (any two maximal chains have the same length), and the length of a maximal chain equals the number of meet-irreducible elements of the lattice. The antimatroid representing a finite join-distributive lattice can be recovered from the lattice: the elements of the antimatroid can be taken to be the meet-irreducible elements of the lattice, and the feasible set corresponding to any element formula_2 of the lattice consists of the set of meet-irreducible elements formula_27 such that formula_27 is not greater than or equal to formula_2 in the lattice. This representation of any finite join-distributive lattice as an accessible family of sets closed under unions (that is, as an antimatroid) may be viewed as an analogue of Birkhoff's representation theorem under which any finite distributive lattice has a representation as a family of sets closed under unions and intersections. Supersolvable antimatroids. Motivated by a problem of defining partial orders on the elements of a Coxeter group, Armstrong studied antimatroids which are also supersolvable lattices. A supersolvable antimatroid is defined by a totally ordered collection of elements, and a family of sets of these elements. The family must include the empty set. Additionally, it must have the property that if two sets formula_42 and formula_18 belong to the family, if the set-theoretic difference formula_43 is nonempty, and if formula_2 is the smallest element of formula_43, then formula_44 also belongs to the family. As Armstrong observes, any family of sets of this type forms an antimatroid. Armstrong also provides a lattice-theoretic characterization of the antimatroids that this construction can form. Join operation and convex dimension. If formula_45 and formula_46 are two antimatroids, both described as a family of sets over the same universe of elements, then another antimatroid, the "join" of formula_45 and formula_46, can be formed as follows: formula_47 This is a different operation from the join considered in the lattice-theoretic characterizations of antimatroids: it combines two antimatroids to form another antimatroid, rather than combining two sets in an antimatroid to form another set. The family of all antimatroids over the same universe forms a semilattice with this join operation. Joins are closely related to a closure operation that maps formal languages to antimatroids, where the closure of a language formula_4 is the intersection of all antimatroids containing formula_4 as a sublanguage. This closure has as its feasible sets the unions of prefixes of strings in formula_4. In terms of this closure operation, the join is the closure of the union of the languages of formula_45 and formula_46. 
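The join operation formula_47 is simple to experiment with directly. The following Python sketch (an editorial addition, not part of the original article; the function names are illustrative) forms the join of two small chain antimatroids over the universe {a, b, c} and brute-force checks that the inputs and the result satisfy the two defining set-system properties, closure under unions and accessibility.

```python
# A brute-force sketch of the join of two antimatroids given as families of
# feasible sets (frozensets). Illustrative names; not library code.
def is_antimatroid(family):
    """Check the two set-system axioms: closure under unions and accessibility."""
    fam = set(family)
    union_closed = all((s | t) in fam for s in fam for t in fam)
    accessible = all(
        len(s) == 0 or any((s - {x}) in fam for x in s)
        for s in fam
    )
    return union_closed and accessible

def join(A, B):
    """Join of two antimatroids over the same universe: all unions S | T, S in A, T in B."""
    return {s | t for s in A for t in B}

# Two chain antimatroids over {a, b, c}: the prefixes of "abc" and of "cba".
A = {frozenset(), frozenset("a"), frozenset("ab"), frozenset("abc")}
B = {frozenset(), frozenset("c"), frozenset("cb"), frozenset("cba")}

J = join(A, B)
print(is_antimatroid(A), is_antimatroid(B), is_antimatroid(J))   # True True True
print(sorted("".join(sorted(s)) for s in J))
# ['', 'a', 'ab', 'abc', 'ac', 'bc', 'c']
```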
Every antimatroid can be represented as a join of a family of chain antimatroids, or equivalently as the closure of a set of basic words; the "convex dimension" of an antimatroid formula_45 is the minimum number of chain antimatroids (or equivalently the minimum number of basic words) in such a representation. If formula_48 is a family of chain antimatroids whose basic words all belong to formula_45, then formula_48 generates formula_45 if and only if the feasible sets of formula_48 include all paths of formula_45. The paths of formula_45 belonging to a single chain antimatroid must form a chain in the path poset of formula_45, so the convex dimension of an antimatroid equals the minimum number of chains needed to cover the path poset, which by Dilworth's theorem equals the width of the path poset. If one has a representation of an antimatroid as the closure of a set of formula_49 basic words, then this representation can be used to map the feasible sets of the antimatroid to points in formula_49-dimensional Euclidean space: assign one coordinate per basic word formula_50, and make the coordinate value of a feasible set formula_1 be the length of the longest prefix of formula_50 that is a subset of formula_1. With this embedding, formula_1 is a subset of another feasible set formula_5 if and only if the coordinates for formula_1 are all less than or equal to the corresponding coordinates of formula_5. Therefore, the order dimension of the inclusion ordering of the feasible sets is at most equal to the convex dimension of the antimatroid. However, in general these two dimensions may be very different: there exist antimatroids with order dimension three but with arbitrarily large convex dimension. Enumeration. The number of possible antimatroids on a set of elements grows rapidly with the number of elements in the set. For sets of one, two, three, etc. elements, the number of distinct antimatroids is formula_51 Applications. Both the precedence and release time constraints in the standard notation for theoretical scheduling problems may be modeled by antimatroids. Antimatroids have been used to generalize a greedy algorithm of Eugene Lawler for optimally solving single-processor scheduling problems with precedence constraints, in which the goal is to minimize the maximum penalty incurred by the late scheduling of a task. They have also been used to model the ordering of events in discrete event simulation systems, and to model progress towards a goal in artificial intelligence planning problems. In Optimality Theory, a mathematical model for the development of natural language based on optimization under constraints, grammars are logically equivalent to antimatroids. In mathematical psychology, antimatroids have been used to describe feasible states of knowledge of a human learner. Each element of the antimatroid represents a concept that is to be understood by the learner, or a class of problems that he or she might be able to solve correctly, and the sets of elements that form the antimatroid represent possible sets of concepts that could be understood by a single person. The axioms defining an antimatroid may be phrased informally as stating that learning one concept can never prevent the learner from learning another concept, and that any feasible state of knowledge can be reached by learning a single concept at a time. The task of a knowledge assessment system is to infer the set of concepts known by a given learner by analyzing his or her responses to a small and well-chosen set of problems. 
In this context antimatroids have also been called "learning spaces" and "well-graded knowledge spaces".
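To make the path decomposition from the "Paths and basic words" section above concrete, the following Python sketch (again an editorial addition with illustrative names, not from the original article) takes a small antimatroid given explicitly as a family of feasible sets, identifies its paths as the feasible sets with exactly one endpoint, and verifies that every feasible set is the union of its path subsets.

```python
# A brute-force sketch of paths in an antimatroid: a path is a feasible set with
# exactly one endpoint (an element whose removal leaves a feasible set), and
# every feasible set is the union of its path subsets.
def endpoints(s, family):
    """Elements of the feasible set s whose removal leaves another feasible set."""
    return {x for x in s if (s - {x}) in family}

def paths(family):
    """Feasible sets having exactly one endpoint."""
    return {s for s in family if len(endpoints(s, family)) == 1}

# A small antimatroid over {a, b, c}: union-closed and accessible (checked by hand).
F = {frozenset(), frozenset("a"), frozenset("b"), frozenset("ab"), frozenset("abc")}

P = paths(F)
print(sorted("".join(sorted(s)) for s in P))   # ['a', 'abc', 'b']
for s in F:
    rebuilt = frozenset().union(*(p for p in P if p <= s))
    assert rebuilt == s   # each feasible set is the union of its path subsets
```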
[ { "math_id": 0, "text": "\\mathcal{F}" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "S\\setminus\\{x\\}" }, { "math_id": 4, "text": "\\mathcal{L}" }, { "math_id": 5, "text": "T" }, { "math_id": 6, "text": "Tx" }, { "math_id": 7, "text": "abcd" }, { "math_id": 8, "text": "\\{\\varepsilon, a, ab, abc, abcd\\}" }, { "math_id": 9, "text": "\\varepsilon" }, { "math_id": 10, "text": "\\bigl\\{\\emptyset,\\{a\\},\\{a,b\\},\\{a,b,c\\},\\{a,b,c,d\\}\\bigr\\}." }, { "math_id": 11, "text": "U" }, { "math_id": 12, "text": "v" }, { "math_id": 13, "text": "i" }, { "math_id": 14, "text": "i-1" }, { "math_id": 15, "text": "i\\cdot\\deg(v)" }, { "math_id": 16, "text": "(v,i)" }, { "math_id": 17, "text": "\\mathcal{P}" }, { "math_id": 18, "text": "B" }, { "math_id": 19, "text": "\\mathcal{G} = \\{U\\setminus S\\mid S\\in \\mathcal{F}\\}" }, { "math_id": 20, "text": "\\mathcal{G}" }, { "math_id": 21, "text": "\\tau" }, { "math_id": 22, "text": "\\tau(\\emptyset)=\\emptyset" }, { "math_id": 23, "text": "\\tau(S)" }, { "math_id": 24, "text": "\\tau(S)=\\tau\\bigl(\\tau(S)\\bigr)" }, { "math_id": 25, "text": "S\\subset T\\subset U" }, { "math_id": 26, "text": "\\tau(T)" }, { "math_id": 27, "text": "y" }, { "math_id": 28, "text": "z" }, { "math_id": 29, "text": "\\tau(S\\cup\\{y\\})" }, { "math_id": 30, "text": "\\tau(S\\cup\\{z\\})" }, { "math_id": 31, "text": "x\\le y" }, { "math_id": 32, "text": "S\\cup\\{x\\}" }, { "math_id": 33, "text": "S_x" }, { "math_id": 34, "text": "T\\cup\\{x\\}" }, { "math_id": 35, "text": "x\\le z\\le y" }, { "math_id": 36, "text": "x\\wedge y" }, { "math_id": 37, "text": "x\\vee y" }, { "math_id": 38, "text": "Y" }, { "math_id": 39, "text": "X" }, { "math_id": 40, "text": "x\\wedge z" }, { "math_id": 41, "text": "x\\wedge (y\\vee z)" }, { "math_id": 42, "text": "A" }, { "math_id": 43, "text": "B\\setminus A" }, { "math_id": 44, "text": "A\\cup\\{x\\}" }, { "math_id": 45, "text": "\\mathcal{A}" }, { "math_id": 46, "text": "\\mathcal{B}" }, { "math_id": 47, "text": "\\mathcal{A}\\vee\\mathcal{B} = \\{ S\\cup T \\mid S\\in\\mathcal{A}\\wedge T\\in\\mathcal{B}\\}." }, { "math_id": 48, "text": "\\mathfrak{F}" }, { "math_id": 49, "text": "d" }, { "math_id": 50, "text": "W" }, { "math_id": 51, "text": "1, 3, 22, 485, 59386, 133059751, \\dots\\, ." } ]
https://en.wikipedia.org/wiki?curid=667063
66708690
Metalog distribution
The metalog distribution is a flexible continuous probability distribution designed for ease of use in practice. Together with its transforms, the metalog family of continuous distributions is unique because it embodies "all" of the following properties: virtually unlimited shape flexibility; a choice among unbounded, semi-bounded, and bounded distributions; ease of fitting to data with linear least squares; simple, closed-form quantile function (inverse CDF) equations that facilitate simulation; a simple, closed-form PDF; and Bayesian updating in closed form in light of new data. Moreover, like a Taylor series, metalog distributions may have any number of terms, depending on the degree of shape flexibility desired and other application needs. Applications where metalog distributions can be useful typically involve fitting empirical data, simulated data, or expert-elicited quantiles to smooth, continuous probability distributions. Fields of application are wide-ranging, and include economics, science, engineering, and numerous other fields. The metalog distributions, also known as the Keelin distributions, were first published in 2016 by Tom Keelin. History. The history of probability distributions can be viewed, in part, as a progression of developments towards greater flexibility in shape and bounds when fitting to data. The normal distribution was first published in 1756, and Bayes’ theorem in 1763. The normal distribution laid the foundation for much of the development of classical statistics. In contrast, Bayes' theorem laid the foundation for state-of-information, belief-based probability representations. Because belief-based probabilities can take on any shape and may have natural bounds, probability distributions flexible enough to accommodate both were needed. Moreover, many empirical and experimental data sets exhibited shapes that could not be well matched by the normal or other continuous distributions. So began the search for continuous probability distributions with flexible shapes and bounds. Early in the 20th century, the Pearson family of distributions, which includes the normal, beta, uniform, gamma, student-t, chi-square, F, and five others, emerged as a major advance in shape flexibility. These were followed by the Johnson distributions. Both families can represent the first four moments of data (mean, variance, skewness, and kurtosis) with smooth continuous curves. However, they have no ability to match fifth or higher-order moments. Moreover, for a given skewness and kurtosis, there is no choice of bounds. For example, matching the first four moments of a data set may yield a distribution with a negative lower bound, even though it might be known that the quantity in question cannot be negative. Finally, their equations include intractable integrals and complex statistical functions, so that fitting to data typically requires iterative methods. Early in the 21st century, decision analysts began working to develop continuous probability distributions that would exactly fit any specified three points on the cumulative distribution function for an uncertain quantity (e.g., expert-elicited formula_0, and formula_1 quantiles). The Pearson and the Johnson family distributions were generally inadequate for this purpose. In addition, decision analysts also sought probability distributions that would be easy to parameterize with data (e.g., by using linear least squares, or equivalently, multiple linear regression). 
Introduced in 2011, the class of quantile-parameterized distributions (QPDs) accomplished both goals. While being a significant advance for this reason, the QPD originally used to illustrate this class of distributions, the Simple Q-Normal distribution, had less shape flexibility than the Pearson and Johnson families, and lacked the ability to represent semi-bounded and bounded distributions. Shortly thereafter, Keelin developed the family of metalog distributions, another instance of the QPD class, which is more shape-flexible than the Pearson and Johnson families, offers a choice of boundedness, has closed-form equations that can be fit to data with linear least squares, and has closed-form quantile functions, which facilitate Monte Carlo simulation. Definition and quantile function. The metalog distribution is a generalization of the logistic distribution, where the term "metalog" is short for "metalogistic". Starting with the logistic quantile function, formula_2, Keelin substituted power series expansions in cumulative probability formula_3 for the formula_4 and the formula_5 parameters, which control location and scale, respectively. formula_6 formula_7 Keelin's rationale for this substitution was fivefold. First, the resulting quantile function would have significant shape flexibility, governed by the coefficients formula_8. Second, it would have a simple closed form that is linear in these coefficients, implying that they could easily be determined from CDF data by linear least squares. Third, the resulting quantile function would be smooth, differentiable, and analytic, ensuring that a smooth, closed-form PDF would be available. Fourth, simulation would be facilitated by the resulting closed-form inverse CDF. Fifth, like a Taylor series, any number of terms formula_9 could be used, depending on the degree of shape flexibility desired and other application needs. Note that the subscripts of the formula_10-coefficients are such that formula_11 and formula_12 are in the formula_4 expansion, formula_13 and formula_14 are in the formula_5 expansion, and subscripts alternate thereafter. This ordering was chosen so that the first two terms in the resulting metalog quantile function correspond to the logistic distribution exactly; adding a third term with formula_15 adjusts skewness; adding a fourth term with formula_16 adjusts kurtosis primarily; and adding subsequent non-zero terms yields more nuanced shape refinements. Rewriting the logistic quantile function to incorporate the above substitutions for formula_4 and formula_5 yields the metalog quantile function, for cumulative probability formula_17. formula_18 Equivalently, the metalog quantile function can be expressed in terms of basis functions: formula_19, where the metalog basis functions are formula_20 and each subsequent formula_21 is defined as the expression that is multiplied by formula_8 in the equation for formula_22 above. Note that coefficient formula_11 is the median, since all other terms equal zero when formula_23. Special cases of the metalog quantile function are the logistic distribution (formula_24) and the uniform distribution (formula_25 otherwise). Probability density function. Differentiating formula_26 with respect to formula_27 yields the quantile density function formula_28. The reciprocal of this quantity, formula_29, is the probability density function expressed as a p-PDF, formula_31 which may be equivalently expressed in terms of basis functions as formula_32 where formula_17. 
Note that this PDF is expressed as a function of cumulative probability, formula_27, rather than variable of interest, formula_30. To plot the PDF (e.g., as shown in the figures on this page), one can vary formula_33 parametrically, and then plot formula_26 on the horizontal axis and formula_34 on the vertical axis. Based on the above equations and the following transformations that enable a choice of bounds, the family of metalog distributions is composed of unbounded, semibounded, and bounded metalogs, along with their symmetric-percentile triplet (SPT) special cases. Unbounded, semi-bounded, and bounded metalog distributions. As defined above, the metalog distribution is unbounded, except in the unusual special case where formula_35 for all terms that contain formula_36. However, many applications require flexible probability distributions that have a lower bound formula_37, an upper bound formula_38, or both. To meet this need, Keelin used transformations to derive semi-bounded and bounded metalog distributions. Such transformations are governed by a general property of quantile functions: for any quantile function formula_39 and increasing function formula_40 is also a quantile function. For example, the quantile function of the normal distribution is formula_41; since the natural logarithm, formula_42, is an increasing function, formula_43 is the quantile function of the lognormal distribution. Analogously, applying this property to the metalog quantile function formula_44 using the transformations below yields the semi-bounded and bounded members of the metalog family. By considering formula_45 to be metalog-distributed, all members of the metalog family meet Keelin and Powley's definition of a quantile-parameterized distribution and thus possess the properties thereof. formula_46 Note that the number of shape parameters in the metalog family increases linearly with the number of terms formula_9. Therefore, any of the above metalogs may have any number of shape parameters. By contrast, the Pearson and Johnson families of distributions are limited to two shape parameters. SPT metalog distributions. The symmetric-percentile triplet (SPT) metalog distributions are a three-term formula_47 special case of the unbounded, semi-bounded, and bounded metalog distributions. These are parameterized by the three formula_48 points off the CDF curve, of the form formula_49, formula_50, and formula_51, where formula_52. SPT metalogs are useful when, for example, quantiles formula_53 corresponding to the CDF probabilities (e.g. formula_54) are elicited from an expert and used to parameterize the three-term metalog distributions. As noted below, certain mathematical properties are simplified by the SPT parameterization. Properties. The metalog family of probability distributions has the following properties. Feasibility. A function of the form of formula_44 or any of its above transforms is a feasible probability distribution if and only if its PDF is greater than zero for all formula_55 This implies a feasibility constraint on the set of coefficients formula_56, formula_57 for all formula_58. In practical applications, feasibility must generally be checked rather than assumed. For formula_24, formula_59 ensures feasibility. For formula_60 (including SPT metalogs), the feasibility condition is formula_59 and formula_61. For formula_62, a similar closed form has been derived. For formula_63, feasibility is typically checked graphically or numerically. 
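As a concrete illustration of the numerical feasibility check mentioned above, the following Python sketch evaluates the unbounded metalog quantile function formula_44 on a dense grid of cumulative probabilities and confirms that it is strictly increasing, which is equivalent to the PDF being positive. This is an editorial sketch assuming NumPy, not code from Keelin's papers or from any published metalog package; the function names and example coefficient vectors are arbitrary choices.

```python
# A minimal numerical feasibility check for an unbounded metalog: the coefficient
# vector a is feasible when the quantile function M_k(y) is strictly increasing
# on (0, 1), approximated here by forward differences on a dense grid.
import numpy as np

def metalog_basis(y, k):
    """First k metalog basis functions g_i(y) evaluated at the points y."""
    y = np.asarray(y, dtype=float)
    L = np.log(y / (1.0 - y))                 # logit term ln(y/(1-y))
    g = [np.ones_like(y), L]                  # g_1, g_2
    if k >= 3:
        g.append((y - 0.5) * L)               # g_3
    if k >= 4:
        g.append(y - 0.5)                     # g_4
    for i in range(5, k + 1):                 # higher-order terms alternate
        g.append((y - 0.5) ** ((i - 1) // 2) if i % 2 else (y - 0.5) ** (i // 2 - 1) * L)
    return np.column_stack(g[:k])

def metalog_quantile(a, y):
    """M_k(y) for coefficient vector a, with k = len(a)."""
    return metalog_basis(y, len(a)) @ np.asarray(a, dtype=float)

def is_feasible(a, grid_size=10_000):
    """Numerical check: M_k must be strictly increasing on a dense grid in (0, 1)."""
    y = np.linspace(1e-6, 1.0 - 1e-6, grid_size)
    return bool(np.all(np.diff(metalog_quantile(a, y)) > 0.0))

print(is_feasible([2.0, 1.0, 0.5, 0.3, 0.1]))  # True: the a_2 term dominates
print(is_feasible([2.0, 0.1, 5.0]))            # False: violates |a_3|/a_2 < 1.66711
```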
The unbounded metalog and its above transforms share the same set of feasible coefficients. Therefore, for a given set of coefficients, confirming that formula_57 for all formula_58 is sufficient regardless of the transform in use. Convexity. The set of feasible metalog coefficients formula_64 for all formula_65 is convex. Because convex optimization problems require convex feasible sets, this property can simplify optimization problems involving metalogs. Moreover, this property guarantees that any convex combination of the formula_66 vectors of feasible metalogs is feasible, which is useful, for example, when combining the opinion of multiple experts or interpolating among feasible metalogs. By implication, any probabilistic mixture of metalog distributions is itself a metalog. Fitting to data. The coefficients formula_66 can be determined from data by linear least squares. Given formula_67 data points formula_68 that are intended to characterize a metalog CDF, and an formula_69 matrix formula_70 whose elements consist of the basis functions formula_71, then as long as formula_72 is invertible, the column vector formula_66 of the coefficients formula_8 is given by formula_73, where formula_74 and column vector formula_75. If formula_76, this equation reduces to formula_77, where the resulting metalog CDF runs through all data points exactly. For SPT metalogs, it further reduces to expressions in terms of the three formula_68 points directly. An alternate fitting method, implemented as a linear program, determines the coefficients by minimizing the sum of absolute distances between the CDF and the data, subject to feasibility constraints. Shape flexibility. According to the metalog flexibility theorem, any probability distribution with a continuous quantile function can be approximated arbitrarily closely by a metalog. Moreover, in the original paper, Keelin showed that ten-term metalog distributions parameterized by 105 CDF points from 30 traditional source distributions (including the normal, student-t, lognormal, gamma, beta, and extreme-value distributions) approximate each such source distribution within a K-S distance of 0.001 or less. Thus, metalog shape flexibility is virtually unlimited. The animated figure on the right illustrates this for the standard normal distribution, where metalogs with various numbers of terms are parameterized by the same set of 105 points from the standard normal CDF. The metalog PDF converges to the standard normal PDF as the number of terms increases. With two terms, the metalog approximates the normal with a logistic distribution. With each increment in number of terms, the fit gets closer. With 10 terms, the metalog PDF and standard normal PDF are visually indistinguishable. Similarly, nine-term semi-bounded metalog PDFs with formula_78 are visually indistinguishable from a range of Weibull distributions. The six cases shown to the right correspond to Weibull shape parameters 0.5, 0.8, 1.0, 1.5, 2, and 4. In each case, the metalog is parameterized by the nine formula_30 points from the Weibull CDF that correspond to the cumulative probabilities formula_79. Such convergence is not unique to the normal and Weibull distributions. Keelin originally showed analogous results for a wide range of distributions and has since provided further illustrations. Median. The median of any distribution in the metalog family has a simple closed form. Note that formula_80 defines the median, and formula_81 (since all subsequent terms are zero for formula_80). 
It follows that the medians of the unbounded metalog, log metalog, negative-log metalog, and logit metalog distributions are formula_11, formula_82, formula_83, and formula_84, respectively. Moments. The formula_85 moment of the unbounded metalog distribution, formula_86, is a special case of the more general formula for QPDs. For the unbounded metalog, such integrals evaluate to closed-form moments that are formula_85 order polynomials in the coefficients formula_8. The first four central moments of the four-term unbounded metalog are: formula_87 Moments for fewer terms are subsumed in these equations. For example, moments of the three-term metalog can be obtained by setting formula_12 to zero. Moments for metalogs with more terms, and higher-order moments (formula_88), are also available. Moments for semi-bounded and bounded metalogs are not available in closed form. Parameterization with moments. Three-term unbounded metalogs can be parameterized in closed form with their first three central moments. Let formula_89 and formula_5 be the mean, variance, and skewness, and let formula_90 be the standardized skewness, formula_91. Equivalent expressions of the moments in terms of coefficients, and coefficients in terms of moments, are as follows: formula_92 The equivalence of these two sets of expressions can be derived by noting that the moments equations on the left determine a cubic polynomial in terms of the coefficients formula_93 and formula_14, which can be solved in closed form as functions of formula_89 and formula_5. Moreover, this solution is unique. In terms of moments, the feasibility condition is formula_94, which can be shown to be equivalent to the following feasibility condition in terms of the coefficients: formula_59; and formula_61. This property can be used, for example, to represent the sum of independent, non-identically distributed random variables. Based on cumulants, it is known that for any set of independent random variables, the mean, variance, and skewness of the sum are the sums of the respective means, variances, and skewnesses. Parameterizing a three-term metalog with these central moments yields a continuous distribution that exactly preserves these three moments, and accordingly provides a reasonable approximation to the shape of the distribution of the sum of independent random variables. Simulation. Since their quantile functions are expressed in closed form, metalogs facilitate Monte Carlo simulation. Substituting uniformly distributed random samples of formula_27 into the Metalog quantile function (inverse CDF) produces random samples of formula_30 in closed form, thereby eliminating the need to invert a CDF. See below for simulation applications. Eliciting and Combining Expert Opinion. Due to their shape flexibility, metalog distributions can be an attractive choice for eliciting and representing expert opinion. Moreover, if the opinions of multiple experts are expressed as formula_9-term metalogs, the consensus opinion may be calculated as a formula_9-term metalog in closed form, where the formula_66-coefficients of the consensus metalog are simply a weighted average of those of the individual experts. This result follows from Vincentization, where the consensus quantile function is a weighted average of individual quantile functions. Bayesian Updating in Closed Form. 
In a classic paper, Howard (1970) shows how the beta-binomial distribution can be used to update, according to Bayes rule in closed form, uncertainty over the long-run frequency formula_95 of a coin toss coming up "heads" in light of new coin-toss data. In contrast, if the uncertainty of interest to be updated is defined not by a scalar probability over a discrete event (like the result of a coin toss) but by a probability density function over a continuous variable, metalog Bayesian updating may be used. Under certain conditions, metalog quantile parameters and formula_66-coefficients may be updated in closed form in light of new data according to Bayes rule. Applications. Due to their shape and bounds flexibility, metalogs can be used to represent empirical or other data in virtually any field of human endeavor. Choosing number of terms. For a given application and data set, choosing the number of metalog terms formula_9 depends on context and may require judgment. For expert elicitation, three to five terms is usually sufficient. For data exploration and matching other probability distributions such as the sum of lognormals, eight to 12 terms is usually sufficient. A metalog panel, which displays the metalog PDFs corresponding to differing numbers of terms formula_9 for a given data set, may aid this judgment. For example, in the steelhead weight metalog panel, using fewer than seven terms arguably underfits the data by obscuring the data's inherent bimodality. Using more than 11 terms is unnecessary and could, in principle, overfit the data. The case with 16 terms is infeasible for this data set, as indicated by the blank cell in the metalog panel. Other tools, such as regularization and model selection (Akaike information criterion and Bayesian information criterion), may also be useful. For example, when applied to the steelhead weight data, the AIC ranking of metalog distributions from 2–16 terms along with a wide range of classical distributions identifies the 11-term log metalog as the best fit to this data. A similar BIC ranking identifies the 10-term log metalog as the best fit. Keelin (2016) offers further perspectives on distribution selection within the metalog family. Related distributions. The metalog distributions belong to the group of distributions defined in terms of the quantile function, which include the quantile-parameterized distributions, the Tukey lambda distribution, its generalization, GLD, the Govindarajulu distribution and others. Several standard distributions, including the logistic and uniform distributions noted above, are subsumed within the metalog family as special cases. Software. Both freely available software tools and commercially available packages can be used to work with metalog distributions.
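The fitting and simulation steps described above can be combined in a few lines of code. The sketch below is again an editorial illustration assuming NumPy, not any of the published metalog implementations: it fits an unbounded four-term metalog to hypothetical CDF data by linear least squares (numpy.linalg.lstsq returns the same solution as the closed-form expression formula_73 when the basis matrix has full column rank) and then draws Monte Carlo samples by evaluating the fitted quantile function at uniformly distributed random probabilities. The data values, function names, and term count are arbitrary choices for illustration.

```python
# A self-contained sketch of metalog fitting (linear least squares) and
# simulation (uniform samples pushed through the fitted quantile function).
# An unbounded metalog is assumed, so the response vector z equals x.
import numpy as np

def basis_matrix(y, k):
    """n-by-k matrix of metalog basis functions g_i at CDF probabilities y."""
    y = np.asarray(y, dtype=float)
    L = np.log(y / (1.0 - y))
    cols = [np.ones_like(y), L, (y - 0.5) * L, (y - 0.5)]
    for i in range(5, k + 1):
        cols.append((y - 0.5) ** ((i - 1) // 2) if i % 2 else (y - 0.5) ** (i // 2 - 1) * L)
    return np.column_stack(cols[:k])

def fit_metalog(x, y, k):
    """Least-squares coefficient vector a for CDF data points (x_i, y_i)."""
    Y = basis_matrix(y, k)
    a, *_ = np.linalg.lstsq(Y, np.asarray(x, dtype=float), rcond=None)
    return a

def simulate(a, n, rng=np.random.default_rng(0)):
    """Draw n variates by evaluating the quantile function at uniform samples."""
    u = rng.uniform(1e-9, 1.0 - 1e-9, size=n)
    return basis_matrix(u, len(a)) @ a

# Hypothetical CDF data: nine (x, y) pairs, e.g. elicited or empirical quantiles.
y_data = np.array([0.01, 0.05, 0.25, 0.4, 0.5, 0.6, 0.75, 0.95, 0.99])
x_data = np.array([1.2, 2.0, 3.1, 3.6, 3.9, 4.2, 4.8, 6.5, 7.9])

a = fit_metalog(x_data, y_data, k=4)
samples = simulate(a, 100_000)
print(np.round(a, 3))
print(np.quantile(samples, [0.05, 0.5, 0.95]))  # close to the input quantiles 2.0, 3.9, 6.5
```

For a bounded or semi-bounded metalog, the same code applies after transforming the data, for example replacing x with ln(x − b_l) for the log metalog, and inverting that transform after simulation.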
[ { "math_id": 0, "text": "0.10, 0.50" }, { "math_id": 1, "text": "0.90" }, { "math_id": 2, "text": "x=Q(y)=\\mu+s\\mbox{ } \\ln\\Bigl({y\\over{1-y}}\\Bigr)" }, { "math_id": 3, "text": "y=F(x)" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "s" }, { "math_id": 6, "text": "\\mu =a_1 + a_4(y -0.5) + a_5(y -0.5)^2 + a_7(y -0.5)^3 + a_9(y -0.5)^4 + \\dots " }, { "math_id": 7, "text": "s =a_2 + a_3(y -0.5) + a_6(y -0.5)^2 + a_8(y -0.5)^3 + a_{10}(y -0.5)^4 + \\dots " }, { "math_id": 8, "text": "a_i" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "a" }, { "math_id": 11, "text": "a_1" }, { "math_id": 12, "text": "a_4" }, { "math_id": 13, "text": "a_2" }, { "math_id": 14, "text": "a_3" }, { "math_id": 15, "text": "a_3 \\neq 0" }, { "math_id": 16, "text": "a_4 \\neq 0" }, { "math_id": 17, "text": "0<y<1" }, { "math_id": 18, "text": "\nM_k(y)= \\left\\{\n\\begin{array}{ll}\na_1+a_2\\ln\\Bigl({y\\over{1-y}}\\Bigr) & \\mbox{for } k=2\\\\\na_1+a_2\\ln\\Bigl({y\\over{1-y}}\\Bigr)+a_3(y-0.5) \\ln \\Bigl({y\\over{1-y}}\\Bigr) & \\mbox{for } k=3\\\\\na_1+a_2\\ln\\Bigl({y\\over{1-y}}\\Bigr)+a_3(y-0.5) \\ln \\Bigl({y\\over{1-y}}\\Bigr)+a_4(y-0.5) & \\mbox{for } k=4\\\\\nM_{k-1}(y)+a_k(y-0.5)^{k-1\\over2} & \\mbox{for odd } k\\geq5\\\\\nM_{k-1}(y)+a_k(y-0.5)^{{k\\over2}-1}\\ln\\Bigl({y\\over{1-y}}\\Bigr) & \\mbox{for even } k\\geq6\\\\\n\\end{array}\\right.\n" }, { "math_id": 19, "text": " M_k(y) = \\sum_{i=1}^k a_i g_i(y)" }, { "math_id": 20, "text": " g_1(y)=1,\\mbox{ } g_2(y)=\\ln\\Bigl({y\\over{1 -y}}\\Bigr)," }, { "math_id": 21, "text": "g_i(y)" }, { "math_id": 22, "text": " M_k(y)" }, { "math_id": 23, "text": "y=0.5" }, { "math_id": 24, "text": "k=2" }, { "math_id": 25, "text": "k\\geq4,a_1 = 0.5,a_4 =1,a_i =0" }, { "math_id": 26, "text": "x=M_k(y)" }, { "math_id": 27, "text": "y" }, { "math_id": 28, "text": "q(y)=dx/dy" }, { "math_id": 29, "text": "(q(y))^{-1}=dy/dx=f(Q(y))" }, { "math_id": 30, "text": "x" }, { "math_id": 31, "text": "\nm_k(y)= \\left\\{\n\\begin{array}{ll}\n{y(1-y)\\over{a_2}} & \\mbox{for } k=2\\\\\n\\Bigl({a_2\\over{y(1-y)}}+a_3\\Bigl({y-0.5\\over{y(1-y)}}+\\ln{y\\over{1-y}}\\Bigr)\\Bigr)^{-1} & \\mbox{for } k=3\\\\\n\\Bigl({a_2\\over{y(1-y)}}+a_3\\Bigl({y-0.5\\over{y(1-y)}}+\\ln{y\\over{1-y}}\\Bigr)+a_4\\Bigr)^{-1} & \\mbox{for } k=4\\\\\n\\Bigl({1\\over{m_{k-1}(y)}}+a_k{k-1\\over2}(y-0.5)^{(k-3)/2}\\Bigr)^{-1} & \\mbox{for odd } k\\geq5\\\\\n\\Bigl({1\\over{m_{k-1}(y)}}+a_k\\Bigl({(y-0.5)^{k/2-1}\\over{y(1-y)}}+({k\\over2}-1)(y-0.5)^{(k/2-2)}\\ln{y\\over{1-y}}\\Bigr)\\Bigr)^{-1} & \\mbox{for even } k\\geq6,\\\\\n\\end{array}\\right.\n" }, { "math_id": 32, "text": "m_k(y) = \\left( \\sum_{i=1}^k\n a_i {{d g_i(y)}\\over{dy}} \\right)^{-1}" }, { "math_id": 33, "text": "y\\in(0,1)" }, { "math_id": 34, "text": "m_k(y)" }, { "math_id": 35, "text": "a_i = 0" }, { "math_id": 36, "text": "\\ln\\Bigl({y\\over{1-y}}\\Bigr)" }, { "math_id": 37, "text": "b_l" }, { "math_id": 38, "text": "b_u" }, { "math_id": 39, "text": "x=Q(y)" }, { "math_id": 40, "text": "z(x), x=z^{-1} (Q(y))" }, { "math_id": 41, "text": "x=\\mu+\\sigma \\Phi^{-1} (y)" }, { "math_id": 42, "text": "z(x)=\\ln(x-b_l)" }, { "math_id": 43, "text": "x=b_l+e^{\\mu+\\sigma \\Phi^{-1} (y)}" }, { "math_id": 44, "text": "M_k(y)" }, { "math_id": 45, "text": "z(x)" }, { "math_id": 46, "text": "\n\\begin{array}{l|c|c|c|c|c}\n& & & & & \\mbox{shape}\\\\\n& \\mbox{transformation} & \\mbox{quantile function} & \\mbox{PDF} & \\mbox{where} &\\mbox{parameters}\\\\\n\\hline\n\\mbox{Metalog (unbounded)} & z(x)=x & 
M_k(y) & m_k(y) & 0<y<1 & k-2\\\\\n\\hline\n\\mbox{Log metalog} & z(x)=\\ln(x-b_l) & M_k^{\\log} (y)=b_l+e^{M_k(y)} & m_k^{\\log}(y)=m_k(y)e^{-M_k(y)} & 0<y<1 & k-1\\\\\n\\mbox{(bounded below)} && =b_l & =0 & y=0\\\\\n\\hline\n\\mbox{Negative-log metalog} & z(x)=-\\ln(b_u-x) & M_k^\\operatorname{nlog}(y)=b_u-e^{-M_k(y)} & m_k^\\operatorname{nlog}(y) = m_k(y)e^{M_k(y)}& 0<y<1 & k-1\\\\\n\\mbox{(bounded above)} && =b_u & =0 & y=1\\\\\n\\hline\n\\mbox{Logit metalog} & z(x)=\\ln\\Bigl({x-b_l\\over{b_u-x}}\\Bigr) & M_k^\\operatorname{logit}(y)={b_l+b_u e^{M_k(y)}\\over{1+e^{M_k(y)}}} & m_k^\\operatorname{logit}(y)=m_k(y){\\bigl(1+e^{M_k(y)}\\bigr)^2\\over{(b_u-b_l)e^{M_k(y)}}} & 0<y<1 & k\\\\\n\\mbox{(bounded)}&&=b_l & =0 & y=0\\\\\n& & =b_u & =0 & y=1\n\\end{array}\n" }, { "math_id": 47, "text": "(k=3)" }, { "math_id": 48, "text": "(x,y)" }, { "math_id": 49, "text": "(x_1,\\alpha)" }, { "math_id": 50, "text": "(x_2,0.5)" }, { "math_id": 51, "text": "(x_3,1-\\alpha)" }, { "math_id": 52, "text": "0 <\\alpha<0.5" }, { "math_id": 53, "text": "(x_1,x_2,x_3)" }, { "math_id": 54, "text": "0.1, 0.5, 0.9" }, { "math_id": 55, "text": "y \\in (0,1)." }, { "math_id": 56, "text": "\\boldsymbol a=(a_1,...,a_k) \\in \\R^k" }, { "math_id": 57, "text": "m_k(y)>0" }, { "math_id": 58, "text": "y \\in (0,1)" }, { "math_id": 59, "text": "a_2>0" }, { "math_id": 60, "text": "k=3" }, { "math_id": 61, "text": "{|a_3|/a_2}<1.66711" }, { "math_id": 62, "text": "k=4" }, { "math_id": 63, "text": "k\\geq5" }, { "math_id": 64, "text": "S_\\boldsymbol a=\\{\\boldsymbol a\\in\\R^k |m_k(y) > 0" }, { "math_id": 65, "text": "y\\in (0,1)\\}" }, { "math_id": 66, "text": "\\boldsymbol a" }, { "math_id": 67, "text": "n" }, { "math_id": 68, "text": "(x_i,y_i)" }, { "math_id": 69, "text": "n \\times k" }, { "math_id": 70, "text": "\\boldsymbol Y" }, { "math_id": 71, "text": "g_j (y_i)" }, { "math_id": 72, "text": "\\boldsymbol Y^T \\boldsymbol Y" }, { "math_id": 73, "text": "\\boldsymbol a=(\\boldsymbol Y^T \\boldsymbol Y)^{-1} \\boldsymbol Y^T \\boldsymbol z" }, { "math_id": 74, "text": "n\\geq k" }, { "math_id": 75, "text": "\\boldsymbol z=(z(x_1),\\ldots,z(x_n))" }, { "math_id": 76, "text": "n=k" }, { "math_id": 77, "text": "\\boldsymbol a=\\boldsymbol Y^{-1} \\boldsymbol z" }, { "math_id": 78, "text": "b_l=0" }, { "math_id": 79, "text": "y = (0.001, 0.02, 0.10, 0.25, 0.5, 0.75, 0.9, 0.98, 0.999)" }, { "math_id": 80, "text": "\ny=0.5" }, { "math_id": 81, "text": "M_k(0.5)=a_1" }, { "math_id": 82, "text": "b_l+e^{a_1}" }, { "math_id": 83, "text": "b_u-e^{-a_1}" }, { "math_id": 84, "text": "{b_l+b_u e^{a_1}\\over{1+e^{a_1}}}" }, { "math_id": 85, "text": "m^{th}" }, { "math_id": 86, "text": "E[x^m] =\\int_{y=0}^1 {M_k(y)}^m \\, dy" }, { "math_id": 87, "text": "\n\\begin{align}\n\\text{mean} = {} & a_1 +{a_3\\over2}\\\\[6pt]\n\\text{variance} = {} &\n\\pi^2{{a_2}^2\\over3}+{{a_3}^2\\over{12}}+\\pi^2{{a_3}^2\\over{36}}+a_2a_4\n+{{a_4}^2\\over{12}}\\\\[6pt]\n\\text{skewness} = {} & \\pi^2 {a_2}^2{a_3}+\\pi^2 {{a_3}^3 \\over{24}} + {{{a_2} {a_3} {a_4}} \\over{2}}+ \\pi^2{{{a_2} {a_3} {a_4}} \\over{6}} + {{{a_3} {a_4}^2} \\over{8}}\\\\[6pt]\n\\text{kurtosis} = {} & 7\\pi^4 {{a_2}^4\\over{15}}+3\\pi^2 {{{a_2}^2{a_3}^2}\n\\over{24}}+7\\pi^4 {{{a_2}^2{a_3}^2} \\over{30}}+{{{a_3}^4}\\over{80}} +\n\\pi^2{{a_3}^4 \\over{24}} + 7\\pi^4 {{a_3}^4\\over{1200}} + 2\\pi^2{a_2}^3{a_4}\\\\[6pt]\n& {} +{{{a_2} {a_3}^2 {a_4}} \\over{2}}+2\\pi^2{{{a_2}\n{a_3}^2 {a_4}} \\over{3}}+2{{a_2}^2 {a_4}^2}+\\pi^2{{{a_2}^2 {a_4}^2 }\n\\over{6}}+{{{a_3}^2 {a_4}^2 } 
\\over{8}}+\\pi^2{{{a_3}^2 {a_4}^2 } \\over{40}}+{{{a_2} {a_4}^3 } \\over{3}} +{{a_4}^4 \\over{80}} \\end{align} " }, { "math_id": 88, "text": "m>4" }, { "math_id": 89, "text": "m, v," }, { "math_id": 90, "text": "s_s" }, { "math_id": 91, "text": "s_s=s/v^{3/2}" }, { "math_id": 92, "text": "\n\\begin{array}{ll}\nm = a_1 +{a_3\\over2} && a_1 = m -{a_3\\over2}\\\\[6pt]\nv =\n\\pi^2{{a_2}^2\\over3}+{{a_3}^2\\over{12}}+\\pi^2{{a_3}^2\\over{36}} &&\na_2 = {1\\over{\\pi}}\\Bigl[3\\Bigl(v-\\Bigl({1\\over{12}}+{{\\pi}^2\\over{36}}\\Bigr){a_3}^2\\Bigr)\\Bigr]^{1\\over{2}} \\\\[6pt]\ns = \\pi^2 {a_2}^2{a_3}+\\pi^2 {{a_3}^3 \\over{24}} &&\na_3 = 4\\Bigl({6v\\over{6+\\pi^2}}\\Bigr)^{1\\over{2}}\\cos\\Bigl[{1\\over{3}}\\Bigl(\\cos^{-1}\\Bigl(-{s_s\\over{4}}\\Bigl(1+{{\\pi}^2\\over{6}}\\Bigr)^{1\\over{2}}\\Bigr)+4\\pi\\Bigr)\\Bigr]\n\\\\[6pt]\n \\end{array} " }, { "math_id": 93, "text": "a_1, a_2," }, { "math_id": 94, "text": "|s_s|\\leq 2.07093" }, { "math_id": 95, "text": "\\phi" }, { "math_id": 96, "text": "i>2" }, { "math_id": 97, "text": "k\\geq4" }, { "math_id": 98, "text": "a_1 = 0.5" }, { "math_id": 99, "text": "a_4=1" }, { "math_id": 100, "text": "a_i=0" }, { "math_id": 101, "text": "k\\geq2" }, { "math_id": 102, "text": "b_l = 0" }, { "math_id": 103, "text": "b_u = 1" }, { "math_id": 104, "text": "a_2 = 1" }, { "math_id": 105, "text": "(x_i)" }, { "math_id": 106, "text": "y_i" } ]
https://en.wikipedia.org/wiki?curid=66708690
667175
Hermitian adjoint
Conjugate transpose of an operator in infinite dimensions In mathematics, specifically in operator theory, each linear operator formula_0 on an inner product space defines a Hermitian adjoint (or adjoint) operator formula_1 on that space according to the rule formula_2 where formula_3 is the inner product on the vector space. The adjoint may also be called the Hermitian conjugate or simply the Hermitian after Charles Hermite. It is often denoted by "A"† in fields like physics, especially when used in conjunction with bra–ket notation in quantum mechanics. In finite dimensions where operators can be represented by matrices, the Hermitian adjoint is given by the conjugate transpose (also known as the Hermitian transpose). The above definition of an adjoint operator extends verbatim to bounded linear operators on Hilbert spaces formula_4. The definition has been further extended to include unbounded "densely defined" operators, whose domain is topologically dense in, but not necessarily equal to, formula_5 Informal definition. Consider a linear map formula_6 between Hilbert spaces. Without taking care of any details, the adjoint operator is the (in most cases uniquely defined) linear operator formula_7 fulfilling formula_8 where formula_9 is the inner product in the Hilbert space formula_10, which is linear in the first coordinate and conjugate linear in the second coordinate. Note the special case where both Hilbert spaces are identical and formula_11 is an operator on that Hilbert space. When one trades the inner product for the dual pairing, one can define the adjoint, also called the transpose, of an operator formula_12, where formula_13 are Banach spaces with corresponding norms formula_14. Here (again not considering any technicalities), its adjoint operator is defined as formula_15 with formula_16 I.e., formula_17 for formula_18. The above definition in the Hilbert space setting is really just an application of the Banach space case when one identifies a Hilbert space with its dual. Then it is only natural that we can also obtain the adjoint of an operator formula_19, where formula_4 is a Hilbert space and formula_20 is a Banach space. The dual is then defined as formula_21 with formula_22 such that formula_23 Definition for unbounded operators between Banach spaces. Let formula_24 be Banach spaces. Suppose formula_25 and formula_26, and suppose that formula_11 is a (possibly unbounded) linear operator which is densely defined (i.e., formula_27 is dense in formula_20). Then its adjoint operator formula_1 is defined as follows. The domain is formula_28 Now for arbitrary but fixed formula_29 we set formula_30 with formula_31. By choice of formula_32 and definition of formula_33, f is (uniformly) continuous on formula_27 as formula_34. Then by the Hahn–Banach theorem, or alternatively through extension by continuity, this yields an extension of formula_35, called formula_36, defined on all of formula_20. This technicality is necessary to later obtain formula_1 as an operator formula_37 instead of formula_38 Remark also that this does not mean that formula_11 can be extended on all of formula_20 but the extension only worked for specific elements formula_39. Now, we can define the adjoint of formula_11 as formula_40 The fundamental defining identity is thus formula_41 for formula_42 Definition for bounded operators between Hilbert spaces. Suppose H is a complex Hilbert space, with inner product formula_43. 
Consider a continuous linear operator "A" : "H" → "H" (for linear operators, continuity is equivalent to being a bounded operator). Then the adjoint of A is the continuous linear operator "A"∗ : "H" → "H" satisfying formula_44 Existence and uniqueness of this operator follows from the Riesz representation theorem. This can be seen as a generalization of the "adjoint" matrix of a square matrix which has a similar property involving the standard complex inner product. Properties. The following properties of the Hermitian adjoint of bounded operators are immediate: If we define the operator norm of A by formula_46 then formula_47 Moreover, formula_48 One says that a norm that satisfies this condition behaves like a "largest value", extrapolating from the case of self-adjoint operators. The set of bounded linear operators on a complex Hilbert space H together with the adjoint operation and the operator norm form the prototype of a C*-algebra. Adjoint of densely defined unbounded operators between Hilbert spaces. Definition. Let the inner product formula_49 be linear in the "first" argument. A densely defined operator A from a complex Hilbert space H to itself is a linear operator whose domain "D"("A") is a dense linear subspace of H and whose values lie in H. By definition, the domain "D"("A"∗) of its adjoint "A"∗ is the set of all "y" ∈ "H" for which there is a "z" ∈ "H" satisfying formula_50 Owing to the density of formula_27 and Riesz representation theorem, formula_51 is uniquely defined, and, by definition, formula_52 Properties 1.–5. hold with appropriate clauses about domains and codomains. For instance, the last property now states that ("AB")∗ is an extension of "B"∗"A"∗ if A, B and AB are densely defined operators. ker A*=(im A)⊥. For every formula_53 the linear functional formula_54 is identically zero, and hence formula_55 Conversely, the assumption that formula_56 causes the functional formula_57 to be identically zero. Since the functional is obviously bounded, the definition of formula_1 assures that formula_58 The fact that, for every formula_59 formula_60 shows that formula_61 given that formula_27 is dense. This property shows that formula_62 is a topologically closed subspace even when formula_33 is not. Geometric interpretation. If formula_63 and formula_64 are Hilbert spaces, then formula_65 is a Hilbert space with the inner product formula_66 where formula_67 and formula_68 Let formula_69 be the symplectic mapping, i.e. formula_70 Then the graph formula_71 of formula_72 is the orthogonal complement of formula_73 formula_74 The assertion follows from the equivalences formula_75 and formula_76 Corollaries. A* is closed. An operator formula_11 is "closed" if the graph formula_77 is topologically closed in formula_78 The graph formula_79 of the adjoint operator formula_1 is the orthogonal complement of a subspace, and therefore is closed. A* is densely defined ⇔ A is closable. An operator formula_11 is "closable" if the topological closure formula_80 of the graph formula_77 is the graph of a function. Since formula_81 is a (closed) linear subspace, the word "function" may be replaced with "linear operator". For the same reason, formula_11 is closable if and only if formula_82 unless formula_83 The adjoint formula_72 is densely defined if and only if formula_11 is closable. This follows from the fact that, for every formula_84 formula_85 which, in turn, is proven through the following chain of equivalencies: formula_86 A** = Acl. 
The "closure" formula_87 of an operator formula_11 is the operator whose graph is formula_88 if this graph represents a function. As above, the word "function" may be replaced with "operator". Furthermore, formula_89 meaning that formula_90 To prove this, observe that formula_91 i.e. formula_92 for every formula_93 Indeed, formula_94 In particular, for every formula_95 and every subspace formula_96 formula_97 if and only if formula_98 Thus, formula_99 and formula_100 Substituting formula_101 we obtain formula_102 A* = (Acl)*. For a closable operator formula_103 formula_104 meaning that formula_105 Indeed, formula_106 Counterexample where the adjoint is not densely defined. Let formula_107 where formula_108 is the linear measure. Select a measurable, bounded, non-identically zero function formula_109 and pick formula_110 Define formula_111 It follows that formula_112 The subspace formula_27 contains all the formula_113 functions with compact support. Since formula_114 formula_11 is densely defined. For every formula_115 and formula_116 formula_117 Thus, formula_118 The definition of adjoint operator requires that formula_119 Since formula_109 this is only possible if formula_120 For this reason, formula_121 Hence, formula_1 is not densely defined and is identically zero on formula_122 As a result, formula_11 is not closable and has no second adjoint formula_123 Hermitian operators. A bounded operator "A" : "H" → "H" is called Hermitian or self-adjoint if formula_124 which is equivalent to formula_125 In some sense, these operators play the role of the real numbers (being equal to their own "complex conjugate") and form a real vector space. They serve as the model of real-valued observables in quantum mechanics. See the article on self-adjoint operators for a full treatment. Adjoints of conjugate-linear operators. For a conjugate-linear operator the definition of adjoint needs to be adjusted in order to compensate for the complex conjugation. An adjoint operator of the conjugate-linear operator A on a complex Hilbert space H is a conjugate-linear operator "A"∗ : "H" → "H" with the property: formula_126 Other adjoints. The equation formula_127 is formally similar to the defining properties of pairs of adjoint functors in category theory, and this is where adjoint functors got their name. References.
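In the finite-dimensional case discussed earlier in this article the adjoint is the conjugate transpose, and both the defining identity and the norm properties can be checked numerically. A minimal sketch using NumPy, with randomly chosen matrices and the standard complex inner product (an illustration under those assumptions, not part of the formal treatment):

import numpy as np

rng = np.random.default_rng(0)
n = 4
# A random complex matrix, playing the role of a bounded operator on C^n.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A_star = A.conj().T  # Hermitian adjoint = conjugate transpose in finite dimensions

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def inner(u, v):
    """Standard inner product <u, v>: linear in u, conjugate-linear in v."""
    return np.vdot(v, u)

# Defining identity <Ax, y> = <x, A*y>.
assert np.isclose(inner(A @ x, y), inner(x, A_star @ y))

# Norm properties ||A*|| = ||A|| and ||A*A|| = ||A||^2 (spectral norm).
assert np.isclose(np.linalg.norm(A_star, 2), np.linalg.norm(A, 2))
assert np.isclose(np.linalg.norm(A_star @ A, 2), np.linalg.norm(A, 2) ** 2)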
[ { "math_id": 0, "text": " A " }, { "math_id": 1, "text": "A^*" }, { "math_id": 2, "text": "\\langle Ax,y \\rangle = \\langle x,A^*y \\rangle," }, { "math_id": 3, "text": "\\langle \\cdot,\\cdot \\rangle" }, { "math_id": 4, "text": "H" }, { "math_id": 5, "text": "H." }, { "math_id": 6, "text": "A: H_1\\to H_2" }, { "math_id": 7, "text": "A^* : H_2 \\to H_1" }, { "math_id": 8, "text": "\\left\\langle A h_1, h_2 \\right\\rangle_{H_2} = \\left\\langle h_1, A^* h_2 \\right\\rangle_{H_1}," }, { "math_id": 9, "text": "\\langle\\cdot, \\cdot \\rangle_{H_i}" }, { "math_id": 10, "text": "H_i" }, { "math_id": 11, "text": "A" }, { "math_id": 12, "text": "A: E \\to F" }, { "math_id": 13, "text": "E, F" }, { "math_id": 14, "text": "\\|\\cdot\\|_E, \\|\\cdot\\|_F" }, { "math_id": 15, "text": "A^*: F^* \\to E^*" }, { "math_id": 16, "text": "A^*f = f \\circ A : u \\mapsto f(Au), " }, { "math_id": 17, "text": "\\left(A^*f\\right)(u) = f(Au)" }, { "math_id": 18, "text": "f \\in F^*, u \\in E" }, { "math_id": 19, "text": "A: H \\to E" }, { "math_id": 20, "text": "E" }, { "math_id": 21, "text": "A^*: E^* \\to H" }, { "math_id": 22, "text": "A^*f = h_f " }, { "math_id": 23, "text": "\\langle h_f, h\\rangle_H = f(Ah)." }, { "math_id": 24, "text": "\\left(E, \\|\\cdot\\|_E\\right), \\left(F, \\|\\cdot\\|_F\\right)" }, { "math_id": 25, "text": " A: D(A) \\to F " }, { "math_id": 26, "text": "D(A) \\subset E" }, { "math_id": 27, "text": "D(A)" }, { "math_id": 28, "text": "D\\left(A^*\\right) := \\left\\{g \\in F^*:~ \\exists c \\geq 0:~ \\mbox{ for all } u \\in D(A):~ |g(Au)| \\leq c \\cdot \\|u\\|_E\\right\\}." }, { "math_id": 29, "text": "g \\in D(A^*)" }, { "math_id": 30, "text": "f: D(A) \\to \\R" }, { "math_id": 31, "text": "f(u) = g(Au)" }, { "math_id": 32, "text": "g" }, { "math_id": 33, "text": "D(A^*)" }, { "math_id": 34, "text": "|f(u)| = |g(Au)| \\leq c\\cdot \\|u\\|_E" }, { "math_id": 35, "text": "f" }, { "math_id": 36, "text": "\\hat{f}" }, { "math_id": 37, "text": "D\\left(A^*\\right) \\to E^*" }, { "math_id": 38, "text": "D\\left(A^*\\right) \\to (D(A))^*." }, { "math_id": 39, "text": "g \\in D\\left(A^*\\right)" }, { "math_id": 40, "text": "\\begin{align}\n A^*: F^* \\supset D(A^*) &\\to E^* \\\\\n g &\\mapsto A^*g = \\hat f.\n\\end{align}" }, { "math_id": 41, "text": "g(Au) = \\left(A^* g\\right)(u)" }, { "math_id": 42, "text": "u \\in D(A)." }, { "math_id": 43, "text": "\\langle\\cdot,\\cdot\\rangle" }, { "math_id": 44, "text": "\\langle Ax , y \\rangle = \\left\\langle x , A^* y\\right\\rangle \\quad \\mbox{for all } x, y \\in H." }, { "math_id": 45, "text": "\\left(A^*\\right)^{-1} = \\left(A^{-1}\\right)^*" }, { "math_id": 46, "text": "\\| A \\|_\\text{op} := \\sup \\left\\{\\|Ax\\| : \\|x\\| \\le 1\\right\\}" }, { "math_id": 47, "text": "\\left\\|A^* \\right\\|_\\text{op} = \\|A\\|_\\text{op}." }, { "math_id": 48, "text": "\\left\\|A^* A \\right\\|_\\text{op} = \\|A\\|_\\text{op}^2." }, { "math_id": 49, "text": "\\langle \\cdot, \\cdot \\rangle" }, { "math_id": 50, "text": " \\langle Ax , y \\rangle = \\langle x , z \\rangle \\quad \\mbox{for all } x \\in D(A)." }, { "math_id": 51, "text": "z" }, { "math_id": 52, "text": "A^*y=z." }, { "math_id": 53, "text": "y \\in \\ker A^*," }, { "math_id": 54, "text": "x \\mapsto \\langle Ax,y \\rangle = \\langle x,A^*y\\rangle " }, { "math_id": 55, "text": " y \\in (\\operatorname{im} A)^\\perp." 
}, { "math_id": 56, "text": " y \\in (\\operatorname{im} A)^\\perp" }, { "math_id": 57, "text": "x \\mapsto \\langle Ax,y \\rangle" }, { "math_id": 58, "text": " y \\in D(A^*)." }, { "math_id": 59, "text": " x \\in D(A)," }, { "math_id": 60, "text": "\\langle Ax,y \\rangle = \\langle x,A^*y\\rangle = 0" }, { "math_id": 61, "text": " A^* y \\in D(A)^\\perp =\\overline{D(A)}^\\perp = \\{0\\}, " }, { "math_id": 62, "text": "\\operatorname{ker}A^*" }, { "math_id": 63, "text": "H_1" }, { "math_id": 64, "text": "H_2" }, { "math_id": 65, "text": "H_1 \\oplus H_2" }, { "math_id": 66, "text": "\\bigl \\langle (a,b),(c,d) \\bigr \\rangle_{H_1 \\oplus H_2} \\stackrel{\\text{def}}{=} \\langle a,c \\rangle_{H_1} + \\langle b,d \\rangle_{H_2}, " }, { "math_id": 67, "text": "a,c \\in H_1" }, { "math_id": 68, "text": "b,d \\in H_2." }, { "math_id": 69, "text": "J\\colon H\\oplus H \\to H \\oplus H" }, { "math_id": 70, "text": "J(\\xi, \\eta) = (-\\eta, \\xi)." }, { "math_id": 71, "text": "G(A^*) =\\{(x,y) \\mid x\\in D(A^*),\\ y=A^*x\\} \\subseteq H \\oplus H " }, { "math_id": 72, "text": " A^* " }, { "math_id": 73, "text": "JG(A):" }, { "math_id": 74, "text": "G(A^*) = (JG(A))^\\perp = \\{ (x, y) \\in H \\oplus H : \\bigl \\langle (x, y) , (-A\\xi, \\xi) \\bigr \\rangle_{H \\oplus H} = 0\\;\\;\\forall \\xi \\in D(A)\\}. " }, { "math_id": 75, "text": " \\bigl \\langle (x, y) , (-A\\xi, \\xi) \\bigr \\rangle = 0 \\quad \\Leftrightarrow \\quad \\langle A\\xi, x \\rangle = \\langle \\xi, y \\rangle, " }, { "math_id": 76, "text": "\\Bigl[ \\forall \\xi \\in D(A)\\ \\ \\langle A\\xi, x \\rangle = \\langle \\xi, y \\rangle \\Bigr] \\quad \\Leftrightarrow \\quad x \\in D(A^*)\\ \\&\\ y = A^*x. " }, { "math_id": 77, "text": "G(A)" }, { "math_id": 78, "text": "H \\oplus H." }, { "math_id": 79, "text": "G(A^*)" }, { "math_id": 80, "text": "G^\\text{cl}(A) \\subseteq H \\oplus H " }, { "math_id": 81, "text": "G^\\text{cl}(A)" }, { "math_id": 82, "text": "(0,v) \\notin G^\\text{cl}(A)" }, { "math_id": 83, "text": "v=0." }, { "math_id": 84, "text": "v \\in H," }, { "math_id": 85, "text": "v \\in D(A^*)^\\perp\\ \\Leftrightarrow\\ (0,v) \\in G^\\text{cl}(A)," }, { "math_id": 86, "text": "\n\\begin{align}\nv \\in D(A^*)^\\perp &\\Longleftrightarrow (v,0) \\in G(A^*)^\\perp \n\\Longleftrightarrow (v,0) \\in (JG(A))^\\text{cl} = JG^\\text{cl}(A) \\\\\n&\\Longleftrightarrow (0,-v) = J^{-1}(v,0) \\in G^\\text{cl}(A) \\\\\n&\\Longleftrightarrow (0,v) \\in G^\\text{cl}(A).\n\\end{align}\n" }, { "math_id": 87, "text": " A^\\text{cl} " }, { "math_id": 88, "text": " G^\\text{cl}(A) " }, { "math_id": 89, "text": " A^{**} = A^{\\text{cl}}," }, { "math_id": 90, "text": " G(A^{**}) = G^{\\text{cl}}(A). " }, { "math_id": 91, "text": "J^* = -J," }, { "math_id": 92, "text": " \\langle Jx,y\\rangle_{H \\oplus H} = -\\langle x,Jy\\rangle_{H \\oplus H}," }, { "math_id": 93, "text": "x,y \\in H \\oplus H." }, { "math_id": 94, "text": "\n\\begin{align}\n\\langle J(x_1,x_2),(y_1,y_2)\\rangle_{H \\oplus H}\n&= \\langle (-x_2,x_1),(y_1,y_2)\\rangle_{H \\oplus H}\n= \\langle -x_2,y_1\\rangle_H + \\langle x_1,y_2 \\rangle_H \\\\\n&= \\langle x_1,y_2 \\rangle_H + \\langle x_2,-y_1 \\rangle_H\n= \\langle (x_1,x_2),-J(y_1,y_2)\\rangle_{H \\oplus H}.\n\\end{align}\n" }, { "math_id": 95, "text": "y \\in H \\oplus H" }, { "math_id": 96, "text": " V \\subseteq H \\oplus H," }, { "math_id": 97, "text": "y \\in (JV)^\\perp" }, { "math_id": 98, "text": "Jy \\in V^\\perp." 
}, { "math_id": 99, "text": " J[(JV)^\\perp] = V^\\perp " }, { "math_id": 100, "text": " [J[(JV)^\\perp]]^\\perp = V^\\text{cl}." }, { "math_id": 101, "text": " V = G(A)," }, { "math_id": 102, "text": " G^\\text{cl}(A) = G(A^{**})." }, { "math_id": 103, "text": "A," }, { "math_id": 104, "text": " A^* = \\left(A^\\text{cl}\\right)^*, " }, { "math_id": 105, "text": "G(A^*) = G\\left(\\left(A^\\text{cl}\\right)^*\\right)." }, { "math_id": 106, "text": "\nG\\left(\\left(A^\\text{cl}\\right)^*\\right) = \\left(JG^\\text{cl}(A)\\right)^\\perp = \\left(\\left(JG(A)\\right)^\\text{cl}\\right)^\\perp = (JG(A))^\\perp = G(A^*).\n" }, { "math_id": 107, "text": "H=L^2(\\mathbb{R},l)," }, { "math_id": 108, "text": "l" }, { "math_id": 109, "text": "f \\notin L^2," }, { "math_id": 110, "text": "\\varphi_0 \\in L^2 \\setminus \\{0\\}." }, { "math_id": 111, "text": "A \\varphi = \\langle f,\\varphi\\rangle \\varphi_0." }, { "math_id": 112, "text": "D(A) = \\{\\varphi \\in L^2 \\mid \\langle f,\\varphi\\rangle \\neq \\infty\\}." }, { "math_id": 113, "text": "L^2" }, { "math_id": 114, "text": "\\mathbf{1}_{[-n,n]} \\cdot \\varphi\\ \\stackrel{L^2}{\\to}\\ \\varphi," }, { "math_id": 115, "text": "\\varphi \\in D(A)" }, { "math_id": 116, "text": "\\psi \\in D(A^*)," }, { "math_id": 117, "text": "\\langle \\varphi, A^*\\psi \\rangle = \\langle A\\varphi, \\psi \\rangle = \\langle \\langle f,\\varphi \\rangle\\varphi_0, \\psi \\rangle = \\langle f,\\varphi \\rangle\\cdot \\langle \\varphi_0, \\psi \\rangle = \\langle \\varphi, \\langle \\varphi_0, \\psi \\rangle f\\rangle. " }, { "math_id": 118, "text": "A^* \\psi = \\langle \\varphi_0, \\psi \\rangle f." }, { "math_id": 119, "text": "\\mathop{\\text{Im}}A^* \\subseteq H=L^2." }, { "math_id": 120, "text": "\\langle \\varphi_0, \\psi \\rangle= 0." }, { "math_id": 121, "text": "D(A^*) = \\{\\varphi_0\\}^\\perp." }, { "math_id": 122, "text": "D(A^*)." }, { "math_id": 123, "text": "A^{**}." }, { "math_id": 124, "text": "A = A^*" }, { "math_id": 125, "text": "\\langle Ax , y \\rangle = \\langle x , A y \\rangle \\mbox{ for all } x, y \\in H." }, { "math_id": 126, "text": "\\langle Ax , y \\rangle = \\overline{\\left\\langle x , A^* y \\right\\rangle} \\quad \\text{for all } x, y \\in H." }, { "math_id": 127, "text": "\\langle Ax , y \\rangle = \\left\\langle x, A^* y \\right\\rangle" } ]
https://en.wikipedia.org/wiki?curid=667175
667179
Typing
Text input method Typing is the process of writing or inputting text by pressing keys on a typewriter, computer keyboard, mobile phone, or calculator. It can be distinguished from other means of text input, such as handwriting and speech recognition. Text can be in the form of letters, numbers and other symbols. The world's first typist was Lillian Sholes from Wisconsin in the United States, the daughter of Christopher Sholes, who invented the first practical typewriter. User interface features such as spell checker and autocomplete serve to facilitate and speed up typing and to prevent or correct errors the typist may make. Techniques. Hunt and peck. Hunt and peck ("two-fingered typing") is a common form of typing in which the typist presses each key individually. Instead of relying on the memorized position of keys, the typist must find each key by sight. Although good accuracy may be achieved, the use of this method may also prevent the typist from being able to see what has been typed without glancing away from the keys, and any typing errors that are made may not be noticed immediately. Because only a few fingers are used in this technique, they are also forced to move a much greater distance. Touch typing. In this technique, the typist keeps their eyes on the source copy at all times. Touch typing also involves the use of the home row method, where typists rest their wrists rather than lifting them up while typing (which can cause carpal tunnel syndrome). To avoid this, typists should sit up tall, leaning slightly forward from the waist, place their feet flat on the floor in front of them with one foot slightly in front of the other, and keep their elbows close to their sides with forearms slanted slightly upward to the keyboard; fingers should be curved slightly and rest on the home row. Many touch typists also use keyboard shortcuts when typing on a computer. This allows them to edit their document without having to take their hands off the keyboard to use a mouse. An example of a keyboard shortcut is pressing the Ctrl key plus the S key to save a document as they type, or the Ctrl key plus the Z key to undo a mistake. Other shortcuts are the Ctrl key plus the C key to copy, the Ctrl key plus the V key to paste, and the Ctrl key plus the X key to cut. Many experienced typists can feel or sense when they have made an error and can hit the backspace key and make the correction with no increase in time between keystrokes. Hybrid. There are many idiosyncratic typing styles in between novice-style "hunt and peck" and touch typing. For example, many "hunt and peck" typists have the keyboard layout memorized and are able to type while focusing their gaze on the screen. Some use just two fingers, while others use 3–6 fingers. Some use their fingers very consistently, with the same finger being used to type the same character every time, while others vary the way they use their fingers. One study examining 30 subjects of varying styles and expertise found minimal difference in typing speed between touch typists and self-taught hybrid typists. According to the study, "The number of fingers does not determine typing speed... People using self-taught typing strategies were found to be as fast as trained typists... instead of the number of fingers, there are other factors that predict typing speed... fast typists... keep their hands fixed on one position, instead of moving them over the keyboard, and more consistently use the same finger to type a certain letter." To quote Prof. Dr.
Anna Feit: "We were surprised to observe that people who took a typing course, performed at similar average speed and accuracy, as those that taught typing to themselves and only used 6 fingers on average." Thumbing. A late 20th century trend in typing, primarily used with devices with small keyboards (such as PDAs and smartphones), is "thumbing" or thumb typing. This can be accomplished using either one thumb or both thumbs, with more proficient typists reaching speeds of 100 words per minute. As with desktop keyboards and input devices, if a user overuses keys which need hard presses and/or have small and unergonomic layouts, it could cause thumb tendonitis or other repetitive strain injury. Words per minute. Words per minute (WPM) is a measure of typing speed, commonly used in recruitment. For the purposes of WPM measurement a word is standardized to five characters or keystrokes. Therefore, "brown" counts as one word, but "mozzarella" counts as two. The benefit of a standardized measurement of input speed is that it enables comparison across language and hardware boundaries. The speed of an Afrikaans-speaking operator in Cape Town can be compared with that of a French-speaking operator in Paris. Today, even written Chinese can be typed very quickly using a combination of software prediction and typing the characters' sounds in Pinyin. Such prediction software even allows typing short-hand forms while producing complete characters. For example, the phrase "nǐ chī le ma" (你吃了吗) meaning "Have you eaten yet?" can be typed with just 4 strokes: "nclm". Alphanumeric entry. In one study of average computer users, the average rate for transcription was 33 words per minute, and 19 words per minute for composition. In the same study, when the group was divided into "fast", "moderate" and "slow" groups, the average speeds were 40 wpm, 35 wpm, and 23 wpm respectively. An average professional typist reaches 50 to 80 wpm, while some positions can require 80 to 95 wpm (usually the minimum required for dispatch positions and other typing jobs), and some advanced typists work at speeds above 120 wpm. Two-finger typists, sometimes also referred to as "hunt and peck" typists, commonly reach sustained speeds of about 37 wpm for memorized text and 27 wpm when copying text, but in bursts may be able to reach speeds of 60 to 70 wpm. From the 1920s through the 1970s, typing speed (along with shorthand speed) was an important secretarial qualification and typing contests were popular and often publicized by typewriter companies as promotional tools. A less common measure of the speed of a typist, CPM is used to identify the number of characters typed per minute. This is a common measurement for typing programs, or typing tutors, as it can give a more accurate measure of a person's typing speed without having to type for a prolonged period of time. The common conversion factor between WPM and CPM is 5. It is also used occasionally for associating the speed of a reader with the amount they have read. CPM has also been applied to 20th century printers, but modern faster printers more commonly use PPM (pages per minute). The fastest typing speed ever, 216 words per minute, was achieved by Stella Pajunas-Garnand from Chicago in 1946 in one minute on an IBM electric typewriter using the QWERTY keyboard layout. As of 2005, writer Barbara Blackburn was the fastest English language typist in the world, according to "The Guinness Book of World Records".
Using the Dvorak keyboard layout, she had maintained 150 wpm for 50 minutes, and 170 wpm for shorter periods, with a peak speed of 212 wpm. Barbara Blackburn, who failed her QWERTY typing class in high school, first encountered the Dvorak layout in 1938, quickly learned to type at very high speeds, and occasionally toured giving speed-typing demonstrations during her secretarial career. She appeared on "Late Night with David Letterman" on January 24, 1985, but felt that Letterman made a spectacle of her. The recent emergence of several competitive typing websites has produced new records from fast typists on computer keyboards, though many of these are unverifiable. Some notable, verified records include 255 wpm on a one-minute, random-word test by a user known by the username slekap (and occasionally bailey), 213 wpm on a 1-hour, random-word test by Joshua Hu, 221 wpm average on 10 random quotes by Joshua Hu, and first place in the 2020 Ultimate Typing Championship by Anthony Ermollin based on an average of 180.88 wpm on texts of various lengths. These three people are the most commonly cited fastest typists in online typing communities. All of their records were set on the QWERTY keyboard layout. Using a personalized interface, physicist Stephen Hawking, who suffered from amyotrophic lateral sclerosis, managed to type 15 wpm with a switch and adapted software created by Walt Woltosz. Due to a slowdown of his motor skills, his interface was upgraded with an infrared camera that detected "twitches in the cheek muscle under the eye." His typing speed decreased to approximately one word per minute in the later part of his life. Numeric entry. The numeric entry, or 10-key, speed is a measure of one's ability to manipulate a numeric keypad. Generally, it is measured in keystrokes per hour (KPH). Text-entry research. Error analysis. With the introduction of computers and word-processors, there has been a change in how text-entry is performed. In the past, using a typewriter, speed was measured with a stopwatch and errors were tallied by hand. With current technology, document preparation is more about using word-processors as a composition aid, changing the meaning of error rate and how it is measured. Research by R. William Soukoreff and I. Scott MacKenzie led to the application of a well-known algorithm. Through the use of this algorithm and an accompanying analysis technique, two statistics were introduced: "minimum string distance error rate" (MSD error rate) and "keystrokes per character" (KSPC). The two advantages of this technique include: Deconstructing the text input process. Through analysis of keystrokes, the keystrokes of the input stream were divided into four classes: Correct (C), Incorrect Fixed (IF), Fixes (F), and Incorrect Not Fixed (INF). These keystroke classes are broken down into the following. Using these classes, the Minimum String Distance Error Rate and the Key Strokes per Character statistics can both be calculated. Minimum string distance error rate. The minimum string distance (MSD) is the number of "primitives" (insertions, deletions, or substitutions) needed to transform one string into another. The following equation was found for the MSD Error Rate. "MSD Error Rate" = formula_0 Key strokes per character (KSPC). With the minimum string distance error, errors that are corrected do not appear in the transcribed text.
The following example shows why this can be an important class of errors to consider: "Presented Text": the quick brown "Input Stream": the quix<-ck brown "Transcribed Text": the quick brown In the above example, the incorrect character ('x') was deleted with a backspace ('<-'). Since these errors do not appear in the transcribed text, the MSD error rate is 0%. That is the purpose of the key strokes per character (KSPC) statistic. "KSPC" = formula_1 There are some shortcomings of the KSPC statistic, such as: Further metrics. Using the classes described above, further metrics were defined by R. William Soukoreff and I. Scott MacKenzie: "Error correction efficiency" refers to the ease with which the participant performed error correction. "Participant conscientiousness" is the ratio of corrected errors to the total number of errors, which helps distinguish perfectionists from apathetic participants. If C represents the amount of useful information transferred, then INF, IF, and F represent the proportion of bandwidth wasted. Total error rate. The classes described also provide an intuitive definition of total error rate: Since these three error rates are ratios, they are comparable between different devices, something that cannot be done with the KSPC statistic, which is device dependent. Tools for text entry research. Currently, two tools are publicly available for text entry researchers to record text entry performance metrics. The first is TEMA, which runs only on the Android operating system. The second is WebTEM, which runs on any device with a modern Web browser and works with almost all text entry techniques. Keystroke dynamics. Keystroke dynamics, or "typing dynamics", is the collection of detailed timing information describing exactly when each key was pressed and when it was released as a person types at a computer keyboard; it is used for biometric identification, similar to speaker recognition. Data needed to analyze keystroke dynamics is obtained by keystroke logging. The behavioral biometric of keystroke dynamics uses the manner and rhythm in which an individual types characters on a keyboard or keypad. References.
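A minimal sketch of how the keystroke classes translate into these metrics, using counts implied by the "quix<-ck" example above; the class assignment and the total error rate formula, commonly given as (INF + IF) / (C + INF + IF), are assumptions here since the corresponding list is not reproduced above. A words-per-minute helper based on the five-character standard word is included for completeness.

# Text-entry metrics from keystroke class counts.
# C = correct, INF = incorrect not fixed, IF = incorrect fixed, F = fixes.

def msd_error_rate(C, INF):
    """Minimum string distance error rate, in percent: INF / (C + INF) * 100."""
    return INF / (C + INF) * 100.0

def kspc(C, INF, IF, F):
    """Keystrokes per character: (C + INF + IF + F) / (C + INF)."""
    return (C + INF + IF + F) / (C + INF)

def total_error_rate(C, INF, IF):
    """Total error rate (assumed form): (INF + IF) / (C + INF + IF) * 100."""
    return (INF + IF) / (C + INF + IF) * 100.0

def words_per_minute(characters_typed, minutes):
    """WPM, with a word standardized to five characters or keystrokes."""
    return characters_typed / 5.0 / minutes

# 'the quix<-ck brown': 15 correct keystrokes, 1 corrected error ('x'),
# 1 fix (the backspace), and no uncorrected errors.
C, INF, IF, F = 15, 0, 1, 1
print(msd_error_rate(C, INF))        # 0.0, matching the example above
print(kspc(C, INF, IF, F))           # about 1.13 keystrokes per transcribed character
print(total_error_rate(C, INF, IF))  # 6.25, since the corrected error still counts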
[ { "math_id": 0, "text": "(INF/(C + INF)) * 100\\%" }, { "math_id": 1, "text": "(C+INF+IF+F)/(C+INF)" } ]
https://en.wikipedia.org/wiki?curid=667179
66718077
Vitaly Khlopin
Russian chemist (1890–1950) Vitaly Grigorievich Khlopin (Russian: Вита́лий Григо́рьевич Хло́пин) (January 1890 - 10 July 1950) was a Russian and Soviet radiochemist, professor, academician of the USSR Academy of Sciences (1939), Hero of Socialist Labour (1949), and director of the Radium Institute of the USSR Academy of Sciences (1939-1950). He was one of the founders of Soviet radiochemistry and of the Soviet radium industry, obtained the first domestic radium preparations (1921), co-founded the Radium Institute, was a leading participant in the atomic project, and founded the school of Soviet radiochemists. Biography. He was born on January 14 (26), 1890 in Perm, into the family of the doctor Grigory Vitalievich Khlopin (1863-1929). From 1905 the Khlopins lived in St. Petersburg. Brief chronology of his life path: 1922-1934 - Head of the gas department of NPFR, - Geochemical Institute of the USSR Academy of Sciences (Leningrad); He died on July 10, 1950, and was buried in Leningrad, in the Necropolis of the Masters of Arts of the Alexander Nevsky Lavra. Family. Khlopin was first married to Nadezhda Pavlovna Annenkova (daughter of the Narodovets P. S. Annenkov). Scientific works. V. G. Khlopin began his independent scientific activity as a student in 1911 - in his father's laboratory at the Clinical Institute he carried out work whose results were published in the article "On the formation of oxidants in the air under the action of ultraviolet rays". In these studies V. G. Khlopin was the first to prove that ultraviolet rays form in atmospheric air not only hydrogen peroxide and ozone but also nitrogen oxides; the latter claim began a long discussion that lasted until 1931, when D. Vorländer proved the correctness of Khlopin's observations. V. G. Khlopin's interests were not strictly confined to any one area. They were shaped by the training he received under L. A. Chugaev and V. I. Vernadsky - in general chemistry and geochemistry, respectively - which in turn allowed him to develop his own scientific direction and to create the first domestic school of radiochemists. Working with L. A. Chugaev. At the initial stage of his research activity (1911-1917), V. G. Khlopin was mainly concerned with problems related to inorganic and analytical chemistry. In 1913, together with L. A. Chugaev, he worked on the synthesis of complex compounds of platonitrite with dithioethers. Of his further works, especially important are those aimed at the development of a new method for the preparation of various derivatives of univalent nickel, and the creation of a device for determining the solubility of compounds at different temperatures. Among the most interesting works of this period is the discovery of the hydroxopentamine series of complex compounds of platinum, made in 1915 by L. A. Chugaev and V. G. Khlopin; curiously, though methodologically quite natural, it was made somewhat earlier than the discovery by L. A. Chugaev and N. A. Vladimirov of the pentamine series, later called Chugaev's salts. Two works hold a special place in this period of V. G. Khlopin's scientific work: 1. The action of hydrosulfur sodium salt on metallic selenium and tellurium, leading to the development of a convenient method of obtaining sodium telluride and selenide and a convenient synthesis of organic compounds of tellurium and selenium (1914), 2.
On the action of hydrosulfurosodium salt on nickel salts in the presence of nitrous sodium salt. The work led to the synthesis of univalent nickel derivatives (1915), which were much later (in 1925) obtained in Germany by S. Mansho and co-workers by the action of carbon monoxide and nitric oxide on nickel salts. Here, at the same department, already in the First World War, on the assignment of the Chemical Committee of the Main Artillery Department, V.G. Khlopin performed his first technological work - he developed a method of obtaining pure platinum from Russian raw materials. The importance of this work was due to the sharp reduction of imports. His participation in several expeditions aimed at identifying Russia's natural resources was subordinated to the solution of the same problems. He wrote reviews on rare elements: boron, lithium, rubidium, cesium and zirconium. At V. I. Vernadsky’s laboratory. All of V.G. Khlopin's further scientific activity was predetermined by this meeting. In the laboratory founded by Vladimir Ivanovich Vernadsky, a systematic study of radioactive minerals and rocks was carried out, the search for which in Russia was carried out by expeditions, also organized on his initiative. V. I. Vernadsky was the first Russian scientist who realized the importance of the discovery of radioactivity: "...For us it is not completely indifferent at all how radioactive minerals of Russia will be studied... Now, when mankind is entering a new age of radiant - atomic energy, we, and not others, should know, should find out what the soil of our native country holds in this respect". In 1909 V. I. Vernadsky headed the research of radioactivity phenomena in Russia, under his chairmanship the Radium Commission was organized - all the works were united under the auspices of the Academy of Sciences, the Radiological Laboratory was founded, since 1914 the publication of the "Proceedings of the Radium Expedition of the Academy of Sciences" was started. In the mentioned speech V. I. Vernadsky notes the specific features of the new direction of scientific research: "This discovery has produced a huge revolution in the scientific outlook, caused the creation of a new science, different from physics and chemistry - the doctrine of radioactivity, put before life and technology practical tasks of a completely new kind...". In 1915, V. I. Vernadsky attracted V. G. Khlopin to work in the Radiological Laboratory. V. G. Khlopin was destined to become the first, and for many years - the leading specialist in the new discipline. But research in the field of radioactivity, study of new radioactive elements already discovered in Russia at that time was still in the state of initial organizational period - there were no domestic radium preparations for laboratory experiments; however, deposits of minerals and ores - raw materials for consistent development of scientific work in this direction, systematic study of radioactive minerals - were already known. The leading experts of the profile - Professors K. A. Nenadkevich and A. E. Fersman - were invited to participate in the present work. In the context of mastering the fundamental areas of activity, which for V.G. Khlopin became his life's work, he develops research of scientific and applied aspects, including methods of geochemistry of radioactive elements and noble gases, analytical chemistry and thermodynamics; at the same time, the scientist develops an independent direction, which gave the preconditions for the formation of a scientific school. 
By the early 1920s, four main lines had emerged, which in turn led to the establishment of an independent school: 1. radium technology; 2. chemistry of radioelements and applied radiochemistry; 3. geochemistry of radioelements and noble gases; 4. analytical chemistry. First trial radium plant. In 1917, the purely scientific interest in the study of radium was replaced by the practical need to use it for military purposes - the military department and defense organizations received information that radium was used for the production of luminous compounds. The necessity of radium extraction from domestic raw materials became urgent. A large batch of radium-containing ore from the Tyuya-Muyun deposit was stored in the warehouse of a private commercial firm, the "Fergana Society for Rare Metals Mining". This organization, due to the lack of specialist radiochemists in Russia, was preparing the raw material for shipment to Germany for technological extraction of the final product from it, but the war and then the February Revolution of 1917 prevented this. The Congress for the Technical Defense of the State in October 1917 decided to organize a special radium plant under the direct control of the Academy of Sciences, but the October Socialist Revolution again removed this issue from the agenda. In January 1918 V. G. Khlopin published an article, "A Few Words on the Application of Radioactive Elements in Military Technology and on the Possible Future of the Radium Industry in Russia", in which he characterized the importance and prospective use of radium for military-strategic purposes. In the spring of the same year, the Presidium of the All-Russian Council of National Economy (RCNE) decided to sequester radioactive raw materials belonging to the "Fergana Society"; in April, the Chemical Department of the RCNE, headed by Prof. L. Ya. Karpov, entrusted the Academy of Sciences with the mission of organizing a plant for radium extraction from domestic uranium-vanadium ores and ensuring scientific control over production; at a meeting of specialists convened on 12 April by the Commission for the Study of the Natural Productive Forces of Russia (NPFR) and chaired by N. S. Kurnakov, V. G. Khlopin and L. I. Bogoyavlensky reported on the results of the work undertaken to obtain radium from the available raw materials; in July 1918 a special Commission (the Technical Council, later the Board) for the organization of a radium plant at the Academy of Sciences was elected, which decided to organize a research laboratory; a special Radium Department (under the Commission headed by V. I. Vernadsky) was established under the chairmanship of A. E. Fersman, senior mineralogist of the Academy of Sciences and professor of the Higher Women's Courses. The secretary of the department, a specialist of the Radium Laboratory of the Academy and an assistant of the Department of General Chemistry of Petrograd University, 28-year-old V. G. Khlopin, was appointed its commissioner for the organization of the radium plant. His thorough theoretical training and mastery of the methods of fine chemical analysis, his ability to solve practical problems effectively, and his experience in expeditions fully justified his involvement in such a responsible business. L. N. Bogoyavlensky, a specialist on this subject, was invited as the head of the plant.
"October 28, 1918.&lt;br&gt;Uralsovnarkhoz (Perm), Usolsk executive committee, Management of Berezniki soda plant.&lt;br&gt; «I order the Berezniki plant to immediately begin work on the organization of a radium plant according to the resolution of the Vysovnarkhoz. The necessary funds have been allocated by the Council of People's Commissars. The work should be carried out under the direction and responsibility of chemical engineer Bogoyavlensky, to whom I propose to render full assistance.&lt;br&gt; Chairman of the Council of People 's Commissars Lenin»". "Lenin V. I. Complete Collected Works, vol. 50, p. 375". In 1918, all radioactive residues that were in Petrograd were evacuated inland - first to the Berezniki soda plant in Perm province, and in May 1920, already by the new plant manager I. Ya. Bashilov, - to the Bondyuzhsky chemical plant of Khimosnov (now Khimzavod named after L. Y. Karpov in Mendeleevsk), where only in the fall of 1920 it became possible to put into operation a temporary pilot plant for radium extraction. Radioactive substances technology. V. G. Khlopin developed a method of mechanical enrichment to improve the quality of raw barium-radium sulfates rich in silica (together with engineer S. P. Alexandrov). Later, the scientist transformed the Curie-Debierne method of conversion of sulfates into carbonates under the condition of saturation of sulfates with silica - through the combination of soda with caustic soda (together with P. A. Volkov). On the basis of theoretical assumptions, V. G. Khlopin proposed several methods of fractional crystallization of barium-radium salts, excluding evaporation of solutions - by increasing the concentration of the same ion in the cold: fractional precipitation of chlorides with hydrochloric acid (1921), fractional precipitation of bromides (together with M. A. Pasvik, 1923), fractional precipitation of nitrates (with P. I. Tolmachev, with A. P. Ratner, 1924-1930). A. Pasvik, 1923), fractional precipitation of nitrates (with P. I. Tolmachev, with A. P. Ratner, 1924-1930), fractional precipitation of chromates (M. S. Merkulova), fractional precipitation of chlorides with zinc chloride (I. Y. Bashilov and Y. S. Vilnyansky, 1926). In 1924, V. G. Khlopin created a general theory of the fractional crystallization process, which greatly facilitated the calculation of the technological process in general and the development of the required apparatus for its implementation in particular. A number of versions of the conventional crystallization scheme were hereby based on calculations used in plant practice. Later this theory was applied and developed in the All-Russian Research Institute of Chemical Reagents and Particularly Pure Chemicals for obtaining chemically pure substances by recrystallization. Chemistry of radio elements and applied radiochemistry. In this field, V. G. Khlopin and his colleagues and students (M. S. Merkulova, V. I. Grebenshchikov and others) developed a methodology for studying the process of isomorphous coprecipitation of microcomponents and ways to achieve equilibrium in the solid phase-solution system, - the influence of many factors on this process was established and the hypothesis of V. G. Khlopin (1924) about the subordination of the process of fractional crystallization to the law of substance distribution between two immiscible phases was proved (Khlopin's law). 
It was shown that the method of isomorphic co-crystallization can be used not only for the isolation of radioactive elements, but also for the study of their state in liquid and solid phases and for determining their valence. V. G. Khlopin and A. G. Samartseva established the existence of compounds of divalent and hexavalent polonium by this method. Adsorption on the surface of crystalline precipitates was also studied, as well as the distribution of substances between the gas phase and a crystalline precipitate and between a salt melt and the solid phase. Thus, in this section, V. G. Khlopin's studies address the following key issues: 1. conditions for achieving true (thermodynamic) microcomponent equilibrium between the crystalline solid phase and solution; 2. the use of radioelements as indicators in determining the mechanism of isomorphic substitution of dissociated ions; 3. application of general laws of isomorphous substitution for the development of a method for fixation of chemical compounds present in extremely small proportions and unstable in the solid phase, establishment of their valence and chemical type, and for revealing new chemical equilibria both in the solid phase and in solution; 4. conditions of adsorption equilibrium between the solid crystalline phase and solution. Thermodynamic equilibrium of microcomponent. It has been rigorously experimentally established that: a) When a true (thermodynamic) equilibrium is reached between a crystalline solid phase (electrolyte) and a solution, the microcomponent present in the solution and isomorphic with the solid phase is distributed between the two immiscible solvents according to the Berthelot-Nernst law, and in all known cases in its simple form: C(crystal)/C(solution) = K, or formula_0, where x is the amount of microcomponent transferred into crystals, a is the total amount of microcomponent, and y and b are the corresponding values for the macrocomponent. b) The mechanism responsible for achieving true equilibrium between the crystalline phase and the solution reduces to the process of multiple recrystallization of the solid phase, which in this case replaces the diffusion process in the solid state that is practically absent under ordinary conditions. Recrystallization at submicroscopic crystal sizes proceeds extremely fast, so in crystallization from supersaturated solutions recrystallization and the establishment of equilibrium are finished at the stage when the crystallites are still small enough. c) In the case of slow crystallization not from supersaturated solutions but from saturated ones, in particular due to slow evaporation, true equilibrium between crystals and solution is not observed, and the distribution of the microcomponent between the solid phase and solution proceeds in this case according to the logarithmic law of Doerner and Hoskins, developed on the basis of the idea of continuous ion exchange between the faces of the growing crystal and the solution: formula_1 Here, as above: a is the total amount of microcomponent, x is the amount of microcomponent transferred to the solid phase, b is the total amount of macrocomponent, y is the amount of macrocomponent transferred to the solid phase. d) An abrupt change in the value of D with a change in temperature or in the composition of the liquid phase is an indicator of the occurrence of a new chemical equilibrium in solution or in the solid phase.
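For reference, the two distribution laws described above are conventionally written as follows, in the notation of the text (x, a for the microcomponent; y, b for the macrocomponent; D and lambda the respective distribution constants). This is a standard statement of the laws supplied for clarity, not a quotation from Khlopin's papers.

% Berthelot-Nernst (homogeneous) distribution law:
\frac{x}{a - x} = D\,\frac{y}{b - y}
% Doerner-Hoskins (logarithmic) distribution law:
\ln\frac{a}{a - x} = \lambda\,\ln\frac{b}{b - y}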
The case of distribution of the microcomponent between the crystalline solid phase and the solution (according to the Berthelot-Nernst or Doerner-Hoskins law) can serve as evidence for the formation, between the microcomponent and the anion or cation of the solid phase, of compounds crystallizing isomorphically with the solid phase. Radioactive elements as indicators. Radioactive elements (Ra and RaD) were used by V. G. Khlopin and B. A. Nikitin as indicators in determining the nature of a new kind of mixed Gramm crystals. These studies showed a fundamental difference between true mixed crystals in the spirit of Eilhard Mitscherlich, in which the substitution of one component for another is expressed in the form of ion for ion, atom for atom, or molecule for molecule, and mixed crystals of a new kind, in which such a simple substitution is impossible and substitution proceeds by means of ready-made sections of the crystal lattice of each component of very small size. The scientists showed that mixed crystals of a new kind fundamentally differ from true mixed crystals by the presence of a lower miscibility limit - they are not formed at all at low concentration of one of the components. In this respect they are similar to anomalous mixed crystals (as shown experimentally by V. G. Khlopin and M. A. Tolstaya), and relate to the latter approximately as a colloidal solution does to a suspension. These works (on the structure and properties of mixed crystals of a new kind and anomalous mixed crystals) led V. G. Khlopin to the idea of the need to classify isomorphic bodies not by considering the structure of isomorphic mixtures in static equilibrium (as was done, for example, by V. M. Goldschmidt and his school), but according to the methods of substitution of components - taking into account the dynamics of the formation of an isomorphic mixture. In this case, all isomorphic bodies are strictly divided into two groups according to the method of substitution: (a) Isomorphic compounds in the spirit of E. Mitscherlich, truly isomorphic. Substitution in the formation of mixed crystals by such compounds occurs according to the first principle: ion for ion, etc. The above distribution laws apply to such crystals. Such compounds have similar chemical composition and molecular structure. (b) All other isomorphic compounds, where the formation of mixed crystals is conditioned by the second principle: substitution of ready-made sections of the lattice, from the unit cell or sizes close to it (mixed crystals of a new kind, or isomorphism of the 2nd kind according to V. M. Goldschmidt) up to microscopic sizes (anomalous mixed crystals such as FeCl2 - NH4Cl, Ba(NO3)2 and Pb(NO2)2 with methylene blue, and K2SO4 with Ponceau red, etc., showing heterogeneity). 3. Thanks to the works discussed in the previous two paragraphs, V. G. Khlopin was able to present in a new form the law of E. Mitscherlich, which makes it possible to judge the composition and molecular structure of unknown compounds on the basis of their formation of isomorphous mixtures with compounds whose composition and molecular structure are known. V. G. Khlopin proposed the method of isomorphous co-crystallization from solutions for the fixation of weightless and unstable chemical compounds and the determination of their composition. The method made it possible to discover and determine the composition of individual compounds of divalent and hexavalent polonium (V. G. Khlopin and A. G. Samartseva). 4. Studying adsorption of isomorphous ions on the surface of crystalline precipitates, V. G.
Khlopin showed that adsorption equilibrium is established within 20–30 minutes, and that the adsorption of isomorphous ions does not depend on the charge of the adsorbent surface as long as its solubility does not change. Correctly reproducible results and full reversibility of the adsorption process are obtained only if the adsorbent surface remains unchanged throughout the experiment, that is, if the solubility of the adsorbent remains unchanged; when the composition of the liquid phase changes, or under other additional conditions in which the solubility of the adsorbent changes, adsorption acquires a more complex character and is accompanied by co-crystallization that distorts the results. A similar phenomenon was encountered by L. Imre in his studies of adsorption kinetics. V. G. Khlopin gave a formula for determining the surface area of crystalline precipitates from the adsorption of an isomorphous ion on them and experimentally confirmed its applicability (V. G. Khlopin, M. S. Merkulova). Geochemistry of radioelements and noble gases. In this field, the following directions were developed in V. G. Khlopin's works: 1. the migration of radioelements, in particular relatively short-lived ones, in the Earth's crust; 2. the study of radium- and mesothorium-bearing waters; 3. the determination of geologic age on the basis of radioactive data; 4. the distribution of helium and argon in the natural gases of the country; 5. the role of natural waters in the geochemistry of noble gases; 6. the distribution of boron in natural waters. Radioelements migration. The scientist was the first to draw attention to the special importance of studying the migration of relatively short-lived radioelements in the Earth's crust for solving general geological and geochemical problems (1926). V. G. Khlopin pointed out a number of questions in these disciplines that could be solved by the proposed methods: determining the sequence of geological and geochemical processes, determining the absolute age of relatively young and very young geological formations, and a number of other topics. The migration of uranium and radium was studied experimentally. Radioactive water studies. Extensive studies establishing the presence of radium, uranium, and decay products of the thorium series in the natural brines of the Soviet Union were carried out under the direction of V. G. Khlopin; numerous expeditions revealed a new form of accumulation in nature of radium and its isotopes in brine waters of the Na, Ca and Cl types. The following students and colleagues of his participated in these studies: V. I. Baranov, L. V. Komlev, M. S. Merkulov, B. A. Nikitin, V. P. Savchenko, A. G. Samartseva, N. V. Tageev, and others. Determination of geologic age by the radiometric method. These works concern, on the one hand, the foundations of the method and the analysis of its sources of error, and, on the other hand, the experimental determination of the age of uraninites from different pegmatite veins, both by the uranium/lead ratio and by Lan's oxygen method, which was developed and refined in the works of V. G. Khlopin. The scientist supervised research in this direction at the Radium Institute, on the helium and lead methods, which yielded determinations of the geologic age of some formations. The work (with E. K. Gerling and E. M. Ioffe) on helium migration from minerals and rocks and the influence of the gas phase on this process also belongs to this cycle. Helium and argon distribution in natural gases of the USSR. V. G.
Khlopin began to study the distribution of helium in the freely escaping gases of the country in 1922–1923. In 1924, he and A. I. Lakashuk discovered helium in the gases of the Novouzensky district of Saratov province, and in the period from 1924 to 1936, V. G. Khlopin and his students (E. K. Gerling, G. M. Ermolina, B. A. Nikitin, I. E. Starik, P. I. Tolmachev, and others) analyzed many samples of natural gases and created a distribution map based on the data. A new type of gas jet, called "air jets" and characteristic of wide mountain basins, was identified for the first time in the Kokand area (1936). Natural waters and geochemistry of noble gases. The work in this direction was a direct consequence of the previous one; on its basis V. G. Khlopin arrived at the concept of continuous gas exchange between the inner and outer gas atmospheres and of the role of natural waters in this exchange, in particular in the exchange of noble gases (excluding helium) between the outer air and underground atmospheres. In accordance with these ideas, underground gas atmospheres are gradually enriched in argon, krypton and xenon, and depleted in neon, relative to their content in air. The ratios formula_2 are greater in underground atmospheres than in air. It has been found that gases dissolved in the lower layers of deep natural reservoirs are sharply enriched in heavy noble gases. Boron in natural waters. This direction of geochemistry began with work on the boric acid springs of northwestern Persia and Transcaucasia; later these studies were extended to other areas of the USSR. It was found that boron is a typical element of the waters of oil-bearing areas and is enriched in them. V. G. Khlopin was also the first to note the need to prospect for boric acid compounds in the Embinsky and Gurievsky counties of the Ural region, where the Inderskoye deposit was discovered much later. Analytical chemistry. V. G. Khlopin's work in this area concerns gas, volumetric, gravimetric and colorimetric analysis. Gas analysis. V. G. Khlopin developed instruments for the rapid assessment of the amount of helium and neon in gas mixtures (V. G. Khlopin, E. K. Gerling, 1932). These devices simplified the analysis of noble gases so much that it could be included in the general methods of gas analysis. Volumetric analysis. For the first time in the USSR, V. G. Khlopin introduced the method of differential reduction and differential oxidation with the simultaneous determination of several cations in a mixture (1922) and experimentally mastered the simultaneous determination of vanadium, iron and uranium; volumetric methods for the determination of vanadium and uranium were proposed. Gravimetric analysis. V. G. Khlopin developed a quantitative method for separating tetravalent uranium, in the form UF4·NH4F·0.5H2O, from hexavalent uranium and from trivalent and divalent iron. Colorimetric analysis. The scientist proposed a method for determining small amounts of iridium in the presence of platinum. Under the leadership of V. G. Khlopin, several other methods of analysis were also developed: a volumetric method for determining small amounts of boron, a volumetric method for determining &lt;chem&gt;SO4^2-&lt;/chem&gt; and &lt;chem&gt;Mg^2+&lt;/chem&gt;, gravimetric methods for determining uranium, a colorimetric method for determining fluorine, and others. The Uranium Problem and the Atomic Project.
In the course of studying natural radioactivity, that is, the radiation of radioactive elements and radioactive transformations, new natural radioactive elements were discovered and systematized into radioactive families: uranium, thorium and the third, so-called actinium family, the actinides (this name was proposed by S. A. Shchukarev). F. Soddy's discovery of the law of radioactive displacements made it possible to assume that the final stable decay products of the elements of all three families are three isotopes of the same element, lead. The Bohr model of the atom also rests on the study of natural radioactivity, which revealed the complexity of the structure of the atom, whose decay produces atoms of other elements and is accompanied by three types of radiation: α, β and γ. The neutron-proton theory of the structure of the atomic nucleus owes its origin to the discovery of the new elementary particles that make up the nucleus, the neutron (¹₀n) and the proton (¹₁p), which became possible through the artificial splitting of the atom under the influence of α-particles (1919): ¹⁴₇N + ⁴₂He → ¹⁷₈O + ¹₁H, a reaction accompanied by the release of a proton (experiments with a number of other light elements soon followed). Further fundamental research in this area showed that in light elements the number of neutrons in the nucleus is equal to the number of protons; as one moves to heavy elements, neutrons begin to outnumber protons and the nuclei become unstable, that is, radioactive. As part of the Soviet atomic project, V. G. Khlopin was a member of the Technical Council and was responsible for the activities of the Radium Institute. Through the efforts of V. G. Khlopin and the First Secretary of the Leningrad Regional Committee and City Committee of the All-Union Communist Party of Bolsheviks, Alexey Kuznetsov, the Radium Institute received additional premises. The decision to allocate the space was made by the Special Committee in November 1945 and carried out by the chairman of the Operations Bureau of the Council of People's Commissars of the RSFSR, A. N. Kosygin, and the representative of the State Planning Committee in the Special Committee, N. A. Borisov. Pedagogical, administrative, social and editorial activities. After graduating from St. Petersburg University, V. G. Khlopin remained at the department of Professor L. A. Chugaev, but while still a student, in 1911, he conducted a workshop on the chemical methods of sanitary analyses with doctors at the St. Petersburg Clinical Institute, and continued this course of practical training in 1912 and 1913. From 1917 to 1924, V. G. Khlopin served as an assistant in the department of general chemistry at the university, and from 1924, as an assistant professor, he taught a special course on radioactivity and the chemistry of radioelements, the first in the USSR; since only brief and incomplete data and summaries existed in the foreign literature, this course was developed entirely by V. G. Khlopin, who taught it until 1930 and then, as a professor, resumed it in 1934, teaching it until 1935. In the spring of 1945, the scientist organized and headed the department of radiochemistry at Leningrad University. The course of lectures on radiochemistry, developed by V. G. Khlopin in collaboration with B. A. Nikitin and A. P. Ratner, formed the basis of an extensive monograph on the chemistry of radioactive substances. V. G.
Khlopin took an active part in the work of the Russian Physical-Chemical Society and, after the latter was transformed into the All-Union Chemical Society, was a member of the Council of its Leningrad branch and later its chairman. At the Academy of Sciences, V. G. Khlopin was a member of the Analytical Commission, the Commission on Isotopes, and the Commission for the Development of the Scientific Heritage of D. I. Mendeleev. From 1941 to 1945, V. G. Khlopin, as Deputy Academician-Secretary, did a great deal of work in the Department of Chemical Sciences of the USSR Academy of Sciences. During the Great Patriotic War (the Eastern Front of World War II), V. G. Khlopin served as deputy chairman of the Commission for the Mobilization of the Resources of the Volga and Kama Region and chairman of its chemical section. For many years he was a member of the Editorial Council of the Chemical-Technical Publishing House (Khimteoret). The scientist was the executive editor of the journal Uspekhi Khimii and was on the editorial boards of the journals "Reports of the USSR Academy of Sciences", "Izvestia of the USSR Academy of Sciences (Department of Chemical Sciences)", "Journal of General Chemistry" and "Journal of Physical Chemistry". Vitaly Grigorievich Khlopin trained students in all the most important areas of his scientific activity, many of whom became not only independent researchers but also the creators of their own scientific directions and schools. Memory. The following were named after V. G. Khlopin: Memorial plaques. In the 1950s, a memorial plaque was installed on the house at 61 Lesnoy Avenue with the text: "The outstanding Russian chemist Vitaly Grigorievich Khlopin lived in this house from 1945 to 1950." References.
[ { "math_id": 0, "text": "\\frac{x}{a-x} = D \\frac{y}{b-y}" }, { "math_id": 1, "text": "\\ln\\frac{a}{a-x} = \\lambda \\ln\\frac{b}{b-y}" }, { "math_id": 2, "text": "\\frac{Ar}{N_2}, \\frac{Kr}{Ar}, \\frac{Xe}{Kr}" } ]
https://en.wikipedia.org/wiki?curid=66718077
667206
Touchscreen
Input and output device A touchscreen (or touch screen) is a type of display that can detect touch input from a user. It consists of both an input device (a touch panel) and an output device (a visual display). The touch panel is typically layered on the top of the electronic visual display of a device. Touchscreens are commonly found in smartphones, tablets, laptops, and other electronic devices. The display is often an LCD, AMOLED or OLED display. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus or one or more fingers. Some touchscreens use ordinary or specially coated gloves to work, while others may only work using a special stylus or pen. The user can use the touchscreen to react to what is displayed and, if the software allows, to control how it is displayed; for example, zooming to increase the text size. A touchscreen enables the user to interact directly with what is displayed, instead of using a mouse, touchpad, or other such devices (other than a stylus, which is optional for most modern touchscreens). Touchscreens are common in devices such as smartphones, handheld game consoles, and personal computers. They are common in point-of-sale (POS) systems, automated teller machines (ATMs), and electronic voting machines. They can also be attached to computers or, as terminals, to networks. They play a prominent role in the design of digital appliances such as personal digital assistants (PDAs) and some e-readers. Touchscreens are important in educational settings such as classrooms or on college campuses. The popularity of smartphones, tablets, and many types of information appliances has driven the demand and acceptance of common touchscreens for portable and functional electronics. Touchscreens are found in the medical field, heavy industry, automated teller machines (ATMs), and kiosks such as museum displays or room automation, where keyboard and mouse systems do not allow a suitably intuitive, rapid, or accurate interaction by the user with the display's content. Historically, the touchscreen sensor and its accompanying controller-based firmware have been made available by a wide array of after-market system integrators, and not by display, chip, or motherboard manufacturers. Display manufacturers and chip manufacturers have acknowledged the trend toward acceptance of touchscreens as a user interface component and have begun to integrate touchscreens into the fundamental design of their products. History. Predecessors of the modern touchscreen include stylus-based systems. 1946 DIRECT LIGHT PEN - A patent was filed by Philco Company for a stylus designed for sports telecasting which, when placed against an intermediate cathode ray tube display (CRT), would amplify and add to the original signal. Effectively, this was used for temporarily drawing arrows or circles onto a live television broadcast, as described in the patent "Electronic pointer for television images" (Denk, William E., issued 1949-11-08). 1962 OPTICAL - The first version of a touchscreen which operated independently of the light produced from the screen was patented by AT&T Corporation ("Electrographic transmitter", Harmon, Leon D., issued 1962-01-09). This touchscreen utilized a matrix of collimated lights shining orthogonally across the touch surface.
When a beam is interrupted by a stylus, the photodetectors that are no longer receiving a signal can be used to determine where the interruption is. Later iterations of matrix-based touchscreens built upon this by adding more emitters and detectors to improve resolution, pulsing emitters to improve the optical signal-to-noise ratio, and a non-orthogonal matrix to remove shadow readings when using multi-touch. 1963 INDIRECT LIGHT PEN - Later inventions built upon this system to free telewriting styli from their mechanical bindings. By transcribing what a user draws onto a computer, it could be saved for future use. See the patent "Telewriting apparatus" (Graham, Robert E., issued 1963-05-14). 1965 CAPACITANCE AND RESISTANCE - The first finger-driven touchscreen was developed by Eric Johnson, of the Royal Radar Establishment located in Malvern, England, who described his work on capacitive touchscreens in a short article published in 1965 and then more fully—with photographs and diagrams—in an article published in 1967. MID-60s ULTRASONIC CURTAIN - Another precursor of touchscreens, an ultrasonic-curtain-based pointing device in front of a terminal display, had been developed by a team around Rainer Mallebrein at Telefunken for an air traffic control system. In 1970, this evolved into a device named "Touchinput-Einrichtung" ("touch input facility") for the SIG 50 terminal, utilizing a conductively coated glass screen in front of the display. This was patented in 1971 and the patent was granted a couple of years later. The same team had already invented and marketed the mouse RKS 100-86 for the SIG 100-86 a couple of years earlier. 1968 CAPACITANCE - The application of touch technology for air traffic control was described in an article published in 1968. Frank Beck and Bent Stumpe, engineers from CERN (European Organization for Nuclear Research), developed a transparent touchscreen in the early 1970s, based on Stumpe's work at a television factory in the early 1960s. Then manufactured by CERN, and shortly after by industry partners, it was put to use in 1973. 1972 OPTICAL - A group at the University of Illinois filed for a patent on an optical touchscreen that became a standard part of the Magnavox Plato IV Student Terminal, and thousands were built for this purpose. These touchscreens had a crossed array of 16×16 infrared position sensors, each composed of an LED on one edge of the screen and a matched phototransistor on the other edge, all mounted in front of a monochrome plasma display panel. This arrangement could sense any fingertip-sized opaque object in close proximity to the screen. 1973 MULTI-TOUCH CAPACITANCE - In 1973, Beck and Stumpe published another article describing their capacitive touchscreen. This indicated that it was capable of multi-touch, but this feature was purposely inhibited, presumably because it was not considered useful at the time: "A...variable...called BUT changes value from zero to five when a button is touched. The touching of other buttons would give other non-zero values of BUT but this is protected against by software" (page 6, section 2.6); "Actual contact between a finger and the capacitor is prevented by a thin sheet of plastic" (page 3, section 2.3). At that time projected capacitance had not yet been invented. 1977 RESISTIVE - An American company, Elographics – in partnership with Siemens – began work on developing a transparent implementation of an existing opaque touchpad technology, U.S. patent No.
3,911,215, October 7, 1975, which had been developed by Elographics' founder George Samuel Hurst. The resulting resistive-technology touchscreen was first shown at the 1982 World's Fair in Knoxville. 1982 MULTI-TOUCH CAMERA - Multi-touch technology began in 1982, when the University of Toronto's Input Research Group developed the first human-input multi-touch system, using a frosted-glass panel with a camera placed behind the glass. 1983 OPTICAL - An optical touchscreen was used on the HP-150 starting in 1983. The HP 150 was one of the world's earliest commercial touchscreen computers. HP mounted their infrared transmitters and receivers around the bezel of a 9-inch Sony cathode ray tube (CRT). 1983 MULTI-TOUCH FORCE SENSING TOUCHSCREEN - Bob Boie of AT&T Bell Labs used capacitance to track the mechanical changes in thickness of a soft, deformable overlay membrane when one or more physical objects interact with it, the flexible surface being easily replaced if damaged by these objects. The patent states "the tactile sensor arrangements may be utilized as a touch screen". Many derivative sources retrospectively describe Boie as making a major advancement with his touchscreen technology, but no evidence has been found that a rugged multi-touch capacitive touchscreen that could sense through a rigid, protective overlay (the sort later required for a mobile phone) was ever developed or patented by Boie. Many of these citations rely on anecdotal evidence from Bill Buxton concerning the Bell Labs work. However, Bill Buxton did not have much luck getting his hands on this technology. As he states in the citation: "Our assumption (false, as it turned out) was that the Boie technology would become available to us in the near future. Around 1990 I took a group from Xerox to see this technology it [sic] since I felt that it would be appropriate for the user interface of our large document processors. This did not work out". UP TO 1984 CAPACITANCE - Although, as cited earlier, Johnson is credited with developing the first finger-operated capacitive and resistive touchscreens in 1965, these worked by directly touching wires across the front of the screen. Stumpe and Beck developed a self-capacitance touchscreen in 1972, and a mutual capacitance touchscreen in 1977. Both these devices could only sense the finger by direct touch or through a thin insulating film, which was 11 microns thick according to Stumpe's 1977 report. 1984 FIRST PROJECTED CAPACITANCE TOUCHSCREEN PATENT - A capacitance keypad and touchscreen, capable of multi-touch, that could "project" touch sensing through several centimeters of air and other non-conductive materials, was invented by a British inventor, Ron Binstead. This enabled the accurate detection of fingers through very thick glass and even double-glazing (see keypad for image). The massively increased functionality was due to much greater processing power being available at that time, and the use of a simple form of artificial intelligence (see patent claim 1). An Acorn BBC Computer was initially used to process the data. This technique later became known as projected capacitance. Projected capacitance uses a simple form of artificial intelligence to measure the changes in capacitance caused by one or more fingers, by specifically profiling the capacitance effects expected for finger touch, and eliminating any measured capacitance changes attributable to other global and/or local events. Instead of a keypad, the device could be used as a continuous x/y sensing area (see claim 9).
A transparent version was usable as a projected capacitance touchscreen (see claim 10), but it was limited in size due to the high resistance of the narrow, transparent indium tin oxide (ITO) tracks (numbered 96 in the image) used to link all sensing zones (80, 82, 84, 86, 88, 90) independently to a common edge. 1984 TOUCHPAD - Fujitsu released a touch pad for the Micro 16 to accommodate the complexity of kanji characters, which were stored as tiled graphics. 1986 GRAPHIC TABLET - A graphic touch tablet was released for the Sega AI Computer. EARLY 80s EVALUATION FOR AIRCRAFT - Touch-sensitive control-display units (CDUs) were evaluated for commercial aircraft flight decks in the early 1980s. Initial research showed that a touch interface would reduce pilot workload as the crew could then select waypoints, functions and actions, rather than be "head down" typing latitudes, longitudes, and waypoint codes on a keyboard. An effective integration of this technology was aimed at helping flight crews maintain a high level of situational awareness of all major aspects of the vehicle operations including the flight path, the functioning of various aircraft systems, and moment-to-moment human interactions. EARLY 80s EVALUATION FOR CARS - Also in the early 1980s, General Motors tasked its Delco Electronics division with a project aimed at moving an automobile's non-essential functions (i.e. other than throttle, transmission, braking, and steering) from mechanical or electro-mechanical systems to solid-state alternatives wherever possible. The finished device was dubbed the ECC for "Electronic Control Center", a digital computer and software control system hardwired to various peripheral sensors, servomechanisms, solenoids, antenna and a monochrome CRT touchscreen that functioned both as display and sole method of input. The ECC replaced the traditional mechanical stereo, fan, heater and air conditioner controls and displays, and was capable of providing very detailed and specific information about the vehicle's cumulative and current operating status in real time. The ECC was standard equipment on the 1985–1989 Buick Riviera and later the 1988–1989 Buick Reatta, but was unpopular with consumers—partly due to the technophobia of some traditional Buick customers, but mostly because of costly technical problems suffered by the ECC's touchscreen which would render climate control or stereo operation impossible. 1985 GRAPHIC TABLET - Sega released the Terebi Oekaki, also known as the Sega Graphic Board, for the SG-1000 video game console and SC-3000 home computer. It consisted of a plastic pen and a plastic board with a transparent window where pen presses are detected. It was used primarily with a drawing software application. 1985 MULTI-TOUCH CAPACITANCE - The University of Toronto group, including Bill Buxton, developed a multi-touch tablet that used capacitance rather than bulky camera-based optical sensing systems (see History of multi-touch). 1985 USED FOR POINT OF SALE - The first commercially available graphical point-of-sale (POS) software was demonstrated on the 16-bit Atari 520ST color computer. It featured a widget-driven color touchscreen interface. The ViewTouch POS software was first shown by its developer, Gene Mosher, at the Atari Computer demonstration area of the Fall COMDEX expo in 1986. 1987 FIRST LARGE PROJECTED CAPACITANCE TOUCHSCREEN INVENTED - A large projected capacitance touchscreen was invented by British inventor Ron Binstead.
This sensed fingers through very thick glass and double-glazing, enabling touchscreens to be operated through shop windows. Sixteen touch sensing zones were connected directly to the four edges of the touchscreen (see inset image), thereby avoiding the necessity for narrow tracks linking sensing zones to the edge. An order was placed for $0.7M worth of these touchscreens for use in a US hotel chain. 1987 CAPACITANCE TOUCH KEYS - Casio launched the Casio PB-1000 pocket computer with a touchscreen consisting of a 4×4 matrix, resulting in 16 touch areas in its small LCD graphic screen. 1988 SELECT ON "LIFT-OFF" - Touchscreens had a bad reputation for being imprecise until 1988. Most user-interface books would state that touchscreen selections were limited to targets larger than the average finger. At the time, selections were done in such a way that a target was selected as soon as the finger came over it, and the corresponding action was performed immediately. Errors were common, due to parallax or calibration problems, leading to user frustration. "Lift-off strategy" was introduced by researchers at the University of Maryland Human–Computer Interaction Lab (HCIL). As users touch the screen, feedback is provided as to what will be selected: users can adjust the position of the finger, and the action takes place only when the finger is lifted off the screen. This allowed the selection of small targets, down to a single pixel on a 640×480 Video Graphics Array (VGA) screen (a standard of that time). 1990 SINGLE AND MULTI-TOUCH GESTURES - Sears et al. (1990) gave a review of academic research on single and multi-touch human–computer interaction of the time, describing gestures such as rotating knobs, adjusting sliders, and swiping the screen to activate a switch (or a U-shaped gesture for a toggle switch). The HCIL team developed and studied small touchscreen keyboards (including a study that showed users could type at 25 wpm on a touchscreen keyboard), aiding their introduction on mobile devices. They also designed and implemented multi-touch gestures such as selecting a range of a line, connecting objects, and a "tap-click" gesture to select while maintaining location with another finger. 1990 TOUCHSCREEN SLIDER AND TOGGLE SWITCHES - HCIL demonstrated a touchscreen slider, which was later cited as prior art in the lock screen patent litigation between Apple and other touchscreen mobile phone vendors (in relation to U.S. patent 7657849). 1991 THE FIRST PROJECTED CAPACITANCE PATENT IS GRANTED - The 1984 projected capacitance patent application was granted to Binstead Designs of Nottingham, England. 1991 INERTIAL CONTROL - From 1991 to 1992, the Sun Star7 prototype PDA implemented a touchscreen with inertial scrolling. 1993 CAPACITANCE MOUSE / KEYPAD - Bob Boie of AT&T Bell Labs patented a simple mouse or keypad that capacitively sensed just one finger through a thin insulator. Although not claimed or even mentioned in the patent, this technology could potentially have been used as a capacitance touchscreen. 1993 FIRST RESISTIVE TOUCHSCREEN PHONE - IBM released the IBM Simon, which was the first touchscreen phone. EARLY 90s ABANDONED GAME CONTROLLER - An early attempt at a handheld game console with touchscreen controls was Sega's intended successor to the Game Gear, though the device was ultimately shelved and never released due to the expensive cost of touchscreen technology in the early 1990s.
1994 FIRST X/Y PROJECTED CAPACITANCE TOUCHSCREEN PATENT - An x/y multiplexed, projected capacitance, multiple input proximity detector and touchpad / touchscreen was invented by British inventor Ron Binstead. This was capable of multi-touch and could accurately and reliably sense fingers through thick plastic and glass overlays. Together with the inventor's earlier patent, this was very similar to an Apple patent taken out 10 years later in 2004. 1994 FIRST WIRE BASED PROJECTED CAPACITANCE - Stumpe and Beck's touchscreens (1972/1977, already cited) used opaque conductive copper tracks that obscured about 50% of the screen (80 micron track / 80 micron space). The advent of projected capacitance in 1984, however, with its improved sensing capability, indicated that most of these tracks could be eliminated. This proved to be so, and led to the invention of a wire-based touchscreen in 1994, where one 25 micron diameter, insulation-coated wire replaced about 30 of these 80 micron wide tracks, and could also accurately sense fingers through thick glass. Screen masking, caused by the copper, was reduced from 50% to less than 0.5%. The use of fine wire meant that very large touchscreens, several meters wide, could be plotted onto a thin polyester support film with a simple x/y pen plotter, eliminating the need for expensive and complicated sputter coating, laser ablation, screen printing or etching. The resulting, highly flexible touchscreen film, less than 100 microns thick, could be attached by static or a non-setting weak adhesive to one side of a sheet of glass, for sensing through that glass. Early versions of this device were controlled by the PIC16C54 microcontroller. 1994 FIRST PUB GAME WITH TOUCHSCREEN - Appearing in pubs in 1994, JPM's Monopoly SWP (skill with prizes) was the first machine to use touchscreen technology instead of buttons (see Quiz machine / History). It used a 14 inch version of this newly invented wire-based projected capacitance touchscreen and had 64 sensing areas, the wiring pattern being similar to that shown in the lower diagram. The zig-zag pattern was introduced to minimize visual reflections and prevent moiré interference between the wires and the monitor line scans. About 600 of these were sold for this purpose, retailing at £50 apiece, which was very cheap for the time. Working through very thick glass made it ideal for operation in a "hostile" environment, such as a pub. Although reflected light from the copper wires was noticeable under certain lighting conditions, this problem was eliminated by using tinted glass. The reflection issue was later resolved by using finer (10 micron diameter), dark-coated wires. Throughout the following decade JPM continued to use touchscreens for many other games such as "Cluedo" and "Who Wants to Be a Millionaire". 1998 PROJECTED CAPACITANCE LICENSES - This technology was licensed four years later to Romag Glass Products (later to become Zytronic Displays), and to Visual Planet in 2003 (see page 4). 2004 MOBILE MULTI-TOUCH PROJECTED CAPACITANCE PATENT - Apple patents its multi-touch capacitive touchscreen for mobile devices. 2004 VIDEO GAMES WITH TOUCHSCREENS - Touchscreens were not popularly used for video games until the release of the Nintendo DS in 2004. 2007 MOBILE PHONE WITH CAPACITANCE - The first mobile phone with a capacitive touchscreen was the LG Prada, released in May 2007 (before the first iPhone was released).
By 2009, touchscreen-enabled mobile phones were becoming trendy and quickly gaining popularity in both basic and advanced devices. In Quarter-4 2009 for the first time, a majority of smartphones (i.e. not all mobile phones) shipped with touchscreens over non-touch. 2013 RESISTIVE VERSUS PROJECTED CAPACITANCE SALES - In 2007, 93% of touchscreens shipped were resistive and only 4% were projected capacitance. In 2013, 3% of touchscreens shipped were resistive and 96% were projected capacitance (see page 5). 2015 FORCE SENSING TOUCHSCREENS - Until recently, most consumer touchscreens could only sense one point of contact at a time, and few have had the capability to sense how hard one is touching. This has changed with the commercialization of multi-touch technology, and the Apple Watch being released with a force-sensitive display in April 2015. 2015 FIRST DIAGONALLY "WIRED" TOUCHSCREEN PATENT - A new diagonal "wiring" arrangement was invented by British inventor Ron Binstead, for use with resistive and multi-touch projected capacitance touchscreens - all the I/O elements coming from just one edge, and no bussed wires or "dead zone" round the other three edges (see top image on right). Touch resolution is almost doubled compared to x/y multiplexing. For example, 16 x/y I/Os create a maximum of 64 sensing element intersections, whereas 16 diagonal I/Os create 120 intersections. 2015 BISTATE PROJECTED CAPACITANCE - When used as a Projected Capacitance touchscreen, in mutual capacitance mode, diagonal wiring requires each I/O line to be capable of switching between two states (bistate), an output some of the time and an input at other times. I/Os are inputs most of the time, but, once every scan, one of the I/Os has to take its turn at being an output, the remaining input I/Os sensing any signals it generates. The I/O lines, therefore, may have to change from input to output, and vice versa, many times a second. This new design won an Electronics Weekly Elektra Award in 2017. 2021 FIRST "INFINITELY WIDE" TOUCHSCREEN PATENT - With standard x/y array touchscreens, the length of the horizontal sensing elements increases as the width of the touchscreen increases. Eventually, a limit is hit where the resistance gets so great that the touchscreen can no longer function properly. The patent describes how the use of diagonal elements ensures that the length of any element never exceeds 1.414 times the height formula_0 of the touchscreen, no matter how wide it is. This could be reduced to 1.15 times the height, if opposing diagonal elements intersect at 60 degrees instead of 90 degrees. The elongated touchscreen could be controlled by a single processor, or the distant ends could be controlled totally independently by different processors, linked by a synchronizing processor in the overlapping middle section. The number of unique intersections could be increased by allowing individual sensing elements to run in two opposing directions - as shown in the diagram. Technologies. There are a number of touchscreen technologies, with different methods of sensing touch. Resistive. A resistive touchscreen panel is composed of several thin layers, the most important of which are two transparent electrically resistive layers facing each other with a thin gap between them. The top layer (the layer that is touched) has a coating on the underside surface; just beneath it is a similar resistive layer on top of its substrate. One layer has conductive connections along its sides, while the other along the top and bottom. 
A voltage is applied to one layer and sensed by the other. When an object, such as a fingertip or stylus tip, presses down onto the outer surface, the two layers touch to become connected at that point. The panel then behaves as a pair of voltage dividers, one axis at a time. By rapidly switching between each layer, the position of pressure on the screen can be detected. Resistive touch is used in restaurants, factories, and hospitals due to its high tolerance for liquids and contaminants. A major benefit of resistive-touch technology is its low cost. Additionally, they may be used with gloves on, or by using anything rigid as a finger substitute, as only sufficient pressure is necessary for the touch to be sensed. Disadvantages include the need to press down, and a risk of damage by sharp objects. Resistive touchscreens also suffer from poorer contrast, due to having additional reflections (i.e. glare) from the layers of material placed over the screen. This type of touchscreen has been used by Nintendo in the DS family, the 3DS family, and the Wii U GamePad. Due to their simple structure, with very few inputs, resistive touchscreens are mainly used for single touch operation, although some two touch versions (often described as multi-touch) are available. However, there are some true multi-touch resistive touchscreens available. These need many more inputs, and rely on x/y multiplexing to keep the I/O count down. One example of a true multi-touch resistive touchscreen can detect 10 fingers at the same time. This has 80 I/O connections. These are possibly split 34 x inputs / 46 y outputs, forming a standard 3:4 aspect ratio touchscreen with 1564 x/y intersecting touch sensing nodes. Tri-state multiplexing could have been used instead of x/y multiplexing. This would have reduced the I/O count from 80 to 60 while creating 1770 unique touch sensing nodes, with no need for a bezel, and with all inputs coming from just one edge. Surface acoustic wave. Surface acoustic wave (SAW) technology uses ultrasonic waves that pass over the touchscreen panel. When the panel is touched, a portion of the wave is absorbed. The change in ultrasonic waves is processed by the controller to determine the position of the touch event. Surface acoustic wave touchscreen panels can be damaged by outside elements. Contaminants on the surface can also interfere with the functionality of the touchscreen. SAW devices have a wide range of applications, including delay lines, filters, correlators and DC to DC converters. Capacitive touchscreen. A capacitive touchscreen panel consists of an insulator, such as glass, coated with a transparent conductor, such as indium tin oxide (ITO). As the human body is also an electrical conductor, touching the surface of the screen results in a distortion of the screen's electrostatic field, measurable as a change in capacitance. Different technologies may be used to determine the location of the touch. The location is then sent to the controller for processing. Some touchscreens use silver instead of ITO, as ITO causes several environmental problems due to the use of indium. The controller is typically a complementary metal–oxide–semiconductor (CMOS) application-specific integrated circuit (ASIC) chip, which in turn usually sends the signals to a CMOS digital signal processor (DSP) for processing. Unlike a resistive touchscreen, some capacitive touchscreens cannot be used to detect a finger through electrically insulating material, such as gloves. 
This disadvantage especially affects usability in consumer electronics, such as touch tablet PCs and capacitive smartphones in cold weather when people may be wearing gloves. It can be overcome with a special capacitive stylus, or a special-application glove with an embroidered patch of conductive thread allowing electrical contact with the user's fingertip. A low-quality switching-mode power supply unit with an accordingly unstable, noisy voltage may temporarily interfere with the precision, accuracy and sensitivity of capacitive touch screens. Some capacitive display manufacturers continue to develop thinner and more accurate touchscreens. Those for mobile devices are now being produced with 'in-cell' technology, such as in Samsung's Super AMOLED screens, that eliminates a layer by building the capacitors inside the display itself. This type of touchscreen reduces the visible distance between the user's finger and what the user is touching on the screen, reducing the thickness and weight of the display, which is desirable in smartphones. A simple parallel-plate capacitor has two conductors separated by a dielectric layer. Most of the energy in this system is concentrated directly between the plates. Some of the energy spills over into the area outside the plates, and the electric field lines associated with this effect are called fringing fields. Part of the challenge of making a practical capacitive sensor is to design a set of printed circuit traces which direct fringing fields into an active sensing area accessible to a user. A parallel-plate capacitor is not a good choice for such a sensor pattern. Placing a finger near fringing electric fields adds conductive surface area to the capacitive system. The additional charge storage capacity added by the finger is known as finger capacitance, or CF. The capacitance of the sensor without a finger present is known as parasitic capacitance, or CP. Surface capacitance. In this basic technology, only one side of the insulator is coated with a conductive layer. A small voltage is applied to the layer, resulting in a uniform electrostatic field. When a conductor, such as a human finger, touches the uncoated surface, a capacitor is dynamically formed. The sensor's controller can determine the location of the touch indirectly from the change in the capacitance as measured from the four corners of the panel. As it has no moving parts, it is moderately durable but has limited resolution, is prone to false signals from parasitic capacitive coupling, and needs calibration during manufacture. It is therefore most often used in simple applications such as industrial controls and kiosks. Although some standard capacitance detection methods are projective, in the sense that they can be used to detect a finger through a non-conductive surface, they are very sensitive to fluctuations in temperature, which expand or contract the sensing plates, causing fluctuations in the capacitance of these plates. These fluctuations result in a lot of background noise, so a strong finger signal is required for accurate detection. This limits applications to those where the finger directly touches the sensing element or is sensed through a relatively thin non-conductive surface. Projected capacitance. Projected capacitive touch (PCT; also PCAP) technology is a variant of capacitive touch technology but where sensitivity to touch, accuracy, resolution and speed of touch have been greatly improved by the use of a simple form of artificial intelligence. 
This intelligent processing enables finger sensing to be projected, accurately and reliably, through very thick glass and even double glazing. Projected capacitance is a method for accurately detecting and tracking a particular variable, or group of variables (such as one or more fingers), by: a) using a simple form of artificial intelligence to develop a profile of the capacitance-changing effects expected for that variable, b) specifically looking for such changes, and c) eliminating measured capacitance changes that do not match this profile, attributable to global variables (such as temperature/humidity, dirt build-up and electrical noise) and local variables (such as rain drops, partial shade and hands/elbows). Capacitance sensors may be discrete, possibly (but not necessarily) in a regular array, or they may be multiplexed. Assumptions. In practice, various assumptions are made, such as: a) fingers will not be touching the screen at "power-up"; b) a finger will not be on the same spot for more than a fixed period of time; and c) fingers will not be touching everywhere at the same time. a) If a finger is touching the screen at "power-up", then, as soon as it is removed, a large "anti-touch" capacitance change will be detected. This signals the processor to reset the touch thresholds and store new "no touch" values for each input. b) Long-term drift compensation is used to gradually raise or lower these thresholds (trending eventually to "no touch"). This compensates for global changes in temperature and humidity. It also eliminates the possibility of any position appearing to be touched for too long due to some "non-finger" event, which might be caused, for example, by a wet leaf landing on, and sticking to, the screen. c) When a decision is to be made about the validity of one or more touches, assumption c) means that the average of the changes measured for some of the inputs showing the smallest change can be used to "offset" the touch thresholds of the inputs in contention. This minimizes the influence of hands and arms. By these and other means, the processor is constantly fine-tuning the touch thresholds and tweaking the touch sensitivity of each input. This enables very small changes, caused only by fingers, to be accurately detected through thick overlays or several centimeters of air. When a conductive object, such as a finger, comes into contact with a PCT panel, it distorts the local electrostatic field at that point. This is measurable as a change in capacitance. If a finger bridges the gap between two of the "tracks", the charge field is further interrupted and detected by the controller. The capacitance can be changed and measured at every individual point on the grid. This system is able to accurately track touches. Because the top layer of a PCT is glass, it is sturdier than less-expensive resistive touch technology. Unlike traditional capacitive touch technology, it is possible for a PCT system to sense a passive stylus or gloved fingers. Moisture on the surface of the panel, high humidity, or collected dust are not a problem, especially with 'fine wire' based touchscreens, because wire-based touchscreens have a very low 'parasitic' capacitance and there is a greater distance between neighboring conductors. Projected capacitance has "long term drift compensation" built in. This minimizes the effects of slowly changing environmental factors, such as the build-up of dirt and effects caused by changes in the weather.
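A minimal sketch of how such baseline tracking and thresholding might work is shown below. It is illustrative only: the class name, the threshold, drift rate and stuck-touch timeout are hypothetical values, not taken from any particular controller's firmware.

```python
# Illustrative sketch of drift-compensated touch detection for one capacitance input.
# All constants (threshold, drift rate, stuck-touch timeout) are hypothetical.

class TouchChannel:
    TOUCH_THRESHOLD = 40      # capacitance change (arbitrary counts) treated as a touch
    DRIFT_STEP = 1            # how fast the baseline creeps toward the current reading
    MAX_TOUCH_SAMPLES = 500   # samples after which a "touch" is treated as non-finger

    def __init__(self, first_reading):
        self.baseline = first_reading   # assumed "no touch" value at power-up
        self.touch_samples = 0

    def update(self, raw_reading):
        delta = raw_reading - self.baseline

        if delta > self.TOUCH_THRESHOLD:
            # Possible finger; if it persists too long, re-baseline (wet leaf, etc.).
            self.touch_samples += 1
            if self.touch_samples > self.MAX_TOUCH_SAMPLES:
                self.baseline = raw_reading
                self.touch_samples = 0
                return False
            return True

        # No touch: slowly track environmental drift (temperature, humidity, dirt).
        self.touch_samples = 0
        if raw_reading > self.baseline:
            self.baseline += self.DRIFT_STEP
        elif raw_reading < self.baseline:
            self.baseline -= self.DRIFT_STEP
        return False

# Example: slow environmental drift is absorbed, while a sudden jump is reported as a touch.
channel = TouchChannel(first_reading=1000)
for reading in [1001, 1002, 1003, 1060, 1060, 1004]:
    print(reading, channel.update(reading))
```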
Drops of rain have little effect, but flowing water, and especially flowing sea water (due to its electrical conductivity), can cause short-term issues. A high frequency (RF) signal, possibly from 100 kHz to 1 MHz, is imposed on one track at a time, and appropriate capacitance measurements are taken (as described later in this article). This process is repeated until all the tracks have been sampled. Conductive tracks are often transparent, one example being indium tin oxide (ITO), a transparent electrical conductor, but these conductive tracks can also be made of very fine, non-transparent metal mesh or individual fine wires. Projected capacitance touchscreen layout. Layout can vary depending on whether a single finger or multiple fingers are to be detected. In order to detect many fingers at the same time, some modern PCT touch screens are composed of thousands of discrete keys, each key being linked individually to the edge of the touch screen. This is enabled by etching an electrode grid pattern in a transparent conductive coating on one side of a sheet of glass or plastic. To reduce the number of input tracks, most PCT touch screens use multiplexing. This enables, for example, 100 (n) discrete key inputs to be reduced to 20 formula_1 when using x/y multiplexing, or 15 formula_2 if using bistate multiplexing or tri-state multiplexing. Capacitance multiplexing requires a grid of intersecting, but electrically isolated, conductive tracks. This can be achieved in many different ways. One way is by creating parallel conductive tracks on one side of a plastic film, and similar parallel tracks on the other side, orientated at 90 degrees to the first side. Another way is to etch tracks on separate sheets of glass, and join these sheets, with tracks at right angles to each other, face to face using a thin non-conductive, adhesive interlayer. A simple alternative is to embed an x/y or diagonal grid of very fine, insulation-coated conductive wires in a thin polyester film. This film can then be attached to one side of a sheet of glass, for operation through the glass. Touch resolution and the number of fingers that can be detected simultaneously are determined by the number of cross-over points (x * y). If x + y = n, then the maximum possible number of cross-overs is (n/2)². However, the number of cross-over points can be almost doubled by using a diagonal lattice layout (see Lattice/diagonal touchscreen diagrams) where, instead of x elements only ever crossing y elements, each conductive element crosses every other element. Under these circumstances the maximum number of cross-overs is (n² − n)/2. All the connector inputs come from just one edge. See the video (above) of raw data from a 32-input, diagonally wired touchscreen. Diagonal touchscreen arrays. In 2015, a new diagonal array, suited to a range of touchscreen and keypad technologies, was invented and patented by Ron Binstead of Binstead Designs Ltd. The diagram (left) shows how a group of 6 parallel conductive elements, folded back on themselves (at right angles), can create a triangular array of 15 unique intersections. An x/y array with 6 conductive elements would only have created a maximum of 9 unique intersections. Although conductive elements are normally connected to a terminal at one end of the conductor, the left diagram shows that these folded elements can be terminated at the fold, thereby forming split (or bifurcated) elements.
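The intersection counts quoted above can be checked with a few lines of arithmetic. The short sketch below (illustrative only) compares the maximum number of unique sensing intersections obtainable from a given number of I/O lines with conventional x/y multiplexing and with a diagonal layout in which every element crosses every other element:

```python
# Maximum unique sensing intersections obtainable from a given number of I/O lines.

def xy_intersections(x, y):
    # Conventional multiplexing: x drive lines crossing y sense lines.
    return x * y

def diagonal_intersections(n):
    # Diagonal lattice: every element crosses every other element once, i.e. (n^2 - n)/2.
    return n * (n - 1) // 2

print(xy_intersections(3, 3), diagonal_intersections(6))     # 9 vs 15  (6 conductive elements)
print(xy_intersections(16, 14), diagonal_intersections(30))  # 224 vs 435 (30 I/O lines)

# 100 discrete keys need 20 I/Os with x/y multiplexing (10 x 10 = 100),
# but only 15 I/Os with a diagonal/tri-state layout, since (15^2 - 15)/2 = 105 >= 100.
print(xy_intersections(10, 10), diagonal_intersections(15))  # 100 vs 105
```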
A square/rectangular diagonal array can be formed by double folding the parallel conductors - see the diagram on the right. Two connectors could be fitted at the folds, on opposite sides of the array. Alternatively, a single connector can be fitted at one end of the array, as shown in the diagram. Cylindrical touchscreens. Diagonal sensing elements can also be formed into a seamless cylindrical array. The diagram on the right shows a split, 9 I/O, bi-state cylindrical layout with 36 unique intersections, all the I/O lines being connected to one end of the cylinder (a standard x/y array would require horizontal I/O lines entering the side of the cylinder). Unsplit diagonal sensing elements can also be formed into cylinders, but 9 I/Os would only create 20 (5x4) unique intersections. These cylinders can be physically transformed into complex 3-dimensional shapes by a range of different methods, such as blow molding, vacuum forming, etc. A similar layout is possible for a complementary cylindrical LED display - see Charlieplexing#Diagonal arrays. Infinitely wide touchscreens. Touchscreen width is normally restricted by the resistance of the conductive material used. As the width of an x/y touchscreen is increased, eventually the resistance of the horizontal conductors becomes too great for the touchscreen to work properly. "Infinitely" wide touchscreens are possible, however, when using diagonal wiring, because track lengths are always 1.414 x the height of the touchscreen formula_0, and are independent of touchscreen width (see diagram on right). There are two types of PCT: mutual capacitance and self-capacitance. Mutual capacitance. An electrical signal, imposed on one electrical conductor, can be capacitively "sensed" by another electrical conductor that is in very close proximity, but electrically isolated—a feature that is exploited in mutual capacitance touchscreens. In a mutual capacitive sensor array, the "mutual" crossing of one electrical conductor with another electrical conductor, but with no direct electrical contact, forms a capacitor (see touchscreen#Construction). High frequency voltage pulses are applied to these conductors, one at a time. These pulses capacitively couple to every conductor that intersects the driven one. Bringing a finger or conductive stylus close to the surface of the sensor changes the local electrostatic field, which in turn reduces the capacitance between these intersecting conductors. Any significant change in the strength of the signal sensed is used to determine whether or not a finger is present at an intersection. The capacitance change at every intersection on the grid can be measured to accurately determine one or more touch locations. Mutual capacitance allows multi-touch operation where multiple fingers, palms or styli can be accurately tracked at the same time. The greater the number of intersections, the better the touch resolution and the more independent fingers that can be detected. This indicates a distinct advantage of diagonal wiring over standard x/y wiring, since diagonal wiring creates nearly twice the number of intersections. A 30 i/o, 16×14 x/y array, for example, would have 224 of these intersections / capacitors, and a 30 i/o diagonal lattice array could have 435 intersections. Each trace of an x/y mutual capacitance array only has one function: it is either an input or an output. The horizontal traces may be transmitters while the vertical traces are sensors, or vice versa.
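A minimal sketch of this fixed-function row/column scan is given below. The array size, the detection threshold and read_coupling() are hypothetical stand-ins for whatever measurement a real controller would perform in hardware:

```python
# Illustrative mutual-capacitance scan: drive each row in turn, sense every column.
# A finger near an intersection reduces the signal coupled into that intersection.

ROWS, COLS = 16, 14          # hypothetical 30 I/O x/y array (224 intersections)
TOUCH_DROP = 25              # signal drop (arbitrary counts) treated as a touch

def read_coupling(row, col):
    """Placeholder for the hardware measurement of the signal coupled from
    the driven row electrode into the sensed column electrode."""
    return 200               # constant here, so this demo reports no touches

baseline = [[read_coupling(r, c) for c in range(COLS)] for r in range(ROWS)]

def scan():
    touches = []
    for r in range(ROWS):                     # pulse one transmitter row at a time
        for c in range(COLS):                 # read every receiver column
            drop = baseline[r][c] - read_coupling(r, c)
            if drop > TOUCH_DROP:
                touches.append((r, c, drop))  # candidate touch at this intersection
    return touches

print(scan())                 # [] for the constant placeholder readings
```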
The traces in a diagonal mutual capacitance array, however, have to continuously change their functionality, "on the fly", by a process called bi-state multiplexing or tri-state multiplexing. Some of the time a trace will be an output, at other times it will be an input or "grounded". A "look-up" table can be used to simplify this process. By slightly distorting the conductors in an "n" I/O diagonal matrix, the equivalent of a (n-1) by (n/2) array is formed. After address decoding, this can then be processed as a standard x/y array. Self-capacitance. Self-capacitance sensors can have the same x/y or diagonal grid layout as mutual capacitance sensors, but with self-capacitance, all the traces usually operate independently, with no interaction between different traces. Along with several other methods, the extra capacitive load of a finger on a trace electrode may be measured by a current meter, or by the change in frequency of an RC oscillator. Traces are sensed one after the other until all the traces have been sensed. A finger may be detected anywhere along the whole length of a trace (even "off-screen"), but there is no indication of where the finger is along that trace. If, however, a finger is also detected along another, intersecting trace, then it is assumed that the finger position is at the intersection of the two traces. This allows for the speedy and accurate detection of a single finger. There is, however, ambiguity if more than one finger is to be detected. Two fingers may have four possible detection positions, only two of which are true, the other two being "ghosts". However, by selectively de-sensitizing any touch-points in contention, conflicting results are easily resolved. This enables self-capacitance to be used for two-touch operation. Although mutual capacitance is simpler for multi-touch, multi-touch can be achieved using self-capacitance. If the trace being sensed is intersected by another trace that has a "desensitizing" signal on it, then that intersection is insensitive to touch. By imposing such a "desensitizing" signal on all but one of the intersecting traces along the trace being sensed, just a short length of that trace remains sensitive to touch. By selecting a sequence of these sensing sections along the trace, it is possible to determine the accurate position of multiple fingers along that one trace. This process can then be repeated for all the other traces until the whole screen has been scanned. Self-capacitive touch screen layers are used on mobile phones such as the Sony Xperia Sola, the Samsung Galaxy S4, Galaxy Note 3, Galaxy S5, and Galaxy Alpha. Self-capacitance is far more sensitive than mutual capacitance and is mainly used for single touch, simple gesturing and proximity sensing, where the finger does not even have to touch the glass surface. Mutual capacitance is mainly used for multitouch applications. Many touchscreen manufacturers use both self and mutual capacitance technologies in the same product, thereby combining their individual benefits. Self-capacitance vs. mutual capacitance. When using a 16 x 14 x/y array to determine the position of a single finger by self-capacitance, 30 (i.e. 16 + 14) capacitance measurements are required. The finger is determined to be at the intersection of the strongest of the 16 x measurements and the strongest of the 14 y measurements. However, when using mutual capacitance, every intersection may have to be measured, making a total of 224 (i.e. 16 x 14) capacitance measurements.
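The difference in measurement counts, and the way a single finger is located from the row and column readings in a self-capacitance scan, can be illustrated with a short sketch. The array size and the capacitance deltas below are hypothetical:

```python
# Self-capacitance: one reading per row and per column (16 + 14 = 30 measurements);
# a single finger sits at the intersection of the strongest row and strongest column.
# Mutual capacitance: one reading per intersection (16 * 14 = 224 measurements).

ROWS, COLS = 16, 14
print("self-capacitance measurements:  ", ROWS + COLS)   # 30
print("mutual-capacitance measurements:", ROWS * COLS)   # 224

# Hypothetical self-capacitance deltas for a finger near row 5, column 9.
row_delta = [3, 2, 4, 5, 6, 48, 7, 3, 2, 1, 2, 3, 2, 1, 2, 3]
col_delta = [2, 1, 3, 2, 4, 3, 2, 5, 4, 51, 3, 2, 1, 2]

finger = (row_delta.index(max(row_delta)), col_delta.index(max(col_delta)))
print("single finger located at:", finger)                # (5, 9)

# With two fingers, two strong rows and two strong columns give four candidate
# intersections, only two of which are real; the other two are "ghosts" that
# must be resolved, e.g. by a few targeted mutual-capacitance measurements.
```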
In this example, therefore, mutual capacitance requires nearly seven times as many measurements as self-capacitance to detect the position of a finger. Many applications, such as selecting items from a list or menu, require just one finger, and self-capacitance is eminently suitable for such applications, due to the relatively low processing load, the simpler processing method, the ability to sense through thick dielectric materials or air, and the possibility of reducing the number of inputs required through repeat track layouts. For many other applications, however, such as expanding or contracting items on the screen and other gestures, two or more fingers need to be tracked. Two fingers can be detected and tracked accurately using self-capacitance, but this does involve a few extra calculations, and 4 extra capacitance measurements to eliminate the 2 "ghost" positions. One method is to undertake a full self-capacitance scan, to detect the 4 ambiguous finger positions, then use just 4 targeted mutual capacitance measurements to discover which two of the 4 positions are valid and which 2 are not. This gives a total of 34 measurements, still far fewer than the 224 required when using mutual capacitance alone. With 3 fingers, 9 disambiguations are required; with 4 fingers, 16 disambiguations, and so on. With more fingers, it may be decided that the process of disambiguation is too unwieldy. If sufficient processing power is available, the switch can then be made to full mutual capacitance scanning. Use of stylus on capacitive screens. Capacitive touchscreens do not necessarily need to be operated by a finger, but until recently the special styli required could be quite expensive to purchase. The cost of this technology has fallen greatly in recent years and capacitive styli are now widely available for a nominal charge, and often given away free with mobile accessories. These consist of an electrically conductive shaft with a soft conductive rubber tip, thereby resistively connecting the fingers to the tip of the stylus. Infrared grid. An infrared touchscreen uses an array of X-Y infrared LED and photodetector pairs around the edges of the screen to detect a disruption in the pattern of LED beams. These LED beams cross each other in vertical and horizontal patterns. This helps the sensors pick up the exact location of the touch. A major benefit of such a system is that it can detect essentially any opaque object, including a finger, gloved finger, stylus or pen. It is generally used in outdoor applications and POS systems that cannot rely on a conductor (such as a bare finger) to activate the touchscreen. Unlike capacitive touchscreens, infrared touchscreens do not require any patterning on the glass, which increases the durability and optical clarity of the overall system. Infrared touchscreens are sensitive to dirt and dust that can interfere with the infrared beams, and suffer from parallax on curved surfaces and from accidental presses when the user hovers a finger over the screen while searching for the item to be selected. Infrared acrylic projection. A translucent acrylic sheet is used as a rear-projection screen to display information. The edges of the acrylic sheet are illuminated by infrared LEDs, and infrared cameras are focused on the back of the sheet. Objects placed on the sheet are detectable by the cameras.
When the sheet is touched by the user, frustrated total internal reflection results in leakage of infrared light which peaks at the points of maximum pressure, indicating the user's touch location. Microsoft's PixelSense tablets use this technology. Optical imaging. Optical touchscreens are a relatively modern development in touchscreen technology, in which two or more image sensors (such as CMOS sensors) are placed around the edges (mostly the corners) of the screen. Infrared backlights are placed in the sensor's field of view on the opposite side of the screen. A touch blocks some of the light from reaching the sensors, and the location and size of the touching object can be calculated (see visual hull). This technology is growing in popularity due to its scalability, versatility, and affordability for larger touchscreens. Dispersive signal technology. Introduced in 2002 by 3M, this system detects a touch by using sensors to measure the piezoelectricity in the glass. Complex algorithms interpret this information and provide the actual location of the touch. The technology is unaffected by dust and other outside elements, including scratches. Since there is no need for additional elements on screen, it also claims to provide excellent optical clarity. Any object can be used to generate touch events, including gloved fingers. A downside is that after the initial touch, the system cannot detect a motionless finger. However, for the same reason, resting objects do not disrupt touch recognition. Acoustic pulse recognition. The key to this technology is that a touch at any one position on the surface generates a sound wave in the substrate which then produces a unique combined signal as measured by three or more tiny transducers attached to the edges of the touchscreen. The digitized signal is compared to a list corresponding to every position on the surface, determining the touch location. A moving touch is tracked by rapid repetition of this process. Extraneous and ambient sounds are ignored since they do not match any stored sound profile. The technology differs from other sound-based technologies by using a simple look-up method rather than expensive signal-processing hardware. As with the dispersive signal technology system, a motionless finger cannot be detected after the initial touch. However, for the same reason, the touch recognition is not disrupted by any resting objects. The technology was created by SoundTouch Ltd in the early 2000s, as described by the patent family EP1852772, and introduced to the market by Tyco International's Elo division in 2006 as Acoustic Pulse Recognition. The touchscreen used by Elo is made of ordinary glass, giving good durability and optical clarity. The technology usually retains accuracy with scratches and dust on the screen. The technology is also well suited to displays that are physically larger. Construction. There are several principal ways to build a touchscreen. The key goals are to recognize one or more fingers touching a display, to interpret the command that this represents, and to communicate the command to the appropriate application. Multi-touch projected capacitance screens. A very simple, low-cost way to make a multi-touch projected capacitance touchscreen is to sandwich an x/y or diagonal matrix of fine, insulation-coated copper or tungsten wires between two layers of clear polyester film. This creates an array of proximity-sensing micro-capacitors.
One of these micro-capacitors every 10 to 15 mm is probably sufficient spacing if fingers are relatively widely spaced apart, but very high discrimination multi-touch may need a micro-capacitor every 5 or 6 mm. A similar system can be used for ultra-high resolution sensing, such as fingerprint sensing. Fingerprint sensors require a micro-capacitor spacing of about 44 to 50 microns. The touchscreens can be manufactured at home, using readily available tools and materials, or industrially. First, a "continuous-trace" wiring pattern is generated using a simple CAD system. The wire is threaded through a plotter pen and plotted directly, as one continuous wire, onto a thin sheet of adhesive-coated, clear polyester film (such as "window film"), using a standard, low cost x/y pen plotter. After plotting, the single wire is gently cut into individual sections with a sharp scalpel, taking care not to damage the film. A second identical polyester film is laminated over the first film. The resulting touchscreen film is then trimmed to shape, and a connector is retro-fitted. The end product is extremely flexible, being about 75 microns thick (about the thickness of a human hair). It can even be creased without loss of functionality. The film can be mounted on, or behind, non-conducting (or slightly conducting) surfaces. Usually, it is mounted behind a sheet of glass up to 12 mm thick (or more), for sensing through the glass. This method is suitable for a wide range of touchscreen sizes from very small to several meters wide - or even wider, if using a diagonally wired matrix. The end product is environmentally friendly as it uses recyclable polyester, and minute quantities of copper wire. The film could even have a second life as another product, such as drawing film, or wrapping film. Unlike some other touchscreen technologies, no complex processes or rare materials are used. For non-touchscreen applications, other plastics (e.g. vinyl or ABS) may be used. The film can be blow molded or heat formed into complex three dimensional shapes, such as bottles, globes or car dashboards. Alternatively, the wires can be embedded in thick plastic such as fiberglass or carbon fiber body panels. Single touch resistive touchscreens. In the resistive approach, which used to be the most popular technique, there are typically four layers. When a user touches the surface, the system records the change in the electric current that flows through the display. Dispersive signal. Dispersive signal technology measures the piezoelectric effect, the voltage generated when mechanical force is applied to a material, which occurs when a strengthened glass substrate is touched. Infrared. There are two infrared-based approaches. In one, an array of sensors detects a finger touching or almost touching the display, thereby interrupting infrared light beams projected over the screen. In the other, bottom-mounted infrared cameras record heat from screen touches. In each case, the system determines the intended command based on the controls showing on the screen at the time and the location of the touch. Development. The development of multi-touch screens facilitated the tracking of more than one finger on the screen; thus, operations that require more than one finger are possible. These devices also allow multiple users to interact with the touchscreen simultaneously.
With the growing use of touchscreens, the cost of touchscreen technology is routinely absorbed into the products that incorporate it and is nearly eliminated. Touchscreen technology has demonstrated reliability and is found in airplanes, automobiles, gaming consoles, machine control systems, appliances, and handheld display devices including cellphones; the touchscreen market for mobile devices was projected to produce US$5 billion by 2009. The ability to accurately point on the screen itself is also advancing with the emerging graphics tablet-screen hybrids. Polyvinylidene fluoride (PVDF) plays a major role in this innovation due to its strong piezoelectric properties, which allow the tablet to sense pressure, making such things as digital painting behave more like paper and pencil. TapSense, announced in October 2011, allows touchscreens to distinguish what part of the hand was used for input, such as the fingertip, knuckle and fingernail. This could be used in a variety of ways, for example, to copy and paste, to capitalize letters, to activate different drawing modes, etc. Ergonomics and usage. Accuracy. For touchscreens to be effective input devices, users must be able to accurately select targets and avoid accidental selection of adjacent targets. The design of touchscreen interfaces should reflect technical capabilities of the system, ergonomics, cognitive psychology and human physiology. Guidelines for touchscreen designs were first developed in the 2000s, based on early research and actual use of older systems, typically using infrared grids, which were highly dependent on the size of the user's fingers. These guidelines are less relevant for the bulk of modern touch devices which use capacitive or resistive touch technology. From the mid-2000s, makers of operating systems for smartphones have promulgated standards, but these vary between manufacturers, and allow for significant variation in size based on technology changes, so are unsuitable from a human factors perspective. Much more important is the accuracy humans have in selecting targets with their finger or a pen stylus. The accuracy of user selection varies by position on the screen: users are most accurate at the center, less so at the left and right edges, and least accurate at the top edge and especially the bottom edge. The R95 accuracy (the radius required for 95% target accuracy) is smallest at the center of the screen and largest in the lower corners. Users are subconsciously aware of this, and take more time to select targets which are smaller or at the edges or corners of the touchscreen. This user inaccuracy is a result of parallax, visual acuity and the speed of the feedback loop between the eyes and fingers. The precision of the human finger alone is much, much higher than this, so when assistive technologies such as on-screen magnifiers are provided, users can move their finger (once in contact with the screen) with precision as small as 0.1 mm (0.004 in). Hand position, digit used and switching. Users of handheld and portable touchscreen devices hold them in a variety of ways, and routinely change their method of holding and selection to suit the position and type of input. There are four basic types of handheld interaction, and use rates vary widely. While two-thumb tapping is encountered rarely (1–3%) for many general interactions, it is used for 41% of typing interaction. In addition, devices are often placed on surfaces (desks or tables) and tablets especially are used in stands.
The user may point, select or gesture in these cases with their finger or thumb, and vary the use of these methods. Combined with haptics. Touchscreens are often used with haptic response systems. A common example of this technology is the vibratory feedback provided when a button on the touchscreen is tapped. Haptics are used to improve the user's experience with touchscreens by providing simulated tactile feedback, and can be designed to react immediately, partly countering on-screen response latency. Research from the University of Glasgow (Brewster, Chohan, and Brown, 2007; and more recently Hogan) demonstrates that touchscreen users reduce input errors (by 20%), increase input speed (by 20%), and lower their cognitive load (by 40%) when touchscreens are combined with haptics or tactile feedback. On top of this, a study conducted in 2013 by Boston College explored the effects that a touchscreen's haptic stimulation had on triggering psychological ownership of a product. Their research concluded that a touchscreen's ability to incorporate high amounts of haptic involvement resulted in customers feeling a greater sense of endowment toward the products they were designing or buying. The study also reported that consumers using a touchscreen were willing to accept a higher price point for the items they were purchasing. Customer service. Touchscreen technology has become integrated into many aspects of the customer service industry in the 21st century. The restaurant industry is a good example of touchscreen implementation into this domain. Chain restaurants such as Taco Bell, Panera Bread, and McDonald's offer touchscreens as an option when customers are ordering items off the menu. While the addition of touchscreens is a development for this industry, customers may choose to bypass the touchscreen and order from a traditional cashier. To take this a step further, a restaurant in Bangalore has attempted to completely automate the ordering process. Customers sit down at a table embedded with touchscreens and order off an extensive menu. Once the order is placed, it is sent electronically to the kitchen. These types of touchscreens fit under the Point of Sale (POS) systems mentioned in the lead section. "Gorilla arm". Extended use of gestural interfaces without the ability of the user to rest their arm is referred to as "gorilla arm". It can result in fatigue, and even repetitive stress injury when routinely used in a work setting. Certain early pen-based interfaces required the operator to work in this position for much of the workday. Allowing the user to rest their hand or arm on the input device or a frame around it is a solution for this in many contexts. This phenomenon is often cited as an example of movements to be minimized by proper ergonomic design. Unsupported touchscreens are still fairly common in applications such as ATMs and data kiosks, but are not an issue as the typical user only engages for brief and widely spaced periods. Fingerprints. Touchscreens can suffer from the problem of fingerprints on the display. This can be mitigated by the use of materials with optical coatings designed to reduce the visible effects of fingerprint oils. Most modern smartphones have oleophobic coatings, which lessen the amount of oil residue. Another option is to install a matte-finish anti-glare screen protector, which creates a slightly roughened surface that does not easily retain smudges. Glove touch. Capacitive touchscreens rarely work when the user wears gloves.
The thickness of the glove and the material it is made of play a significant role in whether a touchscreen can register a touch. Some devices have a mode which increases the sensitivity of the touchscreen. This allows the touchscreen to be used more reliably with gloves, but can also result in unreliable and phantom inputs. However, some gloves, such as medical gloves, are thin enough for users to wear when using touchscreens; this is mostly relevant to medical technology and machines. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\left\\lceil H \\sqrt{2} \\right\\rfloor" }, { "math_id": 1, "text": " \\left\\lceil 2 \\sqrt{n} \\right\\rfloor" }, { "math_id": 2, "text": " \\left\\lceil 1.5 \\sqrt{n} \\right\\rfloor" } ]
https://en.wikipedia.org/wiki?curid=667206
66726751
1 Chronicles 11
First Book of Chronicles, chapter 11 1 Chronicles 11 is the eleventh chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had its final shape established in the late fifth or fourth century BCE. This chapter contains the accounts of David's installation as the king of Israel, the conquest of Jerusalem, and a list of David's heroes. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30). Text. This chapter was originally written in the Hebrew language. It is divided into 47 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century). Structure. 1 Chronicles 11 and 12 combine a 'variety of chronologically and geographically disparate lists' to establish the unity of "all Israel" (north and south), with their unanimous recognition of David's kingship. The outer framework consists of David's anointing at Hebron (1 Chronicles 11:1–3; 12:38–40) to enclose the lists of the warriors who attended the festivities (11:10–47; 12:23–38). The inner framework comprises the lists of David's forces while at Ziklag (12:1–7; 12:19–22) to enclose the warriors who joined him at "the stronghold" (12:8–18). David, king of Israel (11:1–3). The report concerning David's crowning in Hebron can be found in the books of Samuel, but the Chronicler also adds some notes. "Then all Israel gathered together to David at Hebron and said, "Behold, we are your bone and flesh."" "Also, in time past, even when Saul was king, you were the one who led Israel out and brought them in; and the Lord your God said to you, 'You shall shepherd My people Israel, and be ruler over My people Israel.'" Verse 2. This is the only place in the Chronicles where Saul is referred to as king. "So all the elders of Israel came to the king at Hebron, and David made a covenant with them at Hebron before the Lord. And they anointed David king over Israel, according to the word of the Lord by Samuel." David conquers Jerusalem (11:4–9). The section is a rework of the parallel report in 2 Samuel, with the removal of obscure and unclear terms and the insertion of unique details, such as the role of Joab in Jerusalem's capture. "Now David said, "Whoever attacks the Jebusites first shall be chief and captain." And Joab the son of Zeruiah went up first, and became chief." Verse 6. This verse contains a play on words: whoever attacks "first" (Hebrew: , "rishon") will be "chief" (Hebrew: , "rosh"); Joab went up "first" and became "chief", although he is not listed among David's mighty men. David's mighty men (11:10–47). Verses 10–41 conform with 2 Samuel 23:8–39 (with some spelling differences), whereas verses 42–47 are unique to the Chronicles. Without clear historical context, it is unclear whether the list refers to the period before or after David's accession to the throne.
This passage consists of three parts, similar to the list in 2 Samuel. The purpose of the list is to portray David as a 'divinely chosen leader' with strong support from various groups in northern and southern Israel. "Now these are the chiefs of David's mighty men, who gave him strong support in his kingdom, together with all Israel, to make him king, according to the word of the Lord concerning Israel." Verse 10. Here the Chronicler underlines that David's kingdom encompasses all Israel as a fulfillment of YHWH's pledge to Israel, although this promise is not directly cited. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=66726751
6672748
Causal model
Conceptual model in philosophy of science In metaphysics, a causal model (or structural causal model) is a conceptual model that describes the causal mechanisms of a system. Several types of causal notation may be used in the development of a causal model. Causal models can improve study designs by providing clear rules for deciding which independent variables need to be included/controlled for. They can allow some questions to be answered from existing observational data without the need for an interventional study such as a randomized controlled trial. Some interventional studies are inappropriate for ethical or practical reasons, meaning that without a causal model, some hypotheses cannot be tested. Causal models can help with the question of "external validity" (whether results from one study apply to unstudied populations). Causal models can allow data from multiple studies to be merged (in certain circumstances) to answer questions that cannot be answered by any individual data set. Causal models have found applications in signal processing, epidemiology and machine learning. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Definition. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Causal models are mathematical models representing causal relationships within an individual system or population. They facilitate inferences about causal relationships from statistical data. They can teach us a good deal about the epistemology of causation, and about the relationship between causation and probability. They have also been applied to topics of interest to philosophers, such as the logic of counterfactuals, decision theory, and the analysis of actual causation. Judea Pearl defines a causal model as an ordered triple formula_0, where U is a set of exogenous variables whose values are determined by factors outside the model; V is a set of endogenous variables whose values are determined by factors within the model; and E is a set of structural equations that express the value of each endogenous variable as a function of the values of the other variables in U and V. History. Aristotle defined a taxonomy of causality, including material, formal, efficient and final causes. Hume rejected Aristotle's taxonomy in favor of counterfactuals. At one point, he denied that objects have "powers" that make one a cause and another an effect. Later he adopted "if the first object had not been, the second had never existed" ("but-for" causation). In the late 19th century, the discipline of statistics began to form. After a years-long effort to identify causal rules for domains such as biological inheritance, Galton introduced the concept of mean regression (epitomized by the sophomore slump in sports), which later led him to the non-causal concept of correlation. As a positivist, Pearson expunged the notion of causality from much of science as an unprovable special case of association and introduced the correlation coefficient as the metric of association. He wrote, "Force as a cause of motion is exactly the same as a tree god as a cause of growth" and that causation was only a "fetish among the inscrutable arcana of modern science". Pearson founded "Biometrika" and the Biometrics Lab at University College London, which became the world leader in statistics. In 1908 Hardy and Weinberg solved the problem of trait stability that had led Galton to abandon causality, by resurrecting Mendelian inheritance.
In 1921 Wright's path analysis became the theoretical ancestor of causal modeling and causal graphs. He developed this approach while attempting to untangle the relative impacts of heredity, development and environment on guinea pig coat patterns. He backed up his then-heretical claims by showing how such analyses could explain the relationship between guinea pig birth weight, "in utero" time and litter size. Opposition to these ideas by prominent statisticians led them to be ignored for the following 40 years (except among animal breeders). Instead scientists relied on correlations, partly at the behest of Wright's critic (and leading statistician), Fisher. One exception was Burks, a student who in 1926 was the first to apply path diagrams to represent a mediating influence ("mediator") and to assert that holding a mediator constant induces errors. She may have invented path diagrams independently. In 1923, Neyman introduced the concept of a potential outcome, but his paper was not translated from Polish to English until 1990. In 1958 Cox warned that controlling for a variable Z is valid only if it is highly unlikely to be affected by independent variables. In the 1960s, Duncan, Blalock, Goldberger and others rediscovered path analysis. While reading Blalock's work on path diagrams, Duncan remembered a lecture by Ogburn twenty years earlier that mentioned a paper by Wright that in turn mentioned Burks. Sociologists originally called causal models structural equation modeling, but once it became a rote method, it lost its utility, leading some practitioners to reject any relationship to causality. Economists adopted the algebraic part of path analysis, calling it simultaneous equation modeling. However, economists still avoided attributing causal meaning to their equations. Sixty years after his first paper, Wright published a piece that recapitulated it, following Karlin et al.'s critique, which objected that it handled only linear relationships and that robust, model-free presentations of data were more revealing. In 1973 Lewis advocated replacing correlation with but-for causality (counterfactuals). He referred to humans' ability to envision alternative worlds in which a cause did or did not occur, and in which an effect appeared only following its cause. In 1974 Rubin introduced the notion of "potential outcomes" as a language for asking causal questions. In 1983 Cartwright proposed that any factor that is "causally relevant" to an effect be conditioned on, moving beyond simple probability as the only guide. In 1986 Baron and Kenny introduced principles for detecting and evaluating mediation in a system of linear equations. As of 2014 their paper was the 33rd most-cited of all time. Also in 1986, Greenland and Robins introduced the "exchangeability" approach to handling confounding by considering a counterfactual. They proposed assessing what would have happened to the treatment group if they had not received the treatment and comparing that outcome to that of the control group. If they matched, confounding was said to be absent. Ladder of causation. Pearl's causal metamodel involves a three-level abstraction he calls the ladder of causation. The lowest level, Association (seeing/observing), entails the sensing of regularities or patterns in the input data, expressed as correlations. The middle level, Intervention (doing), predicts the effects of deliberate actions, expressed as causal relationships.
The highest level, Counterfactuals (imagining), involves constructing a theory of (part of) the world that explains why specific actions have specific effects and what happens in the absence of such actions. Association. One object is associated with another if observing one changes the probability of observing the other. Example: shoppers who buy toothpaste are more likely to also buy dental floss. Mathematically: formula_1 or the probability of (purchasing) floss given (the purchase of) toothpaste. Associations can also be measured via computing the correlation of the two events. Associations have no causal implications. One event could cause the other, the reverse could be true, or both events could be caused by some third event (an unhappy hygienist shames the shopper into treating their mouth better). Intervention. This level asserts specific causal relationships between events. Causality is assessed by experimentally performing some action that affects one of the events. Example: after doubling the price of toothpaste, what would be the new probability of purchasing? Causality cannot be established by examining history (of price changes) because the price change may have been for some other reason that could itself affect the second event (a tariff that increases the price of both goods). Mathematically: formula_2 where "do" is an operator that signals the experimental intervention (doubling the price). The operator indicates performing the minimal change in the world necessary to create the intended effect, a "mini-surgery" on the model with as little change from reality as possible. Counterfactuals. The highest level, counterfactual, involves consideration of an alternate version of a past event, or what would happen under different circumstances for the same experimental unit. For example, what is the probability that, if a store had doubled the price of floss, the toothpaste-purchasing shopper would still have bought it? formula_3 Counterfactuals can indicate the existence of a causal relationship. Models that can answer counterfactuals allow precise interventions whose consequences can be predicted. At the extreme, such models are accepted as physical laws (as in the laws of physics, e.g., inertia, which says that if force is not applied to a stationary object, it will not move). Causality. Causality vs correlation. Statistics revolves around the analysis of relationships among multiple variables. Traditionally, these relationships are described as correlations, associations without any implied causal relationships. Causal models attempt to extend this framework by adding the notion of causal relationships, in which changes in one variable cause changes in others. Twentieth century definitions of causality relied purely on probabilities/associations. One event (formula_4) was said to cause another if it raises the probability of the other (formula_5). Mathematically this is expressed as: formula_6. Such definitions are inadequate because other relationships (e.g., a common cause for formula_4 and formula_5) can satisfy the condition. Causality is relevant to the second ladder step. Associations are on the first step and provide only evidence for the latter. A later definition attempted to address this ambiguity by conditioning on background factors. Mathematically: formula_7, where formula_8 is the set of background variables and formula_9 represents the values of those variables in a specific context.
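The gap between the first two rungs can be made concrete with a small simulation. The following sketch is illustrative only: the hidden common cause, the probabilities and the variable names are all invented for the example, and the do-operation is imitated simply by forcing the value of X in the generating code.

```python
# Toy simulation: a hidden common cause K drives both X and Y, so the
# association P(Y | X) exceeds P(Y) even though X has no effect on Y.
# Forcing X (imitating the do-operator) removes the apparent relationship.
import random

random.seed(0)
N = 200_000

def draw(do_x=None):
    k = random.random() < 0.5                                  # hidden common cause
    x = (random.random() < (0.8 if k else 0.2)) if do_x is None else do_x
    y = random.random() < (0.8 if k else 0.2)                  # Y depends on K only
    return x, y

obs = [draw() for _ in range(N)]
p_y = sum(y for _, y in obs) / N
p_y_given_x = sum(y for x, y in obs if x) / sum(x for x, _ in obs)
p_y_do_x = sum(y for _, y in (draw(do_x=True) for _ in range(N))) / N

print(round(p_y, 2), round(p_y_given_x, 2), round(p_y_do_x, 2))
# roughly 0.5, 0.68, 0.5: association without causation
```

Conditioning on the right background variable (here K) would remove the spurious association in this toy model, which is what the refined definition above tries to capture.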
However, the required set of background variables is indeterminate (multiple sets may increase the probability), as long as probability is the only criterion. Other attempts to define causality include Granger causality, a statistical hypothesis test that causality (in economics) can be assessed by measuring the ability to predict the future values of one time series using prior values of another time series. Types. A cause can be necessary, sufficient, contributory or some combination. Necessary. For "x" to be a necessary cause of "y", the presence of "y" must imply the prior occurrence of "x". The presence of "x", however, does not imply that "y" will occur. Necessary causes are also known as "but-for" causes, as in "y" would not have occurred but for the occurrence of "x". Sufficient causes. For "x" to be a sufficient cause of "y", the presence of "x" must imply the subsequent occurrence of "y". However, another cause "z" may independently cause "y". Thus the presence of "y" does not require the prior occurrence of "x". Contributory causes. For "x" to be a contributory cause of "y", the presence of "x" must increase the likelihood of "y". If the likelihood is 100%, then "x" is instead called sufficient. A contributory cause may also be necessary. Model. Causal diagram. A causal diagram is a directed graph that displays causal relationships between variables in a causal model. A causal diagram includes a set of variables (or nodes). Each node is connected by an arrow to one or more other nodes upon which it has a causal influence. An arrowhead delineates the direction of causality, e.g., an arrow connecting variables formula_10 and formula_11 with the arrowhead at formula_11 indicates that a change in formula_10 causes a change in formula_11 (with an associated probability). A "path" is a traversal of the graph between two nodes following causal arrows. Causal diagrams include causal loop diagrams, directed acyclic graphs, and Ishikawa diagrams. Causal diagrams are independent of the quantitative probabilities that inform them. Changes to those probabilities (e.g., due to technological improvements) do not require changes to the model. Model elements. Causal models have formal structures with elements with specific properties. Junction patterns. The three types of connections of three nodes are linear chains, branching forks and merging colliders. Chain. Chains are straight line connections with arrows pointing from cause to effect. In this model, formula_11 is a mediator in that it mediates the change that formula_10 would otherwise have on formula_12. formula_13 Fork. In forks, one cause has multiple effects. The two effects have a common cause. There exists a (non-causal) spurious correlation between formula_10 and formula_12 that can be eliminated by conditioning on formula_11 (for a specific value of formula_11). formula_14 "Conditioning on formula_11" means "given formula_11" (i.e., given a value of formula_11). An elaboration of a fork is the confounder: formula_15 In such models, formula_11 is a common cause of formula_10 and formula_12 (which also causes formula_10), making formula_11 the confounder. Collider. In colliders, multiple causes affect one outcome. Conditioning on formula_11 (for a specific value of formula_11) often reveals a non-causal negative correlation between formula_10 and formula_12. This negative correlation has been called collider bias and the "explain-away" effect as formula_11 explains away the correlation between formula_10 and formula_12. 
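The behaviour of forks and colliders under conditioning can likewise be illustrated with a short simulation. The structural equations and numbers below are invented for the example; conditioning is approximated by filtering the simulated samples on the value of the middle node (written B in the code).

```python
# Simulated fork (A <- B -> C) and collider (A -> B <- C): conditioning
# on B (approximated by filtering the samples) removes the spurious
# correlation in the fork and induces one in the collider.
import random

random.seed(1)

def corr(pairs):
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mc = sum(c for _, c in pairs) / n
    cov = sum((a - ma) * (c - mc) for a, c in pairs) / n
    va = sum((a - ma) ** 2 for a, _ in pairs) / n
    vc = sum((c - mc) ** 2 for _, c in pairs) / n
    return cov / (va * vc) ** 0.5

def fork():                                   # B causes both A and C
    b = random.gauss(0, 1)
    return b + random.gauss(0, 1), b, b + random.gauss(0, 1)

def collider():                               # A and C both cause B
    a, c = random.gauss(0, 1), random.gauss(0, 1)
    return a, a + c + random.gauss(0, 1), c

forks = [fork() for _ in range(100_000)]
colls = [collider() for _ in range(100_000)]

print(round(corr([(a, c) for a, b, c in forks]), 2))                  # ~0.5 spurious
print(round(corr([(a, c) for a, b, c in forks if abs(b) < 0.1]), 2))  # ~0.0
print(round(corr([(a, c) for a, b, c in colls]), 2))                  # ~0.0
print(round(corr([(a, c) for a, b, c in colls if b > 1.0]), 2))       # negative (collider bias)
```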
In a collider, the correlation can be positive in the case where contributions from both formula_10 and formula_12 are necessary to affect formula_11. formula_16 Node types. Mediator. A mediator node modifies the effect of other causes on an outcome (as opposed to simply affecting the outcome). For example, in the chain example above, formula_11 is a mediator, because it modifies the effect of formula_10 (an indirect cause of formula_12) on formula_12 (the outcome). Confounder. A confounder node affects multiple outcomes, creating a positive correlation among them. Instrumental variable. An "instrumental variable" is a variable that affects the outcome only through the variable under study and whose relationship with the outcome is not confounded. Regression coefficients can serve as estimates of the causal effect of an instrumental variable on an outcome as long as that effect is not confounded. In this way, instrumental variables allow causal factors to be quantified without data on confounders. For example, given the model: formula_17 formula_18 is an instrumental variable, because it has a path to the outcome formula_5 and is unconfounded, e.g., by formula_19. In the above example, if formula_18 and formula_4 take binary values, then the assumption that formula_20 does not occur is called "monotonicity". Refinements to the technique include creating an instrument by conditioning on another variable to block the paths between the instrument and the confounder, and combining multiple variables to form a single instrument. Mendelian randomization. Definition: Mendelian randomization uses measured variation in genes of known function to examine the causal effect of a modifiable exposure on disease in observational studies. Because genes vary randomly across populations, presence of a gene typically qualifies as an instrumental variable, implying that in many cases, causality can be quantified using regression on an observational study. Associations. Independence conditions. Independence conditions are rules for deciding whether two variables are independent of each other. Variables are independent if the values of one do not directly affect the values of the other. Multiple causal models can share independence conditions. For example, the models formula_13 and formula_14 have the same independence conditions, because conditioning on formula_11 leaves formula_10 and formula_12 independent. However, the two models do not have the same meaning and can be falsified based on data (that is, if observational data show an association between formula_10 and formula_12 after conditioning on formula_11, then both models are incorrect). Conversely, data cannot show which of these two models is correct, because they have the same independence conditions. Conditioning on a variable is a mechanism for conducting hypothetical experiments. Conditioning on a variable involves analyzing the values of other variables for a given value of the conditioned variable. In the first example, conditioning on formula_11 implies that observations for a given value of formula_11 should show no dependence between formula_10 and formula_12. If such a dependence exists, then the model is incorrect. Non-causal models cannot make such distinctions, because they do not make causal assertions. Confounder/deconfounder. An essential element of correlational study design is to identify potentially confounding influences on the variable under study, such as demographics. These variables are controlled for to eliminate those influences. However, the correct list of confounding variables cannot be determined "a priori".
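The way an instrument sidesteps an unobserved confounder can be sketched with the classic ratio (Wald) estimator. This is a generic illustration of the idea rather than a method described in this article; the coefficients, noise levels and variable names are all invented.

```python
# Sketch of the ratio (Wald) instrumental-variable estimate for the model
# Z -> X -> Y with an unobserved confounder U affecting both X and Y.
# The naive regression slope of Y on X is biased; cov(Z, Y) / cov(Z, X) is not.
import random

random.seed(2)
TRUE_EFFECT = 2.0
samples = []
for _ in range(100_000):
    u = random.gauss(0, 1)                        # unobserved confounder
    z = random.gauss(0, 1)                        # instrument
    x = z + u + random.gauss(0, 0.5)
    y = TRUE_EFFECT * x + 3.0 * u + random.gauss(0, 0.5)
    samples.append((z, x, y))

def cov(us, vs):
    mu, mv = sum(us) / len(us), sum(vs) / len(vs)
    return sum((a - mu) * (b - mv) for a, b in zip(us, vs)) / len(us)

zs, xs, ys = zip(*samples)
naive = cov(xs, ys) / cov(xs, xs)    # biased by U, comes out near 3.3
wald = cov(zs, ys) / cov(zs, xs)     # close to the true effect, 2.0
print(round(naive, 2), round(wald, 2))
```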
Because the correct list of confounders cannot be determined in advance, it is possible that a study may control for irrelevant variables or even (indirectly) the variable under study. Causal models offer a robust technique for identifying appropriate confounding variables. Formally, Z is a confounder if "Y is associated with Z via paths not going through X". These can often be determined using data collected for other studies. Mathematically, if formula_21 then X and Y are confounded (by some confounder variable Z). Earlier, allegedly incorrect definitions of the confounder have been proposed. For example, in the model: formula_22 Z matches one such definition, but is a mediator, not a confounder, and is an example of controlling for the outcome. In the model formula_23 B was traditionally considered to be a confounder, because it is associated with X and with Y but is not on a causal path nor is it a descendant of anything on a causal path. Controlling for B causes it to become a confounder. This is known as M-bias. Backdoor adjustment. For analysing the causal effect of X on Y in a causal model, all confounder variables must be addressed (deconfounding). To identify the set of confounders, (1) every noncausal path between X and Y must be blocked by this set; (2) without disrupting any causal paths; and (3) without creating any spurious paths. Definition: a backdoor path from variable X to Y is any path from X to Y that starts with an arrow pointing to X. Definition: Given an ordered pair of variables (X,Y) in a model, a set of confounder variables Z satisfies the backdoor criterion if (1) no confounder variable Z is a descendant of X and (2) all backdoor paths between X and Y are blocked by the set of confounders. If the backdoor criterion is satisfied for (X,Y), X and Y are deconfounded by the set of confounder variables. It is not necessary to control for any variables other than the confounders. The backdoor criterion is a sufficient but not necessary condition to find a set of variables Z to deconfound the analysis of the causal effect of X on Y. When the causal model is a plausible representation of reality and the backdoor criterion is satisfied, then partial regression coefficients can be used as (causal) path coefficients (for linear relationships). formula_24 Frontdoor adjustment. If the elements of a blocking path are all unobservable, the backdoor path is not calculable, but if all forward paths from formula_25 have elements formula_26 where no open paths connect formula_27, then formula_18, the set of all formula_26s, can measure formula_28. Effectively, there are conditions where formula_18 can act as a proxy for formula_4. Definition: a frontdoor path is a direct causal path for which data is available for all formula_29, formula_18 intercepts all directed paths formula_4 to formula_5, there are no unblocked paths from formula_18 to formula_5, and all backdoor paths from formula_18 to formula_5 are blocked by formula_4. The following converts a do expression into a do-free expression by conditioning on the variables along the front-door path. formula_30 Presuming data for these observable probabilities is available, the ultimate probability can be computed without an experiment, regardless of the existence of other confounding paths and without backdoor adjustment. Interventions. Queries. Queries are questions asked based on a specific model. They are generally answered via performing experiments (interventions). Interventions take the form of fixing the value of one variable in a model and observing the result.
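A numeric sketch of the backdoor adjustment formula above (formula_24) may help here. The probabilities below are invented, and the binary confounder Z is assumed to satisfy the backdoor criterion; the code simply evaluates the adjustment formula and contrasts it with the ordinary, confounded conditional probability.

```python
# Numeric sketch of the backdoor adjustment for a single binary confounder Z:
#   P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) * P(Z=z)
# All probabilities are invented for illustration.

p_z = {0: 0.7, 1: 0.3}                          # P(Z = z)
p_y1_given_xz = {                               # P(Y = 1 | X = x, Z = z)
    (0, 0): 0.10, (0, 1): 0.40,
    (1, 0): 0.30, (1, 1): 0.70,
}
p_x1_given_z = {0: 0.2, 1: 0.8}                 # P(X = 1 | Z = z)

def p_y1_do_x(x):
    """Backdoor-adjusted interventional probability of Y = 1."""
    return sum(p_y1_given_xz[(x, z)] * p_z[z] for z in p_z)

def p_y1_given_x(x):
    """Ordinary conditional probability, shown for contrast (confounded)."""
    p_x_given_z = {z: p_x1_given_z[z] if x else 1 - p_x1_given_z[z] for z in p_z}
    joint = sum(p_y1_given_xz[(x, z)] * p_x_given_z[z] * p_z[z] for z in p_z)
    marginal = sum(p_x_given_z[z] * p_z[z] for z in p_z)
    return joint / marginal

print(round(p_y1_do_x(1), 3), round(p_y1_given_x(1), 3))   # 0.42 versus ~0.553
```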
Mathematically, such interventional queries take the form (from the example above): formula_31 where the "do" operator indicates that the experiment explicitly modified the price of toothpaste. Graphically, this blocks any causal factors that would otherwise affect that variable. Diagrammatically, this erases all causal arrows pointing at the experimental variable. More complex queries are possible, in which the do operator is applied (the value is fixed) to multiple variables. Do calculus. The do calculus is the set of manipulations that are available to transform one expression into another, with the general goal of transforming expressions that contain the do operator into expressions that do not. Expressions that do not include the do operator can be estimated from observational data alone, without the need for an experimental intervention, which might be expensive, lengthy or even unethical (e.g., asking subjects to take up smoking). The set of rules is complete (it can be used to derive every true statement in this system). An algorithm can determine whether, for a given model, a solution is computable in polynomial time. Rules. The calculus includes three rules for the transformation of conditional probability expressions involving the do operator. Rule 1. Rule 1 permits the addition or deletion of observations: formula_32 in the case that the variable set Z blocks all paths from W to Y and all arrows leading into X have been deleted. Rule 2. Rule 2 permits the replacement of an intervention with an observation or vice versa: formula_33 in the case that Z satisfies the back-door criterion. Rule 3. Rule 3 permits the deletion or addition of interventions: formula_34 in the case where no causal paths connect X and Y. Extensions. The rules do not imply that any query can have its do operators removed. In those cases, it may be possible to substitute a variable that is subject to manipulation (e.g., diet) in place of one that is not (e.g., blood cholesterol), which can then be transformed to remove the do. Example: formula_35 Counterfactuals. Counterfactuals consider possibilities that are not found in data, such as whether a nonsmoker would have developed cancer had they instead been a heavy smoker. They are the highest step on Pearl's causality ladder. Potential outcome. Definition: A potential outcome for a variable Y is "the value Y would have taken for individual "u", had X been assigned the value x". Mathematically: formula_36 or formula_37. The potential outcome is defined at the level of the individual "u". The conventional approach to potential outcomes is data-, not model-driven, limiting its ability to untangle causal relationships. It treats causal questions as problems of missing data and gives incorrect answers to even standard scenarios. Causal inference. In the context of causal models, potential outcomes are interpreted causally, rather than statistically. The first law of causal inference states that the potential outcome formula_38 can be computed by modifying causal model M (by deleting arrows into X) and computing the outcome for some "x". Formally: formula_39 Conducting a counterfactual. Examining a counterfactual using a causal model involves three steps. The approach is valid regardless of the form of the model relationships, linear or otherwise. When the model relationships are fully specified, point values can be computed.
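For a fully specified linear model, this point computation is straightforward. The structural equations, coefficients and observed values below are invented for illustration; the three steps correspond to the abduction, action and prediction steps described next.

```python
# Counterfactual point value in a fully specified linear structural model
# (coefficients invented):  M = 2*X + u_m,  Y = 3*X + M + u_y.
# Observation: X = 1, M = 2.5, Y = 4.2.  Question: what would Y have been
# had M been set to 0 for this same unit?

x_obs, m_obs, y_obs = 1.0, 2.5, 4.2

# Step 1 (abduction): recover the exogenous terms consistent with the observation.
u_m = m_obs - 2 * x_obs            # 0.5
u_y = y_obs - 3 * x_obs - m_obs    # -1.3

# Step 2 (action): replace the equation for M with the intervention do(M = 0).
m_cf = 0.0

# Step 3 (prediction): recompute Y with the same exogenous terms.
y_cf = 3 * x_obs + m_cf + u_y
print(round(y_cf, 1))              # 1.7
```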
In other cases (e.g., when only probabilities are available) a probability-interval statement, such as non-smoker "x" would have a 10-20% chance of cancer, can be computed. Given the model: formula_40 the equations for calculating the values of A and C derived from regression analysis or another technique can be applied, substituting known values from an observation and fixing the value of other variables (the counterfactual). Abduct. Apply abductive reasoning (logical inference that uses observation to find the simplest/most likely explanation) to estimate "u", the proxy for the unobserved variables on the specific observation that supports the counterfactual. Compute the probability of "u" given the propositional evidence. Act. For a specific observation, use the do operator to establish the counterfactual (e.g., "m"=0), modifying the equations accordingly. Predict. Calculate the values of the output ("y") using the modified equations. Mediation. Direct and indirect (mediated) causes can only be distinguished via conducting counterfactuals. Understanding mediation requires holding the mediator constant while intervening on the direct cause. In the model formula_41 M mediates X's influence on Y, while X also has an unmediated effect on Y. Thus M is held constant, while do(X) is computed. The Mediation Fallacy instead involves conditioning on the mediator if the mediator and the outcome are confounded, as they are in the above model. For linear models, the indirect effect can be computed by taking the product of all the path coefficients along a mediated pathway. The total indirect effect is computed by the sum of the individual indirect effects. For linear models, mediation is indicated when the coefficients of an equation fitted without including the mediator vary significantly from an equation that includes it. Direct effect. In experiments on such a model, the controlled direct effect (CDE) is computed by forcing the value of the mediator M (do(M = 0)) and randomly assigning some subjects to each of the values of X (do(X=0), do(X=1), ...) and observing the resulting values of Y. formula_42 Each value of the mediator has a corresponding CDE. However, a better experiment is to compute the natural direct effect (NDE). This is the effect determined by leaving the relationship between X and M untouched while intervening on the relationship between X and Y. formula_43 For example, consider the direct effect of increasing dental hygienist visits (X) from every other year to every year, which encourages flossing (M). Gums (Y) get healthier, either because of the hygienist (direct) or the flossing (mediator/indirect). The experiment is to continue flossing while skipping the hygienist visit. Indirect effect. The indirect effect of X on Y is the "increase we would see in Y while holding X constant and increasing M to whatever value M would attain under a unit increase in X". Indirect effects cannot be "controlled" because the direct path cannot be disabled by holding another variable constant. The natural indirect effect (NIE) is the effect on gum health (Y) from flossing (M). The NIE is calculated as the sum, over the floss and no-floss cases, of the difference between the probability of flossing given the hygienist and without the hygienist, or: formula_44 The above NDE calculation includes counterfactual subscripts (formula_45). For nonlinear models, the seemingly obvious equivalence formula_46 does not apply because of anomalies such as threshold effects and binary values.
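In the linear case, by contrast, these quantities reduce to sums and products of path coefficients, as noted above. The following sketch uses invented coefficients and a noise-free model purely to make the arithmetic visible; it is not an estimation procedure.

```python
# Linear mediation model with invented coefficients and no noise terms,
# just to make the arithmetic of the effects visible:
#   M = a*X,   Y = c*X + b*M
# For linear models: NDE = c, NIE = a*b, total effect = c + a*b.
a, b, c = 0.5, 2.0, 3.0

def m(x):                # structural equation for the mediator
    return a * x

def y(x, m_val):         # structural equation for the outcome
    return c * x + b * m_val

total = y(1, m(1)) - y(0, m(0))   # 4.0
nde = y(1, m(0)) - y(0, m(0))     # 3.0: X changes, M kept at its X=0 value
nie = y(0, m(1)) - y(0, m(0))     # 1.0: M changes as if X=1, X itself kept at 0
print(total, nde, nie)
```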
The decomposition formula_47, however, works for all model relationships (linear and nonlinear). It allows the NDE to be calculated directly from observational data, without interventions or use of counterfactual subscripts. Transportability. Causal models provide a vehicle for integrating data across datasets, known as transport, even though the causal models (and the associated data) differ. For example, survey data can be merged with randomized controlled trial data. Transport offers a solution to the question of external validity, whether a study can be applied in a different context. Where two models match on all relevant variables and data from one model is known to be unbiased, data from one population can be used to draw conclusions about the other. In other cases, where data is known to be biased, reweighting can allow the dataset to be transported. In a third case, conclusions can be drawn from an incomplete dataset. In some cases, data from studies of multiple populations can be combined (via transportation) to allow conclusions about an unmeasured population. In some cases, combining estimates (e.g., P(W|X)) from multiple studies can increase the precision of a conclusion. Do-calculus provides a general criterion for transport: A target variable can be transformed into another expression via a series of do-operations that does not involve any "difference-producing" variables (those that distinguish the two populations). An analogous rule applies to studies that have relevantly different participants. Bayesian network. Any causal model can be implemented as a Bayesian network. Bayesian networks can be used to provide the inverse probability of an event (given an outcome, what are the probabilities of a specific cause). This requires preparation of a conditional probability table, showing all possible inputs and outcomes with their associated probabilities. For example, given a two-variable model of Disease and Test (for the disease), the conditional probability table lists the probability of each test result for each disease state. In one such table, when a patient does not have the disease, the probability of a positive test is 12%. While this is tractable for small problems, as the number of variables and their associated states increase, the probability table (and associated computation time) increases exponentially. Bayesian networks are used commercially in applications such as wireless data error correction and DNA analysis. Invariants/context. A different conceptualization of causality involves the notion of invariant relationships. In the case of identifying handwritten digits, digit shape controls meaning; thus shape and meaning are the invariants. Changing the shape changes the meaning. Other properties do not (e.g., color). This invariance should carry across datasets generated in different contexts (the non-invariant properties form the context). Rather than learning (assessing causality) using pooled data sets, learning on one and testing on another can help distinguish variant from invariant properties. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\langle U, V, E\\rangle" }, { "math_id": 1, "text": "P (\\mathrm{floss} | \\mathrm{toothpaste}) " }, { "math_id": 2, "text": "P (\\mathrm{floss} | do(\\mathrm{toothpaste})) " }, { "math_id": 3, "text": "P (\\mathrm{floss} | \\mathrm{toothpaste}, 2*\\mathrm{price}) " }, { "math_id": 4, "text": "X" }, { "math_id": 5, "text": "Y" }, { "math_id": 6, "text": "P (Y| X) > P(Y) " }, { "math_id": 7, "text": "P (Y | X, K = k) > P(Y|K=k) " }, { "math_id": 8, "text": "K" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "A" }, { "math_id": 11, "text": "B" }, { "math_id": 12, "text": "C" }, { "math_id": 13, "text": "A \\rightarrow B \\rightarrow C" }, { "math_id": 14, "text": "A \\leftarrow B \\rightarrow C" }, { "math_id": 15, "text": "A \\leftarrow B \\rightarrow C \\rightarrow A " }, { "math_id": 16, "text": "A \\rightarrow B \\leftarrow C" }, { "math_id": 17, "text": "Z \\rightarrow X \\rightarrow Y \\leftarrow U \\rightarrow X" }, { "math_id": 18, "text": "Z" }, { "math_id": 19, "text": "U" }, { "math_id": 20, "text": "Z = 0, X = 1" }, { "math_id": 21, "text": "P(Y|X) \\ne P(Y|do(X))" }, { "math_id": 22, "text": "X \\rightarrow Z \\rightarrow Y" }, { "math_id": 23, "text": "X \\leftarrow A \\rightarrow B \\leftarrow C \\rightarrow Y" }, { "math_id": 24, "text": "P(Y|do(X)) = \\textstyle \\sum_{z} \\displaystyle P(Y|X, Z=z) P(Z=z)" }, { "math_id": 25, "text": "X\\to Y" }, { "math_id": 26, "text": "z" }, { "math_id": 27, "text": "z\\to Y" }, { "math_id": 28, "text": "P(Y|do(X))" }, { "math_id": 29, "text": "z\\in Z" }, { "math_id": 30, "text": "P(Y|do(X)) = \\textstyle \\sum_{z} \\left[\\displaystyle P(Z=z|X) \\textstyle \\sum_{x} \\displaystyle P(Y|X=x, Z=z) P(X=x)\\right]" }, { "math_id": 31, "text": "P (\\text{floss} \\vline do(\\text{toothpaste})) " }, { "math_id": 32, "text": "P(Y|do(X), Z, W) = P(Y|do(X),Z)" }, { "math_id": 33, "text": "P(Y|do(X),Z) = P(Y|X,Z)" }, { "math_id": 34, "text": "P(Y|do(X)) = P(Y)" }, { "math_id": 35, "text": "P(\\text{Heart disease} |do(\\text{blood cholesterol})) = P(\\text{Heart disease}|do(\\text{diet}))" }, { "math_id": 36, "text": "Y_{X = x}(u)" }, { "math_id": 37, "text": "Y_x(u)" }, { "math_id": 38, "text": "Y_X(u) " }, { "math_id": 39, "text": "Y_X(u) = Y_{Mx}(u)" }, { "math_id": 40, "text": "Y \\leftarrow X \\rightarrow M \\rightarrow Y \\leftarrow U " }, { "math_id": 41, "text": "Y \\leftarrow M \\leftarrow X \\rightarrow Y " }, { "math_id": 42, "text": "CDE(0) = P(Y=1|do(X=1), do(M=0)) - P(Y=1|do(X=0), do(M=0)) " }, { "math_id": 43, "text": "NDE = P(Y_{M=M0}=1|do(X=1)) - P(Y_{M=M0}=1|do(X=0)) " }, { "math_id": 44, "text": "NIE = \\sum_m[P(M=m|X=1)-P(M=m|X=0)] x x P(Y=1|X=0,M=m) " }, { "math_id": 45, "text": "Y_{M=M0} " }, { "math_id": 46, "text": "\\mathsf{Total \\ effect = Direct \\ effect + Indirect \\ effect} " }, { "math_id": 47, "text": "\\mathsf{Total \\ effect}(X=0 \\rightarrow X = 1) = NDE(X=0 \\rightarrow X = 1) - \\ NIE(X=1 \\rightarrow X=0) " } ]
https://en.wikipedia.org/wiki?curid=6672748
66730347
Video matting
Video matting is a technique for separating a video into two or more layers, usually foreground and background, and generating alpha mattes which determine the blending of the layers. The technique is very popular in video editing because it makes it possible to substitute the background or to process the layers individually. Video matting methods. Problem definition. When combining two images, an alpha matte, also known as a transparency map, is used. In the case of digital video, the alpha matte is a sequence of images. The matte can serve as a binary mask, defining which parts of the image are visible. In a more complicated case it enables smooth blending of the images; the alpha matte is then used as the transparency map of the top image. Film production has used alpha matting since the very creation of filmmaking; the mattes were originally drawn by hand. Nowadays, the process can be automated with computer algorithms. The basic matting problem is defined as follows: given an image formula_0, compute the foreground formula_1, background formula_2 and alpha matte formula_3, such that the equation formula_4 holds true. This equation has the trivial solution formula_5, formula_6, with formula_2 being any image. Thus, usually an additional trimap must be provided as input. The trimap specifies background, foreground, and uncertain pixels, which will be decomposed into foreground and background by the matting method. Several criteria determine the usefulness of a video matting method from a user perspective. Methods description. The first known video matting method was developed in 2001. The method utilizes optical flow for trimap propagation and a Bayesian image matting technique which is applied to each image separately. Video SnapCut, which was later incorporated into Adobe After Effects as the Roto Brush tool, was developed in 2009. The method makes use of local classifiers for binary image segmentation near the target object's boundary. The results of the segmentation are propagated to the next frame using optical flow, and an image matting algorithm is applied. A method from 2011 was also included in Adobe After Effects, as the Refine Edge tool. The propagation of the trimap with optical flow was enhanced with control points along the object edge. The method uses per-image matting, but temporal coherence was improved with a temporal filter. Finally, a deep learning method was developed for image matting in 2017. It outperforms most traditional methods. Benchmarking. Video matting is a rapidly evolving field with many practical applications. However, in order to compare the quality of the methods, they must be tested on a benchmark. The benchmark consists of a dataset with test sequences and a result comparison methodology. Currently there exists one major online video matting benchmark, which uses chroma keying and stop motion for ground truth estimation. After method submission, the rating for each method is derived from objective metrics. As objective metrics do not fully represent human perception of quality, a subjective survey is necessary to provide an adequate comparison. Practical use. Object cutout. Video matting methods are required in video editing software. The most common application is cutting out an object and transferring it into another scene. The tool allows users to cut out a moving object by interactively painting areas that must or must not belong to the object, or by specifying complete trimaps as input. Several software implementations exist.
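As a concrete illustration of the compositing equation formula_4 from the problem definition above, the following sketch blends two toy layers per pixel. NumPy is an assumed dependency, and the frame contents and matte values are invented; real mattes would come from a matting method and real frames from a video.

```python
# Per-pixel compositing I = A*F + (1 - A)*B on small toy "frames",
# with NumPy as an assumed dependency.
import numpy as np

h, w = 4, 4
foreground = np.full((h, w, 3), [255.0, 0.0, 0.0])   # red layer
background = np.full((h, w, 3), [0.0, 0.0, 255.0])   # blue layer

# Alpha matte: 1 = fully foreground, 0 = fully background, intermediate
# values blend the layers smoothly (e.g. along object edges or hair).
alpha = np.zeros((h, w))
alpha[:, :2] = 1.0
alpha[:, 2] = 0.5

composite = alpha[..., None] * foreground + (1 - alpha[..., None]) * background
print(composite[0])   # left: pure red, third column: half red / half blue, right: pure blue
```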
For example, time-of-flight cameras have been explored in real-time matting systems. Background replacement. Another application of video matting is background matting, which is very popular in online video calls. A Zoom plugin has been developed, and Skype announced Background Replace in June 2020. Video matting methods also allow video effects to be applied only to the background or the foreground. 3D video editing. Video matting is crucial in 2D to 3D conversion, where the alpha matte is used to correctly process transparent objects. It is also employed in stereo to multiview conversion. Video completion. Closely related to matting is video completion after the removal of an object from the video. While matting is used to separate the video into several layers, completion fills the gaps left by removing one of the layers with plausible content from the video. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "I" }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "I=AF+(1-A)B" }, { "math_id": 5, "text": "A=1" }, { "math_id": 6, "text": "F=I" } ]
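To illustrate the compositing equation formula_4 defined above, the following is a minimal sketch of per-pixel alpha blending. It is illustrative only: the class and method names are hypothetical and not taken from any of the video matting tools mentioned in the article, and frames are simplified to flat grayscale arrays.

// Minimal sketch of alpha compositing: I = A*F + (1 - A)*B, applied per pixel.
// All names here (AlphaCompositeSketch, composite) are illustrative, not from a real library.
public class AlphaCompositeSketch {

    // f, b: foreground and background frames as grayscale values in [0, 255];
    // alpha: matte values in [0, 1]; all arrays have the same length (width * height).
    static int[] composite(int[] f, int[] b, double[] alpha) {
        int[] out = new int[f.length];
        for (int i = 0; i < f.length; i++) {
            double a = alpha[i];
            out[i] = (int) Math.round(a * f[i] + (1.0 - a) * b[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] fg = { 200, 200, 200 };
        int[] bg = { 10, 10, 10 };
        double[] a = { 1.0, 0.5, 0.0 }; // opaque, mixed ("uncertain"), transparent
        int[] img = composite(fg, bg, a);
        System.out.println(java.util.Arrays.toString(img)); // [200, 105, 10]
    }
}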
https://en.wikipedia.org/wiki?curid=66730347
6673051
Gnomon (figure)
Figure that, added to a given figure, makes a larger figure of the same shape In geometry, a gnomon is a plane figure formed by removing a similar parallelogram from a corner of a larger parallelogram; or, more generally, a figure that, added to a given figure, makes a larger figure of the same shape. Building figurate numbers. Figurate numbers were a concern of Pythagorean mathematics, and Pythagoras is credited with the notion that these numbers are generated from a "gnomon" or basic unit. The gnomon is the piece which needs to be added to a figurate number to transform it to the next bigger one. For example, the gnomon of the square number is the odd number, of the general form 2"n" + 1, "n" = 1, 2, 3, ... . The square of size 8 composed of gnomons looks like this: formula_0 To transform from the "n"-square (the square of size "n") to the ("n" + 1)-square, one adjoins 2"n" + 1 elements: one to the end of each row ("n" elements), one to the end of each column ("n" elements), and a single one to the corner. For example, when transforming the 7-square to the 8-square, we add 15 elements; these adjunctions are the 8s in the above figure. This gnomonic technique also provides a proof that the sum of the first "n" odd numbers is "n"²; the figure illustrates 1 + 3 + 5 + 7 + 9 + 11 + 13 + 15 = 64 = 8². Applying the same technique to a multiplication table gives the Nicomachus theorem, proving that each squared triangular number is a sum of cubes. Isosceles triangles. In an acute isosceles triangle, it is possible to draw a similar but smaller triangle, one of whose sides is the base of the original triangle. The gnomon of these two similar triangles is the triangle remaining when the smaller of the two similar isosceles triangles is removed from the larger one. The gnomon is itself isosceles if and only if the ratio of the sides to the base of the original isosceles triangle, and the ratio of the base to the sides of the gnomon, is the golden ratio, in which case the acute isosceles triangle is the golden triangle and its gnomon is the golden gnomon. Conversely, the acute golden triangle can be the gnomon of the obtuse golden triangle in an exceptional reciprocal exchange of roles. Metaphor and symbolism. A metaphor based around the geometry of a gnomon plays an important role in the literary analysis of James Joyce's "Dubliners", involving both a play on words between "paralysis" and "parallelogram", and the geometric meaning of a gnomon as something fragmentary, diminished from its completed shape. Gnomon shapes are also prominent in "Arithmetic Composition I", an abstract painting by Theo van Doesburg. There is also a very short geometric fairy tale illustrated by animations where gnomons play the role of invaders. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "~~~~~~~~\\begin{matrix}\n1&2&3&4&5&6&7&8\\\\\n2&2&3&4&5&6&7&8\\\\\n3&3&3&4&5&6&7&8\\\\\n4&4&4&4&5&6&7&8\\\\\n5&5&5&5&5&6&7&8\\\\\n6&6&6&6&6&6&7&8\\\\\n7&7&7&7&7&7&7&8\\\\\n8&8&8&8&8&8&8&8\n\\end{matrix}" } ]
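The gnomonic construction of square numbers described above can be checked with a short program. This is a hypothetical sketch (the class name and output format are the author's own, not from the article's sources) that adds the gnomon 2"n" + 1 at each step and compares the running total with "n"².

// Builds square figurate numbers by adding gnomons of size 2n + 1,
// illustrating that the sum of the first n odd numbers equals n^2.
public class GnomonSquares {
    public static void main(String[] args) {
        int square = 0;
        for (int n = 0; n < 8; n++) {
            int gnomon = 2 * n + 1; // elements added to go from the n-square to the (n+1)-square
            square += gnomon;
            System.out.printf("n=%d gnomon=%d square=%d (expected %d)%n",
                    n + 1, gnomon, square, (n + 1) * (n + 1));
        }
    }
}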
https://en.wikipedia.org/wiki?curid=6673051
66732110
Buffett indicator
Aggregate stock market valuation metric The Buffett indicator (or the Buffett metric, or the Market capitalization-to-GDP ratio) is a valuation multiple used to assess how expensive or cheap the aggregate stock market is at a given point in time. It was proposed as a metric by investor Warren Buffett in 2001, who called it "probably the best single measure of where valuations stand at any given moment", and its modern form compares the capitalization of the US Wilshire 5000 index to US GDP. It is widely followed by the financial media as a valuation measure for the US market in both its absolute and de-trended forms. The indicator set an all-time high during the so-called "everything bubble", crossing the 200% level in February 2021; a level that Buffett had warned, if crossed, was "playing with fire". History. On 10 December 2001, Buffett proposed the metric in a "Fortune" essay co-authored with journalist Carol Loomis. In the essay, Buffett presented a chart going back 80 years that showed the value of all "publicly traded securities" in the US as a percentage of "US GNP". Buffett said of the metric: "Still, it is probably the best single measure of where valuations stand at any given moment. And as you can see, nearly two years ago the ratio rose to an unprecedented level. That should have been a very strong warning signal". Buffett explained that for the annual return of US securities to materially exceed the annual growth of US GNP for a protracted period of time: "you need to have the line go straight off the top of the chart. That won't happen". Buffett finished the essay by outlining the levels he believed the metric showed favorable or poor times to invest: "For me, the message of that chart is this: If the percentage relationship falls to the 70% or 80% area, buying stocks is likely to work very well for you. If the ratio approaches 200%–as it did in 1999 and a part of 2000–you are playing with fire". Buffett's metric became known as the "Buffett Indicator", and has continued to receive widespread attention in the financial media, and in modern finance textbooks. In 2018, finance author Mark Hulbert, writing in the "Wall Street Journal", listed the Buffett indicator as one of his "Eight Best Predictors of the Long-Term Market". A study by two European academics published in May 2022 found the Buffett Indicator "explains a large fraction of ten-year return variation for the majority of countries outside the United States". The study examined 10-year periods in fourteen developed markets, in most cases with data starting in 1973. The Buffett Indicator forecasted an average of 83% of returns across all nations and periods, though the predictive value ranged from a low of 42% to as high as 93% depending on the specific nation. Accuracy was lower in nations with smaller stock markets. Theory. Buffett acknowledged that his metric was a simple one and thus had "limitations", however the underlying theoretical basis for the indicator, particularly in the US, is considered reasonable. For example, studies have shown a consistent and strong annual correlation between US GDP growth and US corporate profit growth, which has increased materially since the Great Recession of 2007–2009. GDP also captures offsetting effects: where a given industry's margins increase materially for a period through reduced wages and costs, those reductions dampen margins in other industries.
The same studies show a poor annual correlation between US GDP growth and US equity returns, underlining Buffett's belief that when equity prices get ahead of corporate profits (via the GNP/GDP proxy), poor returns will follow. The indicator has also been advocated for its ability to reduce the effects of "aggressive accounting" or "adjusted profits", which distort the value of corporate profits in the price–earnings ratio or EV/EBITDA ratio metrics, and for not being affected by share buybacks (which don't affect aggregate corporate profits). The Buffett indicator has been calculated for most international stock markets; however, caveats apply, as other markets can have less stable compositions of listed corporations (e.g. the Saudi Arabia metric was materially impacted by the 2018 listing of Aramco), or a significantly higher/lower composition of private vs public firms (e.g. Germany vs. Switzerland), and therefore comparisons "across" international markets using the indicator as a "comparative" measure of valuation are not appropriate. The Buffett indicator has also been calculated for industries (but also noting that it is not relevant for "cross industry" valuation comparison). Trending. There is evidence that the Buffett indicator has trended upwards over time, particularly post 1995, and the lows registered in 2009 would have registered as average readings from the 1950–1995 era. Reasons proposed include that GDP might not capture all the overseas profits of US multinationals (e.g. use of tax havens or tax structures by large US technology and life sciences multinationals), or that the profitability of US companies has structurally increased (e.g. due to increased concentration of technology companies), thus justifying a higher ratio; although that may also revert over time. Other commentators have highlighted that the metric's omission of corporate debt could also have an effect. Formula. Buffett's original chart used US GNP as the divisor, which captures the domestic and international activity of all US resident entities even if based abroad; however, many modern Buffett metrics use US GDP instead. US GDP has historically been within 1 percent of US GNP, and is more readily available (other international markets have greater variation between GNP and GDP). Buffett's original chart used the Federal Reserve Economic Data (FRED) database from the Federal Reserve Bank of St. Louis for "corporate equities", as it went back for over 80 years; however, many modern Buffett metrics simply use the main S&P 500 index, or the broader Wilshire 5000 index instead. A common modern formula for the US market, which is expressed as a percentage, is: formula_0 The choice of how GDP is calculated (e.g. deflator) can materially affect the "absolute value" of the ratio; for example, the Buffett indicator calculated by the Federal Reserve Bank of St. Louis peaks at 118% in Q1 2000, the version calculated by Wilshire Associates peaks at 137% in Q1 2000, and the versions following Buffett's original technique peak at very close to 160% in Q1 2000. Records.
Using Buffett's original calculation basis in his 2001 article, but with GDP, the metric has had the following lows and highs from 1950 to February 2021: Using the more common modern Buffett indicator with the Wilshire 5000 and US GDP, the metric has had the following lows and highs from 1970 to February 2021: De-trended data of Buffett's original calculation basis (see above) has had the following lows and highs from 1950 to February 2021 (expressed as a % deviation from the mean): Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\operatorname{Buffett\\ indicator} = \\frac{\\operatorname{Wilshire\\ 5000\\ capitalization}}{\\operatorname{US\\ GDP}}\\times 100" } ]
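A minimal sketch of the formula above, assuming the two inputs are already available as plain numbers; the figures in the example are placeholders for illustration, not actual market data.

// Computes the Buffett indicator as (total market capitalization / GDP) * 100.
public class BuffettIndicator {

    static double buffettIndicator(double marketCap, double gdp) {
        return marketCap / gdp * 100.0;
    }

    public static void main(String[] args) {
        // Placeholder values (in trillions of USD), for illustration only.
        double wilshire5000Cap = 40.0;
        double usGdp = 21.0;
        System.out.printf("Buffett indicator: %.1f%%%n", buffettIndicator(wilshire5000Cap, usGdp));
    }
}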
https://en.wikipedia.org/wiki?curid=66732110
6673551
Sample-continuous process
In mathematics, a sample-continuous process is a stochastic process whose sample paths are almost surely continuous functions. Definition. Let (Ω, Σ, P) be a probability space. Let "X" : "I" × Ω → "S" be a stochastic process, where the index set "I" and state space "S" are both topological spaces. Then the process "X" is called sample-continuous (or almost surely continuous, or simply continuous) if the map "X"("ω") : "I" → "S" is continuous as a function of topological spaces for P-almost all "ω" in "Ω". In many examples, the index set "I" is an interval of time, [0, "T"] or [0, +∞), and the state space "S" is the real line or "n"-dimensional Euclidean space R"n". Examples. The process "X" defined by formula_0 (a random walk that jumps by ±1 at each integer time and is constant in between) is "not" sample-continuous. In fact, it is surely discontinuous.
[ { "math_id": 0, "text": "\\begin{cases} X_{t} \\sim \\mathrm{Unif} (\\{X_{t-1} - 1, X_{t-1} + 1\\}), & t \\mbox{ an integer;} \\\\ X_{t} = X_{\\lfloor t \\rfloor}, & t \\mbox{ not an integer;} \\end{cases}" } ]
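As a concrete illustration of the example process formula_0 above, the following hypothetical sketch simulates one sample path: the value is constant between integer times and jumps by ±1 at each integer, so every path is discontinuous.

import java.util.Random;

// Simulates the piecewise-constant random walk from the example: at each integer
// time t the value jumps to X_{t-1} - 1 or X_{t-1} + 1 with equal probability,
// and X_t = X_{floor(t)} between integers, so every sample path has jumps.
public class NonSampleContinuousProcess {
    public static void main(String[] args) {
        Random rng = new Random(42);
        double x = 0.0;
        for (int t = 1; t <= 5; t++) {
            double before = x;                   // value held on [t-1, t)
            x += rng.nextBoolean() ? 1.0 : -1.0; // jump at time t
            System.out.printf("t=%d: left limit %.0f, value %.0f%n", t, before, x);
        }
    }
}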
https://en.wikipedia.org/wiki?curid=6673551
6673876
Law (stochastic processes)
In mathematics, the law of a stochastic process is the measure that the process induces on the collection of functions from the index set into the state space. The law encodes a lot of information about the process; in the case of a random walk, for example, the law is the probability distribution of the possible trajectories of the walk. Definition. Let (Ω, "F", P) be a probability space, "T" some index set, and ("S", Σ) a measurable space. Let "X" : "T" × Ω → "S" be a stochastic process (so the map formula_0 is an ("S", Σ)-measurable function for each "t" ∈ "T"). Let "S""T" denote the collection of all functions from "T" into "S". The process "X" (by way of currying) induces a function Φ"X" : Ω → "S""T", where formula_1 The law of the process "X" is then defined to be the pushforward measure formula_2 on "S""T". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X_{t} : \\Omega \\to S : \\omega \\mapsto X (t, \\omega)" }, { "math_id": 1, "text": "\\left( \\Phi_{X} (\\omega) \\right) (t) := X_{t} (\\omega)." }, { "math_id": 2, "text": "\\mathcal{L}_{X} := \\left( \\Phi_{X} \\right)_{*} ( \\mathbf{P} ) = \\mathbf P(\\Phi_X^{-1}[\\cdot])" } ]
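As a finite illustration of the definition above, the following hypothetical sketch approximates the law of a three-step ±1 random walk empirically, i.e. the pushforward distribution on the (finitely many) possible trajectories; each of the eight trajectories should appear with relative frequency close to 1/8.

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Empirically approximates the law of a 3-step +/-1 random walk: the pushforward
// of the underlying probability measure onto the space of trajectories.
public class LawOfRandomWalk {
    public static void main(String[] args) {
        Random rng = new Random(1);
        int samples = 100_000;
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < samples; i++) {
            int x = 0;
            StringBuilder traj = new StringBuilder("0");
            for (int t = 1; t <= 3; t++) {
                x += rng.nextBoolean() ? 1 : -1;
                traj.append(",").append(x);
            }
            counts.merge(traj.toString(), 1, Integer::sum);
        }
        // Each of the 2^3 = 8 equally likely trajectories should appear with frequency ~1/8.
        counts.forEach((path, c) ->
                System.out.printf("%s -> %.3f%n", path, c / (double) samples));
    }
}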
https://en.wikipedia.org/wiki?curid=6673876
6674542
Kolmogorov extension theorem
Consistent set of finite-dimensional distributions will define a stochastic process In mathematics, the Kolmogorov extension theorem (also known as Kolmogorov existence theorem, the Kolmogorov consistency theorem or the Daniell-Kolmogorov theorem) is a theorem that guarantees that a suitably "consistent" collection of finite-dimensional distributions will define a stochastic process. It is credited to the English mathematician Percy John Daniell and the Russian mathematician Andrey Nikolaevich Kolmogorov. Statement of the theorem. Let formula_0 denote some interval (thought of as "time"), and let formula_1. For each formula_2 and finite sequence of distinct times formula_3, let formula_4 be a probability measure on formula_5 Suppose that these measures satisfy two consistency conditions: 1. for all permutations formula_6 of formula_7 and measurable sets formula_8, formula_9 2. for all measurable sets formula_8,formula_10 formula_11 Then there exists a probability space formula_12 and a stochastic process formula_13 such that formula_14 for all formula_15, formula_2 and measurable sets formula_8, i.e. formula_16 has formula_4 as its finite-dimensional distributions relative to times formula_17. In fact, it is always possible to take as the underlying probability space formula_18 and to take for formula_16 the canonical process formula_19. Therefore, an alternative way of stating Kolmogorov's extension theorem is that, provided that the above consistency conditions hold, there exists a (unique) measure formula_20 on formula_21 with marginals formula_4 for any finite collection of times formula_17. Kolmogorov's extension theorem applies when formula_0 is uncountable, but the price to pay for this level of generality is that the measure formula_20 is only defined on the product σ-algebra of formula_21, which is not very rich. Explanation of the conditions. The two conditions required by the theorem are trivially satisfied by any stochastic process. For example, consider a real-valued discrete-time stochastic process formula_16. Then the probability formula_22 can be computed either as formula_23 or as formula_24. Hence, for the finite-dimensional distributions to be consistent, it must hold that formula_25. The first condition generalizes this statement to hold for any number of time points formula_26, and any control sets formula_27. Continuing the example, the second condition implies that formula_28. Also this is a trivial condition that will be satisfied by any consistent family of finite-dimensional distributions. Implications of the theorem. Since the two conditions are trivially satisfied for any stochastic process, the power of the theorem is that no other conditions are required: For any reasonable (i.e., consistent) family of finite-dimensional distributions, there exists a stochastic process with these distributions. The measure-theoretic approach to stochastic processes starts with a probability space and defines a stochastic process as a family of functions on this probability space. However, in many applications the starting point is really the finite-dimensional distributions of the stochastic process. The theorem says that provided the finite-dimensional distributions satisfy the obvious consistency requirements, one can always identify a probability space to match the purpose. In many situations, this means that one does not have to be explicit about what the probability space is. 
Many texts on stochastic processes do, indeed, assume a probability space but never state explicitly what it is. The theorem is used in one of the standard proofs of existence of a Brownian motion, by specifying the finite dimensional distributions to be Gaussian random variables, satisfying the consistency conditions above. As in most of the definitions of Brownian motion it is required that the sample paths are continuous almost surely, and one then uses the Kolmogorov continuity theorem to construct a continuous modification of the process constructed by the Kolmogorov extension theorem. General form of the theorem. The Kolmogorov extension theorem gives us conditions for a collection of measures on Euclidean spaces to be the finite-dimensional distributions of some formula_29-valued stochastic process, but the assumption that the state space be formula_29 is unnecessary. In fact, any collection of measurable spaces together with a collection of inner regular measures defined on the finite products of these spaces would suffice, provided that these measures satisfy a certain compatibility relation. The formal statement of the general theorem is as follows. Let formula_0 be any set. Let formula_30 be some collection of measurable spaces, and for each formula_31, let formula_32 be a Hausdorff topology on formula_33. For each finite subset formula_34, define formula_35. For subsets formula_36, let formula_37 denote the canonical projection map formula_38. For each finite subset formula_39, suppose we have a probability measure formula_40 on formula_41 which is inner regular with respect to the product topology (induced by the formula_42) on formula_43. Suppose also that this collection formula_44 of measures satisfies the following compatibility relation: for finite subsets formula_45, we have that formula_46 where formula_47 denotes the pushforward measure of formula_48 induced by the canonical projection map formula_49. Then there exists a unique probability measure formula_50 on formula_51 such that formula_52 for every finite subset formula_53. As a remark, all of the measures formula_54 are defined on the product sigma algebra on their respective spaces, which (as mentioned before) is rather coarse. The measure formula_50 may sometimes be extended appropriately to a larger sigma algebra, if there is additional structure involved. Note that the original statement of the theorem is just a special case of this theorem with formula_55 for all formula_56, and formula_57 for formula_58. The stochastic process would simply be the canonical process formula_59, defined on formula_60 with probability measure formula_61. The reason that the original statement of the theorem does not mention inner regularity of the measures formula_62 is that this would automatically follow, since Borel probability measures on Polish spaces are automatically Radon. This theorem has many far-reaching consequences; for example it can be used to prove the existence of the following, among others: History. According to John Aldrich, the theorem was independently discovered by British mathematician Percy John Daniell in the slightly different setting of integration theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "n \\in \\mathbb{N}" }, { "math_id": 2, "text": "k \\in \\mathbb{N}" }, { "math_id": 3, "text": "t_{1}, \\dots, t_{k} \\in T" }, { "math_id": 4, "text": "\\nu_{t_{1} \\dots t_{k}}" }, { "math_id": 5, "text": "(\\mathbb{R}^{n})^{k}." }, { "math_id": 6, "text": "\\pi" }, { "math_id": 7, "text": "\\{ 1, \\dots, k \\}" }, { "math_id": 8, "text": "F_{i} \\subseteq \\mathbb{R}^{n}" }, { "math_id": 9, "text": "\\nu_{t_{\\pi (1)} \\dots t_{\\pi (k)}} \\left( F_{\\pi (1)} \\times \\dots \\times F_{ \\pi(k)} \\right) = \\nu_{t_{1} \\dots t_{k}} \\left( F_{1} \\times \\dots \\times F_{k} \\right);" }, { "math_id": 10, "text": "m \\in \\mathbb{N}" }, { "math_id": 11, "text": "\\nu_{t_{1} \\dots t_{k}} \\left( F_{1} \\times \\dots \\times F_{k} \\right) = \\nu_{t_{1} \\dots t_{k}, t_{k + 1}, \\dots , t_{k+m}} \\left( F_{1} \\times \\dots \\times F_{k} \\times \\underbrace{\\mathbb{R}^{n} \\times \\dots \\times \\mathbb{R}^{n}}_{m} \\right)." }, { "math_id": 12, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P})" }, { "math_id": 13, "text": "X : T \\times \\Omega \\to \\mathbb{R}^{n}" }, { "math_id": 14, "text": "\\nu_{t_{1} \\dots t_{k}} \\left( F_{1} \\times \\dots \\times F_{k} \\right) = \\mathbb{P} \\left( X_{t_{1}} \\in F_{1}, \\dots, X_{t_{k}} \\in F_{k} \\right)" }, { "math_id": 15, "text": "t_{i} \\in T" }, { "math_id": 16, "text": "X" }, { "math_id": 17, "text": "t_{1} \\dots t_{k}" }, { "math_id": 18, "text": "\\Omega = (\\mathbb{R}^n)^T" }, { "math_id": 19, "text": "X\\colon (t,Y) \\mapsto Y_t" }, { "math_id": 20, "text": "\\nu" }, { "math_id": 21, "text": "(\\mathbb{R}^n)^T" }, { "math_id": 22, "text": "\\mathbb{P}(X_1 >0, X_2<0)" }, { "math_id": 23, "text": "\\nu_{1,2}( \\mathbb{R}_+ \\times \\mathbb{R}_-)" }, { "math_id": 24, "text": "\\nu_{2,1}( \\mathbb{R}_- \\times \\mathbb{R}_+)" }, { "math_id": 25, "text": "\\nu_{1,2}( \\mathbb{R}_+ \\times \\mathbb{R}_-) = \\nu_{2,1}( \\mathbb{R}_- \\times \\mathbb{R}_+)" }, { "math_id": 26, "text": "t_i" }, { "math_id": 27, "text": "F_i" }, { "math_id": 28, "text": "\\mathbb{P}(X_1>0) = \\mathbb{P}(X_1>0, X_2 \\in \\mathbb{R})" }, { "math_id": 29, "text": "\\mathbb{R}^{n}" }, { "math_id": 30, "text": " \\{ (\\Omega_t, \\mathcal{F}_t) \\}_{t \\in T} " }, { "math_id": 31, "text": " t \\in T " }, { "math_id": 32, "text": " \\tau_t" }, { "math_id": 33, "text": " \\Omega_t" }, { "math_id": 34, "text": "J \\subset T" }, { "math_id": 35, "text": "\\Omega_J := \\prod_{t\\in J} \\Omega_t" }, { "math_id": 36, "text": "I \\subset J \\subset T" }, { "math_id": 37, "text": "\\pi^J_I: \\Omega_J \\to \\Omega_I" }, { "math_id": 38, "text": " \\omega \\mapsto \\omega|_I " }, { "math_id": 39, "text": " F \\subset T" }, { "math_id": 40, "text": " \\mu_F " }, { "math_id": 41, "text": " \\Omega_F " }, { "math_id": 42, "text": "\\tau_t" }, { "math_id": 43, "text": "\\Omega_F " }, { "math_id": 44, "text": "\\{\\mu_F\\}" }, { "math_id": 45, "text": "F \\subset G \\subset T" }, { "math_id": 46, "text": "\\mu_F = (\\pi^G_F)_* \\mu_G" }, { "math_id": 47, "text": "(\\pi^G_F)_* \\mu_G" }, { "math_id": 48, "text": " \\mu_G" }, { "math_id": 49, "text": "\\pi^G_F" }, { "math_id": 50, "text": "\\mu" }, { "math_id": 51, "text": "\\Omega_T " }, { "math_id": 52, "text": "\\mu_F=(\\pi^T_F)_* \\mu" }, { "math_id": 53, "text": "F \\subset T" }, { "math_id": 54, "text": "\\mu_F,\\mu" }, { "math_id": 55, "text": "\\Omega_t = \\mathbb{R}^n " }, { "math_id": 56, "text": "t \\in T" }, { "math_id": 57, "text": " 
\\mu_{\\{t_1,...,t_k\\}}=\\nu_{t_1 \\dots t_k}" }, { "math_id": 58, "text": " t_1,...,t_k \\in T" }, { "math_id": 59, "text": " (\\pi_t)_{t \\in T}" }, { "math_id": 60, "text": "\\Omega=(\\mathbb{R}^n)^T" }, { "math_id": 61, "text": "P=\\mu" }, { "math_id": 62, "text": "\\nu_{t_1\\dots t_k}" } ]
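The second consistency condition can be checked numerically in a toy discrete setting. The sketch below is hypothetical (not part of any standard proof): it builds the joint distribution of the first two steps of a ±1 random walk and verifies that marginalizing out the second coordinate reproduces the one-dimensional distribution, mirroring the P(X1 > 0) = P(X1 > 0, X2 ∈ R) example discussed above.

import java.util.HashMap;
import java.util.Map;

// Checks the Kolmogorov consistency (marginalization) condition for a toy
// finite-dimensional family: the joint pmf of (X1, X2) for a +/-1 random walk,
// i.e. X1 in {-1, +1} with probability 1/2 each and X2 = X1 +/- 1 with probability 1/2 each.
public class ConsistencyCheck {
    public static void main(String[] args) {
        Map<String, Double> joint = new HashMap<>();
        for (int s1 : new int[] { -1, 1 }) {
            for (int s2 : new int[] { -1, 1 }) {
                int x1 = s1, x2 = s1 + s2;
                joint.merge(x1 + "," + x2, 0.25, Double::sum);
            }
        }
        // Marginalize out X2 and compare with the directly specified law of X1.
        Map<Integer, Double> marginal = new HashMap<>();
        joint.forEach((k, p) -> marginal.merge(Integer.parseInt(k.split(",")[0]), p, Double::sum));
        // Prints the marginal of X1: both -1 and +1 should have probability 0.5.
        System.out.println("nu_{1}(x1) recovered from nu_{1,2}: " + marginal);
    }
}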
https://en.wikipedia.org/wiki?curid=6674542
667479
Buffon's needle problem
Question in geometric probability In probability theory, Buffon's needle problem is a question first posed in the 18th century by Georges-Louis Leclerc, Comte de Buffon: Suppose we have a floor made of parallel strips of wood, each the same width, and we drop a needle onto the floor. What is the probability that the needle will lie across a line between two strips? Buffon's needle was the earliest problem in geometric probability to be solved; it can be solved using integral geometry. The solution for the sought probability p, in the case where the needle length l is not greater than the width t of the strips, is formula_0 This can be used to design a Monte Carlo method for approximating the number π, although that was not the original motivation for de Buffon's question. The seemingly unusual appearance of π in this expression occurs because the underlying probability distribution function for the needle orientation is rotationally symmetric. Solution. The problem in more mathematical terms is: Given a needle of length l dropped on a plane ruled with parallel lines t units apart, what is the probability that the needle will lie across a line upon landing? Let x be the distance from the center of the needle to the closest parallel line, and let θ be the acute angle between the needle and one of the parallel lines. The uniform probability density function (PDF) of x between 0 and "t"/2 is formula_1 Here, "x" = 0 represents a needle that is centered directly on a line, and "x" = "t"/2 represents a needle that is perfectly centered between two lines. The uniform PDF assumes the needle is equally likely to fall anywhere in this range, but could not fall outside of it. The uniform probability density function of θ between 0 and π/2 is formula_2 Here, "θ" = 0 represents a needle that is parallel to the marked lines, and "θ" = π/2 radians represents a needle that is perpendicular to the marked lines. Any angle within this range is assumed an equally likely outcome. The two random variables, x and θ, are independent, so the joint probability density function is the product formula_3 The needle crosses a line if formula_4 Now there are two cases. Case 1: Short needle ("l" ≤ "t"). Integrating the joint probability density function gives the probability that the needle will cross a line: formula_5 Case 2: Long needle ("l" > "t"). Suppose "l" > "t". In this case, integrating the joint probability density function, we obtain: formula_6 where "m"("θ") is the minimum between ("l"/2) sin "θ" and "t"/2. Thus, performing the above integration, we see that, when "l" > "t", the probability that the needle will cross at least one line is formula_7 or formula_8 In the second expression, the first term represents the probability of the angle of the needle being such that it will always cross at least one line. The right term represents the probability that the needle falls at an angle where its position matters, and it crosses the line. Alternatively, notice that whenever θ has a value such that "l" sin "θ" ≤ "t", that is, in the range 0 ≤ "θ" ≤ arcsin("t"/"l"), the probability of crossing is the same as in the short needle case. However if "l" sin "θ" > "t", that is, arcsin("t"/"l") < "θ" ≤ π/2, the probability is constant and is equal to 1. formula_9 Using elementary calculus. The following solution for the "short needle" case, while equivalent to the one above, has a more visual flavor, and avoids iterated integrals.
We can calculate the probability P as the product of two probabilities: "P" = "P"1 · "P"2, where "P"1 is the probability that the center of the needle falls close enough to a line for the needle to possibly cross it, and "P"2 is the probability that the needle actually crosses the line, given that the center is within reach. Looking at the illustration in the above section, it is apparent that the needle can cross a line if the center of the needle is within "l"/2 units of either side of the strip. Adding "l"/2 + "l"/2 from both sides and dividing by the whole width t, we obtain "P"1 = "l"/"t". Now, we assume that the center is within reach of the edge of the strip, and calculate "P"2. To simplify the calculation, we can assume that formula_10. Let x and θ be as in the illustration in this section. Placing a needle's center at x, the needle will cross the vertical axis if it falls within a range of 2"θ" radians, out of π radians of possible orientations. This represents the gray area to the left of x in the figure. For a fixed x, we can express θ as a function of x: "θ"("x") = arccos("x"). Now we can let x range from 0 to 1, and integrate: formula_11 Multiplying both results, we obtain "P" = "P"1 · "P"2 as above. There is an even more elegant and simple method of calculating the "short needle case". The end of the needle farthest away from any one of the two lines bordering its region must be located within a horizontal (perpendicular to the bordering lines) distance of "l" cos "θ" (where θ is the angle between the needle and the horizontal) from this line in order for the needle to cross it. The farthest this end of the needle can move away from this line horizontally in its region is t. The probability that the farthest end of the needle is located no more than a distance "l" cos "θ" away from the line (and thus that the needle crosses the line) out of the total distance t it can move in its region for 0 ≤ "θ" ≤ π/2 is given by formula_12 Without integrals. The short-needle problem can also be solved without any integration, in a way that explains the formula for p from the geometric fact that a circle of diameter t will cross the distance t strips always (i.e. with probability 1) in exactly two spots. This solution was given by Joseph-Émile Barbier in 1860 and is also referred to as "Buffon's noodle". Estimating π. In the first, simpler case above, the formula obtained for the probability P can be rearranged to formula_13 Thus, if we conduct an experiment to estimate P, we will also have an estimate for π. Suppose we drop n needles and find that h of those needles are crossing lines, so P is approximated by the fraction "h"/"n". This leads to the formula: formula_14 In 1901, Italian mathematician Mario Lazzarini performed Buffon's needle experiment. Tossing a needle 3,408 times, he obtained the well-known approximation 355/113 for π, accurate to six decimal places. Lazzarini's "experiment" is an example of confirmation bias, as it was set up to replicate the already well-known approximation of 355/113 (in fact, there is no better rational approximation with fewer than five digits in the numerator and denominator, see also Milü), yielding a more accurate "prediction" of π than would be expected from the number of trials, as follows: Lazzarini chose needles whose length was 5/6 of the width of the strips of wood. In this case, the probability that the needles will cross the lines is 5/(3π).
Thus if one were to drop n needles and get x crossings, one would estimate π as formula_15 So if Lazzarini was aiming for the result 355/113, he needed n and x such that formula_16 or equivalently, formula_17 To do this, one should pick n as a multiple of 213, because then 113"n"/213 is an integer; one then drops n needles, and hopes for exactly "x" successes. If one drops 213 needles and happens to get 113 successes, then one can triumphantly report an estimate of π accurate to six decimal places. If not, one can just do 213 more trials and hope for a total of 226 successes; if not, just repeat as necessary. Lazzarini performed 3,408 = 213 × 16 trials, making it seem likely that this is the strategy he used to obtain his "estimate". The above description of the strategy might even be considered charitable to Lazzarini. A statistical analysis of intermediate results he reported for fewer tosses leads to a very low probability of achieving such close agreement to the expected value all through the experiment. This makes it very possible that the "experiment" itself was never physically performed, but was based on numbers concocted from imagination to match statistical expectations, but too well, as it turns out. Dutch science journalist Hans van Maanen argues, however, that Lazzarini's article was never meant to be taken too seriously, as it would have been pretty obvious to the readers of the magazine (aimed at school teachers) that the apparatus that Lazzarini claimed to have built cannot possibly work as described. Laplace's extension (short needle case). Now consider the case where the plane contains two sets of parallel lines orthogonal to one another, creating a standard perpendicular grid. We aim to find the probability that the needle intersects at least one line on the grid. Let a and b be the sides of the rectangle that contains the midpoint of the needle whose length is l. Since this is the short needle case, "l" < "a", "l" < "b". Let ("x","y") mark the coordinates of the needle's midpoint and let φ mark the angle formed by the needle and the x-axis. Similar to the examples described above, we consider x, y, φ to be independent uniform random variables over the ranges 0 ≤ "x" ≤ "a", 0 ≤ "y" ≤ "b", −π/2 ≤ "φ" ≤ π/2. To solve such a problem, we first compute the probability that the needle crosses no lines, and then we take its complement. We compute this first probability by determining the volume of the domain where the needle crosses no lines and then divide that by the volume of all possibilities, V. We can easily see that "V" = π"ab". Now let "V"* be the volume of possibilities where the needle does not intersect any line. Developed by J.V. Uspensky, formula_18 where "F"("φ") is the region where the needle does not intersect any line given an angle φ. To determine "F"("φ"), let's first look at the case for the horizontal edges of the bounding rectangle. The total side length is a and the midpoint must not be within ("l"/2) cos "φ" of either endpoint of the edge. Thus, the total allowable length for no intersection is "a" − 2(("l"/2) cos "φ"), or simply just "a" − "l" cos "φ". Equivalently, for the vertical edges with length b, we have "b" ± "l" sin "φ". The ± accounts for the cases where φ is positive or negative.
Taking the positive case and then adding the absolute value signs in the final answer for generality, we get formula_19 Now we can compute the following integral: formula_20 Thus, the probability that the needle does not intersect any line is formula_21 And finally, if we want to calculate the probability, P, that the needle does intersect at least one line, we need to subtract the above result from 1 to compute its complement, yielding formula_22. Comparing estimators of π. As mentioned above, Buffon's needle experiment can be used to estimate π. This fact holds for Laplace's extension too, since π shows up in that answer as well. The following question then naturally arises, and was discussed by E.F. Schuster in 1974: Is Buffon's experiment or Laplace's a better estimator of the value of π? Since in Laplace's extension there are two sets of parallel lines, we compare N drops when there is a grid (Laplace), and 2"N" drops in Buffon's original experiment. Let A be the event that the needle intersects a horizontal line (parallel to the x-axis) formula_23 and let B be the event that the needle intersects a vertical line (parallel to the y-axis) formula_24 For simplicity in the algebraic formulation ahead, let "a" = "b" = "t" = 2"l", such that the original result in Buffon's problem is "P"("A") = "P"("B") = 1/π. Furthermore, let "N" = 100 drops. Now let us examine "P"("AB") for Laplace's result, that is, the probability the needle intersects both a horizontal and a vertical line. We know that formula_25 From the above section, "P"("A"′"B"′), or the probability that the needle intersects no lines, is formula_26 We can solve for "P"("A"′"B") and "P"("AB"′) using the following method: formula_27 Solving for "P"("A"′"B") and "P"("AB"′) and plugging that into the original definition for "P"("AB") a few lines above, we get formula_28 Although not necessary to the problem, it is now possible to see that "P"("A"′"B") = "P"("AB"′) = 3/(4π). With the values above, we are now able to determine which of these estimators is a better estimator for π. For the Laplace variant, let p̂ be the estimator for the probability that there is a line intersection such that formula_29. We are interested in the variance of such an estimator to understand the usefulness or efficiency of it. To compute the variance of p̂, we first compute Var("xn" + "yn") where formula_30 Solving for each part individually, formula_31 We know from the previous section that formula_32 yielding formula_33 Thus, formula_34 Returning to the original problem of this section, the variance of the estimator p̂ is formula_35 Now let us calculate the number of drops, M, needed to achieve the same variance as 100 drops over perpendicular lines. If "M" < 200 then we can conclude that the setup with only parallel lines is more efficient than the case with perpendicular lines. Conversely, if M is equal to or more than 200, then Buffon's experiment is equally or less efficient, respectively. Let q̂ be the estimator for Buffon's original experiment. Then, formula_36 and formula_37 Solving for M, formula_38 Thus, it takes 222 drops with only parallel lines to have the same certainty as 100 drops in Laplace's case. This isn't actually surprising because of the observation that Cov("xn","yn") < 0. Because xn and yn are negatively correlated random variables, they act to reduce the total variance in the estimator that is an average of the two of them. This method of variance reduction is known as the antithetic variates method. References.
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p=\\frac{2}{\\pi} \\cdot \\frac{l}{t}." }, { "math_id": 1, "text": "f_X(x)=\n\\begin{cases}\n\\dfrac{2}{t} &:\\ 0 \\le x \\le \\dfrac{t}{2}\\\\[4px]\n0 &: \\text{elsewhere.}\n\\end{cases}\n" }, { "math_id": 2, "text": "f_\\Theta(\\theta)=\n\\begin{cases}\n\\dfrac{2}{\\pi} &:\\ 0 \\le \\theta \\le \\dfrac{\\pi}{2}\\\\[4px]\n0 &: \\text{elsewhere.}\n\\end{cases}\n" }, { "math_id": 3, "text": "f_{X,\\Theta}(x,\\theta)=\n\\begin{cases}\n\\dfrac{4}{t\\pi} &:\\ 0 \\le x \\le \\dfrac{t}{2}, \\ 0 \\le \\theta \\le \\dfrac{\\pi}{2}\\\\[4px]\n0 &: \\text{elsewhere.}\n\\end{cases}\n" }, { "math_id": 4, "text": "x \\le \\frac {l}{2} \\sin\\theta." }, { "math_id": 5, "text": "P = \\int_{\\theta=0}^\\frac{\\pi}{2} \\int_{x=0}^{\\frac{l}{2}\\sin \\theta} \\frac{4}{t\\pi}\\,dx\\,d\\theta = \\frac{2 l}{t\\pi}." }, { "math_id": 6, "text": "\\int_{\\theta=0}^\\frac{\\pi}{2} \\int_{x=0}^{m(\\theta)} \\frac{4}{t\\pi}\\,dx\\,d\\theta," }, { "math_id": 7, "text": "P = \\frac{2l}{t\\pi} - \\frac{2}{t\\pi}\\left(\\sqrt{l^2 - t^2} + t\\arcsin \\frac{t}{l} \\right)+1" }, { "math_id": 8, "text": "P = \\frac{2}{\\pi} \\arccos \\frac{t}{l} + \\frac{2}{\\pi}\\cdot\\frac{l}{t} \\left(1 - \\sqrt{1 - \\left( \\frac t l \\right)^2 } \\right). " }, { "math_id": 9, "text": "\\begin{align}\nP &= \\left(\\int_{\\theta=0}^{\\arcsin \\frac{t}{l}} \\int_{x=0}^{\\frac{l}{2}\\sin\\theta} \\frac{4}{t\\pi}dxd{\\theta}\\right) + \\left(\\int_{\\arcsin\\frac{t}{l}}^\\frac{\\pi}{2} \\frac{2}{\\pi}d{\\theta}\\right) \\\\[6px]\n&= \\frac{2l}{t\\pi} - \\frac{2}{t\\pi}\\left(\\sqrt{l^2 - t^2} + t\\arcsin \\frac{t}{l}\\right) + 1\n\\end{align}" }, { "math_id": 10, "text": "l = 2" }, { "math_id": 11, "text": "\\begin{align}\nP_2 &= \\int_0^1 \\frac{2\\theta(x)}{\\pi}\\,dx \\\\[6px]\n&= \\frac{2}{\\pi}\\int_0^1 \\cos^{-1}(x)\\,dx \\\\[6px]\n&= \\frac{2}{\\pi}\\cdot 1 = \\frac{2}{\\pi}.\n\\end{align}" }, { "math_id": 12, "text": "\\begin{align}\nP &= \\frac{\\displaystyle\\int_0^\\frac{\\pi}{2} l\\cos\\theta \\, d\\theta}{\\displaystyle\\int_0^\\frac{\\pi}{2} t \\, d\\theta} \\\\[6px]\n&= \\frac l t\\cdot\\frac{\\displaystyle\\int_0^\\frac{\\pi}{2} \\cos\\theta \\, d\\theta}{\\displaystyle\\int_0^\\frac{\\pi}{2} d\\theta} \\\\[6px]\n&= \\frac l t\\cdot\\frac{1}{\\,\\frac{\\pi}{2}\\,} \\\\[6px]\n&=\\frac{2l}{t\\pi}.\n\\end{align}" }, { "math_id": 13, "text": "\\pi = \\frac{2l}{tP}." }, { "math_id": 14, "text": "\\pi \\approx \\frac{2l\\cdot n}{t h}." }, { "math_id": 15, "text": "\\pi \\approx \\frac 53 \\cdot \\frac nx" }, { "math_id": 16, "text": "\\frac{355}{113} = \\frac 53 \\cdot \\frac nx," }, { "math_id": 17, "text": "x = \\frac{113 n}{213}." }, { "math_id": 18, "text": "V^* = \\int_{-\\frac{\\pi}{2}}^{\\frac{\\pi}{2}} F(\\varphi)\\,d\\varphi" }, { "math_id": 19, "text": "F(\\varphi) =(a - l\\cos\\varphi)(b - l\\sin\\varphi) = ab - bl \\cos \\varphi - al|\\sin \\varphi|+ \\tfrac{1}{2}l^2|\\sin 2\\varphi|." }, { "math_id": 20, "text": "V^* = \\int_{-\\frac{\\pi}{2}}^{\\frac{\\pi}{2}} F(\\varphi)\\,d\\varphi = \\pi ab-2bl-2al+l^2 ." }, { "math_id": 21, "text": "\\frac{V^*}{V}= \\frac{\\pi ab-2bl-2al+l^2}{\\pi a b} = 1 - \\frac{2l(a+b)-l^2}{\\pi a b}." 
}, { "math_id": 22, "text": "P = \\frac{2l(a+b)-l^2}{\\pi a b}" }, { "math_id": 23, "text": " x = \n\\begin{cases}\n1 &: \\text{intersection occurs}\\\\\n0 &: \\text{no intersection}\n\\end{cases}\n" }, { "math_id": 24, "text": " y = \n\\begin{cases}\n1 &: \\text{intersection occurs}\\\\\n0 &: \\text{no intersection}\n\\end{cases}\n" }, { "math_id": 25, "text": "P(AB) = 1 - P(AB') - P(A'B) - P(A'B')." }, { "math_id": 26, "text": "P(A'B') = 1 - \\frac{2l(a+b)-l^2}{\\pi a b} = 1 - \\frac{2l(4l)-l^2}{4 l^2 \\pi} = 1 - \\frac{7}{4\\pi} ." }, { "math_id": 27, "text": "\\begin{align}\nP(A) &= \\frac{1}{\\pi} = P(AB) + P(AB') \\\\[4px]\nP(B) &= \\frac{1}{\\pi} = P(AB) + P(A'B).\n\\end{align}" }, { "math_id": 28, "text": " P(AB) = 1 - 2\\left(\\frac{1}{\\pi}-P(AB)\\right) - \\left(1 - \\frac{7}{4\\pi}\\right) = \\frac{1}{4\\pi}" }, { "math_id": 29, "text": "\\hat{p} = \\frac{1}{100}\\sum_{n=1}^{100}\\frac{x_n+y_n}{2}" }, { "math_id": 30, "text": "\\operatorname{Var}(x_n + y_n) = \\operatorname{Var}(x_n) +\\operatorname{Var}(y_n) + 2\\operatorname{Cov}(x_n,y_n)." }, { "math_id": 31, "text": "\\begin{align}\n\\operatorname{Var}(x_n) = \\operatorname{Var}(y_n) &= \\sum_{i = 1}^2 p_i\\bigl(x_i-\\mathbb{E}(x_i)\\bigr)^2 \\\\[6px]\n&= P(x_i=1)\\left(1-\\frac{1}{\\pi}\\right)^2 + P(x_i=0)\\left(0-\\frac{1}{\\pi}\\right)^2 \\\\[6px]\n&= \\frac{1}{\\pi}\\left(1-\\frac{1}{\\pi}\\right)^2 + \\left(1-\\frac{1}{\\pi}\\right)\\left(-\\frac{1}{\\pi}\\right)^2 = \\frac{1}{\\pi}\\left(1-\\frac{1}{\\pi}\\right). \\\\[12px]\n\n\\operatorname{Cov}(x_n,y_n) &= \\mathbb{E}(x_ny_n) - \\mathbb{E}(x_n)\\mathbb{E}(y_n)\n\\end{align}" }, { "math_id": 32, "text": "\\mathbb{E}(x_ny_n) = P(AB) = \\frac{1}{4\\pi}" }, { "math_id": 33, "text": "\\operatorname{Cov}(x_n,y_n) = \\frac{1}{4\\pi} - \\frac{1}{\\pi}\\cdot\\frac{1}{\\pi} = \\frac{\\pi-4}{4\\pi^2} < 0" }, { "math_id": 34, "text": " \\operatorname{Var}(x_n+y_n) = \\frac{1}{\\pi}\\left(1-\\frac{1}{\\pi}\\right) + \\frac{1}{\\pi}\\left(1-\\frac{1}{\\pi}\\right) + 2\\left(\\frac{\\pi-4}{4\\pi^2}\\right) = \\frac{5\\pi - 8}{2\\pi^2} " }, { "math_id": 35, "text": "\\operatorname{Var}(\\hat{p}) = \\frac{1}{200^2}(100)\\left(\\frac{5\\pi - 8}{2\\pi^2}\\right) \\approx 0.000\\,976." }, { "math_id": 36, "text": " \\hat{q} = \\frac{1}{M}\\sum_{m=1}^M x_m " }, { "math_id": 37, "text": " \\operatorname{Var}(\\hat{q}) = \\frac{1}{M^2}(M)\\operatorname{Var}(x_m) = \\frac{1}{M}\\cdot\\frac{1}{\\pi}\\left(1-\\frac{1}{\\pi}\\right) \\approx \\frac{0.217}{M}" }, { "math_id": 38, "text": " \\frac{0.217}{M} = 0.000\\,976 \\implies M \\approx 222." } ]
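The closed-form probabilities and the π-estimation procedure described above can be evaluated and simulated directly. The following sketch is illustrative (class and variable names are the author's own): it computes the exact crossing probability for both the short- and long-needle cases and then runs a Monte Carlo estimate of π using the estimator formula_14.

import java.util.Random;

// Evaluates the exact crossing probability and runs a Monte Carlo estimate of pi.
public class BuffonNeedle {

    // Exact crossing probability: 2l/(pi*t) for l <= t, and the arccos expression otherwise.
    static double crossingProbability(double l, double t) {
        if (l <= t) {
            return 2.0 * l / (Math.PI * t);
        }
        double r = t / l;
        return (2.0 / Math.PI) * Math.acos(r)
                + (2.0 / Math.PI) * (l / t) * (1.0 - Math.sqrt(1.0 - r * r));
    }

    public static void main(String[] args) {
        double l = 1.0, t = 2.0; // short needle: half the strip width
        System.out.println("exact p = " + crossingProbability(l, t)); // 1/pi ~ 0.3183

        // Monte Carlo: drop n needles, count crossings h, estimate pi as 2*l*n / (t*h).
        Random rng = new Random();
        int n = 10_000_000, h = 0;
        for (int i = 0; i < n; i++) {
            double x = rng.nextDouble() * (t / 2.0);           // center-to-nearest-line distance
            double theta = rng.nextDouble() * (Math.PI / 2.0); // acute angle to the lines
            if (x <= (l / 2.0) * Math.sin(theta)) {
                h++;
            }
        }
        System.out.println("pi estimate = " + (2.0 * l * n) / (t * h));
    }
}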
https://en.wikipedia.org/wiki?curid=667479
66755516
Bitwise trie with bitmap
A bitwise trie is a special form of trie where each node with its child-branches represents a bit sequence of one or more bits of a key. A bitwise trie with bitmap uses a bitmap to denote valid child branches. Tries and bitwise tries. A trie is a type of search tree where – unlike for example a B-tree – keys are not stored in the nodes but in the path to leaves. The key is distributed across the tree structure. In a "classic" trie, each node with its child-branches represents one symbol of the alphabet at one position (character) of a key. In bitwise tries, keys are treated as the bit sequence of some binary representation, and each node with its child-branches represents the value of a sub-sequence of this bit sequence, forming a binary tree (the sub-sequence contains only one bit) or an n-ary tree (the sub-sequence contains multiple bits). To give an example that explains the difference between "classic" tries and bitwise tries: For numbers as keys, the alphabet for a trie could consist of the symbols '0' .. '9' to represent digits of a number in the decimal system, and the nodes would have up to 10 possible children. There are multiple straightforward approaches to implement such a trie as a physical data structure. To state two: These approaches get worse for larger alphabets, if, for example, the key is a string of Unicode characters. Treating the key as a bit sequence allows a fixed cardinality per node. Bitwise trie with bitmap. Bagwell presented a time and space efficient solution for tries named Array Mapped Tree (AMT). The Hash array mapped trie (HAMT) is based on AMT. The compact trie node representation uses a bitmap to mark every valid branch – a bitwise trie with bitmap. The AMT uses eight 32-bit bitmaps per node to represent a 256-ary trie that is able to represent an 8-bit sequence per node. With 64-bit CPUs (64-bit computing), a variation is to have a 64-ary trie with only one 64-bit bitmap per node, which is able to represent a 6-bit sequence. To determine the index of the child pointer of a node for such a given 6-bit value, the number of preceding child pointers has to be calculated. It turns out that this can be implemented quite efficiently. Node traversal.

long bitMap = mem[nodeIdx];
long bitPos = 1L << value; // 6-bit-value
if ((bitMap & bitPos) == 0) return false; // not found
long childNodeIdx = mem[nodeIdx + 1 + Long.bitCount(bitMap & (bitPos - 1))];

The offset to find the index based on the current node index is the number of least significant bits set in the bitmap before the target position, plus one for the bitmap. The number of least significant bits set can be calculated efficiently with constant time complexity using simple bit operations and a CTPOP (count population) operation that determines the number of set bits, which is available as Long.bitCount() in Java. CTPOP itself can be implemented quite efficiently using a "bit-hack", and many modern CPUs even provide CTPOP as a dedicated instruction treated by compilers as an intrinsic function. CTPOP bit-hack implementation.

int bitCount(long x) {
    x -= ((x >>> 1) & 0x5555555555555555L);
    x = (x & 0x3333333333333333L) + ((x >>> 2) & 0x3333333333333333L);
    x = (x + (x >>> 4)) & 0x0f0f0f0f0f0f0f0fL;
    x += (x >>> 8);
    x += (x >>> 16);
    x += (x >>> 32);
    return (int) (x & 0x7f); // cast needed, as x is a long
}

How to use this principle for a universal index for database and Information retrieval applications is described in the literature. A specialized and simplified solution demonstrating these concepts is shown below for an implementation of a 32-bit integer set.
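Before the set implementation below, the 6-bit-per-node scheme can be made concrete with a small sketch. This helper is hypothetical and kept separate from the article's example code: it splits a 32-bit key into six 6-bit symbols (most significant first), each of which selects one of the 64 bitmap positions along the path from the root to a leaf.

// Splits a 32-bit key into six 6-bit symbols (most significant first).
// Each symbol addresses one of the 64 possible branches (bitmap bits) of a node.
public class KeySplitSketch {

    static int[] toSymbols(int key) {
        int[] symbols = new int[6];
        for (int i = 0; i < 6; i++) {
            symbols[i] = (key >>> (30 - 6 * i)) & 0x3F;
        }
        return symbols;
    }

    public static void main(String[] args) {
        // prints [0, 7, 22, 60, 52, 21] for the key 123456789
        System.out.println(java.util.Arrays.toString(toSymbols(123456789)));
    }
}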
A specialized and simplified solution demonstrating these concepts is shown below for an implementation of a 32-Bit-Integer set. Set implementation example. Physical data structure. In this example implementation for a bitwise trie with bitmap, nodes are placed in an array of long (64-bit) integers. A node is identified by the position (index) in that array. The index of the root node marks the root of the trie. Nodes are allocated from unused space in that array, extending the array if necessary. In addition, nodes, that are replaced, are collected in free lists and their space is recycled. Without this recycling, the data structure can be used to implement a persistent data structure by just keeping the previous root index and never overriding existing nodes but always creating a copy of a changed node. Leaf nodes are inlined: Instead of having a child-pointer to a leaf node, the bitmap of the leaf node itself is stored. public class BBTrieSet { long[] mem; long[] freeLists; long freeIdx; long root; long count; // maximum node size is 1 (bitMap) + 64 (child pointers or leaf values) + 1 as arrays are zero based final static int FREE_LIST_SIZE = 1+64+1; final static int KNOWN_EMPTY_NODE = 0; final static int KNOWN_DELETED_NODE = 1; final static int HEADER_SIZE = 2; // KNOWN_EMPTY_NODE, KNOWN_DELETED_NODE public BBTrieSet(int size) { mem = new long[size]; freeLists = new long[FREE_LIST_SIZE]; freeIdx = HEADER_SIZE; root = KNOWN_EMPTY_NODE; count = 0; private long allocate(int size) { long free = freeLists[size]; if (free != 0) { // requested size available in free list, re-link and return head freeLists[size] = mem[(int) free]; return free; else { // expansion required? if (freeIdx + size &gt; mem.length) { // increase by 25% and assure this is enough int currSize = mem.length; int newSize = currSize + Math.max(currSize / 4, size); mem = Arrays.copyOf(mem, newSize); long idx = freeIdx; freeIdx += size; return idx; private long allocateInsert(long nodeIdx, int size, int childIdx) { long newNodeRef = allocate(size + 1); int a = (int) newNodeRef; int b = (int) nodeIdx; // copy with gap for child for (int j = 0; j &lt; childIdx; j++) mem[a++] = mem[b++]; a++; // inserted for (int j = childIdx; j &lt; size; j++) mem[a++] = mem[b++]; deallocate(nodeIdx, size); return newNodeRef; private long allocateDelete(long nodeIdx, int size, int childIdx) { long newNodeRef = allocate(size - 1); // copy with child removed int a = (int) newNodeRef; int b = (int) nodeIdx; for (int j = 0; j &lt; childIdx; j++) mem[a++] = mem[b++]; b++; // removed for (int j = childIdx + 1; j &lt; size; j++) mem[a++] = mem[b++]; deallocate(nodeIdx, size); return newNodeRef; private void deallocate(long idx, int size) { if (idx == KNOWN_EMPTY_NODE) return; // keep our known empty node // add to head of free-list mem[(int) idx] = freeLists[size]; freeLists[size] = idx; private long createLeaf(byte[] key, int off, int len) { long newNodeRef = allocate(2); int a = (int) newNodeRef; mem[a++] = 1L « key[len - 2]; mem[a] = 1L « key[len - 1]; // value len -= 3; while (len &gt;= off) { long newParentNodeRef = allocate(2); a = (int) newParentNodeRef; mem[a++] = 1L « key[len--]; mem[a] = newNodeRef; newNodeRef = newParentNodeRef; return newNodeRef; private long insertChild(long nodeRef, long bitMap, long bitPos, int idx, long value) { int size = Long.bitCount(bitMap); long newNodeRef = allocateInsert(nodeRef, size + 1, idx + 1); mem[(int) newNodeRef] = bitMap | bitPos; mem[(int) newNodeRef+ 1 + idx] = value; return newNodeRef; private long 
removeChild(long nodeRef, long bitMap, long bitPos, int idx) { int size = Long.bitCount(bitMap); if (size &gt; 1) { // node still has other children / leaves long newNodeRef = allocateDelete(nodeRef, size + 1, idx + 1); mem[(int) newNodeRef] = bitMap &amp; ~bitPos; return newNodeRef; else { // node is now empty, remove it deallocate(nodeRef, size + 1); return KNOWN_DELETED_NODE; public long size() { return count; Set operations. Contains key. The get method tests, if a key is part of the set. The key is delivered as byte[] where each byte represents one 6-bit bit sequence of the key – so only 6 of the 8 bits per byte are used. public boolean get(byte[] key, int len) { if (root == KNOWN_EMPTY_NODE) return false; long nodeRef = root; int off = 0; for (;;) { long bitMap = mem[(int) nodeRef]; long bitPos = 1L « key[off++]; // mind the ++ if ((bitMap &amp; bitPos) == 0) return false; // not found long value = mem[(int) nodeRef + 1 + Long.bitCount(bitMap &amp; (bitPos - 1))]; if (off == len - 1) { // at leaf long bitPosLeaf = 1L « key[off]; return ((value &amp; bitPosLeaf) != 0); else { // child pointer nodeRef = value; Set (add) key. public boolean set(byte[] key, int len) { long nodeRef = set(root, key, 0, len); if (nodeRef != KNOWN_EMPTY_NODE) { // denotes change count++; root = nodeRef; return true; else return false; private long set(long nodeRef, byte[] key, int off, int len) { long bitMap = mem[(int) nodeRef]; long bitPos = 1L « key[off++]; // mind the ++ int idx = Long.bitCount(bitMap &amp; (bitPos - 1)); if ((bitMap &amp; bitPos) == 0) { // child not present yet long value; if (off == len - 1) value = 1L « key[off]; else value = createLeaf(key, off, len); return insertChild(nodeRef, bitMap, bitPos, idx, value); else { // child present long value = mem[(int) nodeRef + 1 + idx]; if (off == len - 1) { // at leaf long bitPosLeaf = 1L « key[off]; if ((value &amp; bitPosLeaf) == 0) { // update leaf bitMap mem[(int) nodeRef + 1 + idx] = value | bitPosLeaf; return nodeRef; else // key already present return KNOWN_EMPTY_NODE; else { // not at leaf, recursion long childNodeRef = value; long newChildNodeRef = set(childNodeRef, key, off, len); if (newChildNodeRef == KNOWN_EMPTY_NODE) return KNOWN_EMPTY_NODE; if (newChildNodeRef != childNodeRef) mem[(int) nodeRef + 1 + idx] = newChildNodeRef; return nodeRef; Clear (remove) key. 
public boolean clear(byte[] key, int len) { long nodeRef = clear(root, key, 0, len); if (nodeRef != KNOWN_EMPTY_NODE) { count--; if (nodeRef == KNOWN_DELETED_NODE) root = KNOWN_EMPTY_NODE; else root = nodeRef; return true; else return false; public long clear(long nodeRef, byte[] key, int off, int len) { if (root == KNOWN_EMPTY_NODE) return KNOWN_EMPTY_NODE; long bitMap = mem[(int) nodeRef]; long bitPos = 1L « key[off++]; // mind the ++ if ((bitMap &amp; bitPos) == 0) { // child not present, key not found return KNOWN_EMPTY_NODE; else { // child present int idx = Long.bitCount(bitMap &amp; (bitPos - 1)); long value = mem[(int) nodeRef + 1 + idx]; if (off == len - 1) { // at leaf long bitPosLeaf = 1L « key[off]; if ((value &amp; bitPosLeaf) == 0) // key not present return KNOWN_EMPTY_NODE; else { // clear bit in leaf value = value &amp; ~bitPosLeaf; if (value != 0) { // leaf still has some bits set, keep leaf but update mem[(int) nodeRef + 1 + idx] = value; return nodeRef; else return removeChild(nodeRef, bitMap, bitPosLeaf, idx); else { // not at leaf long childNodeRef = value; long newChildNodeRef = clear(childNodeRef, key, off, len); if (newChildNodeRef == KNOWN_EMPTY_NODE) return KNOWN_EMPTY_NODE; if (newChildNodeRef == KNOWN_DELETED_NODE) return removeChild(nodeRef, bitMap, bitPos, idx); if (newChildNodeRef != childNodeRef) mem[(int) nodeRef + 1 + idx] = newChildNodeRef; return nodeRef; Set operators. Set operators for intersection (and), union (or) and difference (minus) are feasible using a flyweight pattern as shown below. An interface represents physical nodes and "virtual" result nodes of an operator. Instances of this interface are created on demand during a trie traversal. Compound expressions, involving more than one operator, can be expressed directly by combining these operators as an operator can be used as argument (input) for another operator. Flyweight interface. public interface BBTrieNode { public long getBitMap(); public long getBitMapLeaf(long bitPos); public BBTrieNode getChildNode(long bitPos); public static class BBTrieNodeMem implements BBTrieNode { long nodeRef; long[] mem; BBTrieNodeMem child; public BBTrieNodeMem(long nodeRef, long[] mem) { this.nodeRef = nodeRef; this.mem = mem; @Override public long getBitMap() { return mem[(int) nodeRef]; @Override public long getBitMapLeaf(long bitPos) { int idx = Long.bitCount(getBitMap() &amp; (bitPos - 1)); long value = mem[(int) nodeRef + 1 + idx]; return value; @Override public BBTrieNode getChildNode(long bitPos) { int idx = Long.bitCount(getBitMap() &amp; (bitPos - 1)); long value = mem[(int) nodeRef + 1 + idx]; return new BBTrieNodeMem(value, mem); Intersection (AND). The intersection operator is very efficient as it automatically performs pruning even over subexpressions. Nonrelevant child nodes don't have to be accessed because the bitmap and a bitwise AND operation allows to determine the result upfront. For example, calculating formula_4, the subexpression formula_5 would not be materialized as intermediate result. 
public static class BBTrieAnd implements BBTrieNode { BBTrieNode nodeA; BBTrieNode nodeB; long bitMapA; long bitMapB; public BBTrieAnd(BBTrieNode nodeA, BBTrieNode nodeB) { this.nodeA = nodeA; this.nodeB = nodeB; bitMapA = nodeA.getBitMap(); bitMapB = nodeB.getBitMap(); public long getBitMap() { return bitMapA &amp; bitMapB; // this gives a nice optimization (pruning) public long getBitMapLeaf(long bitPos) { return nodeA.getBitMapLeaf(bitPos) &amp; nodeB.getBitMapLeaf(bitPos); public BBTrieNode getChildNode(long bitPos) { BBTrieNode childNodeA = nodeA.getChildNode(bitPos); BBTrieNode childNodeB = nodeB.getChildNode(bitPos); return new BBTrieAnd(childNodeA, childNodeB); Union (OR). public static class BBTrieOr implements BBTrieNode { BBTrieNode nodeA; BBTrieNode nodeB; long bitMapA; long bitMapB; public BBTrieOr(BBTrieNode nodeA, BBTrieNode nodeB) { this.nodeA = nodeA; this.nodeB = nodeB; bitMapA = nodeA.getBitMap(); bitMapB = nodeB.getBitMap(); public long getBitMap() { return bitMapA | bitMapB; public long getBitMapLeaf(long bitPos) { return nodeA.getBitMapLeaf(bitPos) | nodeB.getBitMapLeaf(bitPos); public BBTrieNode getChildNode(long bitPos) { if ((bitMapA &amp; bitPos) != 0) { BBTrieNode childNodeA = nodeA.getChildNode(bitPos); if ((bitMapB &amp; bitPos) != 0) { BBTrieNode childNodeB = nodeB.getChildNode(bitPos); return new BBTrieOr(childNodeA, childNodeB); else return childNodeA; // optimization, no more or-node required else { BBTrieNode childNodeB = nodeB.getChildNode(bitPos); return childNodeB; // optimization, no more or-node required Difference (MINUS). public static class BBTrieMinus implements BBTrieNode { BBTrieNode nodeA; BBTrieNode nodeB; long bitMapA; long bitMapB; public BBTrieMinus(BBTrieNode nodeA, BBTrieNode nodeB) { this.nodeA = nodeA; this.nodeB = nodeB; bitMapA = nodeA.getBitMap(); bitMapB = nodeB.getBitMap(); public long getBitMap() { return bitMapA; // bitMapB not useful here public long getBitMapLeaf(long bitPos) { long childBitMapA = nodeA.getBitMapLeaf(bitPos); if ((bitMapB &amp; bitPos) == 0) return childBitMapA; long childBitMapB = nodeB.getBitMapLeaf(bitPos); return childBitMapA &amp; ~childBitMapB; public BBTrieNode getChildNode(long bitPos) { BBTrieNode childNodeA = nodeA.getChildNode(bitPos); if ((bitMapB &amp; bitPos) == 0) return childNodeA; // optimization, no more minus-node required BBTrieNode childNodeB = nodeB.getChildNode(bitPos); return new BBTrieMinus(childNodeA, childNodeB); Ranges. Using the virtual node approach, range queries can be accomplished by intersecting a range generating virtual trie (see below) with another operator. So to determine which numbers of a set, say formula_6, lay in certain range, say [10..50], instead of iterating through the set and checking each entry, this is performed by evaluating formula_7. 
public static class BBTrieIntRange implements BBTrieNode {

    private long bitMap;
    private int a, b;
    private int x, y;
    private int level;

    public BBTrieIntRange(int a, int b) {
        this(a, b, 5);
    }

    private BBTrieIntRange(int a, int b, int level) {
        this.a = a;
        this.b = b;
        this.level = level;
        x = (int) (a >>> (level * 6)) & 0x3F;
        y = (int) (b >>> (level * 6)) & 0x3F;
        // bit hack for: for (int i = x; i <= y; i++) bitSet |= (1L << i);
        bitMap = 1L << y;
        bitMap |= bitMap - 1;
        bitMap &= ~((1L << x) - 1);
    }

    public long getBitMap() {
        return bitMap;
    }

    public long getBitMapLeaf(long bitPos) {
        // simple solution for readability (not that efficient as for each call a child is created again)
        return getChildNode(bitPos).getBitMap();
    }

    public BBTrieIntRange getChildNode(long bitPos) {
        int bitNum = Long.numberOfTrailingZeros(bitPos);
        if (x == y)
            return new BBTrieIntRange(a, b, level - 1);
        else if (bitNum == x)
            return new BBTrieIntRange(a, ~0x0, level - 1);
        else if (bitNum == y)
            return new BBTrieIntRange(0, b, level - 1);
        else
            return new BBTrieIntRange(0, ~0x0, level - 1);
    }
}

Usage example. The example shows the usage with 32-bit integers as keys.
public class BBTrieSetSample {

    public interface Visitor {
        public void visit(byte[] key, int keyLen);
    }

    public static void visit(BBTrieNode node, Visitor visitor, byte[] key, int off, int len) {
        long bitMap = node.getBitMap();
        if (bitMap == 0)
            return;

        long bits = bitMap;
        while (bits != 0) {
            long bitPos = bits & -bits;
            bits ^= bitPos; // get rightmost bit and clear it
            int bitNum = Long.numberOfTrailingZeros(bitPos);
            key[off] = (byte) bitNum;
            if (off == len - 2) {
                long value = node.getBitMapLeaf(bitPos);
                long bits2 = value;
                while (bits2 != 0) {
                    long bitPos2 = bits2 & -bits2;
                    bits2 ^= bitPos2;
                    int bitNum2 = Long.numberOfTrailingZeros(bitPos2);
                    key[off + 1] = (byte) bitNum2;
                    visitor.visit(key, off + 2);
                }
            } else {
                BBTrieNode childNode = node.getChildNode(bitPos);
                visit(childNode, visitor, key, off + 1, len);
            }
        }
    }

    public static int set6Int(byte[] b, int value) {
        int pos = 0;
        b[pos    ] = (byte) ((value >>> 30) & 0x3F);
        b[pos + 1] = (byte) ((value >>> 24) & 0x3F);
        b[pos + 2] = (byte) ((value >>> 18) & 0x3F);
        b[pos + 3] = (byte) ((value >>> 12) & 0x3F);
        b[pos + 4] = (byte) ((value >>> 6) & 0x3F);
        b[pos + 5] = (byte) (value & 0x3F);
        return 6;
    }

    public static int get6Int(byte[] b) {
        int pos = 0;
        return ((b[pos    ] & 0x3F) << 30)
             | ((b[pos + 1] & 0x3F) << 24)
             | ((b[pos + 2] & 0x3F) << 18)
             | ((b[pos + 3] & 0x3F) << 12)
             | ((b[pos + 4] & 0x3F) << 6)
             |  (b[pos + 5] & 0x3F);
    }

    public static void main(String[] args) {
        BBTrieSet trie1 = new BBTrieSet(100);
        BBTrieSet trie2 = new BBTrieSet(100);

        byte[] key = new byte[64];
        int len;
        final int KEY_LEN_INT = set6Int(key, 1); // 6

        int[] test = new int[] { 10, 20, 30, 40, 50, 30, 60, 61, 62, 63 };
        for (int i = 0; i < test.length; i++) {
            len = set6Int(key, test[i]);
            boolean change = trie1.set(key, len);
            System.out.println("set: " + test[i] + ", " + change);
        }
        System.out.println("trie1 size: " + trie1.size());

        BBTrieSetOps.visit(new BBTrieNodeMem(trie1.root, trie1.mem), new BBTrieSetOps.Visitor() {
            @Override
            public void visit(byte[] key, int keyLen) {
                System.out.println("Visitor: " + get6Int(key) + ", " + keyLen);
            }
        }, key, 0, KEY_LEN_INT);

        test = new int[] { 10, 25, 30, 40, 45, 50, 55, 60 };
        for (int i = 0; i < test.length; i++) {
            len = set6Int(key, test[i]);
            boolean contained = trie1.get(key, len);
            System.out.println("contained: " + test[i] + ", " + contained);
        }

        test = new int[] { 10, 20, 30, 40, 45, 50,
                           55, 60, 61, 62, 63 };
        for (int i = 0; i < test.length; i++) {
            len = set6Int(key, test[i]);
            boolean change = trie1.clear(key, len);
            System.out.println("cleared: " + test[i] + ", " + change);
        }

        BBTrieSetOps.visit(new BBTrieNodeMem(trie1.root, trie1.mem), new BBTrieSetOps.Visitor() {
            @Override
            public void visit(byte[] key, int keyLen) {
                System.out.print(get6Int(key) + " ");
            }
        }, key, 0, KEY_LEN_INT);
        System.out.println();
        System.out.println("trie1 size: " + trie1.size());

        for (int i = 0; i <= 50; i++) {
            len = set6Int(key, i);
            trie1.set(key, len);
            System.out.println("set: " + i);
        }
        System.out.println("trie1 size: " + trie1.size());

        for (int i = 25; i <= 75; i++) {
            len = set6Int(key, i);
            trie2.set(key, len);
            System.out.println("set: " + i);
        }
        System.out.println("trie2 size: " + trie2.size());

        // AND example
        BBTrieNode result = new BBTrieAnd(
                new BBTrieNodeMem(trie1.root, trie1.mem),
                new BBTrieNodeMem(trie2.root, trie2.mem));

        BBTrieSetOps.visit(result, new BBTrieSetOps.Visitor() {
            @Override
            public void visit(byte[] key, int keyLen) {
                System.out.println("Visitor AND result: " + get6Int(key));
            }
        }, key, 0, KEY_LEN_INT);
    }
}

References.
[ { "math_id": 0, "text": "\\Sigma" }, { "math_id": 1, "text": "O(|M|)" }, { "math_id": 2, "text": "|M|" }, { "math_id": 3, "text": "O(|M| \\cdot \\log |\\Sigma|)" }, { "math_id": 4, "text": "\\{1,2,3\\}\\cap(\\{2,3,4\\}\\cup\\{5,6,7\\})=\\{2,3\\}" }, { "math_id": 5, "text": "\\{2,3,4\\}\\cup\\{5,6,7\\}=\\{2,3,4,5,6,7\\}" }, { "math_id": 6, "text": "\\{10, 20, 30, 40, 50, 60, 61, 62, 63\\}" }, { "math_id": 7, "text": "\\{10, 20, 30, 40, 50, 60, 61, 62, 63\\}\\cap\\{10, .., 50\\}" } ]
https://en.wikipedia.org/wiki?curid=66755516
667678
Side-channel attack
Any attack based on information gained from the implementation of a computer system In computer security, a side-channel attack is any attack based on extra information that can be gathered because of the fundamental way a computer protocol or algorithm is implemented, rather than flaws in the design of the protocol or algorithm itself (e.g. flaws found in a cryptanalysis of a cryptographic algorithm) or minor, but potentially devastating, mistakes or oversights in the implementation. (Cryptanalysis also includes searching for side-channel attacks.) Timing information, power consumption, electromagnetic leaks, and sound are examples of extra information which could be exploited to facilitate side-channel attacks. Some side-channel attacks require technical knowledge of the internal operation of the system, although others such as differential power analysis are effective as black-box attacks. The rise of Web 2.0 applications and software-as-a-service has also significantly raised the possibility of side-channel attacks on the web, even when transmissions between a web browser and server are encrypted (e.g. through HTTPS or WiFi encryption), according to researchers from Microsoft Research and Indiana University. Attempts to break a cryptosystem by deceiving or coercing people with legitimate access are not typically considered side-channel attacks: see social engineering and rubber-hose cryptanalysis. General classes of side-channel attack include cache attacks, timing attacks, power-monitoring attacks, electromagnetic attacks, acoustic attacks, optical attacks and allocation-based attacks, several of which are described below. In all cases, the underlying principle is that physical effects caused by the operation of a cryptosystem ("on the side") can provide useful extra information about secrets in the system, for example, the cryptographic key, partial state information, full or partial plaintexts and so forth. The term cryptophthora (secret degradation) is sometimes used to express the degradation of secret key material resulting from side-channel leakage. Examples. A cache side-channel attack works by monitoring security-critical operations such as AES T-table accesses, modular exponentiation, multiplication or memory accesses. The attacker is then able to deduce the encryption key from the accesses made (or not made) by the victim. Also, unlike some of the other side-channel attacks, this method does not create a fault in the ongoing cryptographic operation and is invisible to the victim. In 2017, two CPU vulnerabilities (dubbed Meltdown and Spectre) were discovered, which can use a cache-based side channel to allow an attacker to leak memory contents of other processes and the operating system itself. A timing attack watches data movement into and out of the CPU or memory on the hardware running the cryptosystem or algorithm. Simply by observing variations in how long it takes to perform cryptographic operations, it might be possible to determine the entire secret key. Such attacks involve statistical analysis of timing measurements and have been demonstrated across networks. A power-analysis attack can provide even more detailed information by observing the power consumption of a hardware device such as a CPU or a cryptographic circuit. These attacks are roughly categorized into simple power analysis (SPA) and differential power analysis (DPA). One example is Collide+Power, which affects nearly all CPUs. Other examples use machine learning approaches.
Fluctuations in current also generate radio waves, enabling attacks that analyze measurements of electromagnetic (EM) emanations. These attacks typically involve statistical techniques similar to those of power-analysis attacks. A deep-learning-based side-channel attack, using power and EM information across multiple devices, has been demonstrated with the potential to break the secret key of a different but identical device in as little as a single trace. Historical analogues to modern side-channel attacks are known. A recently declassified NSA document reveals that as far back as 1943, an engineer with Bell Telephone observed decipherable spikes on an oscilloscope associated with the decrypted output of a certain encrypting teletype. According to former MI5 officer Peter Wright, the British Security Service analyzed emissions from French cipher equipment in the 1960s. In the 1980s, Soviet eavesdroppers were suspected of having planted bugs inside IBM Selectric typewriters to monitor the electrical noise generated as the type ball rotated and pitched to strike the paper; the characteristics of those signals could determine which key was pressed. Power consumption of devices causes heating, which is offset by cooling effects. Temperature changes create thermally induced mechanical stress. This stress can create low-level acoustic emissions from operating CPUs (about 10 kHz in some cases). Recent research by Shamir et al. has suggested that information about the operation of cryptosystems and algorithms can be obtained in this way as well. This is an acoustic cryptanalysis attack. If the surface of the CPU chip, or in some cases the CPU package, can be observed, infrared images can also provide information about the code being executed on the CPU, known as a thermal-imaging attack. Examples of optical side-channel attacks range from gleaning information from the hard disk activity indicator to reading a small number of photons emitted by transistors as they change state. Allocation-based side channels also exist and refer to the information that leaks from the allocation (as opposed to the use) of a resource such as network bandwidth to clients that are concurrently requesting the contended resource.
For instance, a random delay can be added to deter timing attacks, although adversaries can compensate for these delays by averaging multiple measurements (or, more generally, using more measurements in the analysis). When the amount of noise in the side channel increases, the adversary needs to collect more measurements. Another countermeasure under the first category is to use security analysis software to identify certain classes of side-channel attacks that can be found during the design stages of the underlying hardware itself. Timing attacks and cache attacks are both identifiable through certain commercially available security analysis software platforms, which allow for testing to identify the attack vulnerability itself, as well as the effectiveness of the architectural change to circumvent the vulnerability. The most comprehensive method to employ this countermeasure is to create a Secure Development Lifecycle for hardware, which includes utilizing all available security analysis platforms at their respective stages of the hardware development lifecycle. In the case of timing attacks against targets whose computation times are quantized into discrete clock cycle counts, an effective countermeasure against is to design the software to be isochronous, that is to run in an exactly constant amount of time, independently of secret values. This makes timing attacks impossible. Such countermeasures can be difficult to implement in practice, since even individual instructions can have variable timing on some CPUs. One partial countermeasure against simple power attacks, but not differential power-analysis attacks, is to design the software so that it is "PC-secure" in the "program counter security model". In a PC-secure program, the execution path does not depend on secret values. In other words, all conditional branches depend only on public information. (This is a more restrictive condition than isochronous code, but a less restrictive condition than branch-free code.) Even though multiply operations draw more power than NOP on practically all CPUs, using a constant execution path prevents such operation-dependent power differences (differences in power from choosing one branch over another) from leaking any secret information. On architectures where the instruction execution time is not data-dependent, a PC-secure program is also immune to timing attacks. Another way in which code can be non-isochronous is that modern CPUs have a memory cache: accessing infrequently used information incurs a large timing penalty, revealing some information about the frequency of use of memory blocks. Cryptographic code designed to resist cache attacks attempts to use memory in only a predictable fashion (like accessing only the input, outputs and program data, and doing so according to a fixed pattern). For example, data-dependent table lookups must be avoided because the cache could reveal which part of the lookup table was accessed. Other partial countermeasures attempt to reduce the amount of information leaked from data-dependent power differences. Some operations use power that is correlated to the number of 1 bits in a secret value. Using a constant-weight code (such as using Fredkin gates or dual-rail encoding) can reduce the leakage of information about the Hamming weight of the secret value, although exploitable correlations are likely to remain unless the balancing is perfect. This "balanced design" can be approximated in software by manipulating both the data and its complement together. 
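As an illustration of the isochronous and PC-secure ideas described above, a secret-dependent comparison can be written so that its running time and branch pattern do not depend on the secret data. The following is a simplified, generic sketch of the technique, not code from any particular cryptographic library (production code should use a vetted constant-time primitive):

// Sketch: comparing a secret value with attacker-supplied input in a way whose
// timing and control flow do not depend on where the first mismatch occurs.
public class ConstantTimeCompare {

    // Early-exit comparison (NOT constant time): leaks the position of the first
    // mismatch through its running time.
    static boolean leakyEquals(byte[] a, byte[] b) {
        if (a.length != b.length) return false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] != b[i]) return false; // data-dependent branch
        }
        return true;
    }

    // Constant-time comparison: always scans the full arrays and uses only
    // bitwise operations, so the execution path does not depend on the data.
    static boolean constantTimeEquals(byte[] a, byte[] b) {
        if (a.length != b.length) return false; // lengths are assumed to be public
        int diff = 0;
        for (int i = 0; i < a.length; i++) {
            diff |= a[i] ^ b[i]; // accumulate differences without branching
        }
        return diff == 0;
    }

    public static void main(String[] args) {
        byte[] secret = {1, 2, 3, 4};
        byte[] guess  = {1, 2, 9, 9};
        System.out.println(leakyEquals(secret, guess));        // false, but timing leaks information
        System.out.println(constantTimeEquals(secret, guess)); // false, data-independent path
    }
}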
Several "secure CPUs" have been built as asynchronous CPUs; they have no global timing reference. While these CPUs were intended to make timing and power attacks more difficult, subsequent research found that timing variations in asynchronous circuits are harder to remove. A typical example of the second category (decorrelation) is a technique known as "blinding". In the case of RSA decryption with secret exponent formula_0 and corresponding encryption exponent formula_1 and modulus formula_2, the technique applies as follows (for simplicity, the modular reduction by "m" is omitted in the formulas): before decrypting, that is, before computing the result of formula_3 for a given ciphertext formula_4, the system picks a random number formula_5 and encrypts it with public exponent formula_1 to obtain formula_6. Then, the decryption is done on formula_7 to obtain formula_8. Since the decrypting system chose formula_5, it can compute its inverse modulo formula_2 to cancel out the factor formula_5 in the result and obtain formula_3, the actual result of the decryption. For attacks that require collecting side-channel information from operations with data "controlled by the attacker", blinding is an effective countermeasure, since the actual operation is executed on a randomized version of the data, over which the attacker has no control or even knowledge. A more general countermeasure (in that it is effective against all side-channel attacks) is the masking countermeasure. The principle of masking is to avoid manipulating any sensitive value formula_4 directly, but rather manipulate a sharing of it: a set of variables (called "shares") formula_9 such that formula_10 (where formula_11 is the XOR operation). An attacker must recover all the values of the shares to get any meaningful information. Recently, white-box modeling was utilized to develop a low-overhead generic circuit-level countermeasure against both EM as well as power side-channel attacks. To minimize the effects of the higher-level metal layers in an IC acting as more efficient antennas, the idea is to embed the crypto core with a signature suppression circuit, routed locally within the lower-level metal layers, leading towards both power and EM side-channel attack immunity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "e" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "y^d" }, { "math_id": 4, "text": "y" }, { "math_id": 5, "text": "r" }, { "math_id": 6, "text": "r^e" }, { "math_id": 7, "text": "y \\cdot r^e" }, { "math_id": 8, "text": "{(y \\cdot r^e)}^d = y^d \\cdot r^{e\\cdot d} = y^d \\cdot r" }, { "math_id": 9, "text": "y_1, ..., y_d" }, { "math_id": 10, "text": "y = y_1 \\oplus ... \\oplus y_d" }, { "math_id": 11, "text": "\\oplus" } ]
https://en.wikipedia.org/wiki?curid=667678
66768404
Candido's identity
Candido's identity, named after the Italian mathematician Giacomo Candido, is an identity for real numbers. It states that for two arbitrary real numbers formula_0 and formula_1 the following equality holds: formula_2 The identity, however, is not restricted to real numbers, but holds in every commutative ring. Candido originally devised the identity to prove the following identity for Fibonacci numbers: formula_3 Proof. A straightforward algebraic proof can be attained by simply expanding both sides of the equation completely (a sketch of this expansion is given below). The identity can, however, also be interpreted geometrically. In this case it states that the area of a square with side length formula_4 equals twice the sum of the areas of three squares with side lengths formula_5, formula_6 and formula_7. This interpretation allows for a visual proof due to Roger B. Nelsen.
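For completeness, the algebraic proof mentioned above amounts to the following routine expansion (sketched here for convenience; it is not taken from Candido's original presentation):

\begin{align}
\left[x^2+y^2+(x+y)^2\right]^2 &= \left[2x^2+2xy+2y^2\right]^2 = 4\left(x^4+2x^3y+3x^2y^2+2xy^3+y^4\right),\\
2\left[x^4+y^4+(x+y)^4\right] &= 2\left[2x^4+4x^3y+6x^2y^2+4xy^3+2y^4\right] = 4\left(x^4+2x^3y+3x^2y^2+2xy^3+y^4\right).
\end{align}

Both sides expand to the same polynomial, which proves the identity; the Fibonacci version follows by substituting x = f_n, y = f_{n+1} and using f_{n+2} = f_n + f_{n+1}.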
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "y" }, { "math_id": 2, "text": "\\left[x^2+y^2+(x+y)^2\\right]^2=2[x^4+y^4+(x+y)^4] " }, { "math_id": 3, "text": "(f_n^2+f_{n+1}^2+f_{n+2}^2)^2=2(f_n^4+f_{n+1}^4+f_{n+2}^4) " }, { "math_id": 4, "text": "x^2+y^2+(x+y)^2" }, { "math_id": 5, "text": "x^2" }, { "math_id": 6, "text": "y^2" }, { "math_id": 7, "text": "(x+y)^2" } ]
https://en.wikipedia.org/wiki?curid=66768404
66773986
Agreeable subset
Concept in the field of computational social choice An agreeable subset is a subset of items that is considered, by all people in a certain group, to be at least as good as its complement. Finding a small agreeable subset is a problem in computational social choice. An example situation in which this problem arises is when a family goes on a trip and has to decide which items to take. Since their car is limited in size, they cannot pick all items, so they have to agree on a subset of items which are most important. If they manage to find a subset of items such that all family members agree that it is at least as good as the subset of items remaining at home, then this subset is called "agreeable". Another use case is when the citizens in some city want to elect a committee from a given pool of candidates, such that all citizens agree that the subset of elected candidates is at least as good as the subset of non-elected ones. Subject to that, the committee size should be as small as possible. Definitions. Agreeable subset. There is a set "S" containing "m" objects. There are "n" agents who have to choose a subset of "S". Each agent is characterized by a preference-relation on subsets of "S". The preference-relation is assumed to be "monotone": an agent always weakly prefers a set to all its subsets. A subset "T" of "S" is called agreeable if all agents prefer "T" to "S"\"T". If an agent's preference relation is represented by a subadditive utility function "u", then for any agreeable subset "T", u("T") ≥ u("S")/2. As an example, suppose there are two objects - bread and wine, and two agents - Alice and George. The preference-relation of Alice is {bread,wine} > {bread} > {wine} > {}. If the preference-relation of George is the same, then there are two agreeable subsets: {bread,wine} and {bread}. But if George's preference-relation is {bread,wine} > {wine} > {bread} > {}, then the only agreeable subset is {bread,wine}. Necessarily-agreeable subset. If the agents' preference relations on the subsets are given, it is easy to check whether a subset is agreeable. But often, only the agents' preference relations on "individual objects" are given. In this case, it is often assumed that the agents' preferences are not only monotone but also responsive. A subset "T" of "S" is called necessarily agreeable if all agents prefer "T" to "S"\"T" according to the responsive set extension of their preferences on individual objects. A closely related property of subsets, denoted (*), requires that for every positive integer "l", the subset "T" contains at least half (rounded up) of the agent's "l" most-preferred objects in "S". To satisfy property (*), the subset "T" should contain the best object in "S"; at least two of the three best objects in "S"; at least three of the five best objects in "S"; etc. If a subset "T" satisfies (*) for all agents, then it is necessarily-agreeable. The converse implication holds if the agents' preference relations on indivisible objects are strict. Worst-case bounds on agreeable subset size. What is the smallest agreeable subset that we can find? Agreeable subsets. Consider first a single agent. In some cases, an agreeable subset must contain at least formula_0 objects. An example is when all "m" objects are identical. Moreover, there always exists an agreeable subset containing formula_0 objects. This follows from the following lemma: no agent can prefer each of two disjoint subsets "V"1 and "V"2 to its respective complement (this is because S\"V"1 contains "V"2 and S\"V"2 contains "V"1 and the preferences are monotone).
This can be generalized: For any "n" agents and "m" objects, there always exists an agreeable subset of size formula_1, and this bound is tight (for some preferences this is the smallest size of an agreeable subset). The proof for two agents is constructive. The proof for "n" agents uses a Kneser graph. Let formula_2, and let "G" be the Kneser graph formula_3, that is, the graph whose vertices are all subsets of "m"-"k" objects, and two subsets are connected iff they are disjoint. If there is a vertex "V" such that all agents prefer S\"V" to "V", then S\"V" is an agreeable subset of size "k". Otherwise, we can define a color for each agent and color each vertex "V" of "G" with an agent who prefers "V" to S\"V". By the theorem on the chromatic number of Kneser graphs, the chromatic number of "G" is formula_4; this means that, in the "n"-coloring just defined, there are two adjacent vertices with the same color. In other words, there are two disjoint subsets such that a single agent "i" prefers each of them to its complement. But this contradicts the above lemma. Hence there must be an agreeable subset of size "k". When there are at most three agents, and their preferences are responsive, an agreeable subset of size formula_1 can be computed in polynomial time, using polynomially-many queries of the form "which of these two subsets is better?". When there are any number of agents with additive utilities, or a constant number of agents with monotone utilities, an agreeable subset of size formula_1 can be found in polynomial time using results from consensus halving. Necessarily-agreeable subsets. When there are two agents with responsive preferences, a "necessarily"-agreeable subset of size formula_1 exists and can be computed in polynomial time. When there are "n" ≥ 3 agents with responsive preferences, a necessarily-agreeable subset of this size might not exist. However, there always exists a necessarily-agreeable subset of size formula_5, and such a set can be computed in polynomial time. On the other hand, for every "m" which is a power of 3, there exist ordinal preferences of 3 agents such that every necessarily-agreeable subset has size at least formula_6. Both proofs use theorems on the discrepancy of permutations. There exists a randomized algorithm that computes a necessarily-agreeable subset of size formula_7. Computing a smallest agreeable subset. In many cases, there may exist an agreeable subset that is much smaller than the worst-case upper bound. For agents with general monotone preferences, there is no algorithm that computes a smallest agreeable set using a polynomial number of queries. Moreover, for every constant "c", there is no algorithm that makes at most "mc"/8 queries and finds an agreeable subset with expected size at most "m"/("c" log "m") times the minimum, even with only one agent. This is tight: there exists a polynomial-time algorithm that finds an agreeable subset of size at most O("m" / log "m") times the minimum. Even for agents with additive utilities, deciding whether there exists an agreeable subset of size "m"/2 is NP-hard; the proof is by reduction from the balanced partition problem. For any fixed number of agents with additive utilities, there exists a pseudopolynomial-time algorithm for this problem; but if the number of agents is not fixed, then the problem is strongly NP-hard. There exists a polynomial-time O(log "n") approximation algorithm. References.
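For agents with additive utilities, verifying that a given subset is agreeable is straightforward (the hard part, as discussed above, is finding a smallest such subset). The following is an illustrative sketch only, not code from the cited works:

// Sketch: check whether subset T (given as a membership array over S) is agreeable
// when every agent i has additive utility u[i][j] for object j.
public class AgreeableSubsetCheck {

    static boolean isAgreeable(double[][] u, boolean[] inT) {
        int n = u.length;   // number of agents
        int m = inT.length; // number of objects
        for (int i = 0; i < n; i++) {
            double utilT = 0, utilComplement = 0;
            for (int j = 0; j < m; j++) {
                if (inT[j]) utilT += u[i][j];
                else        utilComplement += u[i][j];
            }
            if (utilT < utilComplement) return false; // agent i prefers S \ T
        }
        return true; // every agent finds T at least as good as its complement
    }

    public static void main(String[] args) {
        // Two agents, three objects, illustrative utilities.
        double[][] u = { {5, 1, 1}, {2, 2, 3} };
        System.out.println(isAgreeable(u, new boolean[] {true, false, true}));  // true
        System.out.println(isAgreeable(u, new boolean[] {false, true, false})); // false
    }
}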
[ { "math_id": 0, "text": "\\lceil m/2 \\rceil" }, { "math_id": 1, "text": "\\bigg\\lfloor \\frac{m+n}{2} \\bigg\\rfloor" }, { "math_id": 2, "text": "k := \\bigg\\lfloor \\frac{m+n}{2} \\bigg\\rfloor" }, { "math_id": 3, "text": "KG(m, m-k)" }, { "math_id": 4, "text": "m-2(m-k)+2 = n+1" }, { "math_id": 5, "text": "m/2+(n+1)\\lceil 4 n \\log{m} \\rceil" }, { "math_id": 6, "text": "m/2+(\\log_3{m})/4" }, { "math_id": 7, "text": "m/2+O(\\sqrt{m})" } ]
https://en.wikipedia.org/wiki?curid=66773986
6678342
Open-circuit voltage
Concept in circuit analysis Open-circuit voltage (abbreviated as OCV or VOC) is the difference of electrical potential between two terminals of an electronic device when disconnected from any circuit. There is no external load connected. No external electric current flows between the terminals. Alternatively, the open-circuit voltage may be thought of as the voltage that must be applied to a solar cell or a battery to stop the current. It is sometimes given the symbol Voc. In network analysis this voltage is also known as the Thévenin voltage. The open-circuit voltages of batteries and solar cells are often quoted under particular conditions (state-of-charge, illumination, temperature, etc.). The potential difference quoted for batteries and cells is usually the open-circuit voltage. The value of the open-circuit voltage of a transducer equals its electromotive force (emf), which is the maximum potential difference it can produce when not providing current. Example. Consider the circuit shown in the figure (a 100 V source in a two-loop network with resistances A, B and C, and a 5Ω load). To find the open-circuit voltage across the 5Ω resistor, first disconnect it from the circuit. Then find the equivalent resistance in loop 1 to find the current in loop 1, and use Ohm's law with that current to find the potential drop across the resistance C. Note that, since no current is flowing through resistor B, there is no potential drop across it, so it does not affect the open-circuit voltage. The open-circuit voltage is the potential drop across the resistance C, which is: formula_0 This is just one example; many other methods, such as Thévenin analysis, can be used. References.
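A minimal sketch of the calculation described in the example, with illustrative resistor values (the article itself does not give numeric values for A and C):

// Sketch of the open-circuit (Thevenin) voltage calculation described above.
// The resistor values below are illustrative assumptions, not from the article.
public class OpenCircuitVoltage {

    // With the load disconnected, no current flows through B, so the
    // open-circuit voltage equals the drop across C in the source loop:
    // Voc = Vsource * C / (A + C)
    static double openCircuitVoltage(double vSource, double a, double c) {
        return vSource * c / (a + c);
    }

    public static void main(String[] args) {
        double vSource = 100.0; // volts, as in the example
        double a = 20.0;        // ohms (assumed)
        double c = 30.0;        // ohms (assumed)
        System.out.println("Voc = " + openCircuitVoltage(vSource, a, c) + " V"); // prints 60.0 V
    }
}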
[ { "math_id": 0, "text": "\\frac{C}{C+A}\\ 100\\ V_\\sim\\ ." } ]
https://en.wikipedia.org/wiki?curid=6678342
66786706
Oka–Weil theorem
Uniform approximation theorem in mathematics In mathematics, especially the theory of several complex variables, the Oka–Weil theorem is a result about the uniform approximation of holomorphic functions on Stein spaces, due to Kiyoshi Oka and André Weil. Statement. The Oka–Weil theorem states that if "X" is a Stein space and "K" is a compact formula_0-convex subset of "X", then every function holomorphic in an open neighborhood of "K" can be approximated uniformly on "K" by elements of formula_0, that is, by functions holomorphic on all of "X" (in particular, when "X" is complex Euclidean space and "K" is polynomially convex, this means uniform approximation by polynomials). Applications. Since Runge's theorem may not hold for several complex variables, the Oka–Weil theorem is often used as an approximation theorem for several complex variables. The Behnke–Stein theorem was originally proved using the Oka–Weil theorem. References.
[ { "math_id": 0, "text": "\\mathcal{O}(X)" } ]
https://en.wikipedia.org/wiki?curid=66786706
66788562
1 Chronicles 12
First Book of Chronicles, chapter 12 1 Chronicles 12 is the twelfth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter contains the list of people who joined David: before his coronation (verses 1–22) and after he was made king in Hebron (verses 23–40). The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30). Text. This chapter was originally written in the Hebrew language. It is divided into 40 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century). Structure. 1 Chronicles 11 and 12 combine a 'variety of chronologically and geographically disparate lists' to establish the unity of "all Israel" (north and south), with their unanimous recognition of David's kingship. The outer framework consists of David's anointing at Hebron (1 Chronicles 11:1–3; 12:38–40) to enclose the lists of the warriors who attended the festivities (11:10–47; 12:23–38). The inner framework comprises the lists of David's forces while at Ziklag (12:1–7; 12:19–22) to enclose the warriors who joined him at “the stronghold” (12:8–18). The mighty men join David (12:1–22). The first set of lists of the chapter contains the warriors joining David before he became king. It divides into four sections: the Benjaminites who came to David in Ziklag (verses 1–8); the Gadites who came to David's mountain stronghold (verses 9–16), as well as the people of Benjamin and Judah (verses 17–19); the people of Manasseh came to David in Ziklag (verses 20–22). Although only four tribes were mentioned, the structure clearly points to the conclusion in verse 22, that David got much support. The account of Manassites summarizes some incidents in 1 Samuel 28–30. "1Now these are they that came to David to Ziklag, while he yet kept himself close because of Saul the son of Kish: and they were among the mighty men, helpers of the war. 2They were armed with bows, and could use both the right hand and the left in hurling stones and shooting arrows out of a bow, even of Saul's brethren of Benjamin." Verses 1–2. This passage suggests that some people of Benjamin defected to David while Saul was still reigning (cf. ). "3 The chief was Ahiezer, then Joash, the sons of Shemaah the Gibeathite; and Jeziel and Pelet, the sons of Azmaveth, and Beracah and Jehu the Anathothite, "4 and Ishmaiah the Gibeonite, a mighty man among the thirty, and over the thirty. Then Jeremiah, Jahaziel, Johanan, Jozabad the Gederathite," "Then the Spirit clothed Amasai, chief of the thirty, and he said," "“We are yours, O David," "and with you, O son of Jesse!" "Peace, peace to you," "and peace to your helpers!" "For your God helps you.”" "Then David received them and made them officers of his troops." Verse 18. 
Amasai's prophetic words, with "peace" (, "shā-lōm") mentioned three times, speak of David's closeness with his supporters at the beginning of his rise to power. This relationship would be dissolved when the kingdom was divided in the time of Rehoboam's reign (). "And some from Manasseh defected to David when he was going with the Philistines to battle against Saul; but they did not help them, for the lords of the Philistines sent him away by agreement, saying, "He may defect to his master Saul and endanger our heads."" Verse 19. The battle between Saul and the Philistines was mentioned in chapter 10 and the case of David not involved in that battle was summarized from . David's army at Hebron (12:23–40). The subsequent list is bracketed by brief accounts of David's coronation in Hebron (verses 23, 38–40); structured as a kind of military census. David was accepted as king by all people with all their hearts (verse 38), followed by great feasts of joy, unique to the Chronicles (cf. e.g. ; ). The three-day celebration involves many foodstuffs brought by donkeys, mules, camels and oxen from three northern tribes: Issachar, Zebulun and Naphtali, such as bread, fig-cakes, raisins and wine. The mood is described in the word "joy" (or "rejoicing"; verse 40) which later appears in Hezekiah's Passover festival (, ) and post-exilic worship festivals (, 17; ). "And these are the numbers of the bands that were ready armed to the war, and came to David to Hebron, to turn the kingdom of Saul to him, according to the word of the Lord." Verse 23. Saul's kingdom was passed on peacefully to David in Hebron (cf. 1 Chronicles 10:14–11:3). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=66788562
66797211
Blondel's experiments
Blondel's experiments are a series of experiments performed by physicist André Blondel in 1914 in order to determine what was the most general law of electromagnetic induction. In fact, noted Blondel, ""Significant discussions have been raised repeatedly on the question of what is the most general law of induction: we should consider the electromotive force" ("e.m.f.") "as the product of any variation of magnetic flux" ("formula_0") "surrounding a conductor or of the fact that the conductor sweeps part of this flux?"". In the first case Blondel referred to Faraday-Neumann law, which is often considered the most general law, while in the second case he referred to Lorentz force. Normally experiments to verify the first case consist of measuring the induced current in a closed conducting circuit, "concatenated" to the magnetic induction field formula_1 of a magnet, with formula_1 varying in time, while for the verification of the second case usually we measure the induced current in a closed circuit of variable shape or moving by cutting perpendicularly a field formula_1 constant. The second case, however, is due to a variation of the magnetic flux formula_2, not so much because the intensity of formula_1 varies, but because the surface formula_3 crossed by the field varies. Blondel, on the other hand, devised "a new device which consists in varying the total magnetic flux passing through a coil, by a continuous variation of the number of turns of this coil""." In this way formula_1 and formula_3 are constant for each coil, but the total flux varies with the number of coils affected by the field formula_1. It follows that, given the flux formula_0 concatenated to a single loop and formula_4 the total number of loops, by Faraday-Neumann's law, the resulting electromotive force is: formula_5 i.e. dependent on the variation of the number of turns in time. Blondel tested four configurations of his apparatus in which he demonstrates that a change in flux does not always generate an "e.m.f." in a circuit concatenated to it, concluding that the Faraday-Neumann law cannot be the general law. Apparatus description. The apparatus consists of an electromagnet "E", whose U-shaped core terminates in two large parallel plates "P" and "P"' . Two induction coils "B" generate the magnetic field in "E". Between the two plates there is a rotating wooden drum "T" on which an insulated electric wire is wound. The wire exits from the center of the drum and connects with a ring "b" integral with the drum and of negligible diameter with respect to the drum itself. A sliding contact "f" electrically connects the wire to a galvanometer "G", by means of a resistor "R" so that current can flow even when the drum is rotating. To the galvanometer is connected, in a specular way to the first, another drum "T"' which is connected to a motor "M", able to rotate the drum "T"' at adjustable speed. Finally, the electric wire passing through the center of both drums, after a certain number of windings around one of them, reaches the other drum, closing the circuit. When motor "M" starts up, it can increase the number of coils wrapped around "T"' by decreasing those around "T" or vice versa. Blondel connects the wire through "f" to the wire wound on "T" in four different ways, making equally distinct experiments. The four experiments. First experiment. 
The wire wound on "T" is connected directly to the rotation shaft on which rests the sliding contact "f", through the conducting ring "b", of negligible diameter, as shown in the figure. Connecting the drum "T"' to the motor "M" it quickly reaches a constant speed and so does the other drum "T". Maintaining this speed for about a minute, the galvanometer needle moves, indicating the presence of an electromotive force ("e.m.f."). Second experiment. The wire wound on "T" is connected to a conducting ring of diameter equal to that of the drum "T" and integral with it. The contact "f" runs along the edge of the ring which turns with the drum. So compared to the previous experiment "f", instead of being connected with the center of the coil is connected at a point as far from the center as the radius of the coil itself. In this case the galvanometer shows that "e.m.f." induced during drum rotation is zero, unlike what could be expected having in mind Faraday's original experiment. Since Blondel feared that it could be objected that the result is due to the fact that, during rotation, the circuit between "f" and the point of attachment of the coil wire to the ring may follow two different paths that partially neutralize each other, he makes a third experiment. Third experiment. The wire wound on "T" is connected, by means of a sliding contact coming out from the edge of the drum, to the edge of a solid conducting disk, having a diameter equal to that of the drum "T" and parallel to it but detached, so as to remain stationary while the drum turns. The contact "f" rests directly on the central part of the disk. Also in this case the "e.m.f." measured by the galvanometer is zero. From the last two results Blondel concludes that the "e.m.f." measured in the first experiment was not caused by the progressive decrease of the flux but by the sweeping of the flux by the wire joining the center of the coil with the brush "f". To further confirm this he performs a fourth experiment. Fourth experiment. The wire wound on "T" is connected to the edge of a solid disk of diameter equal to that of the drum "T" and integral with it. The contact "f" strips against the center of the disk. In this case the galvanometer records a "e.m.f." exactly equal to that of the first experiment. Not only that, but if you rotate the disc keeping the drums still, it still records the same "e.m.f." that is caused only by the fact that a part of the circuit sweeps the flow. Moreover, by varying the point of contact of the coil from the outer edge to the center of the disc, the induced "e.m.f." is proportional to the area of the circle having as radius the distance between the two points of attachment. The result is analogous to the Faraday disk. Conclusions. From here Blondel deduces that: 1) When the magnetic field is constant, there is an "e.m.f." only if the circuit cuts through the lines of force of the field, as in the first experiment (rotational axis-drum edge section). If this does not happen, even varying the total flux through the circuit, there is no "e.m.f.", as in the second experiment. 2) The case in which the closing line of the circuit (axis-edge section) moves within a solid conductor (but the conductor remains stationary), as in the third experiment, is not equivalent to the case in which the entire conductor moves, as in the fourth experiment (in this case the Lorentz force acts). 
Thus "one must reject as inaccurate the too general statements of the law of induction" and to the statement that ""An electromotive force originates in a closed circuit when the number of magnetic lines passing through it varies"..." should be added "and when the variation is produced either by the conductor sweeping the lines of force or by a variation in the field of the inductor itself". Basically experiments show how Faraday's basic law, that is the one that takes into account only flux variation, cannot be the general law of induction. In fact it is necessary to include also the contribution due to Lorentz force to obtain the general formula.
[ { "math_id": 0, "text": "\\Phi" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "\\Phi = B \\cdot S" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "N" }, { "math_id": 5, "text": "\\mathrm{e.m.f.} = -{d(\\mathrm N \\Phi) \\over dt} = -\\Phi{d\\mathrm N\\over dt}," } ]
https://en.wikipedia.org/wiki?curid=66797211
6680683
Cubic form
Homogeneous polynomial of degree 3 In mathematics, a cubic form is a homogeneous polynomial of degree 3, and a cubic hypersurface is the zero set of a cubic form. In the case of a cubic form in three variables, the zero set is a cubic plane curve. Boris Delone and Dmitry Faddeev showed that binary cubic forms with integer coefficients can be used to parametrize orders in cubic fields. Their work was later generalized to include all cubic rings (a cubic ring is a ring that is isomorphic to Z^3 as a Z-module), giving a discriminant-preserving bijection between orbits of a GL(2, Z)-action on the space of integral binary cubic forms and cubic rings up to isomorphism. The classification of real cubic forms formula_0 is linked to the classification of umbilical points of surfaces. The equivalence classes of such cubics form a three-dimensional real projective space, and the subset of parabolic forms defines a surface, the umbilic torus. Notes.
[ { "math_id": 0, "text": "a x^3 + 3 b x^2 y + 3 c x y^2 + d y^3" } ]
https://en.wikipedia.org/wiki?curid=6680683
66810834
Extended natural numbers
In mathematics, the extended natural numbers is a set which contains the values formula_0 and formula_1 (infinity). That is, it is the result of adding a maximum element formula_1 to the natural numbers. Addition and multiplication work as normal for finite values, and are extended by the rules formula_2 (formula_3), formula_4 and formula_5 for formula_6. With addition and multiplication, formula_7 is a semiring but not a ring, as formula_1 lacks an additive inverse. The set can be denoted by formula_8, formula_9 or formula_10. It is a subset of the extended real number line, which extends the real numbers by adding formula_11 and formula_12. Applications. In graph theory, the extended natural numbers are used to define distances in graphs, with formula_1 being the distance between two unconnected vertices. They can be used to show the extension of some results, such as the max-flow min-cut theorem, to infinite graphs. In topology, the topos of right actions on the extended natural numbers is a category PRO of projection algebras. In constructive mathematics, the extended natural numbers formula_9 are a one-point compactification of the natural numbers, yielding the set of non-increasing binary sequences i.e. formula_13 such that formula_14. The sequence formula_15 represents formula_16, while the sequence formula_17 represents formula_1. It is a retract of formula_18 and the claim that formula_19 implies the limited principle of omniscience. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
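The arithmetic rules above are simple enough to state in code. The following is a minimal illustrative sketch, not a standard library type; representing formula_1 by the sentinel value -1 is an assumption of the sketch:

// Minimal sketch of extended-natural arithmetic: the values 0, 1, 2, ... plus infinity.
// INFINITY is encoded as -1 here; this encoding is a choice of the sketch only.
public final class ExtendedNatural {
    public static final long INFINITY = -1;

    public static long add(long a, long b) {
        // n + infinity = infinity + n = infinity
        if (a == INFINITY || b == INFINITY) return INFINITY;
        return a + b;
    }

    public static long multiply(long a, long b) {
        // 0 * infinity = infinity * 0 = 0, otherwise m * infinity = infinity
        if (a == 0 || b == 0) return 0;
        if (a == INFINITY || b == INFINITY) return INFINITY;
        return a * b;
    }

    public static void main(String[] args) {
        System.out.println(add(3, INFINITY) == INFINITY);      // true
        System.out.println(multiply(0, INFINITY) == 0);        // true
        System.out.println(multiply(5, INFINITY) == INFINITY); // true
    }
}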
[ { "math_id": 0, "text": "0, 1, 2, \\dots" }, { "math_id": 1, "text": "\\infty" }, { "math_id": 2, "text": "n+\\infty=\\infty+n=\\infty" }, { "math_id": 3, "text": "n\\in\\mathbb{N}\\cup \\{\\infty\\}" }, { "math_id": 4, "text": "0\\times \\infty=\\infty \\times 0=0" }, { "math_id": 5, "text": "m\\times \\infty=\\infty\\times m=\\infty" }, { "math_id": 6, "text": "m\\neq 0" }, { "math_id": 7, "text": "\\mathbb{N}\\cup \\{\\infty\\}" }, { "math_id": 8, "text": "\\overline{\\mathbb{N}}" }, { "math_id": 9, "text": "\\mathbb{N}_\\infty" }, { "math_id": 10, "text": "\\mathbb{N}^\\infty" }, { "math_id": 11, "text": "-\\infty" }, { "math_id": 12, "text": "+\\infty" }, { "math_id": 13, "text": "(x_0,x_1,\\dots)\\in 2^\\mathbb{N}" }, { "math_id": 14, "text": "\\forall i\\in\\mathbb{N}: x_i\\ge x_{i+1}" }, { "math_id": 15, "text": "1^n 0^\\omega" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "1^\\omega" }, { "math_id": 18, "text": "2^\\mathbb{N}" }, { "math_id": 19, "text": "\\mathbb{N}\\cup \\{\\infty\\}\\subseteq \\mathbb{N}_\\infty" } ]
https://en.wikipedia.org/wiki?curid=66810834
668181
Transposition (music)
Operation in music In music, transposition refers to the process or operation of moving a collection of notes (pitches or pitch classes) up or down in pitch by a constant interval. It has been defined as "the shifting of a melody, a harmonic progression or an entire musical piece to another key, while maintaining the same tone structure, i.e. the same succession of whole tones and semitones and remaining melodic intervals". For example, a music transposer might transpose an entire piece of music into another key. Similarly, one might transpose a tone row or an unordered collection of pitches such as a chord so that it begins on another pitch. The transposition of a set "A" by "n" semitones is designated by T"n"("A"), representing the addition (mod 12) of an integer "n" to each of the pitch class integers of the set "A". Thus the set ("A") consisting of 0–1–2 transposed by 5 semitones is 5–6–7 (T5("A")), since 0 + 5 = 5, 1 + 5 = 6, and 2 + 5 = 7. Scalar transpositions. In scalar transposition, every pitch in a collection is shifted up or down a fixed number of scale steps within some scale. The pitches remain in the same scale before and after the shift. This term covers both chromatic and diatonic transpositions as follows. Chromatic transposition. Chromatic transposition is scalar transposition within the chromatic scale, implying that every pitch in a collection of notes is shifted by the same number of semitones. For instance, transposing the pitches C4–E4–G4 upward by four semitones, one obtains the pitches E4–G♯4–B4. Diatonic transposition. Diatonic transposition is scalar transposition within a diatonic scale (the most common kind of scale, indicated by one of a few standard key signatures). For example, transposing the pitches C4–E4–G4 up two steps in the familiar C major scale gives the pitches E4–G4–B4. Transposing the same pitches up by two steps in the F major scale instead gives E4–G4–B♭4. Pitch and pitch class transpositions. There are two further kinds of transposition, by pitch interval or by pitch interval class, applied to pitches or pitch classes, respectively. Transposition may be applied to pitches or to pitch classes. For example, the pitch A4, or 9, transposed by a major third, or the pitch interval 4: formula_0 while that pitch class, 9, transposed by a major third, or the pitch class interval 4: formula_1. Sight transposition. Although transpositions are usually written out, musicians are occasionally asked to transpose music "at sight", that is, to read the music in one key while playing in another. Musicians who play transposing instruments sometimes have to do this (for example when encountering an unusual transposition, such as clarinet in C), as well as singers' accompanists, since singers sometimes request a different key than the one printed in the music to better fit their vocal range (although many, but not all, songs are printed in editions for high, medium, and low voice). There are three basic techniques for teaching sight transposition: interval, clef, and numbers. Interval. First one determines the interval between the written key and the target key. Then one imagines the notes transposed up (or down) by the corresponding interval. A performer using this method may calculate each note individually, or group notes together (e.g. "a descending chromatic passage starting on F" might become a "descending chromatic passage starting on A" in the target key). Clef.
Clef transposition is routinely taught (among other places) in Belgium and France. One imagines a different clef and a different key signature than the ones printed. The change of clef is used so that the lines and spaces correspond to different notes than the lines and spaces of the original score. Seven clefs are used for this: treble (2nd line G-clef), bass (4th line F-clef), baritone (3rd line F-clef or 5th line C-clef, although in France and Belgium sight-reading exercises for this clef, as a preparation for clef transposition practice, are always printed with the 3rd line F-clef), and C-clefs on the four lowest lines; these allow any given staff position to correspond to each of the seven note names A through G. The signature is then adjusted for the actual accidental (natural, sharp or flat) one wants on that note. The octave may also have to be adjusted (this sort of practice ignores the conventional octave implication of the clefs), but this is a trivial matter for most musicians. Numbers. Transposing by numbers means, one determines the scale degree of the written note (e.g. first, fourth, fifth, etc.) in the given key. The performer then plays the corresponding scale degree of the target chord. Transpositional equivalence. Two musical objects are transpositionally equivalent if one can be transformed into another by transposition. It is similar to enharmonic equivalence, octave equivalence, and inversional equivalence. In many musical contexts, transpositionally equivalent chords are thought to be similar. Transpositional equivalence is a feature of musical set theory. The terms "transposition" and "transposition equivalence" allow the concept to be discussed as both an operation and relation, an activity and a state of being. Compare with modulation and related key. Using integer notation and modulo 12, to transpose a pitch "x" by "n" semitones: formula_2 or formula_3 For pitch class transposition by a pitch class interval: formula_4 Twelve-tone transposition. Milton Babbitt defined the "transformation" of transposition within the twelve-tone technique as follows: By applying the transposition operator (T) to a [twelve-tone] set we will mean that every "p" of the set "P" is mapped homomorphically (with regard to order) into a T("p") of the set T("P") according to the following operation: formula_5 where "to" is any integer 0–11 inclusive, where, of course, the "to" remains fixed for a given transposition. The + sign indicates ordinary transposition. Here "To" is the transposition corresponding to "to" (or "o", according to Schuijer); "pi,j" is the pitch of the "i"th tone in "P" belong to the pitch class (set number) "j". Allen Forte defines transposition so as to apply to unordered sets of other than twelve pitches: the addition mod 12 of any integer "k" in "S" to every integer "p" of "P". thus giving, "12 transposed forms of "P"". Fuzzy transposition. Joseph Straus created the concept of fuzzy transposition, and fuzzy inversion, to express transposition as a voice-leading event, "the 'sending' of each element of a given PC [pitch-class] set to its T"n"-correspondent...[enabling] him to relate PC sets of two adjacent chords in terms of a transposition, even when not all of the 'voices' participated fully in the transpositional move.". A transformation within voice-leading space rather than pitch-class space as in pitch class transposition.
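The integer-notation operation described above (formula_2 and formula_4) is straightforward to express in code. The following is a small illustrative sketch, not part of any standard music software library:

// Sketch of pitch and pitch-class transposition in integer notation.
public class Transposition {

    // Pitch transposition: shift by n semitones (no octave reduction).
    static int transposePitch(int pitch, int n) {
        return pitch + n;
    }

    // Pitch-class transposition: T_n(x) = (x + n) mod 12.
    static int transposePitchClass(int pc, int n) {
        return Math.floorMod(pc + n, 12); // floorMod keeps the result in 0..11 even for negative n
    }

    public static void main(String[] args) {
        int[] set = {0, 1, 2}; // the set A from the article
        for (int pc : set) {
            System.out.print(transposePitchClass(pc, 5) + " "); // prints 5 6 7, i.e. T5(A)
        }
        System.out.println();
        System.out.println(transposePitchClass(9, 4)); // 1: pitch class 9 transposed up a major third
    }
}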
[ { "math_id": 0, "text": "9 + 4 = 13" }, { "math_id": 1, "text": "9 + 4 =13 \\equiv 1\\pmod{12}" }, { "math_id": 2, "text": "\\boldsymbol{T}^p_n (x) = x+n" }, { "math_id": 3, "text": "\\boldsymbol{T}^p_n (x) \\rightarrow x+n" }, { "math_id": 4, "text": "\\boldsymbol{T}_n (x) = x+n \\pmod{12}" }, { "math_id": 5, "text": "\\boldsymbol{T}_o(p_{i,j})=p_{i,j}+t_o" } ]
https://en.wikipedia.org/wiki?curid=668181
66819
Root mean square
Square root of the mean square In mathematics, the root mean square (abbrev. RMS, RMS or rms) of a set of numbers is the square root of the set's mean square. Given a set formula_0, its RMS is denoted as either formula_1 or formula_2. The RMS is also known as the quadratic mean (denoted formula_3), a special case of the generalized mean. The RMS of a continuous function is denoted formula_4 and can be defined in terms of an integral of the square of the function. The RMS of an alternating electric current equals the value of constant direct current that would dissipate the same power in a resistive load. In estimation theory, the root-mean-square deviation of an estimator measures how far the estimator strays from the data. Definition. The RMS value of a set of values (or a continuous-time waveform) is the square root of the arithmetic mean of the squares of the values, or the square of the function that defines the continuous waveform. In physics, the RMS current value can also be defined as the "value of the direct current that dissipates the same power in a resistor." In the case of a set of "n" values formula_5, the RMS is formula_6 The corresponding formula for a continuous function (or waveform) "f"("t") defined over the interval formula_7 is formula_8 and the RMS for a function over all time is formula_9 The RMS over all time of a periodic function is equal to the RMS of one period of the function. The RMS value of a continuous function or signal can be approximated by taking the RMS of a sample consisting of equally spaced observations. Additionally, the RMS value of various waveforms can also be determined without calculus, as shown by Cartwright. In the case of the RMS statistic of a random process, the expected value is used instead of the mean. In common waveforms. If the waveform is a pure sine wave, the relationships between amplitudes (peak-to-peak, peak) and RMS are fixed and known, as they are for any continuous periodic wave. However, this is not true for an arbitrary waveform, which may not be periodic or continuous. For a zero-mean sine wave, the relationship between RMS and peak-to-peak amplitude is: "Peak-to-peak" formula_10 For other waveforms, the relationships are not the same as they are for sine waves. For example, for either a triangular or sawtooth wave: "Peak-to-peak" formula_11 In waveform combinations. Waveforms made by summing known simple waveforms have an RMS value that is the root of the sum of squares of the component RMS values, if the component waveforms are orthogonal (that is, if the average of the product of one simple waveform with another is zero for all pairs other than a waveform times itself). formula_12 Alternatively, for waveforms that are perfectly positively correlated, or "in phase" with each other, their RMS values sum directly. Uses. In electrical engineering. Voltage. A special case of RMS of waveform combinations is: formula_13 where formula_14 refers to the direct current (or average) component of the signal, and formula_15 is the alternating current component of the signal. Average electrical power. Electrical engineers often need to know the power, "P", dissipated by an electrical resistance, "R". It is easy to do the calculation when there is a constant current, "I", through the resistance. For a load of "R" ohms, power is given by: formula_16 However, if the current is a time-varying function, "I"("t"), this formula must be extended to reflect the fact that the current (and thus the instantaneous power) is varying over time. 
If the function is periodic (such as household AC power), it is still meaningful to discuss the "average" power dissipated over time, which is calculated by taking the average power dissipation: formula_17 So, the RMS value, "I"RMS, of the function "I"("t") is the constant current that yields the same power dissipation as the time-averaged power dissipation of the current "I"("t"). Average power can also be found using the same method that in the case of a time-varying voltage, "V"("t"), with RMS value "V"RMS, formula_18 This equation can be used for any periodic waveform, such as a sinusoidal or sawtooth waveform, allowing us to calculate the mean power delivered into a specified load. By taking the square root of both these equations and multiplying them together, the power is found to be: formula_19 Both derivations depend on voltage and current being proportional (that is, the load, "R", is purely resistive). Reactive loads (that is, loads capable of not just dissipating energy but also storing it) are discussed under the topic of AC power. In the common case of alternating current when "I"("t") is a sinusoidal current, as is approximately true for mains power, the RMS value is easy to calculate from the continuous case equation above. If "I"p is defined to be the peak current, then: formula_20 where "t" is time and "ω" is the angular frequency ("ω" = 2π/"T", where "T" is the period of the wave). Since "I"p is a positive constant and was to be squared within the integral: formula_21 Using a trigonometric identity to eliminate squaring of trig function: formula_22 but since the interval is a whole number of complete cycles (per definition of RMS), the sine terms will cancel out, leaving: formula_23 A similar analysis leads to the analogous equation for sinusoidal voltage: formula_24 where "I"P represents the peak current and "V"P represents the peak voltage. Because of their usefulness in carrying out power calculations, listed voltages for power outlets (for example, 120V in the US, or 230V in Europe) are almost always quoted in RMS values, and not peak values. Peak values can be calculated from RMS values from the above formula, which implies "V"P = "V"RMS × √2, assuming the source is a pure sine wave. Thus the peak value of the mains voltage in the USA is about 120 × √2, or about 170 volts. The peak-to-peak voltage, being double this, is about 340 volts. A similar calculation indicates that the peak mains voltage in Europe is about 325 volts, and the peak-to-peak mains voltage, about 650 volts. RMS quantities such as electric current are usually calculated over one cycle. However, for some purposes the RMS current over a longer period is required when calculating transmission power losses. The same principle applies, and (for example) a current of 10 amps used for 12 hours each 24-hour day represents an average current of 5 amps, but an RMS current of 7.07 amps, in the long term. The term "RMS power" is sometimes erroneously used (e.g., in the audio industry) as a synonym for "mean power" or "average power" (it is proportional to the square of the RMS voltage or RMS current in a resistive load). For a discussion of audio power measurements and their shortcomings, see Audio power. Speed. In the physics of gas molecules, the root-mean-square speed is defined as the square root of the average squared-speed. 
The RMS speed of an ideal gas is calculated using the following equation: formula_25 where "R" represents the gas constant, 8.314 J/(mol·K), "T" is the temperature of the gas in kelvins, and "M" is the molar mass of the gas in kilograms per mole. In physics, speed is defined as the scalar magnitude of velocity. For a stationary gas, the average speed of its molecules can be on the order of thousands of km/h, even though the average velocity of its molecules is zero. Error. When two data sets, one from theoretical prediction and the other from actual measurement of some physical variable, for instance, are compared, the RMS of the pairwise differences of the two data sets can serve as a measure of how far on average the error is from 0. The mean of the absolute values of the pairwise differences could be a useful measure of the variability of the differences. However, the RMS of the differences is usually the preferred measure, probably due to mathematical convention and compatibility with other formulae. In frequency domain. The RMS can be computed in the frequency domain, using Parseval's theorem. For a sampled signal formula_26, where formula_27 is the sampling period, formula_28 where formula_29 and "N" is the sample size, that is, the number of observations in the sample and DFT coefficients. In this case, the RMS computed in the time domain is the same as in the frequency domain: formula_30 Relationship to other statistics. If formula_31 is the arithmetic mean and formula_32 is the standard deviation of a population or a waveform, then: formula_33 From this it is clear that the RMS value is always greater than or equal to the average, since the RMS also includes the squared deviation ("error") about the mean. Physical scientists often use the term "root mean square" as a synonym for standard deviation when it can be assumed the input signal has zero mean, that is, referring to the square root of the mean squared deviation of a signal from a given baseline or fit. This is useful for electrical engineers in calculating the "AC only" RMS of a signal. Since the standard deviation is the RMS of a signal's variation about the mean, rather than about 0, the DC component is removed (that is, RMS(signal) = stdev(signal) if the mean signal is 0). Notes. References.
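The definitions above are easy to check numerically. The following Python sketch (not part of the original article; the 50 Hz frequency and 170 V peak are illustrative values) computes the RMS of one sampled period of a sine wave directly in the time domain, again in the frequency domain via Parseval's theorem, and compares both against the peak-over-√2 rule for a pure sine wave.

```python
import numpy as np

# One full 20 ms period of a 50 Hz sine wave with a 170 V peak (illustrative values)
N = 1000
t = np.arange(N) / N * 0.02
x = 170 * np.sin(2 * np.pi * 50 * t)

# Time-domain RMS: square root of the arithmetic mean of the squares
rms_time = np.sqrt(np.mean(x**2))

# Frequency-domain RMS via Parseval's theorem (numpy's DFT is unnormalised)
X = np.fft.fft(x)
rms_freq = np.sqrt(np.sum(np.abs(X)**2)) / N

print(rms_time, rms_freq, 170 / np.sqrt(2))   # all three agree at about 120.2
```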
[ { "math_id": 0, "text": "x_i" }, { "math_id": 1, "text": "x_\\mathrm{RMS}" }, { "math_id": 2, "text": "\\mathrm{RMS}_x" }, { "math_id": 3, "text": "M_2" }, { "math_id": 4, "text": "f_\\mathrm{RMS}" }, { "math_id": 5, "text": "\\{x_1,x_2,\\dots,x_n\\}" }, { "math_id": 6, "text": "\nx_\\text{RMS} = \\sqrt{ \\frac{1}{n} \\left( {x_1}^2 + {x_2}^2 + \\cdots + {x_n}^2 \\right) }.\n" }, { "math_id": 7, "text": "T_1 \\le t \\le T_2" }, { "math_id": 8, "text": "\nf_\\text{RMS} = \\sqrt {{1 \\over {T_2-T_1}} {\\int_{T_1}^{T_2} {[f(t)]}^2\\, {\\rm d}t}},\n" }, { "math_id": 9, "text": "\nf_\\text{RMS} = \\lim_{T\\rightarrow \\infty} \\sqrt {{1 \\over {2T}} {\\int_{-T}^{T} {[f(t)]}^2\\, {\\rm d}t}}.\n" }, { "math_id": 10, "text": " = 2 \\sqrt{2} \\times \\text{RMS} \\approx 2.8 \\times \\text{RMS}." }, { "math_id": 11, "text": " = 2 \\sqrt{3} \\times \\text{RMS} \\approx 3.5 \\times \\text{RMS}." }, { "math_id": 12, "text": "\\text{RMS}_\\text{Total} =\\sqrt{\\text{RMS}_1^2 + \\text{RMS}_2^2 + \\cdots + \\text{RMS}_n^2}" }, { "math_id": 13, "text": "\\text{RMS}_\\text{AC+DC} = \\sqrt{\\text{V}_\\text{DC}^2 + \\text{RMS}_\\text{AC}^2}" }, { "math_id": 14, "text": "\\text{V}_\\text{DC}" }, { "math_id": 15, "text": "\\text{RMS}_\\text{AC}" }, { "math_id": 16, "text": "P = I^2 R." }, { "math_id": 17, "text": "\\begin{align}\nP_{Avg}\n&= \\left( I(t)^2R \\right)_{Avg} &&\\text{where } \\left( \\cdots \\right)_{Avg} \\text{ denotes the temporal mean of a function} \\\\[3pt]\n&= \\left( I(t)^2 \\right)_{Avg} R &&\\text{(as } R \\text{ does not vary over time, it can be factored out)} \\\\[3pt]\n&= I_\\text{RMS}^2R &&\\text{by definition of root-mean-square}\n\\end{align}" }, { "math_id": 18, "text": "P_\\text{Avg} = {V_\\text{RMS}^2 \\over R}." }, { "math_id": 19, "text": "P_\\text{Avg} = V_\\text{RMS} I_\\text{RMS}." }, { "math_id": 20, "text": "I_\\text{RMS} = \\sqrt{{1 \\over {T_2 - T_1}} \\int_{T_1}^{T_2} \\left[I_\\text{p} \\sin(\\omega t)\\right]^2 dt}," }, { "math_id": 21, "text": "I_\\text{RMS} = I_\\text{p} \\sqrt{{1 \\over {T_2 - T_1}} {\\int_{T_1}^{T_2} {\\sin^2(\\omega t)}\\, dt}}." }, { "math_id": 22, "text": "\\begin{align}\nI_\\text{RMS} &= I_\\text{p} \\sqrt{{1 \\over {T_2 - T_1}} {\\int_{T_1}^{T_2} {{1 - \\cos(2\\omega t) \\over 2}}\\, dt}} \\\\[3pt]\n\t\t &= I_\\text{p} \\sqrt{{1 \\over {T_2 - T_1}} \\left[ {t \\over 2} - {\\sin(2\\omega t) \\over 4\\omega} \\right]_{T_1}^{T_2} }\n\\end{align}" }, { "math_id": 23, "text": "I_\\text{RMS} = I_\\text{p} \\sqrt{{1 \\over {T_2 - T_1}} \\left[ {{t \\over 2}} \\right]_{T_1}^{T_2} } = I_\\text{p} \\sqrt{{1 \\over {T_2 - T_1}} {{{T_2 - T_1} \\over 2}} } = {I_\\text{p} \\over \\sqrt{2}}." }, { "math_id": 24, "text": "V_\\text{RMS} = {V_\\text{p} \\over \\sqrt{2}}," }, { "math_id": 25, "text": "v_\\text{RMS} = \\sqrt{3RT \\over M}" }, { "math_id": 26, "text": "x[n] = x(t=nT)" }, { "math_id": 27, "text": "T" }, { "math_id": 28, "text": "\\sum_{n=1}^N{x^2[n]} = \\frac{1}{N}\\sum_{m=1}^N \\left| X[m] \\right|^2," }, { "math_id": 29, "text": "X[m] = \\operatorname{DFT}\\{x[n]\\}" }, { "math_id": 30, "text": "\n\\text{RMS}\\{x[n]\\}\n= \\sqrt{\\frac{1}{N}\\sum_n{x^2[n]}}\n= \\sqrt{\\frac{1}{N^2}\\sum_m{\\bigl| X[m] \\bigr|}^2}\n= \\sqrt{\\sum_m{\\left| \\frac{X[m]}{N} \\right|^2}}.\n" }, { "math_id": 31, "text": "\\bar{x}" }, { "math_id": 32, "text": "\\sigma_x" }, { "math_id": 33, "text": "x_\\text{rms}^2 = \\overline{x}^2 + \\sigma_x^2 = \\overline{x^2}." } ]
https://en.wikipedia.org/wiki?curid=66819
66824970
1 Chronicles 13
First Book of Chronicles, chapter 13 1 Chronicles 13 is the thirteenth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter contains the account of an unsuccessful attempt to bring the Ark of the Covenant to Jerusalem by David. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30). Text. This chapter was originally written in the Hebrew language. It is divided into 14 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century). The ark brought from Kiriath-Jearim (13:1–4). Verses 1–4 detail the preparations by David involving all Israel in the first attempt to bring the ark into Jerusalem, more than the parallel account in 2 Samuel 6:1–2. The ark is a national symbol of Israel's religion and important for David as he had been firmly and unanimously established as the king of all Israel. Consistent with the earlier chapters, David consulted his military leaders and then the whole congregation (verse 2) to achieve two conditions for the execution of the effort: the willingness of the participants and God's acceptance of the plan. It was later revealed that the plan lacked God's acceptance, as it was done without the significant collaboration of the priests and Levites. "And David said to all the assembly of Israel, "If it seems good to you, and if it is of the Lord our God, let us send out to our brethren everywhere who are left in all the land of Israel, and with them to the priests and Levites who are in their cities and their common-lands, that they may gather together to us;"" Verse 2. The priests and Levites lived within the territories of Israel's tribes (1 Chronicles 6:54–81). Uzza and the ark (13:5–14). Verses 5–14 follows closely to the report in 2 Samuel 6:3–11 (without verse 12). The boundaries of Israel were expanded in Chronicles from the usual phrase "from Beersheba to Dan" to be between "the Shihor river in Egypt and Lebo-hamath"; the area achieved after David's spectacular victories (2 Chronicles 7:8; cf. Joshua 13:3, 5 although the extended regions were not conquered in the time of Joshua). "So David gathered all Israel together, from Shihor in Egypt to as far as the entrance of Hamath, to bring the ark of God from Kirjath Jearim." "And David and all Israel went up to Baalah, to Kirjath Jearim, which belonged to Judah, to bring up from there the ark of God the Lord, who dwells between the cherubim, where His name is proclaimed." "And when they came unto the threshingfloor of Chidon, Uzza put forth his hand to hold the ark; for the oxen stumbled." "And David was displeased, because the Lord had made a breach upon Uzza: wherefore that place is called Perezuzza to this day." 
"So David would not move the ark with him into the City of David, but took it aside into the house of Obed-Edom the Gittite." Verse 13. "The City of David": refers to a section in southern Jerusalem fortified by David and named after him (1 Chronicles 11:7), also may refer to "Mount Zion".. "And the ark of God remained with the household of Obed-edom in his house three months. And the Lord blessed the household of Obed-edom and all that he had." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=66824970
668449
Material derivative
Time rate of change of some physical quantity of a material element in a velocity field In continuum mechanics, the material derivative describes the time rate of change of some physical quantity (like heat or momentum) of a material element that is subjected to a space-and-time-dependent macroscopic velocity field. The material derivative can serve as a link between Eulerian and Lagrangian descriptions of continuum deformation. For example, in fluid dynamics, the velocity field is the flow velocity, and the quantity of interest might be the temperature of the fluid. In which case, the material derivative then describes the temperature change of a certain fluid parcel with time, as it flows along its pathline (trajectory). &lt;templatestyles src="Template:TOC_right/styles.css" /&gt; Other names. There are many other names for the material derivative, including: Definition. The material derivative is defined for any tensor field "y" that is "macroscopic", with the sense that it depends only on position and time coordinates, "y" = "y"(x, "t"): formula_0 where ∇"y" is the covariant derivative of the tensor, and u(x, "t") is the flow velocity. Generally the convective derivative of the field u·∇"y", the one that contains the covariant derivative of the field, can be interpreted both as involving the streamline tensor derivative of the field u·(∇"y"), or as involving the streamline directional derivative of the field (u·∇) "y", leading to the same result. Only this spatial term containing the flow velocity describes the transport of the field in the flow, while the other describes the intrinsic variation of the field, independent of the presence of any flow. Confusingly, sometimes the name "convective derivative" is used for the whole material derivative "D"/"Dt", instead for only the spatial term u·∇. The effect of the time-independent terms in the definitions are for the scalar and tensor case respectively known as advection and convection. Scalar and vector fields. For example, for a macroscopic scalar field "φ"(x, "t") and a macroscopic vector field A(x, "t") the definition becomes: formula_1 In the scalar case ∇"φ" is simply the gradient of a scalar, while ∇A is the covariant derivative of the macroscopic vector (which can also be thought of as the Jacobian matrix of A as a function of x). In particular for a scalar field in a three-dimensional Cartesian coordinate system ("x"1, "x"2, "x"3), the components of the velocity u are "u"1, "u"2, "u"3, and the convective term is then: formula_2 Development. Consider a scalar quantity "φ" = "φ"(x, "t"), where t is time and x is position. Here "φ" may be some physical variable such as temperature or chemical concentration. The physical quantity, whose scalar quantity is "φ", exists in a continuum, and whose macroscopic velocity is represented by the vector field u(x, "t"). The (total) derivative with respect to time of "φ" is expanded using the multivariate chain rule: formula_3 It is apparent that this derivative is dependent on the vector formula_4 which describes a "chosen" path x("t") in space. For example, if formula_5 is chosen, the time derivative becomes equal to the partial time derivative, which agrees with the definition of a partial derivative: a derivative taken with respect to some variable (time in this case) holding other variables constant (space in this case). This makes sense because if formula_6, then the derivative is taken at some "constant" position. This static position derivative is called the Eulerian derivative. 
An example of this case is a swimmer standing still and sensing temperature change in a lake early in the morning: the water gradually becomes warmer due to heating from the sun. In which case the term formula_7 is sufficient to describe the rate of change of temperature. If the sun is not warming the water (i.e. formula_8), but the path x("t") is not a standstill, the time derivative of "φ" may change due to the path. For example, imagine the swimmer is in a motionless pool of water, indoors and unaffected by the sun. One end happens to be at a constant high temperature and the other end at a constant low temperature. By swimming from one end to the other the swimmer senses a change of temperature with respect to time, even though the temperature at any given (static) point is a constant. This is because the derivative is taken at the swimmer's changing location and the second term on the right formula_9 is sufficient to describe the rate of change of temperature. A temperature sensor attached to the swimmer would show temperature varying with time, simply due to the temperature variation from one end of the pool to the other. The material derivative finally is obtained when the path x("t") is chosen to have a velocity equal to the fluid velocity formula_10 That is, the path follows the fluid current described by the fluid's velocity field u. So, the material derivative of the scalar "φ" is formula_11 An example of this case is a lightweight, neutrally buoyant particle swept along a flowing river and experiencing temperature changes as it does so. The temperature of the water locally may be increasing due to one portion of the river being sunny and the other in a shadow, or the water as a whole may be heating as the day progresses. The changes due to the particle's motion (itself caused by fluid motion) is called "advection" (or convection if a vector is being transported). The definition above relied on the physical nature of a fluid current; however, no laws of physics were invoked (for example, it was assumed that a lightweight particle in a river will follow the velocity of the water), but it turns out that many physical concepts can be described concisely using the material derivative. The general case of advection, however, relies on conservation of mass of the fluid stream; the situation becomes slightly different if advection happens in a non-conservative medium. Only a path was considered for the scalar above. For a vector, the gradient becomes a tensor derivative; for tensor fields we may want to take into account not only translation of the coordinate system due to the fluid movement but also its rotation and stretching. This is achieved by the upper convected time derivative. Orthogonal coordinates. It may be shown that, in orthogonal coordinates, the "j"-th component of the convection term of the material derivative of a vector field formula_12 is given by formula_13 where the "h""i" are related to the metric tensors by formula_14 In the special case of a three-dimensional Cartesian coordinate system ("x", "y", "z"), and A being a 1-tensor (a vector with three components), this is just: formula_15 where formula_16 is a Jacobian matrix. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
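As a numerical illustration of the definition, the short Python sketch below evaluates the material derivative of a scalar field at a single point by adding the Eulerian (partial time) derivative to the convective term u·∇φ, both approximated with central finite differences. The field φ(x, y, t) = x² + y² + t and the constant velocity (1, 2) are illustrative choices, not taken from the article.

```python
# Illustrative field: phi(x, y, t) = x**2 + y**2 + t, with constant velocity u = (1, 2).
# Analytically, D(phi)/Dt = d(phi)/dt + u . grad(phi) = 1 + 2*x*u_x + 2*y*u_y.

def phi(x, y, t):
    return x**2 + y**2 + t

ux, uy = 1.0, 2.0            # flow velocity components
x0, y0, t0 = 0.5, -1.0, 0.0  # evaluation point
h = 1e-5                     # finite-difference step

# Eulerian (partial time) derivative at fixed position
dphi_dt = (phi(x0, y0, t0 + h) - phi(x0, y0, t0 - h)) / (2 * h)

# Spatial gradient by central differences
dphi_dx = (phi(x0 + h, y0, t0) - phi(x0 - h, y0, t0)) / (2 * h)
dphi_dy = (phi(x0, y0 + h, t0) - phi(x0, y0 - h, t0)) / (2 * h)

# Material derivative: local change plus convective transport
material = dphi_dt + ux * dphi_dx + uy * dphi_dy

exact = 1 + 2 * x0 * ux + 2 * y0 * uy   # = -2 at this point
print(material, exact)                  # both approximately -2.0
```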
[ { "math_id": 0, "text": "\\frac{\\mathrm{D} y}{\\mathrm{D}t} \\equiv \\frac{\\partial y}{\\partial t} + \\mathbf{u}\\cdot\\nabla y," }, { "math_id": 1, "text": "\\begin{align}\n \\frac{\\mathrm{D}\\varphi}{\\mathrm{D}t} &\\equiv \\frac{\\partial \\varphi}{\\partial t} + \\mathbf{u}\\cdot\\nabla \\varphi, \\\\[3pt]\n \\frac{\\mathrm{D}\\mathbf{A}}{\\mathrm{D}t} &\\equiv \\frac{\\partial \\mathbf{A}}{\\partial t} + \\mathbf{u}\\cdot\\nabla \\mathbf{A}.\n\\end{align}" }, { "math_id": 2, "text": " \\mathbf{u}\\cdot \\nabla \\varphi = u_1 \\frac {\\partial \\varphi} {\\partial x_1} + u_2 \\frac {\\partial \\varphi} {\\partial x_2} + u_3 \\frac {\\partial \\varphi} {\\partial x_3}." }, { "math_id": 3, "text": "\\frac{\\mathrm{d}}{\\mathrm{d} t}\\varphi(\\mathbf x, t) = \\frac{\\partial \\varphi}{\\partial t} + \\dot \\mathbf x \\cdot \\nabla \\varphi." }, { "math_id": 4, "text": "\\dot \\mathbf x \\equiv \\frac{\\mathrm{d} \\mathbf x}{\\mathrm{d} t}," }, { "math_id": 5, "text": " \\dot \\mathbf x= \\mathbf 0" }, { "math_id": 6, "text": "\\dot \\mathbf x = 0" }, { "math_id": 7, "text": " {\\partial \\varphi}/{\\partial t}" }, { "math_id": 8, "text": " {\\partial \\varphi}/{\\partial t} = 0" }, { "math_id": 9, "text": " \\dot \\mathbf x \\cdot \\nabla \\varphi " }, { "math_id": 10, "text": "\\dot \\mathbf x = \\mathbf u." }, { "math_id": 11, "text": "\\frac{\\mathrm{D} \\varphi}{\\mathrm{D} t} = \\frac{\\partial \\varphi}{\\partial t} + \\mathbf u \\cdot \\nabla \\varphi." }, { "math_id": 12, "text": "\\mathbf{A}" }, { "math_id": 13, "text": "[\\left(\\mathbf{u} \\cdot \\nabla \\right)\\mathbf{A}]_j = \n\\sum_i \\frac{u_i}{h_i} \\frac{\\partial A_j}{\\partial q^i} + \\frac{A_i}{h_i h_j}\\left(u_j \\frac{\\partial h_j}{\\partial q^i} - u_i \\frac{\\partial h_i}{\\partial q^j}\\right),\n" }, { "math_id": 14, "text": "h_i = \\sqrt{g_{ii}}." }, { "math_id": 15, "text": "(\\mathbf{u}\\cdot\\nabla) \\mathbf{A} = \n\\begin{pmatrix} \n \\displaystyle\n u_x \\frac{\\partial A_x}{\\partial x} + u_y \\frac{\\partial A_x}{\\partial y}+u_z \\frac{\\partial A_x}{\\partial z}\n \\\\\n \\displaystyle\n u_x \\frac{\\partial A_y}{\\partial x} + u_y \\frac{\\partial A_y}{\\partial y}+u_z \\frac{\\partial A_y}{\\partial z} \n \\\\\n \\displaystyle\n u_x \\frac{\\partial A_z}{\\partial x} + u_y \\frac{\\partial A_z}{\\partial y}+u_z \\frac{\\partial A_z}{\\partial z} \n\\end{pmatrix} =\n\\frac{\\partial (A_x, A_y, A_z)}{\\partial (x, y, z)}\\mathbf{u}\n" }, { "math_id": 16, "text": "\\frac{\\partial(A_x, A_y, A_z)}{\\partial(x, y, z)}" } ]
https://en.wikipedia.org/wiki?curid=668449
66846197
Random graph theory of gelation
Random graph theory of gelation is a mathematical theory for sol–gel processes. The theory is a collection of results that generalise the Flory–Stockmayer theory, and allow identification of the gel point, gel fraction, size distribution of polymers, molar mass distribution and other characteristics for a set of many polymerising monomers carrying arbitrary numbers and types of reactive functional groups. The theory builds upon the notion of the random graph, introduced by mathematicians Paul Erdős and Alfréd Rényi, and independently by Edgar Gilbert in the late 1950s, as well as on the generalisation of this concept known as the random graph with a fixed degree sequence. The theory was originally developed to explain step-growth polymerisation, and adaptations to other types of polymerisation now exist. Along with providing theoretical results, the theory is also constructive. It indicates that the graph-like structures resulting from polymerisation can be sampled with an algorithm using the configuration model, which makes these structures available for further examination with computer experiments. Premises and degree distribution. At a given point in time, the degree distribution formula_0 is the probability that a randomly chosen monomer has formula_1 connected neighbours. The central idea of the random graph theory of gelation is that a cross-linked or branched polymer can be studied separately at two levels: 1) the monomer reaction kinetics, which predicts formula_0, and 2) a random graph with a given degree distribution. The advantage of such a decoupling is that the approach allows one to study the monomer kinetics with relatively simple rate equations, and then deduce the degree distribution serving as input for a random graph model. In several cases the aforementioned rate equations have a known analytical solution. One type of functional groups. In the case of step-growth polymerisation of monomers carrying functional groups of the same type (so-called formula_2 polymerisation) the degree distribution is given by: formula_3 where formula_4 is the bond conversion, formula_5 is the average functionality, and formula_6 is the initial fraction of monomers of functionality formula_7. In the latter expression, a unit reaction rate is assumed without loss of generality. According to the theory, the system is in the gel state when formula_8, where the gelation conversion is formula_9. Analytical expressions for the average molecular weight and the molar mass distribution are known too. When more complex reaction kinetics are involved, for example chemical substitution, side reactions or degradation, one may still apply the theory by computing formula_10 using numerical integration. In this case, formula_11 signifies that the system is in the gel state at time t (or in the sol state when the inequality sign is flipped). Two types of functional groups. When monomers with two types of functional groups A and B undergo step-growth polymerisation by virtue of a reaction between A and B groups, similar analytical results are known. See the table on the right for several examples. In this case, formula_12 is the fraction of initial monomers with formula_13 groups A and formula_14 groups B. Suppose that A is the group that is depleted first. Random graph theory states that gelation takes place when formula_8, where the gelation conversion is formula_15 and formula_16. Molecular size distribution, the molecular weight averages, and the distribution of gyration radii have known formal analytical expressions.
When the degree distribution formula_17, which gives the fraction of monomers in the network with formula_18 neighbours connected via an A group and formula_19 neighbours connected via a B group at time formula_20, is solved numerically, the gel state is detected when formula_21, where formula_22 and formula_23. Generalisations. Known generalisations include monomers with an arbitrary number of functional group types, crosslinking polymerisation, and complex reaction networks. References.
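Referring back to the single-functional-group case above, the following Python sketch computes the gelation conversion and the degree distribution for an illustrative monomer mixture (70% two-functional and 30% three-functional monomers; these fractions, and the unit reaction rate, are assumptions made for the example, not values from the article).

```python
from math import comb

# Illustrative initial functionality distribution: f_m for m = 2, 3
f = {2: 0.7, 3: 0.3}

mu = sum(m * fm for m, fm in f.items())                    # average functionality
c_gel = mu / sum((m * m - m) * fm for m, fm in f.items())  # gelation conversion

def conversion(t):
    # bond conversion for unit reaction rate: c(t) = mu*t / (1 + mu*t)
    return mu * t / (1 + mu * t)

def degree_distribution(n, t):
    # probability that a randomly chosen monomer has n reacted groups at time t
    c = conversion(t)
    return sum(comb(m, n) * c**n * (1 - c)**(m - n) * fm
               for m, fm in f.items() if m >= n)

print("gelation conversion:", round(c_gel, 3))             # about 0.719 for this mixture
print("u(n, t=1):", [round(degree_distribution(n, 1.0), 3) for n in range(4)])
print("gel state at t=1:", conversion(1.0) > c_gel)
```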
[ { "math_id": 0, "text": "u(n)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "A_1 +A_2+A_3+\\cdots" }, { "math_id": 3, "text": "u(n,t)=\\sum_{m=n}^\\infty \\binom{m}{n} c(t)^n \\big(1-c(t)\\big)^{m-n}f_m, " }, { "math_id": 4, "text": " c(t)=\\frac{\\mu t}{1+\\mu t} " }, { "math_id": 5, "text": " \\mu =\\sum_{m=1}^k m f_m" }, { "math_id": 6, "text": " f_m" }, { "math_id": 7, "text": "m" }, { "math_id": 8, "text": " c(t)>c_g " }, { "math_id": 9, "text": " c_g=\\frac{\\sum_{m=1}^{\\infty} m f_m}{\\sum_{m=1}^{\\infty} (m^2-m) f_m} " }, { "math_id": 10, "text": " u(n,t) " }, { "math_id": 11, "text": " \\sum_{n=1}^{\\infty} (n^2-2n)u(n,t)>0 " }, { "math_id": 12, "text": " f_{m,k} " }, { "math_id": 13, "text": " m " }, { "math_id": 14, "text": " k " }, { "math_id": 15, "text": " c_g=\\frac{\\nu_{10}}{\\nu_{11}+\\sqrt{ (\\nu_{20}-\\nu_{10})(\\nu_{02}-\\nu_{01})}} " }, { "math_id": 16, "text": " \\nu_{i,j}=\\sum_{m,k=1}^\\infty m^i k^j f_{m,k} " }, { "math_id": 17, "text": " u(n,l,t) " }, { "math_id": 18, "text": " n " }, { "math_id": 19, "text": " l " }, { "math_id": 20, "text": " t " }, { "math_id": 21, "text": " 2 \\mu \\mu_{11} -\\mu\\mu_{02} -\\mu \\mu_{20} +\\mu_{02}\\mu_{20} - \\mu_{11}^2>0 " }, { "math_id": 22, "text": " \\mu_{i,j}=\\sum_{n,l=1}^\\infty n^i l^j u(n,l,t) " }, { "math_id": 23, "text": " \\mu=\\mu_{01}=\\mu_{10} " } ]
https://en.wikipedia.org/wiki?curid=66846197
66851658
Uniform boundedness conjecture for rational points
Mathematics conjecture about rational points on algebraic curves In arithmetic geometry, the uniform boundedness conjecture for rational points asserts that for a given number field formula_0 and a positive integer formula_1, there exists a number formula_2, depending only on formula_0 and formula_3, such that any algebraic curve formula_4 defined over formula_0 of genus formula_3 has at most formula_2 formula_0-rational points. This is a refinement of Faltings's theorem, which asserts that the set of formula_0-rational points formula_5 is necessarily finite. Progress. The first significant progress towards the conjecture was due to Caporaso, Harris, and Mazur. They proved that the conjecture holds if one assumes the Bombieri–Lang conjecture. Mazur's conjecture B. Mazur's conjecture B is a weaker variant of the uniform boundedness conjecture that asserts that there should be a number formula_6 such that for any algebraic curve formula_4 defined over formula_0 having genus formula_3 and whose Jacobian variety formula_7 has Mordell–Weil rank over formula_0 equal to formula_8, the number of formula_0-rational points of formula_4 is at most formula_6. Michael Stoll proved that Mazur's conjecture B holds for hyperelliptic curves with the additional hypothesis that formula_9. Stoll's result was further refined by Katz, Rabinoff, and Zureick-Brown in 2015. Both of these works rely on Chabauty's method. Mazur's conjecture B was resolved by Dimitrov, Gao, and Habegger in 2021 using the earlier work of Gao and Habegger on the geometric Bogomolov conjecture instead of Chabauty's method. References.
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": "g \\geq 2 " }, { "math_id": 2, "text": "N(K,g)" }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": "C" }, { "math_id": 5, "text": "C(K)" }, { "math_id": 6, "text": "N(K,g,r)" }, { "math_id": 7, "text": "J_C" }, { "math_id": 8, "text": "r" }, { "math_id": 9, "text": "r \\leq g - 3 " } ]
https://en.wikipedia.org/wiki?curid=66851658
66854311
Kubelka–Munk theory
Modelling the appearance of paint coatings The Kubelka-Munk theory, devised by Paul Kubelka and Franz Munk, is a fundamental approach to modelling the appearance of paint films. As published in 1931, the theory addresses "the question of how the color of a substrate is changed by the application of a coat of paint of specified composition and thickness, and especially the thickness of paint needed to obscure the substrate". The mathematical relationship involves just two paint-dependent constants. In their article, "fundamental differential equations" are developed using a two-stream approximation for light diffusing through a coating whose absorption and remission (back-scattering) coefficients are known. The total remission from a coating surface is the summation of: 1) the reflectance of the coating surface; 2) the remission from the interior of the coating; and 3) the remission from the surface of the substrate. The intensity considered in the latter two parts is modified by the absorption of the coating material. The concept is based on the simplified picture of two diffuse light fluxes moving through semi-infinite plane-parallel layers, with one flux proceeding "downward", and the other simultaneously "upward". While Kubelka entered this field through an interest in coatings, his work has influenced workers in other areas as well. In the original article, there is a special case of interest to many fields is "the albedo of an infinitely thick coating". This case yielded the Kubelka–Munk equation, which describes the remission from a sample composed of an infinite number of infinitesimal layers, each having "a"0 as an absorption fraction and "r"0 as a remission fraction. The authors noted that the remission from an infinite number of these infinitesimal layers is "solely a function of the ratio of the absorption and back-scatter (remission) constants "a"0/"r"0, but not in any way on the absolute numerical values of these constants". (The equation is presented in the same mathematical form as in the article, but with symbolism modified.) formula_1 While numerous early authors had developed similar two-constant equations, the mathematics of most of these was found to be consistent with the Kubelka–Munk treatment. Others added additional constants to produce more accurate models, but these generally did not find wide acceptance. Due to its simplicity and its acceptable prediction accuracy in many industrial applications, the Kubelka–Munk model remains very popular. However, in almost every application area, the limitations of the model have required improvements. Sometimes these improvements are touted as extensions of Kubelka–Munk theory, sometimes as embracing more general mathematics of which the Kubelka–Munk equation is a special case, and sometimes as an alternate approach. Paint colors. In the original article, there are several special cases important to paints that are addressed, along with a mathematical definition of hiding power (an ability to hide the surface of an object). The hiding power of a coating measures its ability to obscure a background of contrasting color. Hiding power is also known as opacity or covering power. 
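As a quick numerical check of the Kubelka–Munk equation quoted above, the Python sketch below computes R∞ for a few illustrative values of the ratio a0/r0 (equivalently K/S) and then inverts the result with the remission function (1 − R∞)²/(2R∞), discussed later in the article, which should recover the same ratio; the sample ratios are arbitrary.

```python
import math

def reflectance_infinite(a0_over_r0):
    """Kubelka-Munk remission of an infinitely thick coating for a given a0/r0."""
    q = a0_over_r0
    return 1 + q - math.sqrt(q * q + 2 * q)

def remission_function(r_inf):
    """Inverse relation, (1 - R_inf)**2 / (2 * R_inf), which returns a0/r0."""
    return (1 - r_inf) ** 2 / (2 * r_inf)

for ratio in (0.01, 0.1, 1.0, 10.0):     # illustrative absorption/back-scatter ratios
    r_inf = reflectance_infinite(ratio)
    # the last column recovers the input ratio, confirming the round trip
    print(ratio, round(r_inf, 4), round(remission_function(r_inf), 4))
```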
In the following, R is the fraction of incident light that is remitted (reflected) by a coated substrate under consideration, "R"g is the remission fraction from the substrate alone, "R"c is the remission fraction from the coating, "R"∞ is remission fraction of an infinitely thick layer, formula_2 is the fraction of incident light transmitted by the sample under consideration, and "X" is the coating thickness. In the original article, there is a solution for remission from a coating of finite thickness. Kubelka derived many additional formulas for a variety of other cases, which were published in the post-war years. Whereas the 1931 theory assumed that light flows in one dimension (two fluxes, upward and downward within the layer), in 1948 Kubelka derived the same equations (up to a factor of 2) assuming spherical scatter within the paint layer. Later he generalized the theory to inhomogeneous layers (see below). Paper and paper coatings. The Kubelka–Munk theory is also used in the paper industry to predict optical properties of paper, avoiding a labor-intensive trial-and-error approach. The theory is relatively simple in terms of the number of constants involved, works very well for many papers, and is well documented for use by the pulp and paper industry. If the optical properties (e.g., reflectance and opacity) of each pulp, filler, and dye used in paper-making are known, then the optical properties of a paper made with any combination of the materials can be predicted. If the contrast ratio and reflectivity of a paper are known, the changes in these properties with a change in basis weight can be predicted. While the Kubelka–Munk coefficients are assumed to be linear and independent quantities, the relationship fails in regions of strong absorption, such as in the case of dyed paper. Several theories were proposed to explain the non-linear behavior of the coefficients, attributing the non-linearity to the non-isotropic structure of paper at both the micro- and macroscopic levels. However, using an analysis based on the Kramers–Kronig relations, the coefficients were shown to be dependent quantities related to the real and imaginary part of the refractive index. By accounting for this dependency, the anomalous behavior of the Kubelka–Munk coefficients in regions of strong absorption were fully explained. Semiconductors. The band-gap energy of semiconductors is frequently determined from a Tauc plot, where the quantity formula_11 is plotted against photon energy E. Then the band-gap energy can be obtained by extending the straight segment of the graph to the E axis. There is a simpler method adapted from the Kubelka–Munk theory, in which the band gap is calculated by plotting, formula_12 versus E, where formula_13 is the absorption coefficient. Colors. Early practitioners, especially D. R. Duncan, assumed that in a mixture of pigments, the colors produced in any given medium may be deduced from formulae involving two constants for each pigment. These constants, which vary with the wavelength of the incident light, measure respectively the absorbing power of the pigment for light and its scattering power. The work of Kubelka and Munk was seen as yielding a useful systematic approach to color mixing and matching. 
By resolving the Kubelka–Munk equation for the ratio of absorption to scatter, one can obtain a "remission function": formula_14 We may define formula_15 and formula_16 as absorption and back-scattering coefficients, which replace the absorption and remission fractions "a"0 and "r"0 in the Kubelka–Munk equation above. Then assuming separate additivity of the absorption and coefficients for each of formula_17 components of concentration formula_18: formula_19 For the case of small amount of pigments, the scatter formula_16 is dominated by the base material and is assumed to be constant. In such a case, the equation is linear in concentration of pigment. Spectroscopy. One special case has received much attention in diffuse reflectance spectroscopy: that of an opaque (infinitely thick) coating, which can be applied to a sample modeled as an infinite number of infinitesimal layers. The two-stream approximation was embraced by the early practitioners. There were far more mathematics to choose from, but the name Kubelka–Munk became widely regarded as synonymous with any technique that modeled diffuse radiation moving through layers of infinitesimal size. This was aided by the popular assumption that the Kubelka–Munk function (above) was analogous to the absorbance function in transmission spectroscopy. In the field of infrared spectroscopy, it was common to prepare solid samples by finely grinding the sample with potassium bromide (KBr). This led to a situation analogous to the described in the section just above for pigments, where the analyte had little effect on the scatter, which was dominated by the KBr. In this case, the assumption of the function being linear with concentration was reasonable. However, in the field of near-infrared spectroscopy, the samples are generally measured in their natural (often particulate) state, and deviations from linearity at higher absorption levels were routinely observed. The remission function (also called the Kubelka–Munk function) was almost abandoned in favor of "log(1/"R")". A more general equation, called the Dahm equation, was developed, along with a scheme to separate the effects of scatter from absorption in the log(1/"R") data. In the equation, formula_20 and formula_21 are the measured remission by and transmission through a sample of formula_22 layers, each layer having absorption and remission fractions of formula_13 and R. Note that the so-called ART function formula_23 is constant for any sample thickness. In other areas of spectroscopy, there are shifts away from the strict use of the Kubelka–Munk treatment as well. Failure of continuous models of diffuse reflectance. Continuous models are widely used to model diffuse reflection from particulate samples. They are embodied in various theories, including diffusion theory, the equation of radiation transfer, as well as Kubelka–Munk. In spite of its widespread use, there has long been an understanding that the Kubelka–Munk (K–M) theory has limitations. The term "failure of the Kubelka–Munk theory" has been applied because it does not "remain valid in strongly absorbing materials". There have been many attempts to explain the limitations and amend the K–M equation. In literature related to diffuse-reflection infrared Fourier-transform (DRIFT) spectra, "particularly specular reflection" is often identified as a culprit. In some corners, there is the working assumption that the problem is that the K–M theory is a two-flux theory, and that introducing additional directions will solve the problem. 
In particular, two continuous theories, "diffusion theory" and the "equation of radiation transfer" (ERT), have their advocates. Some of the advocates of the ERT have called to our attention the failure of the ERT to predict the desired linear absorption coefficient as particle size gets large, and blamed it on the hidden mass effect. In 2003, Donald and Kevin Dahm illustrated the degree to which the continuous theories all suffer from the fundamental limitation of trying to model a discontinuous sample as a continuum and suggested that as long as the effect of this limitation is unexplored, there is little reason to search for other reasons for "failure". Spectroscopists have a desire to determine the same absorption coefficient quantity from diffuse reflectance measurements as they would from a transmission measurement on a non-scattering sample of the same material. The Bouguer–Lambert law describes the attenuation of transmitted light as exponential falloff in intensity of a direct beam of light as it passes through a medium. The cause of the attenuation may be absorption or scatter. A coefficient unaffected by scatter is desired by absorption spectroscopists. Mathematically, the Bouguer–Lambert law may be expressed as formula_24 where formula_25 is the linear absorption coefficient, formula_0 is the back-scatter coefficient, and formula_26 is their sum, often called the extinction coefficient. (The symbols formula_27 and formula_28 may be used to represent the absorption and scattering parts of the extinction coefficients.) Through work with the Dahm equation, we know that the ART function is constant for all sample thicknesses of the same material. This would include the infinitesimal layer used in the Kubelka–Munk differentiation. Consequently we may equate numerous functions: formula_29 Using a simple system (albeit rather complex mathematics), it can be shown that continuous models correctly predict the R and formula_2 in the ART function, but do not correctly predict the fractions of incident light that are transmitted directly. From this, it can be deduced that the coefficients formula_15 and formula_16 and not proportional to the formula_25 and formula_0 in the Bouguer–Lambert law. Treatment of inhomogeneous layers. A coating layer is not the same as the substrate it covers. As Kubelka was interested in coatings, he was of course very interested in the handling of what he called "inhomogeneous layers". A set of equations, one of which was believed to apply to the case, had been published by Frank Benford in 1946 for the case of two light streams through plane parallel layers. However, it did not handle it successfully. Kubelka solved the problem, and we illustrate the solution here. First, a case to which the equation of Benford may be straightforwardly applied. The sketch shows two surfaces bounding a slab of a non-absorbing medium. Notice that the assembly would appear identical regardless of which side was being entered. Apart from the surfaces, the medium has no spectroscopic properties. A beam of light of unit intensity reaches the front surface, and by our assumptions, half is remitted and half is transmitted. The portion that is transmitted proceeds to the other surface undiminished. There it is again split where half (1/4 of the original incident intensity) is transmitted and half is remitted. The amounts that remitted from the first surface can be totaled as can the amount that are transmitted through the second. The total remission is 2/3 ≈ 0.667. 
The total transmission is 1/3 ≈ 0.333. Alternatively, we can use the equation of Benford that applies. For two plane parallel layers, "x" and "y", having different properties, the transmission formula_30 remission formula_31, and absorption fractions formula_32 for the two layers can be calculated from the properties of the individual layers (formula_33) from the following equations: formula_34 Next we will examine the case where the medium is absorbing one. While the total assembly would behave the same in either direction, in order to apply the mathematics, we will need to use an intermediate step where it does not. Here we will assume that again the surfaces will remit and transmit 1/2 the amount striking it, this time we will assume that half of the intensity will be absorbed a trip across the slab. A beam of light of unit intensity reaches the front surface, and by our assumptions, half is remitted and half is transmitted, but this time half of the transmitted light, or 1/4 is absorbed before another 1/4 reaches the second surface, and 1/4 is remitted back across the slab to face half of it being absorbed. The sketch shows that the calculated values from the equations should be "R" = 8/15 ≈ 0.533, "T" = 2/15 ≈ 0.133, "A" = 1/3 ≈ 0.333. Now the sketch has three layers, labeled 1, 2, and 3. Layers 1 and 3 remit and transmit 1/2 and absorb nothing. Layer 2 absorbs half and transmits half, but remits nothing. We can build the assembly by first combining layers 1 and 2, and then combining that result as the "x" value in combing with layer 3 (as "y"). So for step 1: formula_35 formula_36 Kubelka has shown by theory and experiment that remittance and absorption of a non-homogeneous specimen depend on the direction of illumination, whereas transmittance does not. Consequently, for non-homogeneous layers, the remission formula_37 from the first layer that occurs in the denominator is the remission when illuminated from the reverse (not the forward) direction, so we will need to know the value for formula_38 for the next step, that is when the layer "x" is layer 2, and layer "y" is layer 1: formula_39 The next step is then to set formula_40 and formula_41, with formula_42 and formula_43, with formula_37 in the denominator as formula_44: formula_45 In computer art. The K–M paint-mixing algorithm has been adapted to directly use the RGB color model by Sochorová and Jamriška in 2021. Their "Mixbox" approach works by converting the inputs into a version of CMYK (phthalo blue, quinacridone magenta, Hansa yellow, and titanium white) plus a residue (to account for the gamut difference), performing the K–M mixing in that latent space, and then producing the output in RGB. There are additional concerns for dealing with wider gamuts and improving speed. This RGB adaptation makes it easier for digital painting software to integrate the more realistic K–M method. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
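The layer-combination arithmetic worked through above is easy to reproduce in code. The sketch below is a direct transcription of the two-layer combination equations (the tuple layout and exact-fraction bookkeeping are implementation choices of this example, not part of the original treatment); it returns R = 2/3 and T = 1/3 for the two bare surfaces, and R = 8/15, T = 2/15, A = 1/3 for the surface, absorbing slab, and surface assembly.

```python
from fractions import Fraction as F

def combine(layer_x, layer_y):
    """Combine two plane-parallel layers, with layer_x on the illuminated side.

    Each layer is a tuple (R_front, R_back, T): remission when lit from the
    front, remission when lit from the back, and transmission (which is
    direction-independent, as noted in the text)."""
    Rx, Rx_back, Tx = layer_x
    Ry, Ry_back, Ty = layer_y
    denom = 1 - Rx_back * Ry                       # geometric series of internal bounces
    T = Tx * Ty / denom
    R_front = Rx + Tx * Tx * Ry / denom            # remission seen from the x side
    R_back = Ry_back + Ty * Ty * Rx_back / denom   # remission seen from the y side
    return (R_front, R_back, T)

half = F(1, 2)
surface = (half, half, half)        # remits 1/2, transmits 1/2, absorbs nothing
absorber = (F(0), F(0), half)       # remits nothing, transmits 1/2, absorbs 1/2

# Two bare surfaces around a non-absorbing medium: R = 2/3, T = 1/3
R, _, T = combine(surface, surface)
print(R, T)

# Surface + absorbing slab + surface: R = 8/15, T = 2/15, A = 1/3
R, _, T = combine(combine(surface, absorber), surface)
print(R, T, 1 - R - T)
```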
[ { "math_id": 0, "text": "s" }, { "math_id": 1, "text": "R_\\infty = 1 + \\frac {a_0}{r_0} - \\sqrt{ \\frac{a_0^2}{r_0^2} + 2 \\frac{a_0}{r_0} }." }, { "math_id": 2, "text": "T" }, { "math_id": 3, "text": "a = 0," }, { "math_id": 4, "text": "R_\\infty = 1." }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "R = r_0 X / (r_0 X + 1)." }, { "math_id": 7, "text": "R_\\text{c} = 0" }, { "math_id": 8, "text": "a_0 X." }, { "math_id": 9, "text": "R = R_g \\exp(-2a_0 X)" }, { "math_id": 10, "text": "R_\\infty = 0." }, { "math_id": 11, "text": "\\sqrt{F(R_\\infty)E}" }, { "math_id": 12, "text": "(aE)^2" }, { "math_id": 13, "text": "a" }, { "math_id": 14, "text": "\nF(R_\\infty) \\equiv \\frac{(1 - R_\\infty)^2}{2R_\\infty} = \\frac{a_0}{r_0}.\n" }, { "math_id": 15, "text": "K" }, { "math_id": 16, "text": "S" }, { "math_id": 17, "text": "i" }, { "math_id": 18, "text": "c_i" }, { "math_id": 19, "text": "\n\\frac{(1 - R_\\infty)^2}{2R_\\infty} = \\frac{a_0}{r_0} = \\frac{K}{S} = \\frac{\\Sigma(c_i K_i)}{\\Sigma(c_i S_i)} \\approx \\frac{\\Sigma(c_i K_i)}{S}.\n" }, { "math_id": 20, "text": "R_n" }, { "math_id": 21, "text": "T_n" }, { "math_id": 22, "text": "n" }, { "math_id": 23, "text": "A(R, T) \\equiv \\frac{(1 - R_n)^2 - T_n^2}{R_n} = \\frac{(2 - a - 2r)a}{r}" }, { "math_id": 24, "text": "\n\\Tau(direct) = \\exp(-kd) \\exp(-sd) = \\exp[-(k + s)d)] = \\exp(-\\epsilon d),\n" }, { "math_id": 25, "text": "k" }, { "math_id": 26, "text": "\\epsilon" }, { "math_id": 27, "text": "\\mu_a " }, { "math_id": 28, "text": "\\mu_s" }, { "math_id": 29, "text": "\n2F(R) = A(R,T) = \\frac{(2 - a - 2r)a}{r} \\approx 2\\frac{a_0}{r_0} = 2\\frac{a_0/dx}{r_0/dx} = 2\\frac{K}{S}.\n" }, { "math_id": 30, "text": "T_{xy}," }, { "math_id": 31, "text": "R_{xy}" }, { "math_id": 32, "text": "A_{xy}" }, { "math_id": 33, "text": "T_x, R_x, T_y, R_y" }, { "math_id": 34, "text": "\\begin{aligned}\n T_{xy} &= \\frac{T_x T_y}{1 - R_{(-x)} R_y} = \\frac {(1/2)(1/2)}{1 - (1/2)(1/2)} = \\frac{1/4}{3/4} = 1/3, \\\\\n R_{xy} &= R_x + \\frac{T_x^2 R_y}{1 - R_{(-x)} R_y} = 1/2 + \\frac{(1/2)(1/2)(1/2)}{1 - (1/2)(1/2)} = 2/3, \\\\\n A_{xy} &= 1 - T_{xy} - R_{xy} = 1 - 1/2 - 1/2 = 0.\n\\end{aligned}" }, { "math_id": 35, "text": "R_x = 1/2, T_x = 1/2, R_y = 0, T_y = 1/2," }, { "math_id": 36, "text": "\\begin{aligned}\n T_{xy} &= \\frac{T_x T_y}{1 - R_{(-x)} R_y} = \\frac{(1/2)(1/2)}{1 - (1/2)(0)} = 1/4, \\\\\n R_{xy} &= R_x + \\frac{T_x^2 R_y}{1 - R_{(-x)} R_y} = 1/2 + \\frac{(1/2)(1/2)0}{1 - (1/2)0} = 1/2, \\\\\n A_{xy} &= 1 - T_{xy} - R_{xy} = 1 - 1/4 - 1/2 = 1/4.\n\\end{aligned}" }, { "math_id": 37, "text": "R_{(-x)}" }, { "math_id": 38, "text": "R_{21}" }, { "math_id": 39, "text": "\nR_{yx} = R_y + \\frac{T_y^2 R_x}{1 - R_{(-y)} R_x} = 0.0 + \\frac{(1/2)(1/2)(1/2)}{1 - 0(1/2)} = 1/8.\n" }, { "math_id": 40, "text": "R_x = 1/2" }, { "math_id": 41, "text": "T_x = 1/4" }, { "math_id": 42, "text": "R_y = 1/2" }, { "math_id": 43, "text": "T_y = 1/2" }, { "math_id": 44, "text": "1/8" }, { "math_id": 45, "text": "\\begin{aligned}\n T_{xy} &= \\frac{T_x T_y}{1 - R_x R_y } = \\frac{(1/2)(1/4)}{1 - (1/8)(1/2)} = 2/15, \\\\\n R_{xy} &= R_x + \\frac {T_x^2 R_y}{1 - R_{(-x)} R_y} = 1/2 + \\frac{(1/4)(1/4)(1/2)}{1 - (1/8)(1/2)} = 8/15, \\\\\n A_{xy} &= 1 - T_{xy} - R_{xy} = 1 - 2/15 - 8/15 = 1/3.\n\\end{aligned}" } ]
https://en.wikipedia.org/wiki?curid=66854311
66854421
Surface equivalence principle
In electromagnetism, the surface equivalence principle or surface equivalence theorem relates an arbitrary current distribution within an imaginary closed surface to an equivalent source on the surface. It is also known as the field equivalence principle, Huygens' equivalence principle or simply the equivalence principle. Being a more rigorous reformulation of the Huygens–Fresnel principle, it is often used to simplify the analysis of radiating structures such as antennas. Certain formulations of the principle are also known as the Love equivalence principle and the Schelkunoff equivalence principle, after Augustus Edward Hough Love and Sergei Alexander Schelkunoff, respectively. Physical meaning. General formulation. The principle yields an equivalent problem for a radiation problem by introducing an imaginary closed surface and fictitious surface current densities. It is an extension of the Huygens–Fresnel principle, which describes each point on a wavefront as a spherical wave source. The equivalence of the imaginary surface currents is enforced by the uniqueness theorem in electromagnetism, which dictates that a unique solution can be determined by fixing a boundary condition on a system. With the appropriate choice of the imaginary current densities, the fields inside the surface or outside the surface can be deduced from the imaginary currents. In a radiation problem with given current density sources, electric current density formula_0 and magnetic current density formula_1, the tangential field boundary conditions necessitate that formula_2 formula_3 where formula_4 and formula_5 correspond to the imaginary current sources that are impressed on the closed surface. formula_6 and formula_7 represent the electric and magnetic fields inside the surface, respectively, while formula_8 and formula_9 are the fields outside of the surface. Both the original and imaginary currents should produce the same external field distributions. Love and Schelkunoff equivalence principles. Per the boundary conditions, the fields inside the surface and the current densities can be chosen arbitrarily as long as they produce the same external fields. Love's equivalence principle, introduced in 1901 by Augustus Edward Hough Love, takes the internal fields as zero: formula_10 formula_11 The fields inside the surface are referred to as null fields. Thus, the surface currents are chosen so as to sustain the external fields in the original problem. Alternatively, a Love equivalent problem for the field distribution inside the surface can be formulated: this requires the negatives of the surface currents obtained for the external radiation case. Thus, the surface currents radiate the fields of the original problem inside the surface; nevertheless, they produce null external fields. The Schelkunoff equivalence principle, introduced by Sergei Alexander Schelkunoff, substitutes the closed surface with a perfectly conducting material body. In the case of a perfect electrical conductor, the electric currents that are impressed on the surface will not radiate, due to Lorentz reciprocity. Thus, the original currents can be substituted with surface magnetic currents only. A similar formulation for a perfect magnetic conductor would use impressed electric currents. The equivalence principles can also be applied to conductive half-spaces with the aid of the method of image charges. Applications.
The surface equivalence principle is heavily used in the analysis of antenna problems: in many applications, the closed surface is chosen so as to encompass the conductive elements, which simplifies the limits of integration. Selected uses in antenna theory include the analysis of aperture antennas and the cavity model approach for microstrip patch antennas. It has also been used as a domain decomposition method for method-of-moments analysis of complex antenna structures. Schelkunoff's formulation is employed particularly for scattering problems. The principle has also been used in the analysis and design of metamaterials such as Huygens’ metasurfaces and plasmonic scatterers. References.
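As a minimal numerical illustration of the Love boundary conditions stated above (with null fields inside the surface), the Python sketch below evaluates J_s = n̂ × H1 and M_s = −n̂ × E1 at a single surface point; the field values and the normal direction are arbitrary illustrative numbers, not data from the article.

```python
import numpy as np

n_hat = np.array([0.0, 0.0, 1.0])       # outward unit normal of the closed surface
E1 = np.array([1.0, 0.5, 0.0])          # electric field just outside the surface (V/m)
H1 = np.array([0.0, 2.65e-3, 0.0])      # magnetic field just outside the surface (A/m)

# Love's equivalence principle: with null internal fields, the equivalent currents are
J_s = np.cross(n_hat, H1)               # electric surface current density
M_s = -np.cross(n_hat, E1)              # magnetic surface current density

print("J_s =", J_s)                     # approximately [-0.00265, 0.0, 0.0]
print("M_s =", M_s)                     # approximately [0.5, -1.0, 0.0]
```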
[ { "math_id": 0, "text": "J_1" }, { "math_id": 1, "text": "M_1" }, { "math_id": 2, "text": "J_s = \\hat{n} \\times (H_1 - H)" }, { "math_id": 3, "text": "M_s = -\\hat{n} \\times (E_1 - E)" }, { "math_id": 4, "text": "J_s" }, { "math_id": 5, "text": "M_s" }, { "math_id": 6, "text": "E" }, { "math_id": 7, "text": "H" }, { "math_id": 8, "text": "E_1" }, { "math_id": 9, "text": "H_1" }, { "math_id": 10, "text": "J_s = \\hat{n} \\times H_1" }, { "math_id": 11, "text": "M_s = -\\hat{n} \\times E_1" } ]
https://en.wikipedia.org/wiki?curid=66854421
6685900
College Scholastic Ability Test
South Korean standardized test &lt;/table&gt; The College Scholastic Ability Test or CSAT (, Hanja: ), also abbreviated Suneung (, Hanja: ), is a standardized test which is recognized by South Korean universities. The Korea Institute of Curriculum and Evaluation (KICE) administers the annual test on the third Thursday in November. In 2020, however, it was postponed to the first Thursday in December (December 3), due to the COVID-19 pandemic. CSAT was originally designed to assess the scholastic ability required for college. Because the CSAT is primary factor considered during the Regular Admission round, it plays an important role in South Korean education. The test has been cited for its efficiency, emphasis on merit, and good international results. Of the students taking the test, 20 percent are high-school graduates who did not achieve their desired score the previous year. Despite the emphasis on the CSAT, it is not a requirement for a high school diploma. On test day, the KRX stock market opens late, and bus and metro service is increased to avoid traffic jams and allow students to get to the testing sites more easily. Planes are grounded during the listening portion of the English section so their noise does not disturb the students. In some cases, students running late for the test may be escorted to their testing site by police officers via motorcycle. Younger students and members of the students' families gather outside testing sites to cheer them on. Purpose. The CSAT is designed to test a candidate's ability to study in college, with questions based on Korea's high-school curriculum. It standardizes high-school education and provides accurate, objective data for university admission. Schedule. All questions are multiple-choice, except for the 9 questions in the Mathematics section, which are short answer. Sections. The CSAT consists of six sections: national language (Korean), mathematics, English, Korean history, subordinate subjects (social studies, sciences, and vocational education), and second foreign language/Chinese characters and classics. All sections are optional except Korean history, but most candidates take all the other sections except second foreign language/Classical Chinese. In the mathematics section, candidates are made to take Math I (which consists of Logarithm, Sequences and Trigonometry) and Math II (which consists of Limits and Calculus on polynomials), and allowed to select one among Probability and Statistics, Geometry and Calculus. The subordinate subjects is divided into three sections: social studies, science, and vocational education. Candidates may choose up to two subjects, but may not select from different sections at the same time; Physics II and Biology I may be chosen for the subordinate section since both are sciences, but World history and Principles of Accounting may not – the former is in the social studies section, and the latter in vocational education. Only vocational high-school graduates can choose the vocational education section. In the second foreign language/Classical Chinese section, the candidate chooses one subject. Most high-ranked universities require applicants to take two science subordinate subjects and Geometry or Calculus in the mathematics section if they apply for a STEM major, and do not accept subordinate subjects in the same field (such as Physics I and Physics II). National Language. 
National Language. In the National Language section, candidates are assessed on their ability to read, understand and analyse Korean texts rapidly and accurately. The section's 45 questions are classified into four categories, grouped into common topics and elective topics; candidates select one of the two elective options, which cover questions 35-45. Common subjects. Reading. This category consists of four articles, from the topics reading theory, humanities/arts, law/economy and science/technology. Each passage has 3-6 questions. Candidates need to answer questions such as, "Of the five statements below, which one does not agree with the passage above?" or "According to the passage, which one is the correct analysis of the following example?" Literature. This category consists of texts from five categories: modern poetry, classic poetry, modern novel, classical prose and play/essay. Candidates may be asked to summarise a single passage or outline a common theme between multiple texts (sometimes of different text types), among many other question types. Elective subjects. Speech and writing. This category consists of 11 questions relating to three texts. Language and Media. Language covers questions 35-39 and relates to four topics: phonology, syntax, morphology and the history of Korean; an additional topic may be used to complete the required five questions, or two questions may be taken from morphology or syntax. Media covers questions 40-45 and relates to the characteristics of media and the creation of an online post or message. Mathematics. All mathematics candidates take Maths I and II and select one elective topic from three choices: Calculus, Geometry or Probability and Statistics. Calculus is most preferred by students applying for natural science majors, while Probability and Statistics is preferred by students applying for the humanities. Geometry is the least popular, with only 4.1% of students selecting it as their elective. The Ga and Na type system was abolished from 2022 onwards, which means that students applying for natural science majors no longer have to study all three topics. Writing of the test. The test is written in September each year by about 500 South Korean teachers through a secretive process in an undisclosed location in Gangwon. The test writers are prohibited from communicating with the outside world. Administration. High-school graduates and students about to graduate high school may take the test. After the KICE prints test papers and OMR cards, they are distributed three days before the test to each test area. In 2018, there were 85 test areas. Test monitors are middle- or high-school teachers. Superintendents of each education office decide who will monitor and where they will go. There are two test monitors for each period, except for the fourth period (which has three, because of test-paper collection). Most testing rooms are high-school classrooms, and there is a 28-candidate limit in each room. Except for the English and Korean-history sections, grades are based on a stanine curve. Grade, percentile, and a standard score for each section and subject are added to the transcript. The standard score is calculated by the following formula: formula_0 Here formula_1 is the standard score and formula_2 is the candidate's standardized (z) score. formula_3 is the standard deviation of the standard score, and formula_4 is its average. In the national-language and mathematics sections, formula_3 is 20 and formula_4 is 100. For the rest, formula_3 is 10 and formula_4 is 50. formula_2 is calculated by the following formula: formula_5 formula_6 is the candidate's original score, formula_7 is the average of the original scores of the formula_8 candidates, and formula_9 is the standard deviation of the original scores.
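As a numerical illustration of the two scoring formulas above, the conversion from an original score to a standard score can be reproduced in a few lines. This is only a sketch; the cohort mean and standard deviation used below are invented for the example, not real KICE statistics:

```python
# Minimal sketch of the standard-score conversion described above.
# The cohort statistics are made up for illustration only.

def standard_score(x, cohort_mean, cohort_sd, target_mean, target_sd):
    """Convert an original score x into a CSAT-style standard score."""
    z = (x - cohort_mean) / cohort_sd    # candidate's z-score
    return z * target_sd + target_mean   # rescale to the reporting scale

# National language and mathematics report with mean 100 and SD 20.
# Suppose (hypothetically) the cohort averaged 62 points with SD 18:
print(round(standard_score(89, 62, 18, 100, 20)))   # 130

# Other subjects report with mean 50 and SD 10:
print(round(standard_score(41, 33.5, 7.5, 50, 10))) # 60
```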
Examples. Although the CSAT is often compared to the US SAT, the two differ in their relative importance. Mathematics. Since CSAT problems are designed for all high school students, the overall difficulty is moderate, but some questions are notoriously tricky; these are known as 'killer problems'. Here are some killer problems that have appeared on the test. The 30th problem in type Ga of the 2017 CSAT was: A function formula_10 defined for formula_11, where formula_8 is a constant, and a quartic function formula_12 whose leading coefficient is formula_13 satisfy the three conditions below: A) For all real numbers formula_6 such that formula_11, formula_14. B) For two different real numbers formula_15 and formula_16, formula_10 has the same local maximum formula_17 at formula_18 and formula_19. (formula_20) C) formula_10 has more local extrema than formula_12 does. Given that formula_21, find the minimum of formula_17. The 29th mathematics problem in the 1997 CSAT was: If two equations formula_22 and formula_23 have 7 and 9 solutions respectively, and the set formula_24 is an infinite set, then formula_25, the number of elements in formula_26's subset formula_27, varies according to the values of formula_28 and formula_29. Find the maximum of formula_25. The 29th problem in mathematics subject type B (the former Ga) of the 2014 CSAT was: formula_30 and formula_31 are points on the sphere formula_32. formula_33 and formula_34 are the feet of the perpendiculars from formula_30 and formula_31 to the plane formula_35, respectively. formula_36 and formula_37 are the feet of the perpendiculars from formula_30 and formula_31 to the plane formula_38, respectively. Find the maximum of formula_39. The 30th problem in type Ga of the 2019 CSAT was: A function formula_10 is a cubic function whose leading coefficient is formula_40, and a function formula_41 attains local extrema at formula_18. List every formula_15 with formula_42 from least to greatest as formula_43. formula_10 and formula_12 satisfy the following: A) formula_44 and formula_45. B) formula_46. C) formula_47. Let formula_48. Find the value of formula_49. The correct answers are 216, 15, 24, and 27, respectively. English. The following question appeared on the 2011 CSAT and had a correct-response rate of 13 percent. The paragraph is excerpted from John Leofric Stocks' "The Limits of Purpose": So far as you are wholly concentrated on bringing about a certain result, clearly, the quicker and easier it is brought about the better. Your resolve to secure a sufficiency of food for yourself and your family will induce you to spend weary days in tilling the ground and tending livestock; but if Nature provided food and meat in abundance ready for the table, you would thank Nature for sparing you much labor and consider yourself so much the better off. An executed purpose, in short, is a transaction in which the time and energy spent on the execution are balanced against the resulting assets, and the ideal case is one in which __________________. Purpose, then, justifies the efforts it exacts only conditionally, by their fruits. Preliminary College Scholastic Ability Test. The Preliminary College Scholastic Ability Test (PCSAT) is administered nationally. The relationship between the PCSAT and the CSAT is comparable to that between the PSAT and the SAT in the United States.
The PCSAT is divided into two categories: the National United Achievement Tests (NUAT) and the College Scholastic Ability Test Simulation (CSAT Simulation). These tests are more similar to the CSAT than privately administered mock tests, since the PCSAT's examiner committee is similar to that of the CSAT. The CSAT Simulation is hosted by the same institution as the CSAT, and is used to predict the level of difficulty or types of questions which might appear on that year's CSAT. Although the NUAT and the CSAT Simulation are similar to the CSAT in their number of candidates, types of questions and relative difficulty, the NUAT is hosted by the Ministry of Education for high-school students. The CSAT Simulation is run by KICE and may be taken by anyone who is eligible for the CSAT. Both exams are reliable, official mock tests for the CSAT, and both are graded by the KICE. National United Achievement Test. The National United Achievement Test (NUAT) is administered in the same way as the CSAT, and was introduced in 2002 to relieve dependence on private mock tests. High-school students may apply to take the test, and local education offices decide whether it will be administered in their districts. Every office of education in South Korea normally participates in the NUAT to prepare students for the CSAT, and the number of applicants parallels that of the CSAT. The Seoul Metropolitan Office of Education, Busan Metropolitan Office of Education (freshmen and sophomores), Gyeonggi-do Office of Education, and Incheon Office of Education take turns creating the questions, and the KICE grades the test and issues report cards. The basic structure of the exam is identical to the CSAT. For mathematics, social studies, science and second language, the range of material covered is determined by when the test is conducted. In the Korean and English sections, the questions are not taken directly from textbooks but are constructed in accordance with the curriculum. As of 2014, there are four NUATs per year; the schedule is not the same for every district, however, and some have only two exams per year for freshmen and sophomores. The NUAT for freshmen and sophomores is held in March, June, September and November; seniors are tested in March, April, July and October to avoid conflict with June and September, when the CSAT Simulation is given. College Scholastic Ability Test Simulation. The College Scholastic Ability Test Simulation (CSAT Simulation) is given by KICE. Unlike the NUAT, this test may be taken by anyone who is eligible for the CSAT. The CSAT Simulation was introduced after the CSAT failed to set the proper difficulty level in 2001 and 2002. First implemented in 2002, it was held only in September during its early years. The test has been given twice a year, in June and September, since 2004. The June exam covers everything in the curriculum for the Korean- and second-language sections, and two-thirds of what the CSAT covers for the other sections. The September exam covers everything in every section, like the CSAT. The number of questions and test time per section is identical to the CSAT. History. Since the liberation of Korea, South Korea has changed its methods of university and college admission between twelve and sixteen times. The policies have ranged from allowing colleges to choose students to outlawing hagwons. Parents and students have had difficulty adjusting to the changes. The changes have been cited as evidence of systemic instability and the sensitivity of the admission process to public opinion.
University and college admissions were first left to the universities, and the first incarnation of the CSAT appeared in the early 1960s. The Supreme Council for National Reconstruction established an early CSAT from 1962 to 1963 as a qualification test for students. Due to the small number of students passing the test, colleges soon had a student shortage. The admissions process was criticized as inefficient, and the government scrapped the policy from 1964 to 1968. A similar policy was adopted in 1969 by the Third Republic of Korea, and the new test was the Preliminary College Entrance Examination (대학입학예비고사); it continued, mostly unchanged, until 1981. That year, the policy was significantly changed. The test name was changed to Preliminary College Preparations Examination (대학예비고사), and hagwons (cram schools) were outlawed. In 1982, the test name was changed again to College Entrance Strength Test (대입학력고사). The current CSAT system was established in 1993, and has undergone several revisions since then. In 2004, the government of South Korea introduced a 2008 College Admissions Change Proposal; however, it failed to bring about significant changes. Present day. The test, based on national-standard textbooks, is designed to encourage cognitive skills. The Korea Institute of Curriculum and Evaluation creates the problems, prints and corrects the tests, supervises the test-making, and sets the test fee. The problems are created by KICE members who are university professors and high-school teachers. Two groups make the problems: one creates them, and the other checks them. The creators are primarily professors, although high-school teachers have been included since 2000. The problem-checkers are high-school teachers. Both groups sign non-disclosure agreements with the KICE. In 2012, a total of 696 staff members were involved in creating the problems. A member of the group earns about $300 per day. The 2016 subjects were national language, mathematics, English, Korean history, social studies/science/vocational education, and foreign language/Hanja. Although students may choose all (or some) of the subjects, Korean history is required. Social studies is divided into life and ethics, ethics and ideologies, Korean geography, world geography, East Asian history, world history, law and politics, society and culture, and economics; students may choose two subjects. In the science section, students can choose two subjects from Physics 1 and 2, Chemistry 1 and 2, Biology 1 and 2, and Earth Science 1 and 2. Vocational education is divided into agricultural science, industry, commerce, oceanography, and home economics; students must choose one subject. However, vocational education may only be taken if the student has completed 80 percent of the expert studies. Foreign language is divided into German 1, French 1, Spanish 1, Chinese 1, Japanese 1, Russian 1, Arabic 1, basic Vietnamese, and Classical Chinese 1. Students can choose one subject. After the test, administrators collect, scan, and grade the answer sheets. Test correction (confirming the documentation and grades) and printing the results take about one month. However, test takers sometimes use unofficial websites to estimate how well they performed soon after taking the test. The test is taken seriously, and day-to-day operations are halted or delayed on test day. Many shops, banks, construction projects, military training exercises, flights, and other activities and establishments are closed, postponed, or canceled. The KRX stock market opens late.
Neither students nor administrators may bring in cell phones, books, newspapers, food, or any other material which could distract other test-takers. Most complaints after the test involve administrator actions such as talking, opening windows, standing in front of a desk, sniffling, clicking a computer mouse, eating candy, and walking. Administrators are warned against doing anything which could distract students in any way. In the 1990s, a rumor circulated that students who had the S-shaped emblem from a Hyundai Sonata could get into a prestigious university (Seoul National University), and that those who had the 'III' lettering from a Sonata III badge could score 300 on the CSAT. This led to the so-called 'Onata' incident, in which test takers secretly removed Sonata III emblems; many Sonata IIIs were left with the 'S' and 'III' missing from their badges, and Hyundai Motors eventually implemented a free emblem-replacement service. Pressure to perform well on the CSAT has been linked to psychological stress, depression and suicide. The highly competitive exam is also cited as a contributing factor to South Korea's declining birth rate, as parents feel pressure to pay for expensive "hagwon" cram schools to help their children study. Critics also say that students from wealthier families have an advantage due to the prevalence of cram schools, and that the test detracts from students' education with its emphasis on rote memorization and topics that are distinct from the curriculum followed in schools.
[ { "math_id": 0, "text": "S = z \\sigma + m" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "z" }, { "math_id": 3, "text": "\\sigma" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": "z = \\frac{x - m_0}{\\sigma_0}" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "m_0" }, { "math_id": 8, "text": "a" }, { "math_id": 9, "text": "\\sigma_0" }, { "math_id": 10, "text": "f(x)" }, { "math_id": 11, "text": "x>a" }, { "math_id": 12, "text": "g(x)" }, { "math_id": 13, "text": "-1" }, { "math_id": 14, "text": "(x-a)f(x)=g(x)" }, { "math_id": 15, "text": "\\alpha" }, { "math_id": 16, "text": "\\beta" }, { "math_id": 17, "text": "M" }, { "math_id": 18, "text": "x=\\alpha" }, { "math_id": 19, "text": "x=\\beta" }, { "math_id": 20, "text": "M>0" }, { "math_id": 21, "text": "\\beta-\\alpha=6\\sqrt3" }, { "math_id": 22, "text": "P(x)=0" }, { "math_id": 23, "text": "Q(x)=0" }, { "math_id": 24, "text": "A=\\{(x,y)\\mid P(x)Q(y)=0, \\ Q(x)P(y)=0 \\ and \\ x,y \\in \\R\\}" }, { "math_id": 25, "text": "n(B)" }, { "math_id": 26, "text": "A" }, { "math_id": 27, "text": " B=\\{(x,y)\\mid (x,y)\\in A \\ and \\ x=y\\}" }, { "math_id": 28, "text": "P(x)" }, { "math_id": 29, "text": "Q(x)" }, { "math_id": 30, "text": "P" }, { "math_id": 31, "text": "Q" }, { "math_id": 32, "text": "x^2+y^2+z^2=4" }, { "math_id": 33, "text": "P_1" }, { "math_id": 34, "text": "Q_1" }, { "math_id": 35, "text": "y=4" }, { "math_id": 36, "text": "P_2" }, { "math_id": 37, "text": "Q_2" }, { "math_id": 38, "text": "y+\\sqrt{3}z+8=0" }, { "math_id": 39, "text": "2\\left\\vert \\overrightarrow{PQ} \\right\\vert^2-\\left\\vert \\overrightarrow{P_1Q_1} \\right\\vert^2-\\left\\vert \\overrightarrow{P_2Q_2} \\right\\vert^2" }, { "math_id": 40, "text": "6\\pi" }, { "math_id": 41, "text": "g(x)=\\frac{1}{2+\\sin(f(x))}" }, { "math_id": 42, "text": "\\alpha \\geq 0" }, { "math_id": 43, "text": "\\alpha_1,\\ \\alpha_2,\\ \\alpha_3,\\ \\cdots" }, { "math_id": 44, "text": "\\alpha_1 =0" }, { "math_id": 45, "text": "g(\\alpha_1)=\\frac{2}{5}" }, { "math_id": 46, "text": "\\frac{1}{g(\\alpha_5)}=\\frac{1}{g(\\alpha_2)}+\\frac{1}{2}" }, { "math_id": 47, "text": "0<f(0)<\\frac{\\pi}{2}" }, { "math_id": 48, "text": "g'(-\\frac{1}{2})=a\\pi" }, { "math_id": 49, "text": "a^2" } ]
https://en.wikipedia.org/wiki?curid=6685900