75548132
GYO algorithm
The GYO algorithm is an algorithm that applies to hypergraphs. The algorithm takes as input a hypergraph and determines if the hypergraph is α-acyclic. If so, it computes a decomposition of the hypergraph. The algorithm was proposed in 1979 by Graham and independently by Yu and Özsoyoğlu, hence its name. Definition. A hypergraph is a generalization of a graph. Formally, a hypergraph formula_0 consists of a set of vertices "V", and of a set "E" of hyperedges, each of which is a subset of the vertices "V". Given a hypergraph, we can define its "primal graph" as the undirected graph defined on the same set of vertices, in which we put an edge between any two vertices which occur together in some hyperedge. A hypergraph "H" is α-acyclic if it satisfies two conditions: being chordal and being conformal. More precisely, we say that "H" is chordal if its primal graph is a chordal graph. We say that "H" is conformal if, for every clique of the primal graph, there is a hyperedge of "H" containing all the vertices of the clique. The GYO algorithm takes as input a hypergraph and determines if it is α-acyclic in this sense. Principle of the algorithm. The algorithm iteratively removes the so-called "ears" of the hypergraph, until the hypergraph is fully decomposed. Formally, we say that a hyperedge "e" of a hypergraph formula_1 is an ear if one of the following two conditions holds: either formula_2 is isolated, i.e., formula_4 for every other hyperedge formula_3; or there exists another hyperedge formula_5 such that every vertex of formula_6 occurs in no hyperedge other than formula_2. In particular, every edge that is a subset of another edge is an ear. The GYO algorithm then proceeds as follows: it repeatedly finds an ear of the current hypergraph, removes the vertices that occur only in that ear, and removes the ear itself. If the algorithm successfully eliminates all vertices, then the hypergraph is α-acyclic. Otherwise, if the algorithm gets to a non-empty hypergraph that has no ears, then the original hypergraph was not α-acyclic: <templatestyles src="Math_proof/styles.css" />The GYO algorithm ends on the empty hypergraph if and only if H is formula_11-acyclic Assume first that the GYO algorithm ends on the empty hypergraph, let formula_7 be the sequence of ears that it has found, and let formula_8 be the sequence of hypergraphs obtained (in particular formula_9 and formula_10 is the empty hypergraph). It is clear that formula_10, the empty hypergraph, is formula_11-acyclic. One can then check that, if formula_12 is formula_11-acyclic, then formula_13 is also formula_11-acyclic. This implies that formula_14 is indeed formula_11-acyclic. For the other direction, assuming that formula_1 is formula_11-acyclic, one can show that formula_1 has an ear formula_2. Since removing this ear yields a hypergraph that is still acyclic, we can continue this process until the hypergraph becomes empty.
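The ear-removal loop described above is short enough to sketch directly. The following Python sketch (the function name and the two test hypergraphs are illustrative, not taken from the sources) represents a hypergraph simply by its list of hyperedges, so removing an ear implicitly removes the vertices that occur only in that ear:

def gyo_is_alpha_acyclic(hyperedges):
    """Run the GYO ear-removal procedure; return True iff the input
    hypergraph (given as a list of vertex sets) is alpha-acyclic."""
    edges = [set(e) for e in hyperedges]

    def is_ear(i):
        e = edges[i]
        others = [f for j, f in enumerate(edges) if j != i]
        # Condition 1: e intersects no other hyperedge.
        if all(not (e & f) for f in others):
            return True
        # Condition 2: there is another hyperedge f such that every vertex
        # of e \ f occurs in no hyperedge other than e.
        for f in others:
            if all(all(v not in g for g in others) for v in e - f):
                return True
        return False

    while edges:
        ear = next((i for i in range(len(edges)) if is_ear(i)), None)
        if ear is None:
            return False   # non-empty hypergraph with no ear: not alpha-acyclic
        edges.pop(ear)     # vertices private to the ear disappear with it
    return True

# A chain of hyperedges glued at single vertices is alpha-acyclic,
# while the triangle graph viewed as a hypergraph of its edges is not:
print(gyo_is_alpha_acyclic([{1, 2, 3}, {3, 4, 5}, {5, 6, 7}]))  # True
print(gyo_is_alpha_acyclic([{1, 2}, {2, 3}, {1, 3}]))           # False

This naive quadratic search for an ear is the simplest possible realization; practical implementations keep incremental bookkeeping of vertex occurrences so that ears can be found without rescanning the whole hypergraph at every step.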
[ { "math_id": 0, "text": "H = (V, E)" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "e" }, { "math_id": 3, "text": "e'" }, { "math_id": 4, "text": "e \\cap e' = \\emptyset" }, { "math_id": 5, "text": "f" }, { "math_id": 6, "text": "e \\setminus f" }, { "math_id": 7, "text": "e_1,\\ldots,e_m" }, { "math_id": 8, "text": "H_0,\\ldots,H_m" }, { "math_id": 9, "text": "H_0 = H" }, { "math_id": 10, "text": "H_m" }, { "math_id": 11, "text": "\\alpha" }, { "math_id": 12, "text": "H_n" }, { "math_id": 13, "text": "H_{n-1}" }, { "math_id": 14, "text": "H_0" } ]
https://en.wikipedia.org/wiki?curid=75548132
7555
Casimir effect
Force resulting from the quantisation of a field In quantum field theory, the Casimir effect (or Casimir force) is a physical force acting on the macroscopic boundaries of a confined space which arises from the quantum fluctuations of a field. It is named after the Dutch physicist Hendrik Casimir, who predicted the effect for electromagnetic systems in 1948. In the same year, Casimir together with Dirk Polder described a similar effect experienced by a neutral atom in the vicinity of a macroscopic interface, which is called the Casimir–Polder force. Their result is a generalization of the London–van der Waals force and includes retardation due to the finite speed of light. The fundamental principles leading to the London–van der Waals force, the Casimir force, and the Casimir–Polder force can be formulated on the same footing. In 1997 a direct experiment by Steven K. Lamoreaux quantitatively measured the Casimir force to be within 5% of the value predicted by the theory. The Casimir effect can be understood by the idea that the presence of macroscopic material interfaces, such as electrical conductors and dielectrics, alters the vacuum expectation value of the energy of the second-quantized electromagnetic field. Since the value of this energy depends on the shapes and positions of the materials, the Casimir effect manifests itself as a force between such objects. Any medium supporting oscillations has an analogue of the Casimir effect. For example, beads on a string as well as plates submerged in turbulent water or gas illustrate the Casimir force. In modern theoretical physics, the Casimir effect plays an important role in the chiral bag model of the nucleon; in applied physics it is significant in some aspects of emerging microtechnologies and nanotechnologies. Physical properties. The typical example is of two uncharged conductive plates in a vacuum, placed a few nanometers apart. In a classical description, the lack of an external field means that no field exists between the plates, and no force would be measured between them. When this field is instead studied using the quantum electrodynamic vacuum, it is seen that the plates do affect the virtual photons that constitute the field, and generate a net force – either an attraction or a repulsion depending on the plates' specific arrangement. Although the Casimir effect can be expressed in terms of virtual particles interacting with the objects, it is best described and more easily calculated in terms of the zero-point energy of a quantized field in the intervening space between the objects. This force has been measured and is a striking example of an effect captured formally by second quantization. The treatment of boundary conditions in these calculations is controversial. In fact, "Casimir's original goal was to compute the van der Waals force between polarizable molecules" of the conductive plates. Thus it can be interpreted without any reference to the zero-point energy (vacuum energy) of quantum fields. Because the strength of the force falls off rapidly with distance, it is measurable only when the distance between the objects is small. This force becomes so strong that it is the dominant force between uncharged conductors at submicron scales. In fact, at separations of 10 nm – about 100 times the typical size of an atom – the Casimir effect produces the equivalent of about 1 atmosphere of pressure (the precise value depends on surface geometry and other factors). History. 
Dutch physicists Hendrik Casimir and Dirk Polder at Philips Research Labs proposed the existence of a force between two polarizable atoms and between such an atom and a conducting plate in 1947; this special form is called the Casimir–Polder force. After a conversation with Niels Bohr, who suggested it had something to do with zero-point energy, Casimir alone formulated the theory predicting a force between neutral conducting plates in 1948. This latter phenomenon is called the Casimir effect. Predictions of the force were later extended to finite-conductivity metals and dielectrics, while later calculations considered more general geometries. Experiments before 1997 observed the force qualitatively, and indirect validation of the predicted Casimir energy was made by measuring the thickness of liquid helium films. Finally, in 1997 Lamoreaux's direct experiment quantitatively measured the force to within 5% of the value predicted by the theory. Subsequent experiments approached an accuracy of a few percent. Possible causes. Vacuum energy. The causes of the Casimir effect are described by quantum field theory, which states that all of the various fundamental fields, such as the electromagnetic field, must be quantized at each and every point in space. In a simplified view, a "field" in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field can be visualized as the displacement of a ball from its rest position. Vibrations in this field propagate and are governed by the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball-spring combination be quantized, that is, that the strength of the field be quantized at each point in space. At the most basic level, the field at each point in space is a simple harmonic oscillator, and its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. However, even the vacuum has a vastly complex structure, so all calculations of quantum field theory must be made in relation to this model of the vacuum. The vacuum has, implicitly, all of the properties that a particle may have: spin, or polarization in the case of light, energy, and so on. On average, most of these properties cancel out: the vacuum is, after all, "empty" in this sense. One important exception is the vacuum energy or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator states that the lowest possible energy or zero-point energy that such an oscillator may have is formula_0 Summing over all possible oscillators at all points in space gives an infinite quantity. Since only "differences" in energy are physically measurable (with the notable exception of gravitation, which remains beyond the scope of quantum field theory), this infinity may be considered a feature of the mathematics rather than of the physics. This argument is the underpinning of the theory of renormalization. Dealing with infinite quantities in this way was a cause of widespread unease among quantum field theorists before the development in the 1970s of the renormalization group, a mathematical formalism for scale transformations that provides a natural basis for the process. When the scope of the physics is widened to include gravity, the interpretation of this formally infinite quantity remains problematic. 
There is currently no compelling explanation as to why it should not result in a cosmological constant that is many orders of magnitude larger than observed. However, since we do not yet have any fully coherent quantum theory of gravity, there is likewise no compelling reason as to why it should instead actually result in the value of the cosmological constant that we observe. The Casimir effect for fermions can be understood as the spectral asymmetry of the fermion operator (−1)"F", where it is known as the Witten index. Relativistic van der Waals force. Alternatively, a 2005 paper by Robert Jaffe of MIT states that "Casimir effects can be formulated and Casimir forces can be computed without reference to zero-point energies. They are relativistic, quantum forces between charges and currents. The Casimir force (per unit area) between parallel plates vanishes as alpha, the fine structure constant, goes to zero, and the standard result, which appears to be independent of alpha, corresponds to the alpha approaching infinity limit", and that "The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates." Casimir and Polder's original paper used this method to derive the Casimir–Polder force. In 1978, Schwinger, DeRaad, and Milton published a similar derivation for the Casimir effect between two parallel plates. More recently, Nikolic proved from first principles of quantum electrodynamics that the Casimir force does not originate from the vacuum energy of the electromagnetic field, and explained in simple terms why the fundamental microscopic origin of Casimir force lies in van der Waals forces. Effects. Casimir's observation was that the second-quantized quantum electromagnetic field, in the presence of bulk bodies such as metals or dielectrics, must obey the same boundary conditions that the classical electromagnetic field must obey. In particular, this affects the calculation of the vacuum energy in the presence of a conductor or dielectric. Consider, for example, the calculation of the vacuum expectation value of the electromagnetic field inside a metal cavity, such as a radar cavity or a microwave waveguide. In this case, the correct way to find the zero-point energy of the field is to sum the energies of the standing waves of the cavity. To each and every possible standing wave corresponds an energy; say the energy of the nth standing wave is En. The vacuum expectation value of the energy of the electromagnetic field in the cavity is then formula_1 with the sum running over all possible values of n enumerating the standing waves. The factor of 1/2 is present because the zero-point energy of the nth mode is "En"/2, where En is the energy increment for the nth mode. (It is the same 1/2 as appears in the equation "E" = "ħω"/2.) Written in this way, this sum is clearly divergent; however, it can be used to create finite expressions. In particular, one may ask how the zero-point energy depends on the shape s of the cavity. Each energy level En depends on the shape, and so one should write "En"("s") for the energy level, and ⟨"E"("s")⟩ for the vacuum expectation value. At this point comes an important observation: The force at point p on the wall of the cavity is equal to the change in the vacuum energy if the shape s of the wall is perturbed a little bit, say by "δs", at p. That is, one has formula_2 This value is finite in many practical calculations. Attraction between the plates can be easily understood by focusing on the one-dimensional situation. 
Suppose that a moveable conductive plate is positioned at a short distance a from one of two widely separated plates (distance l apart). With "a" ≪ "l", the states within the slot of width a are highly constrained so that the energy E of any one mode is widely separated from that of the next. This is not the case in the large region l where there is a large number of states (about "l"/"a") with energy evenly spaced between E and the next mode in the narrow slot, or in other words, all slightly larger than E. Now on shortening a by an amount da (which is negative), the mode in the narrow slot shrinks in wavelength and therefore increases in energy proportional to −"da"/"a", whereas all the states that lie in the large region lengthen and correspondingly decrease their energy by an amount proportional to −"da"/"l" (note the different denominator). The two effects nearly cancel, but the net change is slightly negative, because the energies of all the modes in the large region are slightly larger than the single mode in the slot. Thus the force is attractive: it tends to make a slightly smaller, the plates drawing each other closer, across the thin slot. Derivation of Casimir effect assuming zeta-regularization. In the original calculation done by Casimir, he considered the space between a pair of conducting metal plates at distance a apart. In this case, the standing waves are particularly easy to calculate, because the transverse component of the electric field and the normal component of the magnetic field must vanish on the surface of a conductor. Assuming the plates lie parallel to the xy-plane, the standing waves are formula_3 where ψ stands for the electric component of the electromagnetic field, and, for brevity, the polarization and the magnetic components are ignored here. Here, kx and ky are the wavenumbers in directions parallel to the plates, and formula_4 is the wavenumber perpendicular to the plates. Here, n is an integer, resulting from the requirement that ψ vanish on the metal plates. The frequency of this wave is formula_5 where c is the speed of light. The vacuum energy is then the sum over all possible excitation modes. Since the area of the plates is large, we may sum by integrating over two of the dimensions in k-space. The assumption of periodic boundary conditions yields, formula_6 where A is the area of the metal plates, and a factor of 2 is introduced for the two possible polarizations of the wave. This expression is clearly infinite, and to proceed with the calculation, it is convenient to introduce a regulator (discussed in greater detail below). The regulator will serve to make the expression finite, and in the end will be removed. The zeta-regulated version of the energy per unit-area of the plate is formula_7 In the end, the limit "s" → 0 is to be taken. Here s is just a complex number, not to be confused with the shape discussed previously. This integral sum is finite for s real and larger than 3. The sum has a pole at "s" = 3, but may be analytically continued to "s" = 0, where the expression is finite. The above expression simplifies to: formula_8 where polar coordinates "q"2 = "kx"2 + "ky"2 were introduced to turn the double integral into a single integral. The q in front is the Jacobian, and the 2"π" comes from the angular integration. 
The integral converges if Re("s") > 3, resulting in formula_9 The sum diverges at s in the neighborhood of zero, but if the damping of large-frequency excitations corresponding to analytic continuation of the Riemann zeta function to "s" 0 is assumed to make sense physically in some way, then one has formula_10 But "ζ"(−3) and so one obtains formula_11 The analytic continuation has evidently lost an additive positive infinity, somehow exactly accounting for the zero-point energy (not included above) outside the slot between the plates, but which changes upon plate movement within a closed system. The Casimir force per unit area for idealized, perfectly conducting plates with vacuum between them is formula_12 where The force is negative, indicating that the force is attractive: by moving the two plates closer together, the energy is lowered. The presence of ħ shows that the Casimir force per unit area is very small, and that furthermore, the force is inherently of quantum-mechanical origin. By integrating the equation above it is possible to calculate the energy required to separate to infinity the two plates as: formula_13 where In Casimir's original derivation, a moveable conductive plate is positioned at a short distance a from one of two widely separated plates (distance L apart). The zero-point energy on "both" sides of the plate is considered. Instead of the above "ad hoc" analytic continuation assumption, non-convergent sums and integrals are computed using Euler–Maclaurin summation with a regularizing function (e.g., exponential regularization) not so anomalous as in the above. More recent theory. Casimir's analysis of idealized metal plates was generalized to arbitrary dielectric and realistic metal plates by Evgeny Lifshitz and his students. Using this approach, complications of the bounding surfaces, such as the modifications to the Casimir force due to finite conductivity, can be calculated numerically using the tabulated complex dielectric functions of the bounding materials. Lifshitz's theory for two metal plates reduces to Casimir's idealized force law for large separations a much greater than the skin depth of the metal, and conversely reduces to the force law of the London dispersion force (with a coefficient called a Hamaker constant) for small a, with a more complicated dependence on a for intermediate separations determined by the dispersion of the materials. Lifshitz's result was subsequently generalized to arbitrary multilayer planar geometries as well as to anisotropic and magnetic materials, but for several decades the calculation of Casimir forces for non-planar geometries remained limited to a few idealized cases admitting analytical solutions. For example, the force in the experimental sphere–plate geometry was computed with an approximation (due to Derjaguin) that the sphere radius R is much larger than the separation a, in which case the nearby surfaces are nearly parallel and the parallel-plate result can be adapted to obtain an approximate force (neglecting both skin-depth and higher-order curvature effects). However, in the 2010s a number of authors developed and demonstrated a variety of numerical techniques, in many cases adapted from classical computational electromagnetics, that are capable of accurately calculating Casimir forces for arbitrary geometries and materials, from simple finite-size effects of finite plates to more complicated phenomena arising for patterned surfaces or objects of various shapes. Measurement. 
One of the first experimental tests was conducted by Marcus Sparnaay at Philips in Eindhoven (Netherlands), in 1958, in a delicate and difficult experiment with parallel plates, obtaining results not in contradiction with the Casimir theory, but with large experimental errors. The Casimir effect was measured more accurately in 1997 by Steve K. Lamoreaux of Los Alamos National Laboratory, and by Umar Mohideen and Anushree Roy of the University of California, Riverside. In practice, rather than using two parallel plates, which would require phenomenally accurate alignment to ensure they were parallel, the experiments use one plate that is flat and another plate that is a part of a sphere with a very large radius. In 2001, a group (Giacomo Bressi, Gianni Carugno, Roberto Onofrio and Giuseppe Ruoso) at the University of Padua (Italy) finally succeeded in measuring the Casimir force between parallel plates using microresonators. Numerous variations of these experiments are summarized in the 2009 review by Klimchitskaya. In 2013, a conglomerate of scientists from Hong Kong University of Science and Technology, University of Florida, Harvard University, Massachusetts Institute of Technology, and Oak Ridge National Laboratory demonstrated a compact integrated silicon chip that can measure the Casimir force. The integrated chip defined by electron-beam lithography does not need extra alignment, making it an ideal platform for measuring Casimir force between complex geometries. In 2017 and 2021, the same group from Hong Kong University of Science and Technology demonstrated the non-monotonic Casimir force and distance-independent Casimir force, respectively, using this on-chip platform. Regularization. In order to be able to perform calculations in the general case, it is convenient to introduce a regulator in the summations. This is an artificial device, used to make the sums finite so that they can be more easily manipulated, followed by the taking of a limit so as to remove the regulator. The heat kernel or exponentially regulated sum is formula_14 where the limit "t" → 0+ is taken in the end. The divergence of the sum is typically manifested as formula_15 for three-dimensional cavities. The infinite part of the sum is associated with the bulk constant C which "does not" depend on the shape of the cavity. The interesting part of the sum is the finite part, which is shape-dependent. The Gaussian regulator formula_16 is better suited to numerical calculations because of its superior convergence properties, but is more difficult to use in theoretical calculations. Other, suitably smooth, regulators may be used as well. The zeta function regulator formula_17 is completely unsuited for numerical calculations, but is quite useful in theoretical calculations. In particular, divergences show up as poles in the complex s plane, with the bulk divergence at "s" = 4. This sum may be analytically continued past this pole, to obtain a finite part at "s" = 0. Not every cavity configuration necessarily leads to a finite part (the lack of a pole at "s" = 0) or shape-independent infinite parts. In this case, it should be understood that additional physics has to be taken into account. In particular, at extremely large frequencies (above the plasma frequency), metals become transparent to photons (such as X-rays), and dielectrics show a frequency-dependent cutoff as well. This frequency dependence acts as a natural regulator. 
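As a minimal illustration of how such a regulator isolates a finite, regulator-independent part, the following Python toy computation (a one-dimensional analogue chosen for simplicity, not the three-dimensional cavity discussed above) evaluates the exponentially regulated sum of mode numbers and subtracts its divergent piece; the remainder tends to ζ(−1) = −1/12, the value the zeta-function regulator assigns directly:

import math

def regulated_sum(t, terms=200000):
    """Heat-kernel (exponentially) regulated sum S(t) = sum_n n*exp(-t*n)."""
    return sum(n * math.exp(-t * n) for n in range(1, terms + 1))

# In this toy model the divergence appears as 1/t^2 (rather than the C/t^3
# of a three-dimensional cavity); what is left over is finite and universal.
for t in (0.5, 0.1, 0.02):
    finite_part = regulated_sum(t) - 1.0 / t ** 2
    print(f"t = {t:5.2f}   S(t) - 1/t^2 = {finite_part:+.6f}")
print("zeta(-1) =", -1.0 / 12)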
There are a variety of bulk effects in solid state physics, mathematically very similar to the Casimir effect, where the cutoff frequency comes into explicit play to keep expressions finite. (These are discussed in greater detail in "Landau and Lifshitz", "Theory of Continuous Media".) Generalities. The Casimir effect can also be computed using the mathematical mechanisms of functional integrals of quantum field theory, although such calculations are considerably more abstract, and thus difficult to comprehend. In addition, they can be carried out only for the simplest of geometries. However, the formalism of quantum field theory makes it clear that the vacuum expectation value summations are in a certain sense summations over so-called "virtual particles". More interesting is the understanding that the sums over the energies of standing waves should be formally understood as sums over the eigenvalues of a Hamiltonian. This allows atomic and molecular effects, such as the Van der Waals force, to be understood as a variation on the theme of the Casimir effect. Thus one considers the Hamiltonian of a system as a function of the arrangement of objects, such as atoms, in configuration space. The change in the zero-point energy as a function of changes of the configuration can be understood to result in forces acting between the objects. In the chiral bag model of the nucleon, the Casimir energy plays an important role in showing the mass of the nucleon is independent of the bag radius. In addition, the spectral asymmetry is interpreted as a non-zero vacuum expectation value of the baryon number, cancelling the topological winding number of the pion field surrounding the nucleon. A "pseudo-Casimir" effect can be found in liquid crystal systems, where the boundary conditions imposed through anchoring by rigid walls give rise to a long-range force, analogous to the force that arises between conducting plates. Dynamical Casimir effect. The dynamical Casimir effect is the production of particles and energy from an accelerated "moving mirror". This reaction was predicted by certain numerical solutions to quantum mechanics equations made in the 1970s. In May 2011 an announcement was made by researchers at the Chalmers University of Technology, in Gothenburg, Sweden, of the detection of the dynamical Casimir effect. In their experiment, microwave photons were generated out of the vacuum in a superconducting microwave resonator. These researchers used a modified SQUID to change the effective length of the resonator in time, mimicking a mirror moving at the required relativistic velocity. If confirmed, this would be the first experimental verification of the dynamical Casimir effect. In March 2013 an article appeared in the scientific journal PNAS describing an experiment that demonstrated the dynamical Casimir effect in a Josephson metamaterial. In July 2019 an article was published describing an experiment providing evidence of the optical dynamical Casimir effect in a dispersion-oscillating fibre. In 2020, Frank Wilczek et al. proposed a resolution to the information loss paradox associated with the moving mirror model of the dynamical Casimir effect. Constructed within the framework of quantum field theory in curved spacetime, the dynamical Casimir effect (moving mirror) has been used to help understand the Unruh effect. Repulsive forces. There are few instances wherein the Casimir effect can give rise to repulsive forces between uncharged objects. 
Evgeny Lifshitz showed (theoretically) that in certain circumstances (most commonly involving liquids), repulsive forces can arise. This has sparked interest in applications of the Casimir effect toward the development of levitating devices. An experimental demonstration of the Casimir-based repulsion predicted by Lifshitz was carried out by Munday et al., who described it as "quantum levitation". Other scientists have also suggested the use of gain media to achieve a similar levitation effect, though this is controversial because these materials seem to violate fundamental causality constraints and the requirement of thermodynamic equilibrium (Kramers–Kronig relations). Casimir and Casimir–Polder repulsion can in fact occur for sufficiently anisotropic electrical bodies; for a review of the issues involved with repulsion see Milton et al. A notable recent development on repulsive Casimir forces relies on using chiral materials. Q.-D. Jiang at Stockholm University and Nobel Laureate Frank Wilczek at MIT show that a chiral "lubricant" can generate repulsive, enhanced, and tunable Casimir interactions. Timothy Boyer showed in his work published in 1968 that a conductor with spherical symmetry will also show this repulsive force, and the result is independent of radius. Further work shows that the repulsive force can be generated with materials of carefully chosen dielectrics. Speculative applications. It has been suggested that the Casimir forces have application in nanotechnology, in particular silicon integrated circuit technology based micro- and nanoelectromechanical systems, and so-called Casimir oscillators. In 1995 and 1998 Maclay et al. published the first models of a microelectromechanical system (MEMS) with Casimir forces. While not exploiting the Casimir force for useful work, the papers drew attention from the MEMS community due to the revelation that the Casimir effect needs to be considered as a vital factor in the future design of MEMS. In particular, the Casimir effect might be the critical factor in the stiction failure of MEMS. In 2001, Capasso et al. showed how the force can be used to control the mechanical motion of a MEMS device. The researchers suspended a polysilicon plate from a torsional rod – a twisting horizontal bar just a few microns in diameter. When they brought a metallized sphere close up to the plate, the attractive Casimir force between the two objects made the plate rotate. They also studied the dynamical behaviour of the MEMS device by making the plate oscillate. The Casimir force reduced the rate of oscillation and led to nonlinear phenomena, such as hysteresis and bistability in the frequency response of the oscillator. According to the team, the system's behaviour agreed well with theoretical calculations. The Casimir effect shows that quantum field theory allows the energy density in very small regions of space to be negative relative to the ordinary vacuum energy, although the energy densities cannot be arbitrarily negative, as the theory breaks down at atomic distances. Prominent physicists such as Stephen Hawking and Kip Thorne have speculated that such effects might make it possible to stabilize a traversable wormhole. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "{E}=\\tfrac12 \\hbar \\omega \\, ." }, { "math_id": 1, "text": "\\langle E \\rangle=\\tfrac12 \\sum_n E_n" }, { "math_id": 2, "text": "F(p) = - \\left. \\frac{\\delta \\langle E(s) \\rangle } { \\delta s } \\right\\vert_p \\,." }, { "math_id": 3, "text": "\\psi_n(x,y,z;t)=e^{-i\\omega_nt} e^{ik_xx+ik_yy} \\sin(k_n z) \\,," }, { "math_id": 4, "text": "k_n=\\frac{n\\pi}{a}" }, { "math_id": 5, "text": "\\omega_n=c \\sqrt{{k_x}^2 + {k_y}^2 + \\frac{n^2\\pi^2}{a^2}} \\,," }, { "math_id": 6, "text": "\\langle E \\rangle=\\frac{\\hbar}{2} \\cdot 2\n\\int \\frac{A \\,dk_x \\,dk_y}{(2\\pi)^2} \\sum_{n=1}^\\infty \\omega_n \\,," }, { "math_id": 7, "text": "\\frac{\\langle E(s) \\rangle}{A}=\\hbar\n\\int \\frac{dk_x \\,dk_y}{(2\\pi)^2} \\sum_{n=1}^\\infty \\omega_n\n\\left| \\omega_n \\right|^{-s} \\,." }, { "math_id": 8, "text": "\\frac{\\langle E(s) \\rangle}{A}=\n\\frac{\\hbar c^{1-s}}{4\\pi^2} \\sum_n \\int_0^\\infty 2\\pi q \\,dq\n\\left | q^2 + \\frac{\\pi^2 n^2}{a^2} \\right|^\\frac{1-s}{2} \\,," }, { "math_id": 9, "text": "\\frac{\\langle E(s) \\rangle}{A}=\n-\\frac {\\hbar c^{1-s} \\pi^{2-s}}{2a^{3-s}} \\frac{1}{3-s}\n\\sum_n \\left| n \\right| ^{3-s}=\n-\\frac {\\hbar c^{1-s} \\pi^{2-s}}{2a^{3-s}(3-s)}\\sum_n \\frac{1}{\\left| n\\right| ^{s-3}} \\,." }, { "math_id": 10, "text": "\\frac{\\langle E \\rangle}{A}=\n\\lim_{s\\to 0} \\frac{\\langle E(s) \\rangle}{A}=\n-\\frac {\\hbar c \\pi^2}{6a^3} \\zeta (-3) \\,." }, { "math_id": 11, "text": "\\frac{\\langle E \\rangle}{A}=\n-\\frac {\\hbar c \\pi^2}{720 a^3}\\,." }, { "math_id": 12, "text": "\\frac{F_\\mathrm{c}}{A}=-\\frac{d}{da} \\frac{\\langle E \\rangle}{A} = -\\frac {\\hbar c \\pi^2} {240 a^4}" }, { "math_id": 13, "text": "\\begin{align}\nU_E(a) &= \\int F(a) \\,da = \\int - \\hbar c \\pi^2 \\frac {A} {240 a^4} \\,da \\\\[4pt]\n&= \\hbar c \\pi^2 \\frac {A} {720 a^3} \n\\end{align}" }, { "math_id": 14, "text": "\\langle E(t) \\rangle=\\frac12 \\sum_n \\hbar |\\omega_n |\n\\exp \\bigl(-t |\\omega_n |\\bigr)\\,," }, { "math_id": 15, "text": "\\langle E(t) \\rangle=\\frac{C}{t^3} + \\textrm{finite}\\," }, { "math_id": 16, "text": "\\langle E(t) \\rangle=\\frac12 \\sum_n \\hbar |\\omega_n |\n\\exp \\left(-t^2 |\\omega_n |^2\\right)" }, { "math_id": 17, "text": "\\langle E(s) \\rangle=\\frac12 \\sum_n \\hbar |\\omega_n | |\\omega_n |^{-s}" } ]
https://en.wikipedia.org/wiki?curid=7555
75554495
Caesium selenate
<templatestyles src="Chembox/styles.css"/> Chemical compound Caesium selanate is an inorganic compound, with the chemical formula of Cs2SeO4. It can form colourless crystals of the orthorhombic crystal system. Preparation. caesium selenate can be obtained from the reaction of caesium carbonate and selenic acid solution: <chem>Cs2CO3 + H2SeO4 -> Cs2SeO4 + H2O + CO2</chem>formula_0 caesium selenate can also be prepared by the neutralization reaction of selenic acid and caesium hydroxide: <chem>2 CsOH + H2SeO4 -> Cs2SeO4 + 2 H2O</chem> Properties. caesium selenate can precipitate compounds such as CsLiSeO4·<templatestyles src="Fraction/styles.css" />1⁄2H2O and Cs4LiH3(SeO4)4 in Cs2SeO4-Li2SeO4-H2O and its acidification system. It can also form double salts with other metals, such as Cs2Mg(SeO4)2·6H2O, Cs2Co(SeO4)2·6H2O, etc. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\uparrow" } ]
https://en.wikipedia.org/wiki?curid=75554495
75554549
Amnesiac flooding
Algorithm related to graph distribution In distributed computing amnesiac flooding is a stateless distributed flooding algorithm that can be implemented as a broadcast protocol in synchronous distributed networks without the need to store messages or flags between communication rounds. The algorithm is simple: "When a node receives a message, it forwards it to all of its neighbours it did not receive the message from. To initiate a broadcast on a network, a node simply sends the message to all of its neighbours." The algorithm has been shown to terminate when the message begins at any subset of the network nodes or any sequence thereof. For formula_0 a subset of the nodes of a graph formula_1, the time formula_2 until an amnesiac flood terminates when started from formula_3 is known to obey the following bounds: formula_4 if formula_1 is formula_5-bipartite and formula_6 if it is not, where formula_7 is the eccentricity of formula_3 and formula_8 is the diameter. A graph is formula_3-bipartite if the quotient graph of formula_1 with formula_3 contracted to a single node is bipartite. This termination time is optimal for formula_3-bipartite graphs and is asymptotically optimal for single node initialisation on non-bipartite graphs. This termination is robust with respect to the loss of edges and nodes; however, it fails with delays on edges or the addition of new edges. Variants of Amnesiac Flooding. Since its introduction, several variants of and related problems to amnesiac flooding have been studied. For example, a modified variant requiring the initial set to retain their knowledge of this membership and the message for a single round, but always requiring formula_9 rounds, has been proposed. A dynamic version of amnesiac flooding has been introduced considering the case where there are multiple different messages in the system and where each node can only send one message per round. This has been shown to terminate in the partial send case (formula_10 sends an arbitrary message to its neighbours that did not send it any message last round) and the ranked-full send case (formula_10 sends the highest ranked message formula_11 to all of its neighbours that did not send it formula_11 last round). However, the unranked-full send (formula_10 sends an arbitrary message formula_11 to all of its neighbours that did not send it formula_11 last round) does not necessarily terminate without additional stored information (such as the diameter of the graph). References. <templatestyles src="Reflist/styles.css" />
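A minimal synchronous simulation of the flooding rule quoted above makes the termination bounds easy to check on small graphs. This is an illustrative Python sketch only; the helper for building cycle graphs and all names are invented for the example:

def cycle_graph(n):
    """Adjacency lists of the n-node cycle, used purely as a test graph."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def amnesiac_flood(graph, initial):
    """Return the number of synchronous rounds until an amnesiac flood
    started from the node set `initial` terminates (nothing left to send)."""
    # Round 1: every initiator sends the message to all of its neighbours.
    in_transit = {(u, v) for u in initial for v in graph[u]}
    rounds = 0
    while in_transit:
        rounds += 1
        received_from = {}
        for u, v in in_transit:
            received_from.setdefault(v, set()).add(u)
        # Next round: each receiver forwards only to the neighbours it did
        # not just receive the message from; no other state is kept.
        in_transit = {(v, w)
                      for v, senders in received_from.items()
                      for w in graph[v] if w not in senders}
    return rounds

print(amnesiac_flood(cycle_graph(6), {0}))  # 3, equal to e(I) on this bipartite graph
print(amnesiac_flood(cycle_graph(5), {0}))  # 5, within e(I) < t(I) <= e(I) + d(G) + 1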
[ { "math_id": 0, "text": "I \\subseteq V" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "t(I)" }, { "math_id": 3, "text": "I" }, { "math_id": 4, "text": "t(I)=e(I)" }, { "math_id": 5, "text": " I" }, { "math_id": 6, "text": "e(I)<t(I) \\leq e(I)+d(G)+1" }, { "math_id": 7, "text": "e(I)" }, { "math_id": 8, "text": "d(G)" }, { "math_id": 9, "text": "e(I)+1" }, { "math_id": 10, "text": "v" }, { "math_id": 11, "text": "m" } ]
https://en.wikipedia.org/wiki?curid=75554549
75558170
Charge based boundary element fast multipole method
Numerical technique for bioelectromagnetic modeling The charge-based formulation of the boundary element method (BEM) is a dimensionality reduction numerical technique that is used to model quasistatic electromagnetic phenomena in highly complex conducting media (targeting, e.g., the human brain) with a very large (up to approximately 1 billion) number of unknowns. The charge-based BEM solves an integral equation of the potential theory written in terms of the induced surface charge density. This formulation is naturally combined with fast multipole method (FMM) acceleration, and the entire method is known as charge-based BEM-FMM. The combination of BEM and FMM is a common technique in different areas of computational electromagnetics and, in the context of bioelectromagnetism, it provides improvements over the finite element method. Historical development. Along with the more common electric potential-based BEM, the quasistatic charge-based BEM for a single-compartment medium, derived in terms of the single-layer (charge) density, has been known in potential theory since the beginning of the 20th century. For multi-compartment conducting media, the surface charge density formulation first appeared in discretized form (for faceted interfaces) in the 1964 paper by Gelernter and Swihart. A subsequent continuous form, including time-dependent and dielectric effects, appeared in the 1967 paper by Barnard, Duck, and Lynn. The charge-based BEM has also been formulated for conducting, dielectric, and magnetic media, and used in different applications. In 2009, Greengard et al. successfully applied the charge-based BEM with fast multipole acceleration to molecular electrostatics of dielectrics. A similar approach to realistic modeling of the human brain with multiple conducting compartments was first described by Makarov et al. in 2018. Along with this, the BEM-based multilevel fast multipole method has been widely used in radar and antenna studies at microwave frequencies as well as in acoustics. Physical background - surface charges in biological media. The charge-based BEM is based on the concept of an "impressed" (or primary) electric field formula_0 and a "secondary" electric field formula_1. The impressed field is usually known "a priori" or is trivial to find. For the human brain, the impressed electric field may be, for example, the field of an external stimulation coil, the field of attached electrodes, or the field of endogenous neuronal currents; in a medium of conductivity formula_2 it drives the impressed current density formula_3. When the impressed field is "turned on", free charges located within a conducting volume D immediately begin to redistribute and accumulate at the boundaries (interfaces) of regions of different conductivity in D. A surface charge density formula_4 appears on the conductivity interfaces. This charge density induces a secondary conservative electric field formula_1 following Coulomb's law. One example is a human under a direct current powerline with the known field formula_0 directed down. The superior surface of the human's conducting body will be charged negatively while its inferior portion is charged positively. These surface charges create a secondary electric field that effectively cancels or blocks the primary field everywhere in the body so that no current will flow within the body under DC steady state conditions. Another example is a human head with electrodes attached. 
At any conductivity interface with a normal vector formula_5 pointing from an "inside" (-) compartment of conductivity formula_6 to an "outside" (+) compartment of conductivity formula_7, Kirchhoff's current law requires continuity of the normal component of the electric current density. This leads to the interfacial boundary condition formula_26, which must hold for every facet at a triangulated interface. As long as formula_8 are different from each other, the two normal components of the electric field, formula_9, must also be different. Such a jump across the interface is only possible when a sheet of surface charge exists at that interface. Thus, if an electric current or voltage is applied, the surface charge density follows. The goal of the numerical analysis is to find the unknown surface charge distribution and thus the total electric field formula_10 (and the total electric potential if required) anywhere in space. System of equations for surface charges. Below, a derivation is given based on Gauss's law and Coulomb's law. All conductivity interfaces, denoted by S, are discretized into planar triangular facets formula_11 with centers formula_12. Assume that an m-th facet with the normal vector formula_13 and area formula_14 carries a uniform surface charge density formula_15. If a volumetric tetrahedral mesh were present, the charged facets would belong to tetrahedra with different conductivity values. We first compute the electric field formula_16 at the point formula_17, for formula_18 i.e., "just outside" facet 𝑚 at its center. This field contains three contributions: the field formula_19 of the facet's own flat sheet of charge (with formula_20 the vacuum permittivity), the Coulomb field of every other facet formula_21, whose total charge formula_22 is treated as concentrated at its center formula_23, and the impressed field formula_0. A similar treatment holds for the electric field formula_24 just inside facet 𝑚, but the electric field of the flat sheet of charge changes its sign. Using Coulomb's law to calculate the contribution of facets different from formula_11, we find an explicit expression for the two fields formula_25. From this expression, we see that the normal component of the electric field indeed undergoes a jump through the charged interface. This is equivalent to a jump relation of the potential theory. As a second step, the two expressions for formula_25 are substituted into the interfacial boundary condition formula_26, applied to every facet 𝑚. This operation leads to a system of linear equations for unknown charge densities formula_15 which solves the problem; in this system, formula_27 is the electric conductivity contrast at the m-th facet. The normalization constant formula_20 will cancel out after the solution is substituted in the expression for formula_1 and becomes redundant. Application of fast multipole method. For modern characterizations of brain topologies with ever-increasing levels of complexity, the above system of equations for formula_15 is very large; it is therefore solved iteratively. An initial guess for formula_15 is obtained by keeping the last term on its right-hand side and ignoring the sum. Next, the sum is computed and the initial guess is refined, etc. This solution employs the simple Jacobi iterative method. The more rigorous generalized minimum residual method (GMRES) yields a much faster convergence of the BEM-FMM. In either case, the major work is in computing the underbraced sum in the system of equations above for every formula_28 at every iteration; this operation corresponds to a repeated matrix-vector multiplication. However, one can recognize this sum as an electric field (times formula_29) of formula_30 charges to be computed at formula_30 observation points. 
Such a computation is "exactly" the task of the fast multipole method, which performs fast matrix-by-vector multiplication in formula_31 or even formula_32 operations instead of formula_33. The FMM3D library realized in both Python and MATLAB can be used for this purpose. It is therefore unnecessary to form or store the dense system matrix typical for the standard BEM. Continuous charge-based BEM. Near-field correction. The system of equations formulated above is derived with the collocation method and is less accurate. The corresponding integral equation is obtained from the "local" jump relations of the potential theory and the "local" interfacial boundary condition of normal electric current continuity. It is a Fredholm integral equation of the second kind Its derivation does not involve Green's identities (integrations by parts) and is applicable to non-nested geometries. When the Galerkin method is applied and the same zeroth-order basis functions (with a constant charge density for each facet) are still used on triangulated interfaces, we obtain exactly the same discretization as before if we replace the double integrals over surfaces formula_34 and formula_35 of triangles formula_11 and formula_21, respectively, by formula_36 where formula_37 is the surface area of the triangle formula_38. This approximation is only valid when formula_39 is much larger than a typical facet size i.e., in the "far field". Otherwise, semi-analytical formulae and Gaussian quadratures for triangles should be used. Typically, 4 to 32 such neighbor integrals per facet should be precomputed, stored, and then used at every iteration. This is an important correction to the plain fast multipole method in the "near field" which should also be used in the simple discrete formulation derived above. Such a correction makes it possible to obtain an "unconstrained" numerical (but not anatomical) resolution in the brain. Applications and limitations. Applications of the charge-based BEM-FMM include modeling brain stimulation with near real-time accurate TMS computations as well as neurophysiological recordings. They also include modeling challenging mesoscale head topologies such as thin brain membranes (dura mater, arachnoid mater, and pia mater). This is particularly important for accurate transcranial direct-current stimulation and electroconvulsive therapy dosage predictions. The BEM-FMM allows for straightforward adaptive mesh refinement including multiple extracerebral brain compartments. Another application is modeling electric field perturbations within densely packed neuronal/axonal arbor. Such perturbations change the biophysical activating function. A charge-based BEM formulation is being developed for promising bi-domain biophysical modeling of axonal processes. In its present form, the charge-based BEM-FMM is applicable to multi-compartment piecewise homogeneous media only; it cannot handle macroscopically anisotropic tissues. Additionally, the maximum number of facets (degrees of freedom) is limited to approximately formula_40 for typical academic computer hardware resources used as of 2023.
[ { "math_id": 0, "text": "\\mathbf{E}^i" }, { "math_id": 1, "text": "\\mathbf{E}^s" }, { "math_id": 2, "text": "\\sigma" }, { "math_id": 3, "text": "\\mathbf{J}^i=\\sigma\\mathbf{E}^i" }, { "math_id": 4, "text": "\\rho(\\mathbf{r})" }, { "math_id": 5, "text": "\\mathbf{n}" }, { "math_id": 6, "text": "\\sigma^-" }, { "math_id": 7, "text": "\\sigma^+" }, { "math_id": 8, "text": "\\sigma^{\\pm}" }, { "math_id": 9, "text": "\\mathbf{E}^{\\pm}\\cdot\\mathbf{n}" }, { "math_id": 10, "text": " \\mathbf{E}=\\mathbf{E}^i+\\mathbf{E}^s" }, { "math_id": 11, "text": "t_m" }, { "math_id": 12, "text": "\\mathbf{r}_m" }, { "math_id": 13, "text": "\\mathbf{n}_m" }, { "math_id": 14, "text": "A_m" }, { "math_id": 15, "text": "\\rho_m" }, { "math_id": 16, "text": "\\mathbf{E}^+_{m}" }, { "math_id": 17, "text": "\\mathbf{r}_m+\\delta \\mathbf{n}_m" }, { "math_id": 18, "text": "\\delta\\rightarrow 0 ^+" }, { "math_id": 19, "text": "+\\rho_m/2\\varepsilon_0\\cdot \\mathbf{n}_m" }, { "math_id": 20, "text": "\\varepsilon_0" }, { "math_id": 21, "text": "t_n" }, { "math_id": 22, "text": "A_n \\rho_n" }, { "math_id": 23, "text": "\\mathbf{r}_n" }, { "math_id": 24, "text": "\\mathbf{E}^-_{m}" }, { "math_id": 25, "text": "\\mathbf{E}^{\\pm} _{m}" }, { "math_id": 26, "text": " \\sigma^{-}\\mathbf{E}_m^{-}\\cdot \\mathbf{n}_m = \\sigma^+\\mathbf{E}_m^{+}\\cdot\\mathbf{n}_m" }, { "math_id": 27, "text": "K_m=\\frac{\\sigma^{-}-\\sigma^{+}}{\\sigma^{-}+\\sigma^{+}}" }, { "math_id": 28, "text": "{m}" }, { "math_id": 29, "text": "\\frac{1}{2\\pi \\epsilon_{0}}" }, { "math_id": 30, "text": "{M}" }, { "math_id": 31, "text": "O(M\\log{M})" }, { "math_id": 32, "text": "O(M)" }, { "math_id": 33, "text": "O(M^2)" }, { "math_id": 34, "text": "S_m" }, { "math_id": 35, "text": "S_n" }, { "math_id": 36, "text": " \\int_{S_m}\\int_{S_n}\\frac{\\mathbf{r}-\\mathbf{r^\\prime}}{|\\mathbf{r}-\\mathbf{r^\\prime}|^3}ds(\\mathbf{r^\\prime})ds(\\mathbf{r}) \\approx{A_m }{A_n}\\frac{\\mathbf{r}_m-\\mathbf{r}_n }{|\\mathbf{r}_m-\\mathbf{r}_n|^3}, " }, { "math_id": 37, "text": "{A_n}" }, { "math_id": 38, "text": "{t_n}" }, { "math_id": 39, "text": "|\\mathbf{r}_m-\\mathbf{r}_n |" }, { "math_id": 40, "text": "10^9" } ]
https://en.wikipedia.org/wiki?curid=75558170
755604
Complexification
Topic in mathematics In mathematics, the complexification of a vector space "V" over the field of real numbers (a "real vector space") yields a vector space "V"C over the complex number field, obtained by formally extending the scaling of vectors by real numbers to include their scaling ("multiplication") by complex numbers. Any basis for "V" (a space over the real numbers) may also serve as a basis for "V"C over the complex numbers. Formal definition. Let formula_0 be a real vector space. The complexification of "V" is defined by taking the tensor product of formula_0 with the complex numbers (thought of as a 2-dimensional vector space over the reals): formula_1 The subscript, formula_2, on the tensor product indicates that the tensor product is taken over the real numbers (since formula_0 is a real vector space this is the only sensible option anyway, so the subscript can safely be omitted). As it stands, formula_3 is only a real vector space. However, we can make formula_3 into a complex vector space by defining complex multiplication as follows: formula_4 More generally, complexification is an example of extension of scalars – here extending scalars from the real numbers to the complex numbers – which can be done for any field extension, or indeed for any morphism of rings. Formally, complexification is a functor VectR → VectC, from the category of real vector spaces to the category of complex vector spaces. This is the adjoint functor – specifically the left adjoint – to the forgetful functor VectC → VectR forgetting the complex structure. This forgetting of the complex structure of a complex vector space formula_0 is called decomplexification (or sometimes "realification"). The decomplexification of a complex vector space formula_0 with basis formula_5 removes the possibility of complex multiplication of scalars, thus yielding a real vector space formula_6 of twice the dimension with a basis formula_7 Basic properties. By the nature of the tensor product, every vector "v" in "V"C can be written uniquely in the form formula_8 where "v"1 and "v"2 are vectors in "V". It is a common practice to drop the tensor product symbol and just write formula_9 Multiplication by the complex number "a" + "i b" is then given by the usual rule formula_10 We can then regard "V"C as the direct sum of two copies of "V": formula_11 with the above rule for multiplication by complex numbers. There is a natural embedding of "V" into "V"C given by formula_12 The vector space "V" may then be regarded as a "real" subspace of "V"C. If "V" has a basis {"e""i"} (over the field R) then a corresponding basis for "V"C is given by { "e""i" ⊗ 1 } over the field C. The complex dimension of "V"C is therefore equal to the real dimension of "V": formula_13 Alternatively, rather than using tensor products, one can use this direct sum as the "definition" of the complexification: formula_14 where formula_3 is given a linear complex structure by the operator "J" defined as formula_15 where "J" encodes the operation of “multiplication by i”. In matrix form, "J" is given by: formula_16 This yields the identical space – a real vector space with linear complex structure is identical data to a complex vector space – though it constructs the space differently. Accordingly, formula_3 can be written as formula_17 or formula_18 identifying "V" with the first direct summand. This approach is more concrete, and has the advantage of avoiding the use of the technically involved tensor product, but is ad hoc. Dickson doubling. 
The process of complexification by moving from R to C was abstracted by twentieth-century mathematicians including Leonard Dickson. One starts by using the identity mapping "x"* = "x" as a trivial involution on R. Next two copies of R are used to form "z" = ("a", "b") with the complex conjugation introduced as the involution "z"* = ("a", −"b"). Two elements w and z in the doubled set multiply by formula_19 Finally, the doubled set is given a norm "N"("z") = "z* z". When starting from R with the identity involution, the doubled set is C with the norm "a"2 + "b"2. If one doubles C, and uses conjugation ("a,b")* = ("a"*, –"b"), the construction yields quaternions. Doubling again produces octonions, also called Cayley numbers. It was at this point that Dickson in 1919 contributed to uncovering algebraic structure. The process can also be initiated with C and the trivial involution "z"* = "z". The norm produced is simply "z"2, unlike the generation of C by doubling R. When this C is doubled it produces bicomplex numbers, and doubling that produces biquaternions, and doubling again results in bioctonions. When the base algebra is associative, the algebra produced by this Cayley–Dickson construction is called a composition algebra since it can be shown that it has the property formula_20 Complex conjugation. The complexified vector space "V"C has more structure than an ordinary complex vector space. It comes with a canonical complex conjugation map: formula_21 defined by formula_22 The map χ may either be regarded as a conjugate-linear map from "V"C to itself or as a complex linear isomorphism from "V"C to its complex conjugate formula_23. Conversely, given a complex vector space "W" with a complex conjugation χ, "W" is isomorphic as a complex vector space to the complexification "V"C of the real subspace formula_24 In other words, all complex vector spaces with complex conjugation are the complexification of a real vector space. For example, when "W" = C"n" with the standard complex conjugation formula_25 the invariant subspace "V" is just the real subspace R"n". Linear transformations. Given a real linear transformation "f" : "V" → "W" between two real vector spaces there is a natural complex linear transformation formula_26 given by formula_27 The map formula_28 is called the complexification of "f". The complexification of linear transformations satisfies the following properties: formula_29, formula_30, formula_31, and formula_32. In the language of category theory one says that complexification defines an (additive) functor from the category of real vector spaces to the category of complex vector spaces. The map "f"C commutes with conjugation and so maps the real subspace of "V"C to the real subspace of "W"C (via the map "f"). Moreover, a complex linear map "g" : "V"C → "W"C is the complexification of a real linear map if and only if it commutes with conjugation. As an example consider a linear transformation from R"n" to R"m" thought of as an "m"×"n" matrix. The complexification of that transformation is exactly the same matrix, but now thought of as a linear map from C"n" to C"m". Dual spaces and tensor products. The dual of a real vector space "V" is the space "V"* of all real linear maps from "V" to R. The complexification of "V"* can naturally be thought of as the space of all real linear maps from "V" to C (denoted HomR("V",C)). That is, formula_33 The isomorphism is given by formula_34 where "φ"1 and "φ"2 are elements of "V"*. 
Complex conjugation is then given by the usual operation formula_35 Given a real linear map "φ" : "V" → C we may extend by linearity to obtain a complex linear map "φ" : "V"C → C. That is, formula_36 This extension gives an isomorphism from HomR("V",C) to HomC("V"C,C). The latter is just the "complex" dual space to "V"C, so we have a natural isomorphism: formula_37 More generally, given real vector spaces "V" and "W" there is a natural isomorphism formula_38 Complexification also commutes with the operations of taking tensor products, exterior powers and symmetric powers. For example, if "V" and "W" are real vector spaces there is a natural isomorphism formula_39 Note the left-hand tensor product is taken over the reals while the right-hand one is taken over the complexes. The same pattern is true in general. For instance, one has formula_40 In all cases, the isomorphisms are the “obvious” ones. References. <templatestyles src="Reflist/styles.css" />
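The Dickson doubling described above is easy to experiment with numerically. The following Python sketch (written for this article; the names are illustrative) implements the doubling product formula_19, the involution, and the norm, and checks the composition property formula_20 for the complex numbers, quaternions, and octonions obtained by doubling R once, twice, and three times:

import random

def conj(x):
    # Involution: identity on the base reals, (a, b)* = (a*, -b) on pairs.
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    # Doubling product (a, b)(c, d) = (ac - d*b, da + bc*).
    if isinstance(x, tuple):
        (a, b), (c, d) = x, y
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

def norm(x):
    # N(z) = z* z; for a composition algebra only the leading real slot is nonzero.
    n = mul(conj(x), x)
    while isinstance(n, tuple):
        n = n[0]
    return n

def random_element(level):
    if level == 0:
        return random.uniform(-1.0, 1.0)
    return (random_element(level - 1), random_element(level - 1))

for level, name in ((1, "complex numbers"), (2, "quaternions"), (3, "octonions")):
    p, q = random_element(level), random_element(level)
    assert abs(norm(mul(p, q)) - norm(p) * norm(q)) < 1e-9
    print("N(pq) = N(p)N(q) holds for the", name)

A fourth doubling (the sedenions) starts from the non-associative octonions, so, consistent with the statement above, the resulting algebra is no longer a composition algebra and the same check would generally fail.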
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "V^{\\Complex} = V\\otimes_{\\R} \\Complex\\,." }, { "math_id": 2, "text": "\\R" }, { "math_id": 3, "text": "V^{\\Complex}" }, { "math_id": 4, "text": "\\alpha(v \\otimes \\beta) = v\\otimes(\\alpha\\beta)\\qquad\\mbox{ for all } v\\in V \\mbox{ and }\\alpha,\\beta \\in \\Complex." }, { "math_id": 5, "text": "e_{\\mu}" }, { "math_id": 6, "text": "W_{\\R}" }, { "math_id": 7, "text": "\\{e_{\\mu}, ie_{\\mu}\\}." }, { "math_id": 8, "text": "v = v_1\\otimes 1 + v_2\\otimes i" }, { "math_id": 9, "text": "v = v_1 + iv_2.\\," }, { "math_id": 10, "text": "(a+ib)(v_1 + iv_2) = (av_1 - bv_2) + i(bv_1 + av_2).\\," }, { "math_id": 11, "text": "V^{\\Complex} \\cong V \\oplus i V" }, { "math_id": 12, "text": "v\\mapsto v\\otimes 1." }, { "math_id": 13, "text": "\\dim_{\\Complex} V^{\\Complex} = \\dim_{\\R} V." }, { "math_id": 14, "text": "V^{\\Complex} := V \\oplus V," }, { "math_id": 15, "text": "J(v,w) := (-w,v)," }, { "math_id": 16, "text": "J = \\begin{bmatrix}0 & -I_V \\\\ I_V & 0\\end{bmatrix}." }, { "math_id": 17, "text": "V \\oplus JV" }, { "math_id": 18, "text": "V \\oplus i V," }, { "math_id": 19, "text": "w z = (a,b) \\times (c,d) = (ac\\ - \\ d^*b,\\ da \\ + \\ b c^*)." }, { "math_id": 20, "text": "N(p\\,q) = N(p)\\,N(q)\\,." }, { "math_id": 21, "text": "\\chi : V^{\\Complex} \\to \\overline{V^{\\Complex}}" }, { "math_id": 22, "text": "\\chi(v\\otimes z) = v\\otimes \\bar z." }, { "math_id": 23, "text": "\\overline {V^{\\Complex}}" }, { "math_id": 24, "text": "V = \\{ w \\in W : \\chi(w) = w \\}." }, { "math_id": 25, "text": "\\chi(z_1,\\ldots,z_n) = (\\bar z_1,\\ldots,\\bar z_n)" }, { "math_id": 26, "text": "f^{\\Complex} : V^{\\Complex} \\to W^{\\Complex}" }, { "math_id": 27, "text": "f^{\\Complex}(v\\otimes z) = f(v)\\otimes z." }, { "math_id": 28, "text": "f^{\\Complex}" }, { "math_id": 29, "text": "(\\mathrm{id}_V)^{\\Complex} = \\mathrm{id}_{V^{\\Complex}}" }, { "math_id": 30, "text": "(f \\circ g)^{\\Complex} = f^{\\Complex} \\circ g^{\\Complex}" }, { "math_id": 31, "text": "(f+g)^{\\Complex} = f^{\\Complex} + g^{\\Complex}" }, { "math_id": 32, "text": "(a f)^{\\Complex} = a f^{\\Complex} \\quad \\forall a \\in \\R" }, { "math_id": 33, "text": "(V^*)^{\\Complex} = V^*\\otimes \\Complex \\cong \\mathrm{Hom}_{\\Reals}(V,\\Complex)." }, { "math_id": 34, "text": "(\\varphi_1\\otimes 1 + \\varphi_2\\otimes i) \\leftrightarrow \\varphi_1 + i \\varphi_2" }, { "math_id": 35, "text": "\\overline{\\varphi_1 + i\\varphi_2} = \\varphi_1 - i \\varphi_2." }, { "math_id": 36, "text": "\\varphi(v\\otimes z) = z\\varphi(v)." }, { "math_id": 37, "text": "(V^*)^{\\Complex} \\cong (V^{\\Complex})^*." }, { "math_id": 38, "text": "\\mathrm{Hom}_{\\Reals}(V,W)^{\\Complex} \\cong \\mathrm{Hom}_{\\Complex}(V^{\\Complex},W^{\\Complex})." }, { "math_id": 39, "text": "(V \\otimes_{\\Reals} W)^{\\Complex} \\cong V^{\\Complex} \\otimes_{\\Complex} W^{\\Complex}\\,." }, { "math_id": 40, "text": "(\\Lambda_{\\Reals}^k V)^{\\Complex} \\cong \\Lambda_{\\Complex}^k (V^{\\Complex})." } ]
https://en.wikipedia.org/wiki?curid=755604
7556093
Green's function for the three-variable Laplace equation
Partial differential equations In physics, the Green's function (or fundamental solution) for the Laplacian (or Laplace operator) in three variables is used to describe the response of a particular type of physical system to a point source. In particular, this Green's function arises in systems that can be described by Poisson's equation, a partial differential equation (PDE) of the form formula_0 where formula_1 is the Laplace operator in formula_2, formula_3 is the source term of the system, and formula_4 is the solution to the equation. Because formula_1 is a linear differential operator, the solution formula_4 to a general system of this type can be written as an integral over a distribution of source given by formula_3: formula_5 where the Green's function for Laplacian in three variables formula_6 describes the response of the system at the point formula_7 to a point source located at formula_8: formula_9 and the point source is given by formula_10, the Dirac delta function. Motivation. One physical system of this type is a charge distribution in electrostatics. In such a system, the electric field is expressed as the negative gradient of the electric potential, and Gauss's law in differential form applies: formula_11 Combining these expressions gives us Poisson's equation: formula_12 We can find the solution formula_13 to this equation for an arbitrary charge distribution by temporarily considering the distribution created by a point charge formula_14 located at formula_8: formula_15 In this case, formula_16 which shows that formula_17 for formula_18 will give the response of the system to the point charge formula_14. Therefore, from the discussion above, if we can find the Green's function of this operator, we can find formula_13 to be formula_19 for a general charge distribution. Mathematical exposition. The free-space Green's function for the Laplace operator in three variables is given in terms of the reciprocal distance between two points and is known as the "Newton kernel" or "Newtonian potential". That is to say, the solution of the equation formula_20 is formula_21 where formula_22 are the standard Cartesian coordinates in a three-dimensional space, and formula_23 is the Dirac delta function. The "algebraic expression" of the Green's function for the three-variable Laplace operator, apart from the constant term formula_24 expressed in Cartesian coordinates shall be referred to as formula_25 Many expansion formulas are possible, given the algebraic expression for the Green's function. One of the most well-known of these, the Laplace expansion for the three-variable Laplace equation, is given in terms of the generating function for Legendre polynomials, formula_26 which has been written in terms of spherical coordinates formula_27. The less than (greater than) notation means, take the primed or unprimed spherical radius depending on which is less than (greater than) the other. The formula_28 represents the angle between the two arbitrary vectors formula_29 given by formula_30 The free-space circular cylindrical Green's function (see below) is given in terms of the reciprocal distance between two points. The expression is derived in Jackson's "Classical Electrodynamics". Using the Green's function for the three-variable Laplace operator, one can integrate the Poisson equation in order to determine the potential function. 
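The Laplace expansion above lends itself to a direct numerical check. The following sketch (Python with NumPy and SciPy; the two spherical test points are arbitrary) compares the reciprocal distance computed from Cartesian coordinates with a truncation of the Legendre series.

```python
import numpy as np
from scipy.special import eval_legendre

def cart(r, theta, phi):
    # Cartesian coordinates of a point given in spherical coordinates (r, theta, phi).
    return r * np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

# Two arbitrary test points.
r1, th1, ph1 = 0.7, 1.1, 0.3
r2, th2, ph2 = 2.0, 0.4, 2.5

direct = 1.0 / np.linalg.norm(cart(r1, th1, ph1) - cart(r2, th2, ph2))

# Laplace expansion: sum over l of r_<^l / r_>^(l+1) * P_l(cos gamma).
cos_gamma = (np.cos(th1) * np.cos(th2)
             + np.sin(th1) * np.sin(th2) * np.cos(ph1 - ph2))
r_lt, r_gt = min(r1, r2), max(r1, r2)
series = sum(r_lt**l / r_gt**(l + 1) * eval_legendre(l, cos_gamma)
             for l in range(60))

print(direct, series)   # the truncated series reproduces 1/|x - x'| to high accuracy
```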
Green's functions can be expanded in terms of the basis elements (harmonic functions) which are determined using the separable coordinate systems for the linear partial differential equation. There are many expansions in terms of special functions for the Green's function. In the case of a boundary put at infinity with the boundary condition setting the solution to zero at infinity, then one has an infinite-extent Green's function. For the three-variable Laplace operator, one can for instance expand it in the rotationally invariant coordinate systems which allow separation of variables. For instance: formula_31 where formula_32 and formula_33 is the odd-half-integer degree Legendre function of the second kind, which is a toroidal harmonic. Here the expansion has been written in terms of cylindrical coordinates formula_34. See for instance Toroidal coordinates. Using one of the Whipple formulae for toroidal harmonics we can obtain an alternative form of the Green's function formula_35 in terms for a toroidal harmonic of the first kind. This formula was used in 1999 for astrophysical applications in a paper published in "The Astrophysical Journal", published by Howard Cohl and Joel Tohline. The above-mentioned formula is also known in the engineering community. For instance, a paper written in the "Journal of Applied Physics" in volume 18, 1947 pages 562-577 shows N.G. De Bruijn and C.J. Boukamp knew of the above relationship. In fact, virtually all the mathematics found in recent papers was already done by Chester Snow. This is found in his book titled "Hypergeometric and Legendre Functions with Applications to Integral Equations of Potential Theory", National Bureau of Standards Applied Mathematics Series 19, 1952. Look specifically on pages 228-263. The article by Chester Snow, "Magnetic Fields of Cylindrical Coils and Annular Coils" (National Bureau of Standards, Applied Mathematical Series 38, December 30, 1953), clearly shows the relationship between the free-space Green's function in cylindrical coordinates and the Q-function expression. Likewise, see another one of Snow's pieces of work, titled "Formulas for Computing Capacitance and Inductance", National Bureau of Standards Circular 544, September 10, 1954, pp 13–41. Indeed, not much has been published recently on the subject of toroidal functions and their applications in engineering or physics. However, a number of engineering applications do exist. One application was published; the article was written by J.P. Selvaggi, S. Salon, O. Kwon, and M.V.K. Chari, "Calculating the External Magnetic Field From Permanent Magnets in Permanent-Magnet Motors-An Alternative Method," IEEE Transactions on Magnetics, Vol. 40, No. 5, September 2004. These authors have done extensive work with Legendre functions of the second kind and half-integral degree or toroidal functions of zeroth order. They have solved numerous problems which exhibit circular cylindrical symmetry employing the toroidal functions. The above expressions for the Green's function for the three-variable Laplace operator are examples of single summation expressions for this Green's function. There are also single-integral expressions for this Green's function. Examples of these can be seen to exist in rotational cylindrical coordinates as an integral Laplace transform in the difference of vertical heights whose kernel is given in terms of the order-zero Bessel function of the first kind as formula_36 where formula_37 are the greater (lesser) variables formula_38 and formula_39. 
Similarly, the Green's function for the three-variable Laplace equation can be given as a Fourier integral cosine transform of the difference of vertical heights whose kernel is given in terms of the order-zero modified Bessel function of the second kind as formula_40 Rotationally invariant Green's functions for the three-variable Laplace operator. Green's function expansions exist in all of the rotationally invariant coordinate systems which are known to yield solutions to the three-variable Laplace equation through the separation of variables technique. &lt;templatestyles src="Div col/styles.css"/&gt;
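Both single-integral representations can be verified numerically in the same way. The sketch below (Python with SciPy; the two cylindrical test points are arbitrary) evaluates the Laplace-transform form involving the Bessel function of the first kind and the Fourier-cosine form involving the modified Bessel function of the second kind, and compares both with the reciprocal distance.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, k0

# Two arbitrary test points in cylindrical coordinates (R, phi, z).
R1, ph1, z1 = 1.3, 0.2, 0.5
R2, ph2, z2 = 0.8, 1.7, -0.4

rho = np.sqrt(R1**2 + R2**2 - 2 * R1 * R2 * np.cos(ph1 - ph2))  # transverse separation
dz = abs(z1 - z2)                                               # z_> - z_<

direct = 1.0 / np.sqrt(rho**2 + dz**2)                          # 1/|x - x'|

# Laplace-transform representation with the Bessel function J_0.
bessel_j, _ = quad(lambda k: j0(k * rho) * np.exp(-k * dz), 0, np.inf)

# Fourier-cosine representation with the modified Bessel function K_0.
bessel_k, _ = quad(lambda k: (2 / np.pi) * k0(k * rho) * np.cos(k * dz),
                   0, np.inf, limit=200)

print(direct, bessel_j, bessel_k)   # all three values agree
```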
[ { "math_id": 0, "text": " \\nabla^2 u(\\mathbf{x}) = f(\\mathbf{x}) " }, { "math_id": 1, "text": "\\nabla^2" }, { "math_id": 2, "text": "\\mathbb{R}^3" }, { "math_id": 3, "text": "f(\\mathbf{x})" }, { "math_id": 4, "text": "u(\\mathbf{x})" }, { "math_id": 5, "text": " u(\\mathbf{x}) = \\int G(\\mathbf{x},\\mathbf{x'})f(\\mathbf{x'})d\\mathbf{x}'" }, { "math_id": 6, "text": "G(\\mathbf{x},\\mathbf{x'})" }, { "math_id": 7, "text": "\\mathbf{x}" }, { "math_id": 8, "text": "\\mathbf{x'}" }, { "math_id": 9, "text": "\\nabla^2 G(\\mathbf{x},\\mathbf{x'}) = \\delta(\\mathbf{x}-\\mathbf{x'})" }, { "math_id": 10, "text": "\\delta(\\mathbf{x}-\\mathbf{x'})" }, { "math_id": 11, "text": "\\begin{align}\n\\mathbf{E} &= - \\mathbf{\\nabla} \\phi(\\mathbf{x}) \\\\[1ex]\n\\boldsymbol{\\nabla} \\cdot \\mathbf{E} &= \\frac{\\rho(\\mathbf{x})}{\\varepsilon_0}\n\\end{align} " }, { "math_id": 12, "text": "-\\mathbf{\\nabla}^2 \\phi(\\mathbf{x}) = \\frac{\\rho(\\mathbf{x})}{\\varepsilon_0} " }, { "math_id": 13, "text": "\\phi(\\mathbf{x})" }, { "math_id": 14, "text": "q" }, { "math_id": 15, "text": "\\rho(\\mathbf{x}) = q \\, \\delta(\\mathbf{x}-\\mathbf{x'})" }, { "math_id": 16, "text": "-\\frac{\\varepsilon_0}{q} \\mathbf{\\nabla}^2\\phi(\\mathbf{x}) = \\delta(\\mathbf{x}-\\mathbf{x'}) " }, { "math_id": 17, "text": "G(\\mathbf{x}, \\mathbf{x'})" }, { "math_id": 18, "text": "-\\frac{\\varepsilon_0}{q} \\nabla^2" }, { "math_id": 19, "text": " \\phi(\\mathbf{x}) = \\int G(\\mathbf{x},\\mathbf{x'}) \\rho(\\mathbf{x'}) \\,d\\mathbf{x}'" }, { "math_id": 20, "text": " \\nabla^2 G(\\mathbf{x},\\mathbf{x'}) = \\delta(\\mathbf{x}-\\mathbf{x'})" }, { "math_id": 21, "text": " G(\\mathbf{x},\\mathbf{x'}) = -\\frac{1}{4\\pi \\left|\\mathbf{x} - \\mathbf{x'}\\right|}," }, { "math_id": 22, "text": "\\mathbf{x}=(x,y,z)" }, { "math_id": 23, "text": "\\delta" }, { "math_id": 24, "text": "-1/(4\\pi)" }, { "math_id": 25, "text": "\\frac{1}{|\\mathbf{x} - \\mathbf{x'}|}\n= \\left[\\left(x - x'\\right)^2 + \\left(y - y'\\right)^2 + \\left(z - z'\\right)^2\\right]^{-{1}/{2}}.\n" }, { "math_id": 26, "text": " \\frac{1}{|\\mathbf{x} - \\mathbf{x'}|} = \\sum_{l=0}^\\infty \\frac{r_<^l}{r_>^{l+1}} P_l(\\cos\\gamma)," }, { "math_id": 27, "text": "(r,\\theta,\\varphi)" }, { "math_id": 28, "text": "\\gamma" }, { "math_id": 29, "text": "(\\mathbf{x},\\mathbf{x'})" }, { "math_id": 30, "text": "\\cos\\gamma = \\cos\\theta\\cos\\theta' + \\sin\\theta\\sin\\theta' \\cos(\\varphi-\\varphi')." 
}, { "math_id": 31, "text": " \\frac{1}{|\\mathbf{x} - \\mathbf{x'}|} =\n\\frac{1}{\\pi\\sqrt{R R'}}\n\\sum_{m=-\\infty}^\\infty e^{im(\\varphi-\\varphi')} Q_{m-\\frac{1}{2}}(\\chi)" }, { "math_id": 32, "text": " \\chi = \\frac{R^2 + {R'}^2 + \\left(z-z'\\right)^2}{2RR'}" }, { "math_id": 33, "text": "Q_{m-\\frac{1}{2}}(\\chi)" }, { "math_id": 34, "text": "(R,\\varphi,z)" }, { "math_id": 35, "text": "\\frac{1}{|\\mathbf{x} - \\mathbf{x'}|} =\n\\sqrt{\\frac{\\pi}{2RR'(\\chi^2-1)^{1/2}}}\n\\sum_{m=-\\infty}^\\infty \\frac{\\left(-1\\right)^m}{\\Gamma(m+1/2)} P_{-\\frac{1}{2}}^m\n{\\left(\\frac{\\chi}{\\sqrt{\\chi^2-1}}\\right)} e^{im(\\varphi-\\varphi')}" }, { "math_id": 36, "text": " \\frac{1}{|\\mathbf{x} - \\mathbf{x'}|} =\n\\int_0^\\infty J_0{\\left( k\\sqrt{R^2 + {R'}^2 - 2RR'\\cos(\\varphi-\\varphi')}\\right)} e^{-k(z_>-z_<)}\\,dk," }, { "math_id": 37, "text": "z_> (z_<)" }, { "math_id": 38, "text": "z" }, { "math_id": 39, "text": "z'" }, { "math_id": 40, "text": " \\frac{1}{|\\mathbf{x} - \\mathbf{x'}|} =\n\\frac{2}{\\pi} \\int_0^\\infty K_0{\\left( k\\sqrt{R^2 + {R'}^2 - 2RR'\\cos(\\varphi-\\varphi')} \\right)} \\cos[k(z-z')] \\, dk. " } ]
https://en.wikipedia.org/wiki?curid=7556093
755615
Poundal
The poundal (symbol: pdl) is a unit of force, introduced in 1877, that is part of the Absolute English system of units, which itself is a coherent subsystem of the foot–pound–second system. formula_0 The poundal is defined as the force necessary to accelerate 1 pound-mass at 1 foot per second squared. 1 pdl = 0.138254954376 N exactly. Background. English units require re-scaling of either force or mass to eliminate a numerical proportionality constant in the equation "F = ma". The poundal represents one choice, which is to rescale units of force. Since a pound of "force" (pound force) accelerates a pound of "mass" (pound mass) at 32.174 049 ft/s2 (9.80665 m/s2; the acceleration of gravity, "g"), we can scale down the unit of force to compensate, giving us one that accelerates 1 pound mass at 1 ft/s2 rather than at 32.174 049 ft/s2; and that is the poundal, which is approximately 1⁄32 pound force. For example, a force of 1200 poundals is required to accelerate a person of 150 pounds mass at 8 feet per second squared: formula_1 The poundal-as-force, pound-as-mass system is contrasted with an alternative system in which pounds are used as "force" (pounds-force), and instead, the "mass" unit is rescaled by a factor of roughly 32. That is, one pound-force will accelerate one pound-mass at 32 feet per second squared; we can scale "up" the unit of "mass" to compensate, giving us a unit of mass that one pound force accelerates at 1 ft/s2 (rather than 32 ft/s2); this unit of mass is called the slug, which is about 32 pounds mass. Using this system (slugs and pounds-force), the above expression could be expressed as: formula_2 Note: Slugs (about 32.174 049 lb) and poundals (about 1/32.174 049 lbf) are never used in the same system, since they are opposite solutions of the same problem. Rather than changing either force or mass units, one may choose to express acceleration in units of the acceleration due to Earth's gravity (called "g"). In this case, we can keep both pounds-mass and pounds-force, such that applying one pound force to one pound mass accelerates it at one unit of acceleration ("g"): formula_3 Expressions derived using poundals for force and lb for mass (or lbf for force and slugs for mass) have the advantage of not being tied to conditions on the surface of the earth. Specifically, computing "F" = "ma" on the moon or in deep space as poundals, lb⋅ft/s2 or lbf = slug⋅ft/s2, avoids the constant tied to acceleration of gravity on earth. References.
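The arithmetic of the three bookkeeping conventions above can be laid out side by side. The short sketch below (Python; it reuses the 150 pound-mass, 8 ft/s2 example from the text) computes the same force in poundals, in pounds-force via slugs, and in pounds-force via g-units, and converts the results to newtons as a consistency check.

```python
# Exact metric definitions of the customary units involved.
LB_TO_KG = 0.45359237        # 1 lb (mass) in kilograms
FT_TO_M = 0.3048             # 1 ft in metres
G0 = 9.80665                 # standard gravity in m/s^2
G0_FT = G0 / FT_TO_M         # standard gravity in ft/s^2, about 32.174 049

PDL_TO_N = LB_TO_KG * FT_TO_M    # 1 pdl = 0.138254954376 N
LBF_TO_N = LB_TO_KG * G0         # 1 lbf is about 4.448 N

m_lb, a_fts2 = 150.0, 8.0        # the worked example: 150 lb mass, 8 ft/s^2

F_pdl = m_lb * a_fts2                    # poundal system: 150 lb x 8 ft/s^2 = 1200 pdl
F_lbf_slug = (m_lb / G0_FT) * a_fts2     # slug system: 4.66 slug x 8 ft/s^2 = 37.3 lbf
F_lbf_g = m_lb * (a_fts2 / G0_FT)        # g-units: 150 lb x 0.249 g = 37.3 lbf

print(F_pdl, F_lbf_slug, F_lbf_g)
print(F_pdl * PDL_TO_N, F_lbf_slug * LBF_TO_N)   # both give the same force, about 165.9 N
```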
[ { "math_id": 0, "text": "1\\,\\text{pdl} = 1\\,\\text{lb}{\\cdot}\\text{ft}/\\text{s}^2" }, { "math_id": 1, "text": "\\mathrm{150~lb \\times 8~\\frac{ft}{s^2} = 1200~pdl}" }, { "math_id": 2, "text": "\\mathrm{4.66~slug \\times 8~\\frac{ft}{s^2} = 37.3~lbf}" }, { "math_id": 3, "text": "150~\\mathrm{lb} \\cdot 0.249g = 37.3~\\mathrm{lbf}" } ]
https://en.wikipedia.org/wiki?curid=755615
755647
Semilattice
Partial order with joins In mathematics, a join-semilattice (or upper semilattice) is a partially ordered set that has a join (a least upper bound) for any nonempty finite subset. Dually, a meet-semilattice (or lower semilattice) is a partially ordered set which has a meet (or greatest lower bound) for any nonempty finite subset. Every join-semilattice is a meet-semilattice in the inverse order and vice versa. Semilattices can also be defined algebraically: join and meet are associative, commutative, idempotent binary operations, and any such operation induces a partial order (and the respective inverse order) such that the result of the operation for any two elements is the least upper bound (or greatest lower bound) of the elements with respect to this partial order. A lattice is a partially ordered set that is both a meet- and join-semilattice with respect to the same partial order. Algebraically, a lattice is a set with two associative, commutative idempotent binary operations linked by corresponding absorption laws. Order-theoretic definition. A set "S" partially ordered by the binary relation ≤ is a "meet-semilattice" if for all elements "x" and "y" of "S", the greatest lower bound of the set {"x", "y"} exists. The greatest lower bound of the set {"x", "y"} is called the meet of "x" and "y", denoted "x" ∧ "y". Replacing "greatest lower bound" with "least upper bound" results in the dual concept of a "join-semilattice". The least upper bound of {"x", "y"} is called the join of "x" and "y", denoted "x" ∨ "y". Meet and join are binary operations on "S". A simple induction argument shows that the existence of all possible pairwise suprema (infima), as per the definition, implies the existence of all non-empty finite suprema (infima). A join-semilattice is bounded if it has a least element, the join of the empty set. Dually, a meet-semilattice is bounded if it has a greatest element, the meet of the empty set. Other properties may be assumed; see the article on completeness in order theory for more discussion on this subject. That article also discusses how we may rephrase the above definition in terms of the existence of suitable Galois connections between related posets, an approach of special interest for category theoretic investigations of the concept. Algebraic definition. A "meet-semilattice" is an algebraic structure formula_0 consisting of a set "S" with a binary operation ∧, called meet, such that for all members "x", "y", and "z" of "S", the following identities hold: associativity, "x" ∧ ("y" ∧ "z") = ("x" ∧ "y") ∧ "z"; commutativity, "x" ∧ "y" = "y" ∧ "x"; and idempotency, "x" ∧ "x" = "x". A meet-semilattice formula_0 is bounded if "S" includes an identity element 1 such that "x" ∧ 1 = "x" for all "x" in "S". If the symbol ∨, called join, replaces ∧ in the definition just given, the structure is called a "join-semilattice". One can be ambivalent about the particular choice of symbol for the operation, and speak simply of "semilattices". A semilattice is a commutative, idempotent semigroup; i.e., a commutative band. A bounded semilattice is an idempotent commutative monoid. A partial order is induced on a meet-semilattice by setting "x" ≤ "y" whenever "x" ∧ "y" = "x". For a join-semilattice, the order is induced by setting "x" ≤ "y" whenever "x" ∨ "y" = "y". In a bounded meet-semilattice, the identity 1 is the greatest element of "S". Similarly, an identity element in a join semilattice is a least element. Connection between the two definitions.
An order theoretic meet-semilattice 〈"S", ≤〉 gives rise to a binary operation ∧ such that 〈"S", ∧〉 is an algebraic meet-semilattice. Conversely, the meet-semilattice 〈"S", ∧〉 gives rise to a binary relation ≤ that partially orders "S" in the following way: for all elements "x" and "y" in "S", "x" ≤ "y" if and only if "x" = "x" ∧ "y". The relation ≤ introduced in this way defines a partial ordering from which the binary operation ∧ may be recovered. Conversely, the order induced by the algebraically defined semilattice 〈"S", ∧〉 coincides with that induced by ≤. Hence the two definitions may be used interchangeably, depending on which one is more convenient for a particular purpose. A similar conclusion holds for join-semilattices and the dual ordering ≥. Examples. Semilattices are employed to construct other order structures, or in conjunction with other completeness properties. are in fact the same set. Commutativity and associativity of ∧ assure (1), idempotence, (2). This semilattice is the free semilattice over "L". It is not bounded by "L", because a set is not a member of itself. Semilattice morphisms. The above algebraic definition of a semilattice suggests a notion of morphism between two semilattices. Given two join-semilattices ("S", ∨) and ("T", ∨), a homomorphism of (join-) semilattices is a function "f": "S" → "T" such that "f"("x" ∨ "y") = "f"("x") ∨ "f"("y"). Hence "f" is just a homomorphism of the two semigroups associated with each semilattice. If "S" and "T" both include a least element 0, then "f" should also be a monoid homomorphism, i.e. we additionally require that "f"(0) = 0. In the order-theoretic formulation, these conditions just state that a homomorphism of join-semilattices is a function that preserves binary joins and least elements, if such there be. The obvious dual—replacing ∧ with ∨ and 0 with 1—transforms this definition of a join-semilattice homomorphism into its meet-semilattice equivalent. Note that any semilattice homomorphism is necessarily monotone with respect to the associated ordering relation. For an explanation see the entry preservation of limits. Equivalence with algebraic lattices. There is a well-known equivalence between the category formula_9 of join-semilattices with zero with formula_10-homomorphisms and the category formula_11 of algebraic lattices with compactness-preserving complete join-homomorphisms, as follows. With a join-semilattice formula_12 with zero, we associate its ideal lattice formula_13. With a formula_10-homomorphism formula_14 of formula_10-semilattices, we associate the map formula_15, that with any ideal formula_16 of formula_12 associates the ideal of formula_17 generated by formula_18. This defines a functor formula_19. Conversely, with every algebraic lattice formula_20 we associate the formula_10-semilattice formula_21 of all compact elements of formula_20, and with every compactness-preserving complete join-homomorphism formula_22 between algebraic lattices we associate the restriction formula_23. This defines a functor formula_24. The pair formula_25 defines a category equivalence between formula_9 and formula_11. Distributive semilattices. Surprisingly, there is a notion of "distributivity" applicable to semilattices, even though distributivity conventionally requires the interaction of two binary operations. This notion requires but a single operation, and generalizes the distributivity condition for lattices. 
A join-semilattice is distributive if for all "a", "b", and "x" with "x" ≤ "a" ∨ "b" there exist "a' " ≤ "a" and "b' " ≤ "b" such that "x" = "a' " ∨ "b' ". Distributive meet-semilattices are defined dually. These definitions are justified by the fact that any distributive join-semilattice in which binary meets exist is a distributive lattice. See the entry distributivity (order theory). A join-semilattice is distributive if and only if the lattice of its ideals (under inclusion) is distributive. Complete semilattices. Nowadays, the term "complete semilattice" has no generally accepted meaning, and various mutually inconsistent definitions exist. If completeness is taken to require the existence of all infinite joins, or all infinite meets, whichever the case may be, as well as finite ones, this immediately leads to partial orders that are in fact complete lattices. For why the existence of all possible infinite joins entails the existence of all possible infinite meets (and vice versa), see the entry completeness (order theory). Nevertheless, the literature on occasion still takes complete join- or meet-semilattices to be complete lattices. In this case, "completeness" denotes a restriction on the scope of the homomorphisms. Specifically, a complete join-semilattice requires that the homomorphisms preserve all joins, but contrary to the situation we find for completeness properties, this does not require that homomorphisms preserve all meets. On the other hand, we can conclude that every such mapping is the lower adjoint of some Galois connection. The corresponding (unique) upper adjoint will then be a homomorphism of complete meet-semilattices. This gives rise to a number of useful categorical dualities between the categories of all complete semilattices with morphisms preserving all meets or joins, respectively. Another usage of "complete meet-semilattice" refers to a bounded complete cpo. A complete meet-semilattice in this sense is arguably the "most complete" meet-semilattice that is not necessarily a complete lattice. Indeed, a complete meet-semilattice has all "non-empty" meets (which is equivalent to being bounded complete) and all directed joins. If such a structure has also a greatest element (the meet of the empty set), it is also a complete lattice. Thus a complete semilattice turns out to be "a complete lattice possibly lacking a top". This definition is of interest specifically in domain theory, where bounded complete algebraic cpos are studied as Scott domains. Hence Scott domains have been called "algebraic semilattices". Cardinality-restricted notions of completeness for semilattices have been rarely considered in the literature. Free semilattices. This section presupposes some knowledge of category theory. In various situations, free semilattices exist. For example, the forgetful functor from the category of join-semilattices (and their homomorphisms) to the category of sets (and functions) admits a left adjoint. Therefore, the free join-semilattice F("S") over a set "S" is constructed by taking the collection of all non-empty "finite" subsets of "S", ordered by subset inclusion. Clearly, "S" can be embedded into F("S") by a mapping "e" that takes any element "s" in "S" to the singleton set {"s"}. Then any function "f" from a "S" to a join-semilattice "T" (more formally, to the underlying set of "T") induces a unique homomorphism "f' " between the join-semilattices F("S") and "T", such that "f" = "f' " ○ "e". 
Explicitly, "f' " is given by formula_26 Now the obvious uniqueness of "f' " suffices to obtain the required adjunction—the morphism-part of the functor F can be derived from general considerations (see adjoint functors). The case of free meet-semilattices is dual, using the opposite subset inclusion as an ordering. For join-semilattices with bottom, we just add the empty set to the above collection of subsets. In addition, semilattices often serve as generators for free objects within other categories. Notably, both the forgetful functors from the category of frames and frame-homomorphisms, and from the category of distributive lattices and lattice-homomorphisms, have a left adjoint. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. It is often the case that standard treatments of lattice theory define a semilattice, if that, and then say no more. See the references in the entries order theory and lattice theory. Moreover, there is no literature on semilattices of comparable magnitude to that on semigroups.
[ { "math_id": 0, "text": "\\langle S, \\land \\rangle" }, { "math_id": 1, "text": "\\mathbb{N}" }, { "math_id": 2, "text": "\\leq \\omega" }, { "math_id": 3, "text": " \\xi " }, { "math_id": 4, "text": " \\xi \\leq \\eta " }, { "math_id": 5, "text": " \\forall Q \\in \\eta, \\exists P \\in \\xi " }, { "math_id": 6, "text": " Q \\subset P " }, { "math_id": 7, "text": " \\xi \\vee \\eta = \\{ P \\cap Q \\mid P \\in \\xi \\ \\& \\ Q \\in \\eta \\} " }, { "math_id": 8, "text": " \\{ S \\} " }, { "math_id": 9, "text": "\\mathcal{S}" }, { "math_id": 10, "text": "(\\vee,0)" }, { "math_id": 11, "text": "\\mathcal{A}" }, { "math_id": 12, "text": "S" }, { "math_id": 13, "text": "\\operatorname{Id}\\ S" }, { "math_id": 14, "text": "f \\colon S \\to T" }, { "math_id": 15, "text": "\\operatorname{Id}\\ f \\colon \\operatorname{Id}\\ S \\to \\operatorname{Id}\\ T" }, { "math_id": 16, "text": "I" }, { "math_id": 17, "text": "T" }, { "math_id": 18, "text": "f(I)" }, { "math_id": 19, "text": "\\operatorname{Id} \\colon \\mathcal{S} \\to \\mathcal{A}" }, { "math_id": 20, "text": "A" }, { "math_id": 21, "text": "K(A)" }, { "math_id": 22, "text": "f \\colon A \\to B" }, { "math_id": 23, "text": "K(f) \\colon K(A) \\to K(B)" }, { "math_id": 24, "text": "K \\colon \\mathcal{A} \\to \\mathcal{S}" }, { "math_id": 25, "text": "(\\operatorname{Id},K)" }, { "math_id": 26, "text": "f'(A) = \\bigvee\\{f(s) | s \\in A\\}." } ]
https://en.wikipedia.org/wiki?curid=755647
75565127
Distributional data analysis
Branch of nonparametric statistics Distributional data analysis is a branch of nonparametric statistics that is related to functional data analysis. It is concerned with random objects that are probability distributions, i.e., the statistical analysis of samples of random distributions where each atom of a sample is a distribution. One of the main challenges in distributional data analysis is that although the space of probability distributions is a convex space, it is not a vector space. Notation. Let formula_0 be a probability measure on formula_1, where formula_2 with formula_3. The probability measure formula_0 can be equivalently characterized as cumulative distribution function formula_4 or probability density function formula_5 if it exists. For univariate distributions with formula_6, quantile function formula_7 can also be used. Let formula_8 be a space of distributions formula_0 and let formula_9 be a metric on formula_8 so that formula_10 forms a metric space. There are various metrics available for formula_9. For example, suppose formula_11, and let formula_12 and formula_13 be the density functions of formula_14 and formula_15, respectively. The Fisher-Rao metric is defined as formula_16 For univariate distributions, let formula_17 and formula_18 be the quantile functions of formula_14 and formula_15. Denote the formula_19-Wasserstein space as formula_20, which is the space of distributions with finite formula_21-th moments. Then, for formula_22, the formula_19-Wasserstein metric is defined as formula_23 Mean and variance. For a probability measure formula_24, consider a random process formula_25 such that formula_26. One way to define mean and variance of formula_0 is to introduce the Fréchet mean and the Fréchet variance. With respect to the metric formula_9 on formula_8, the "Fréchet mean" formula_27, also known as the barycenter, and the "Fréchet variance" formula_28 are defined as formula_29 A widely used example is the Wasserstein-Fréchet mean, or simply the "Wasserstein mean", which is the Fréchet mean with the formula_30-Wasserstein metric formula_31. For formula_32, let formula_33 be the quantile functions of formula_0 and formula_34, respectively. The Wasserstein mean and Wasserstein variance is defined as formula_35 Modes of variation. Modes of variation are useful concepts in depicting the variation of data around the mean function. Based on the Karhunen-Loève representation, modes of variation show the contribution of each eigenfunction to the mean. Functional principal component analysis. Functional principal component analysis(FPCA) can be directly applied to the probability density functions. Consider a distribution process formula_26 and let formula_5 be the density function of formula_0. Let the mean density function as formula_36 and the covariance function as formula_37 with orthonormal eigenfunctions formula_38 and eigenvalues formula_39. By the Karhunen-Loève theorem, formula_40, where principal components formula_41. The formula_42th mode of variation is defined as formula_43 with some constant formula_44, such as 2 or 3. Transformation FPCA. Assume the probability density functions formula_5 exist, and let formula_45 be the space of density functions. Transformation approaches introduce a continuous and invertible transformation formula_46, where formula_47 is a Hilbert space of functions. For instance, the log quantile density transformation or the centered log ratio transformation are popular choices. 
For formula_48, let formula_49, the transformed functional variable. The mean function formula_50 and the covariance function formula_51 are defined accordingly, and let formula_52 be the eigenpairs of formula_53. The Karhunen-Loève decomposition gives formula_54, where formula_55. Then, the formula_42th transformation mode of variation is defined as formula_56 Log FPCA and Wasserstein Geodesic PCA. Endowed with metrics such as the Wasserstein metric formula_31 or the Fisher-Rao metric formula_57, we can employ the (pseudo) Riemannian structure of formula_8. Denote the tangent space at the Fréchet mean formula_27 as formula_58, and define the logarithm and exponential maps formula_59 and formula_60. Let formula_61 be the projected density onto the tangent space, formula_62. In Log FPCA, FPCA is performed to formula_61 and then projected back to formula_8 using the exponential map. Therefore, with formula_54, the formula_42th Log FPCA mode of variation is defined as formula_63 As a special case, consider formula_30-Wasserstein space formula_64, a random distribution formula_65, and a subset formula_66. Let formula_67 and formula_68. Let formula_69 be the metric space of nonempty, closed subsets of formula_64, endowed with Hausdorff distance, and define formula_70 Let the reference measure formula_71 be the Wasserstein mean formula_27. Then, a "principal geodesic subspace (PGS)" of dimension formula_72 with respect to formula_27 is a set formula_73. Note that the tangent space formula_58 is a subspace of formula_74, the Hilbert space of formula_75-square-integrable functions. Obtaining the PGS is equivalent to performing PCA in formula_74 under constraints to lie in the convex and closed subset. Therefore, a simple approximation of the Wasserstein Geodesic PCA is the Log FPCA by relaxing the geodesicity constraint, while alternative techniques are suggested. Distributional regression. Fréchet regression. Fréchet regression is a generalization of regression with responses taking values in a metric space and Euclidean predictors. Using the Wasserstein metric formula_31, Fréchet regression models can be applied to distributional objects. The global Wasserstein-Fréchet regression model is defined as which generalizes the standard linear regression. For the local Wasserstein-Fréchet regression, consider a scalar predictor formula_76 and introduce a smoothing kernel formula_77. The local Fréchet regression model, which generalizes the local linear regression model, is defined as formula_78 where formula_79, formula_80 and formula_81. Transformation based approaches. Consider the response variable formula_0 to be probability distributions. With the space of density functions formula_45 and a Hilbert space of functions formula_47, consider continuous and invertible transformations formula_46. Examples of transformations include log hazard transformation, log quantile density transformation, or centered log-ratio transformation. Linear methods such as functional linear models are applied to the transformed variables. The fitted models are interpreted back in the original density space formula_8 using the inverse transformation. Random object approaches. In Wasserstein regression, both predictors formula_82 and responses formula_0 can be distributional objects. Let formula_83 and formula_84 be the Wasserstein mean of formula_82 and formula_0, respectively. 
The Wasserstein regression model is defined as formula_85 with a linear regression operator formula_86 Estimation of the regression operator is based on empirical estimators obtained from samples. Also, the Fisher-Rao metric formula_57 can be used in a similar fashion. Hypothesis testing. Wasserstein F-test. Wasserstein formula_4-test has been proposed to test for the effects of the predictors in the Fréchet regression framework with the Wasserstein metric. Consider Euclidean predictors formula_87 and distributional responses formula_65. Denote the Wasserstein mean of formula_0 as formula_88, and the sample Wasserstein mean as formula_89. Consider the global Wasserstein-Fréchet regression model formula_90 defined in (1), which is the conditional Wasserstein mean given formula_91. The estimator of formula_90, formula_92 is obtained by minimizing the empirical version of the criterion. Let formula_4, formula_93, formula_5, formula_94, formula_95, formula_96, formula_97, formula_98, and formula_99 denote the cumulative distribution, quantile, and density functions of formula_0, formula_88, and formula_100, respectively. For a pair formula_101, define formula_102 be the optimal transport map from formula_103 to formula_0. Also, define formula_104, the optimal transport map from formula_88 to formula_100. Finally, define the covariance kernel formula_105 and by the Mercer decomposition, formula_106. If there are no regression effects, the conditional Wasserstein mean would equal the Wasserstein mean. That is, hypotheses for the test of no effects are formula_107 To test for these hypotheses, the proposed global Wasserstein formula_4-statistic and its asymptotic distribution are formula_108 where formula_109. An extension to hypothesis testing for partial regression effects, and alternative testing approximations using the Satterthwaite's approximation or a bootstrap approach are proposed. Tests for the intrinsic mean. The Hilbert sphere formula_110 is defined as formula_111, where formula_47 is a separable infinite-dimensional Hilbert space with inner product formula_112 and norm formula_113. Consider the space of square root densities formula_114. Then with the Fisher-Rao metric formula_57 on formula_5, formula_115 is the positive orthant of the Hilbert sphere formula_110 with formula_116. Let a chart formula_117 as a smooth homeomorphism that maps formula_118 onto an open subset formula_119 of a separable Hilbert space formula_120 for coordinates. For example, formula_121 can be the logarithm map. Consider a random element formula_122 equipped with the Fisher-Rao metric, and write its Fréchet mean as formula_34. Let the empirical estimator of formula_34 using formula_123 samples as formula_124. Then central limit theorem for formula_125 and formula_126 holds: formula_127, where formula_128 is a Gaussian random element in formula_120 with mean 0 and covariance operator formula_129. Let the eigenvalue-eigenfunction pairs of formula_129 and the estimated covariance operator formula_130 as formula_131 and formula_132, respectively. Consider one-sample hypothesis testing formula_133 with formula_134. Denote formula_135 and formula_136 as the norm and inner product in formula_120. The test statistics and their limiting distributions are formula_137 where formula_138. The actual testing procedure can be done by employing the limiting distributions with Monte Carlo simulations, or bootstrap tests are possible. An extension to the two-sample test and paired test are also proposed. 
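The Fisher–Rao metric used in the intrinsic-mean tests above reduces, for explicit densities, to a single numerical integral. The sketch below (Python with SciPy; the two normal densities are arbitrary test inputs) evaluates the Bhattacharyya coefficient and takes its arccosine, which is the distance as defined earlier in this article.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Two arbitrary test densities.
f1 = norm(loc=0.0, scale=1.0).pdf
f2 = norm(loc=1.0, scale=2.0).pdf

# Bhattacharyya coefficient int sqrt(f1(x) f2(x)) dx, then the Fisher-Rao distance
# d_FR(f1, f2) = arccos of that integral.
bc, _ = quad(lambda x: np.sqrt(f1(x) * f2(x)), -np.inf, np.inf)
d_fr = np.arccos(np.clip(bc, -1.0, 1.0))
print(bc, d_fr)
```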
Distributional time series. Autoregressive (AR) models for distributional time series are constructed by defining stationarity and utilizing the notion of difference between distributions using formula_31 and formula_57. In Wasserstein autoregressive model (WAR), consider a stationary density time series formula_139 with Wasserstein mean formula_140. Denote the difference between formula_139 and formula_140 using the logarithm map, formula_141, where formula_142 is the optimal transport from formula_140 to formula_139 in which formula_143 and formula_144 are the cdf of formula_139 and formula_145. An formula_146 model on the tangent space formula_147 is defined as formula_148 for formula_149 with the autoregressive parameter formula_150 and mean zero random i.i.d. innovations formula_151. Under proper conditions, formula_152 with densities formula_139 and formula_153. Accordingly, formula_154, with a natural extension to order formula_21, is defined as formula_155 On the other hand, the spherical autoregressive model (SAR) considers the Fisher-Rao metric. Following the settings of ##Tests for the intrinsic mean, let formula_156 with Fréchet mean formula_157. Let formula_158, which is the geodesic distance between formula_159 and formula_157. Define a rotation operator formula_160 that rotates formula_159 to formula_157. The spherical difference between formula_159 and formula_157 is represented as formula_161. Assume that formula_162 is a stationary sequence with the Fréchet mean formula_163, then formula_164 is defined as formula_165 where formula_166 and mean zero random i.i.d innovations formula_151. An alternative model, the differenced based spherical autoregressive (DSAR) model is defined with formula_167, with natural extensions to order formula_21. A similar extension to the Wasserstein space was introduced. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
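For univariate distributions, the Wasserstein constructions used throughout this article reduce to computations with quantile functions, which makes them easy to sketch. The example below (Python with NumPy; simulated Gaussian samples stand in for observed distributions, and the quantile grid is an arbitrary discretization) approximates pairwise 2-Wasserstein distances, the Wasserstein mean as the distribution whose quantile function is the pointwise average of the sample quantile functions, and the corresponding Wasserstein variance.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three simulated "observed" distributions (stand-ins for a sample of random distributions).
samples = [rng.normal(mu, sig, size=5000)
           for mu, sig in [(0.0, 1.0), (1.0, 1.5), (-0.5, 0.7)]]

s = (np.arange(200) + 0.5) / 200                      # quantile levels on (0, 1)
Q = np.array([np.quantile(x, s) for x in samples])    # empirical quantile functions

def w2(q1, q2):
    # 2-Wasserstein distance: ( int_0^1 (Q_1(s) - Q_2(s))^2 ds )^(1/2), here a Riemann sum.
    return np.sqrt(np.mean((q1 - q2) ** 2))

# For N(0,1) and N(1,1.5) the exact value is sqrt((0-1)^2 + (1-1.5)^2), about 1.118.
print(w2(Q[0], Q[1]))

# Wasserstein (Frechet) mean: the pointwise average of the quantile functions,
# which minimizes the empirical Frechet criterion for equal weights.
Q_bar = Q.mean(axis=0)
V_oplus = np.mean([w2(q, Q_bar) ** 2 for q in Q])     # Wasserstein (Frechet) variance
print(V_oplus)
```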
[ { "math_id": 0, "text": "\\nu" }, { "math_id": 1, "text": "D" }, { "math_id": 2, "text": "D \\subset \\R^p" }, { "math_id": 3, "text": "p \\ge 1" }, { "math_id": 4, "text": "F" }, { "math_id": 5, "text": "f" }, { "math_id": 6, "text": "p = 1" }, { "math_id": 7, "text": "Q=F^{-1}" }, { "math_id": 8, "text": "\\mathcal{F}" }, { "math_id": 9, "text": "d" }, { "math_id": 10, "text": "(\\mathcal{F}, d)" }, { "math_id": 11, "text": "\\nu_1, \\; \\nu_2 \\in \\mathcal{F}" }, { "math_id": 12, "text": "f_1" }, { "math_id": 13, "text": "f_2" }, { "math_id": 14, "text": "\\nu_1" }, { "math_id": 15, "text": "\\nu_2" }, { "math_id": 16, "text": " d_{FR}(f_1, f_2) = \\arccos \\left( \\int_D \\sqrt{f_1(x) f_2(x)} dx \\right). " }, { "math_id": 17, "text": "Q_1" }, { "math_id": 18, "text": "Q_2" }, { "math_id": 19, "text": "L^p" }, { "math_id": 20, "text": "\\mathcal{W}_p" }, { "math_id": 21, "text": "p" }, { "math_id": 22, "text": "\\nu_1, \\; \\nu_2 \\in \\mathcal{W}_p" }, { "math_id": 23, "text": " d_{W_p}(\\nu_1, \\nu_2) = \\left( \\int_0^1 [Q_1(s) - Q_2(s)]^p ds \\right)^{1/p}. " }, { "math_id": 24, "text": "\\nu \\in \\mathcal{F}" }, { "math_id": 25, "text": "\\mathfrak{F}" }, { "math_id": 26, "text": "\\nu \\sim \\mathfrak{F}" }, { "math_id": 27, "text": "\\mu_\\oplus" }, { "math_id": 28, "text": "V_\\oplus" }, { "math_id": 29, "text": "\\begin{align}\n\\mu_\\oplus &= \\operatorname{argmin}_{\\mu \\in \\mathcal{F}} \\mathbb{E}[d^2(\\nu, \\mu)], \\\\\nV_\\oplus &= \\mathbb{E}[d^2(\\nu, \\mu_\\oplus)].\n\\end{align}" }, { "math_id": 30, "text": "L^2" }, { "math_id": 31, "text": "d_{W_2}" }, { "math_id": 32, "text": "\\nu, \\; \\mu \\in \\mathcal{W}_2" }, { "math_id": 33, "text": "Q_\\nu, \\; Q_\\mu" }, { "math_id": 34, "text": "\\mu" }, { "math_id": 35, "text": "\\begin{align}\n\\mu_\\oplus^* &= \\operatorname{argmin}_{\\mu \\in \\mathcal{W}_2} \\mathbb{E} \\left[ \\int_0^1 (Q_\\nu (s) - Q_\\mu (s))^2 ds \\right], \\\\\nV_\\oplus^* &= \\mathbb{E} \\left[ \\int_0^1 (Q_\\nu (s) - Q_{\\mu_\\oplus^*} (s))^2 ds \\right].\n\\end{align}" }, { "math_id": 36, "text": "\\mu(t) = \\mathbb{E}\\left[f(t)\\right]" }, { "math_id": 37, "text": "G(s,t) = \\operatorname{Cov}(f(s), f(t))" }, { "math_id": 38, "text": "\\{\\phi_j\\}_{j=1}^\\infty" }, { "math_id": 39, "text": "\\{\\lambda_j\\}_{j=1}^\\infty" }, { "math_id": 40, "text": "\nf(t) = \\mu(t) + \\sum_{j=1}^\\infty \\xi_j \\phi_j(t)" }, { "math_id": 41, "text": "\\xi_j = \\int_D [f(t) - \\mu(t)] \\phi_j(t) dt" }, { "math_id": 42, "text": "j" }, { "math_id": 43, "text": "\ng_{j}(t, \\alpha) = \\mu(t) + \\alpha \\sqrt{\\lambda_j} \\phi_j(t), \\quad t \\in D, \\; \\alpha \\in [-A, A]\n" }, { "math_id": 44, "text": "A" }, { "math_id": 45, "text": "\\mathcal{F}_f" }, { "math_id": 46, "text": "\\Psi: \\mathcal{F}_f \\to \\mathbb{H}" }, { "math_id": 47, "text": "\\mathbb{H}" }, { "math_id": 48, "text": "f \\in \\mathcal{F}_f" }, { "math_id": 49, "text": "Y = \\Psi(f)" }, { "math_id": 50, "text": "\\mu_Y(t) = \\mathbb{E}\\left[Y(t)\\right]" }, { "math_id": 51, "text": "G_Y(s,t) = \\operatorname{Cov}(Y(s), Y(t))" }, { "math_id": 52, "text": "\\{\\lambda_j, \\phi_j\\}_{j=1}^\\infty" }, { "math_id": 53, "text": "G_Y(s,t)" }, { "math_id": 54, "text": "Y(t) = \\mu_Y(t) + \\sum_{j=1}^\\infty \\xi_j \\phi_j(t)" }, { "math_id": 55, "text": "\\xi_j = \\int_D [Y(t) - \\mu_Y(t)] \\phi_j(t) dt" }, { "math_id": 56, "text": "\ng_{j}^{TF}(t, \\alpha) = \\Psi^{-1} \\left( \\mu_Y + \\alpha \\sqrt{\\lambda_j}\\phi_j \\right)(t), \\quad t \\in D, \\; \\alpha \\in [-A, A].\n" }, { 
"math_id": 57, "text": "d_{FR}" }, { "math_id": 58, "text": "T_{\\mu_\\oplus}" }, { "math_id": 59, "text": "\\log_{\\mu_\\oplus}:\\mathcal{F} \\to T_{\\mu_\\oplus}" }, { "math_id": 60, "text": "\\exp_{\\mu_\\oplus}: T_{\\mu_\\oplus} \\to \\mathcal{F}" }, { "math_id": 61, "text": "Y" }, { "math_id": 62, "text": "Y = \\log_{\\mu_\\oplus}(f)" }, { "math_id": 63, "text": "g_j^{Log}(t, \\alpha) = \\exp_{f_\\oplus} \\left( \\mu_{f_\\oplus} + \\alpha \\sqrt{\\lambda_j} \\phi_j \\right)(t), \\quad t \\in D, \\; \\alpha \\in [-A, A]." }, { "math_id": 64, "text": "\\mathcal{W}_2" }, { "math_id": 65, "text": "\\nu \\in \\mathcal{W}_2" }, { "math_id": 66, "text": "G \\subset \\mathcal{W}_2" }, { "math_id": 67, "text": "d_{W_2}(\\nu, G) = \\inf_{\\mu \\in G} d_{W_2}(\\nu, \\mu)" }, { "math_id": 68, "text": "K_{W_2}(G) = \\mathbb{E}\\left[d_{W_2}^2(\\nu, G) \\right]" }, { "math_id": 69, "text": "\\text{CL}(\\mathcal{W}_2)" }, { "math_id": 70, "text": "\n\\operatorname{CG}_{\\nu_0, k}(\\mathcal{W}_2) = \\{G \\in \\operatorname{CL}(\\mathcal{W}_2) : \\nu_0 \\in G, G \\text{ is a geodesic set s.t. }\\operatorname{dim}(G) \\le k \\}, \\; k \\ge 1.\n" }, { "math_id": 71, "text": "\\nu_0" }, { "math_id": 72, "text": "k" }, { "math_id": 73, "text": "G_k = \\operatorname{argmin}_{G \\in \\text{CG}_{\\nu_\\oplus, k}(\\mathcal{W}_2)} K_{W_2}(G)" }, { "math_id": 74, "text": "L^2_{\\mu_\\oplus}" }, { "math_id": 75, "text": "{\\mu_\\oplus}" }, { "math_id": 76, "text": "X\\in \\mathbb{R}" }, { "math_id": 77, "text": "K_h(\\cdot) = h^{-1}K(\\cdot/h)" }, { "math_id": 78, "text": "\\begin{align}\nl_\\oplus (x) &= \\operatorname{argmin}_{\\omega \\in \\mathcal{F}} \\mathbb{E}\\left[ s_L(X,x,h) d_{W_2}^2(\\nu,\\omega) \\right],\\\\\ns_L(X,x,h) &= \\sigma_0^{-2} \\{ K_h(X-x)[\\mu_2 - \\mu_1 (X-x)]\\},\n\\end{align}" }, { "math_id": 79, "text": "\\mu_j = \\mathbb{E} \\left[K_h(X-x)(X-x)^j \\right]" }, { "math_id": 80, "text": "j = 0,1,2," }, { "math_id": 81, "text": "\\sigma_0^2 = \\mu_0 \\mu_2 - \\mu_1^2" }, { "math_id": 82, "text": "\\omega" }, { "math_id": 83, "text": "\\omega{\\oplus}" }, { "math_id": 84, "text": "\\nu_{\\oplus}" }, { "math_id": 85, "text": "\\mathbb{E}(\\log_{\\nu_{\\oplus}} \\nu | \\log_{\\omega{\\oplus}} \\omega) = \\Gamma(\\log_{\\omega{\\oplus}} \\omega)," }, { "math_id": 86, "text": "\n\\Gamma g(t) = \\langle \\beta(\\cdot, t),g \\rangle_{\\omega{\\oplus}}, \\; t \\in D, \\; g \\in T_{\\omega{\\oplus}}, \\; \\beta:D^2 \\to \\R." 
}, { "math_id": 87, "text": "X \\in \\R^p" }, { "math_id": 88, "text": "\\mu_\\oplus^*" }, { "math_id": 89, "text": "\\hat{\\mu}_\\oplus^*" }, { "math_id": 90, "text": "m_\\oplus (x)" }, { "math_id": 91, "text": "X=x" }, { "math_id": 92, "text": "\\hat{m}_\\oplus (x)" }, { "math_id": 93, "text": "Q" }, { "math_id": 94, "text": "F_\\oplus^*" }, { "math_id": 95, "text": "Q_\\oplus^*" }, { "math_id": 96, "text": "f_\\oplus^*" }, { "math_id": 97, "text": "F_\\oplus(x)" }, { "math_id": 98, "text": "Q_\\oplus(x)" }, { "math_id": 99, "text": "f_\\oplus(x)" }, { "math_id": 100, "text": "m_\\oplus(x)" }, { "math_id": 101, "text": "(X, \\nu)" }, { "math_id": 102, "text": "T = Q \\circ F_\\oplus (X)" }, { "math_id": 103, "text": "m_\\oplus(X)" }, { "math_id": 104, "text": "S = Q_\\oplus (X) \\circ F_\\oplus^*" }, { "math_id": 105, "text": "K(u, v) = \\mathbb{E}[\\text{Cov}((T\\circ S)(u), (T\\circ S)(v) )]" }, { "math_id": 106, "text": "K(u, v) = \\sum_{j=1}^\\infty \\lambda_j \\phi_j(u) \\phi_j(v)" }, { "math_id": 107, "text": "\nH_0: m_\\oplus (x) \\equiv \\mu_\\oplus^* \\quad \\text{v.s.} \\quad H_1: \\text{Not }H_0.\n" }, { "math_id": 108, "text": "\nF_G = \\sum_{i=1}^n d_{W_2}^2( \\hat{m}_\\oplus (x), \\hat{\\mu}_\\oplus^*), \\quad F_G|X_1, \\cdots, X_n \\overset{d}{\\longrightarrow} \\sum_{j=1}^\\infty \\lambda_j V_j \\; a.s.,\n" }, { "math_id": 109, "text": "V_j \\overset{iid}{\\sim} \\chi_p^2" }, { "math_id": 110, "text": "\\mathcal{S}^\\infty" }, { "math_id": 111, "text": "\\mathcal{S}^\\infty = \\left\\{f \\in \\mathbb{H} : \\| f \\|_{\\mathbb{H}}=1 \\right\\}" }, { "math_id": 112, "text": "\\langle \\cdot, \\cdot \\rangle_{\\mathbb{H}}" }, { "math_id": 113, "text": "\\| \\cdot \\|_{\\mathbb{H}}" }, { "math_id": 114, "text": "\\mathcal{X} = \\left\\{ x:D \\to \\mathbb{R}: x = \\sqrt{f}, \\int_D f(t)dt = 1 \\right\\}" }, { "math_id": 115, "text": "\\mathcal{X}" }, { "math_id": 116, "text": "\\mathbb{H} = L^2(D)" }, { "math_id": 117, "text": "\\tau: U \\subset \\mathcal{S}^\\infty \\to \\mathbb{G}" }, { "math_id": 118, "text": "U" }, { "math_id": 119, "text": "\\tau(U)" }, { "math_id": 120, "text": "\\mathbb{G}" }, { "math_id": 121, "text": "\\tau" }, { "math_id": 122, "text": "x = \\sqrt{f} \\in \\mathcal{X}" }, { "math_id": 123, "text": "n" }, { "math_id": 124, "text": "\\hat{\\mu}" }, { "math_id": 125, "text": "\\hat{\\mu}_\\tau = \\tau(\\hat{\\mu})" }, { "math_id": 126, "text": "\\mu_\\tau = \\tau(\\mu)" }, { "math_id": 127, "text": "\\sqrt{n}(\\hat{\\mu}_\\tau - \\mu_\\tau ) \\overset{L}{\\longrightarrow} Z, \\; n \\to \\infty" }, { "math_id": 128, "text": "Z" }, { "math_id": 129, "text": "\\mathcal{T}" }, { "math_id": 130, "text": "\\hat{\\mathcal{T}}" }, { "math_id": 131, "text": "(\\lambda_k, \\phi_k)_{k=1}^\\infty" }, { "math_id": 132, "text": "(\\hat\\lambda_k, \\hat\\phi_k)_{k=1}^\\infty" }, { "math_id": 133, "text": "\nH_0: \\mu = \\mu_0 \\quad \\text{v.s.} \\quad H_1: \\mu \\neq \\mu_0,\n" }, { "math_id": 134, "text": "\\mu_0 \\in \\mathcal{S}^\\infty" }, { "math_id": 135, "text": "\\| \\cdot \\|_{\\mathbb{G}}" }, { "math_id": 136, "text": "\\langle \\cdot, \\cdot \\rangle_{\\mathbb{G}}" }, { "math_id": 137, "text": "\\begin{align}\nT_1 &= n \\| \\tau(\\hat{\\mu}) - \\tau(\\mu_0)\\|_\\mathbb{G}^2 \\overset{L}{\\longrightarrow} \\lambda_k W_k, \\\\\nS_1 &= n \\sum_{k=1}^K \\frac{\\langle \\tau(\\hat{\\mu}) - \\tau(\\mu_0), \\hat{\\phi}_k \\rangle_\\mathbb{G}^2}{\\hat{\\lambda}_k} \\overset{L}{\\longrightarrow} \\chi_K^2,\n\\end{align}" }, { "math_id": 138, "text": "W_k 
\\overset{iid}{\\sim} \\chi_1^2" }, { "math_id": 139, "text": "f_t" }, { "math_id": 140, "text": "f_\\oplus" }, { "math_id": 141, "text": "f_t \\ominus f_{\\oplus} = \\log_{f_\\oplus} f_t = T_t - \\text{id}" }, { "math_id": 142, "text": "T_t = Q_t \\circ F_\\oplus" }, { "math_id": 143, "text": "F_t" }, { "math_id": 144, "text": "F_{\\oplus}" }, { "math_id": 145, "text": "f_{\\oplus}" }, { "math_id": 146, "text": "AR(1)" }, { "math_id": 147, "text": "T_{f_\\oplus}" }, { "math_id": 148, "text": "V_t = \\beta V_{t-1} + \\epsilon_t, \\; t \\in \\mathbb{Z}," }, { "math_id": 149, "text": "V_t \\in T_{f_\\oplus}" }, { "math_id": 150, "text": "\\beta \\in \\mathbb{R}" }, { "math_id": 151, "text": "\\epsilon_t" }, { "math_id": 152, "text": "\\mu_t = \\exp_{f_\\oplus}(V_t)" }, { "math_id": 153, "text": "V_t = \\log_{f_\\oplus}(\\mu_t)" }, { "math_id": 154, "text": "WAR(1)" }, { "math_id": 155, "text": "\nT_t - \\text{id} = \\beta (T_{t-1} - \\text{id} ) + \\epsilon_t.\n" }, { "math_id": 156, "text": "x_t \\in \\mathcal{X}" }, { "math_id": 157, "text": "\\mu_x" }, { "math_id": 158, "text": "\\theta = \\arccos(\\langle x_t, \\mu_x \\rangle )" }, { "math_id": 159, "text": "x_t" }, { "math_id": 160, "text": "Q_{x_t, \\mu_x}" }, { "math_id": 161, "text": "R_t = x_t \\ominus \\mu_x = \\theta Q_{x_t, \\mu_x}" }, { "math_id": 162, "text": "R_t" }, { "math_id": 163, "text": "\\mu_R" }, { "math_id": 164, "text": "SAR(1)" }, { "math_id": 165, "text": "\nR_t - \\mu_R = \\beta (R_{t-1} - \\mu_R) + \\epsilon_t,\n" }, { "math_id": 166, "text": "\\mu_R = \\mathbb{E}R_t" }, { "math_id": 167, "text": "R_t = x_{t+1} \\ominus x_t" } ]
https://en.wikipedia.org/wiki?curid=75565127
7556651
Hyperconnected space
In the mathematical field of topology, a hyperconnected space or irreducible space is a topological space "X" that cannot be written as the union of two proper closed subsets (whether disjoint or non-disjoint). The name "irreducible space" is preferred in algebraic geometry. For a topological space "X" the following conditions are equivalent: "X" cannot be written as the union of two proper closed subsets; no two nonempty open subsets of "X" are disjoint; every nonempty open subset is dense in "X"; the interior of every proper closed subset is empty. A space which satisfies any one of these conditions is called "hyperconnected" or "irreducible". Due to the condition about neighborhoods of distinct points being in a sense the opposite of the Hausdorff property, some authors call such spaces anti-Hausdorff. The empty set is vacuously a hyperconnected or irreducible space under the definition above (because it contains no nonempty open sets). However some authors, especially those interested in applications to algebraic geometry, add an explicit condition that an irreducible space must be nonempty. An irreducible set is a subset of a topological space for which the subspace topology is irreducible. Examples. Two examples of hyperconnected spaces from point set topology are the cofinite topology on any infinite set and the right order topology on formula_0. In algebraic geometry, the spectrum of a ring whose reduced ring is an integral domain is an irreducible topological space: applying the lattice theorem to the nilradical, which is contained in every prime, the quotient map induces a homeomorphism of spectra, so this reduces to the irreducibility of the spectrum of an integral domain. For example, the schemes formula_1, formula_2 are irreducible since in both cases the polynomials defining the ideal are irreducible polynomials (meaning they have no non-trivial factorization). A non-example is given by the normal crossing divisor formula_3 since the underlying space is the union of the affine planes formula_4, formula_5, and formula_6. Another non-example is given by the scheme formula_7 where formula_8 is an irreducible degree 4 homogeneous polynomial. This is the union of the two genus 3 curves (by the genus–degree formula) formula_9. Hyperconnectedness vs. connectedness. Every hyperconnected space is both connected and locally connected (though not necessarily path-connected or locally path-connected). Note that in the definition of hyper-connectedness, the closed sets don't have to be disjoint. This is in contrast to the definition of connectedness, in which the open sets are disjoint. For example, the space of real numbers with the standard topology is connected but "not" hyperconnected. This is because it cannot be written as a union of two disjoint open sets, but it "can" be written as a union of two (non-disjoint) closed sets. Proof: "Let formula_10 be an open subset. Any two disjoint open subsets of formula_11 would themselves be disjoint open subsets of formula_12. So at least one of them must be empty." Proof: "Suppose formula_13 is a dense subset of formula_12 and formula_14 with formula_15, formula_16 closed in formula_13. Then formula_17. Since formula_12 is hyperconnected, one of the two closures is the whole space formula_12, say formula_18. This implies that formula_15 is dense in formula_13, and since it is closed in formula_13, it must be equal to formula_13." Counterexample: "formula_19 with formula_20 an algebraically closed field (thus infinite) is hyperconnected in the Zariski topology, while formula_21 is closed and not hyperconnected." Proof: "Suppose formula_22 where formula_13 is irreducible and write formula_23 for two closed subsets formula_24 (and thus in formula_12).
formula_25 are closed in formula_13 and formula_26 which implies formula_27 or formula_28, but then formula_29 or formula_30 by definition of closure." Proof: "Firstly, we notice that if formula_34 is a non-empty open set in formula_12 then it intersects both formula_35 and formula_36; indeed, suppose formula_37, then formula_38 is dense in formula_35, thus formula_39 and formula_40 is a point of closure of formula_38 which implies formula_41 and a fortiori formula_42. Now formula_43 and taking the closure formula_44 therefore formula_34 is a non-empty open and dense subset of formula_12. Since this is true for every non-empty open subset, formula_12 is irreducible." Irreducible components. An irreducible component in a topological space is a maximal irreducible subset (i.e. an irreducible set that is not contained in any larger irreducible set). The irreducible components are always closed. Every irreducible subset of a space "X" is contained in a (not necessarily unique) irreducible component of "X". In particular, every point of "X" is contained in some irreducible component of "X". Unlike the connected components of a space, the irreducible components need not be disjoint (i.e. they need not form a partition). In general, the irreducible components will overlap. The irreducible components of a Hausdorff space are just the singleton sets. Since every irreducible space is connected, the irreducible components will always lie in the connected components. Every Noetherian topological space has finitely many irreducible components. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
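To make the definition concrete on a finite example, the following short Python sketch is an editorial illustration (the particular point topology on a three-element set is chosen here for convenience and is not taken from the article or its sources). It checks both that the space is not the union of two proper closed subsets and the equivalent statement that any two nonempty open sets intersect.

from itertools import combinations

# Particular point topology on X = {0, 1, 2} with particular point 0:
# the open sets are the empty set and every subset containing 0.
X = {0, 1, 2}
opens = [frozenset(s) for r in range(len(X) + 1)
         for s in combinations(sorted(X), r) if not s or 0 in s]
closeds = [frozenset(X) - U for U in opens]  # closed sets are complements of open sets

# Hyperconnected / irreducible: X is not the union of two proper closed subsets.
no_closed_cover = all(A | B != frozenset(X)
                      for A in closeds for B in closeds
                      if A != frozenset(X) and B != frozenset(X))

# Equivalent formulation used in the proofs above: any two nonempty open sets intersect.
opens_meet = all(U & V for U in opens for V in opens if U and V)

print(no_closed_cover, opens_meet)  # True True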
[ { "math_id": 0, "text": "\\mathbb{R}" }, { "math_id": 1, "text": "\\text{Spec}\\left( \\frac{\\mathbb{Z}[x,y,z]}{x^4 + y^3 + z^2} \\right)" }, { "math_id": 2, "text": "\\text{Proj}\\left( \\frac{\\mathbb{C}[x,y,z]}{(y^2z - x(x-z)(x-2z))} \\right)" }, { "math_id": 3, "text": "\\text{Spec}\\left( \\frac{\\mathbb{C}[x,y,z]}{(xyz)} \\right)" }, { "math_id": 4, "text": "\\mathbb{A}^2_{x,y}" }, { "math_id": 5, "text": "\\mathbb{A}^2_{x,z}" }, { "math_id": 6, "text": "\\mathbb{A}^2_{y,z}" }, { "math_id": 7, "text": "\\text{Proj}\\left( \\frac{\\mathbb{C}[x,y,z,w]}{(xy, f_4)} \\right)" }, { "math_id": 8, "text": "f_4" }, { "math_id": 9, "text": "\\text{Proj}\\left( \\frac{\\mathbb{C}[y,z,w]}{(f_4(0,y,z,w))} \\right), \\text{ } \\text{Proj}\\left( \\frac{\\mathbb{C}[x,z,w]}{(f_4(x,0,z,w))} \\right)" }, { "math_id": 10, "text": "U\\subset X" }, { "math_id": 11, "text": "U" }, { "math_id": 12, "text": "X" }, { "math_id": 13, "text": "S" }, { "math_id": 14, "text": "S=S_1\\cup S_2" }, { "math_id": 15, "text": "S_1" }, { "math_id": 16, "text": "S_2" }, { "math_id": 17, "text": "X=\\overline S=\\overline{S_1}\\cup\\overline{S_2}" }, { "math_id": 18, "text": "\\overline{S_1}=X" }, { "math_id": 19, "text": "\\Bbbk^2" }, { "math_id": 20, "text": "\\Bbbk" }, { "math_id": 21, "text": "V=Z(XY)=Z(X)\\cup Z(Y)\\subset\\Bbbk^2" }, { "math_id": 22, "text": "S\\subseteq X" }, { "math_id": 23, "text": "\\operatorname{Cl}_X(S)=F\\cup G" }, { "math_id": 24, "text": "F,G\\subseteq \\operatorname{Cl}_X(S)" }, { "math_id": 25, "text": "F':=F\\cap S,\\,G':=G\\cap S" }, { "math_id": 26, "text": "S=F'\\cup G'" }, { "math_id": 27, "text": "S\\subseteq F" }, { "math_id": 28, "text": "S\\subseteq G" }, { "math_id": 29, "text": "\\operatorname{Cl}_X(S)=F" }, { "math_id": 30, "text": "\\operatorname{Cl}_X(S)=G" }, { "math_id": 31, "text": "X=U_1\\cup U_2" }, { "math_id": 32, "text": "U_1,U_2\\subset X" }, { "math_id": 33, "text": "U_1\\cap U_2\\ne\\emptyset" }, { "math_id": 34, "text": "V" }, { "math_id": 35, "text": "U_1" }, { "math_id": 36, "text": "U_2" }, { "math_id": 37, "text": "V_1:=U_1\\cap V\\ne\\emptyset" }, { "math_id": 38, "text": "V_1" }, { "math_id": 39, "text": "\\exists x\\in\\operatorname{Cl}_{U_1}(V_1)\\cap U_2=U_1\\cap U_2\\ne\\emptyset" }, { "math_id": 40, "text": "x\\in U_2" }, { "math_id": 41, "text": "V_1\\cap U_2\\ne\\emptyset" }, { "math_id": 42, "text": "V_2:=V\\cap U_2\\ne\\emptyset" }, { "math_id": 43, "text": "V=V\\cap(U_1\\cup U_2)=V_1\\cup V_2" }, { "math_id": 44, "text": "\\operatorname{Cl}_{X}(V)\\supseteq{\\operatorname{Cl}}_{U_1}(V_1)\\cup{\\operatorname{Cl}}_{U_2}(V_2)=U_1\\cup U_2=X," } ]
https://en.wikipedia.org/wiki?curid=7556651
75579307
Transition metal complexes of thiocyanate
Transition metal complexes of thiocyanate describes coordination complexes containing one or more thiocyanate (SCN-) ligands. The topic also includes transition metal complexes of isothiocyanate. These complexes have few applications but played a significant role in the development of coordination chemistry. Structure and bonding. Hard metal cations, as classified by HSAB theory, tend to form "N"-bonded complexes (isothiocyanates), whereas class B or soft metal cations tend to form "S"-bonded thiocyanate complexes. For the isothiocyanates, the M-N-C angle is usually close to 180°. For the thiocyanates, the M-S-C angle is usually close to 100°. Homoleptic complexes. Most homoleptic complexes of NCS- feature isothiocyanate ligands (N-bonded). All first-row metals bind thiocyanate in this way. Octahedral complexes [M(NCS)6]z- include M = Ti(III), Cr(III), Mn(II), Fe(III), Ni(II), Mo(III), Tc(IV), and Ru(III). Four-coordinated tetrakis(isothiocyanate) complexes would be tetrahedral since isothiocyanate is a weak-field ligand. Two examples are the deep blue [Co(NCS)4]2- and the green [Ni(NCS)4]2-. Few homoleptic complexes of NCS- feature thiocyanate ligands (S-bonded). Octahedral complexes include [M(SCN)6]3- (M = Rh and Ir) and [Pt(SCN)6]2-. Square planar complexes include [M(SCN)4]z- (M = Pd(II), Pt(II), and Au(III)). Colorless [Hg(SCN)4]2- is tetrahedral. Some octahedral isothiocyanate complexes undergo redox reactions reversibly. Orange [Os(NCS)6]3- can be oxidized to violet [Os(NCS)6]2-. The Os-N distances in both derivatives are almost identical at 200 picometers. Linkage isomerism. formula_0 (resonance structures of the thiocyanate ion). Thiocyanate shares its negative charge approximately equally between sulfur and nitrogen. Thiocyanate can bind metals at either sulfur or nitrogen — it is an ambidentate ligand. Other factors, e.g. kinetics and solubility, sometimes influence the observed isomer. For example, [Co(NH3)5(NCS)]2+ is the thermodynamic isomer, but [Co(NH3)5(SCN)]2+ forms as the kinetic product of the reaction of thiocyanate salts with [Co(NH3)5(H2O)]3+. Some complexes of SCN- feature both thiocyanate and isothiocyanate ligands. Examples are found for heavy metals in the middle of the d-period: Ir(III) and Re(IV). SCN-bridged complexes. As a ligand, [SCN]− can also bridge two (M−SCN−M) or even three metals (&gt;SCN− or −SCN&lt;). One example of an SCN-bridged complex is [Ni2(SCN)8]4-. Mixed ligand complexes. This article focuses on homoleptic complexes, which are simpler to describe and analyze. Most complexes of SCN-, however, are mixed ligand species. Mentioned above is one example, [Co(NH3)5(NCS)]2+. Another example is [OsCl2(SCN)2(NCS)2]2-. Reinecke's salt, a precipitating agent, is a derivative of [Cr(NCS)4(NH3)2]-. Applications and occurrence. Thiocyanate complexes are not widely used commercially. Possibly the oldest application of thiocyanate complexes was the use of thiocyanate as a test for ferric ions in aqueous solution. The reverse was also used: testing for the presence of thiocyanate by the addition of ferric salts. The 1:1 complex of thiocyanate and iron is deeply red. The effect was first reported in 1826. The structure of this species has never been confirmed by X-ray crystallography. The test is largely archaic. Copper(I) thiocyanate is a reagent for the conversion of aryl diazonium salts to arylthiocyanates, a version of the Sandmeyer reaction. Since thiocyanate occurs naturally, it is to be expected that it serves as a substrate for enzymes.
Two metalloenzymes, thiocyanate hydrolases, catalyze the hydrolysis of thiocyanate. A cobalt-containing hydrolase catalyzes its conversion to carbonyl sulfide: A copper-containing thiocyanate hydrolase catalyzes its conversion to cyanate: In both cases, metal-SCN complexes are invoked as intermediates. Synthesis. Almost all thiocyanate complexes are prepared from thiocyanate salts using ligand substitution reactions. Typical thiocyanate sources include ammonium thiocyanate and potassium thiocyanate. An unusual route to thiocyanate complexes involves oxidative addition of thiocyanogen to low valent metal complexes: , where Ph = C6H5 Even though the reaction involves cleavage of the S-S bond in thiocyanogen, the product is the Ru-NCS linkage isomer. In another unusual method, thiocyanate functions as both a ligand and as a reductant in its reaction with dichromate to give [Cr(NCS)4(NH3)2]-. In this conversion, Cr(VI) converts to Cr(III).
[ { "math_id": 0, "text": "\\ce{S=C=N^\\ominus <-> {^{\\ominus}S}-C}\\ce{#N}" } ]
https://en.wikipedia.org/wiki?curid=75579307
75581959
Phase kickback
In quantum computing, phase kickback refers to the fact that controlled operations have effects on their controls, in addition to their targets, and that these effects correspond to phasing operations. The phase of one qubit is effectively transferred to another qubit during a controlled operation, creating entanglement and computational advantages that enable various popular quantum algorithms and protocols. In classical computing, operations are deterministic and reversible. However, in quantum computing, operations have the ability to introduce phase changes to quantum states. This is the basis for complex interference patterns and quantum entanglement. When a controlled operation, such as a Controlled NOT (CNOT) gate, is applied to two qubits, the phase acquired by the second (target) qubit is conditioned on the state of the first (control) qubit. Because the phase of the second qubit is “kicked back” to the first qubit, the term “phase kickback” was coined in 1997 by Richard Cleve, Artur Ekert, Chiara Macchiavello, and Michele Mosca in a paper that solved the Deutsch-Jozsa problem. As an example, when a controlled NOT gate's target qubit is in the state formula_0, the effect of the controlled NOT gate is equivalent to the effect of applying a Pauli Z gate to the controlled NOT's control qubit. Phase kickback is one of the key effects that distinguish quantum computation from classical computation. Phase kickback also provides a justification for why qubits are disrupted by measurements: a measurement is an operation that flips a classical bit (the result) with the flip being controlled by a quantum bit (the qubit being measured). This creates kickback from the bit to the qubit, randomizing the qubit's phase. Phase kickback occurs because the basis transformations that distinguish targets from controls are available as operations. For example, surrounding a controlled NOT gate with four Hadamard gates produces a compound operation whose effect is equivalent to a controlled NOT gate, but with the roles of its control qubit and target qubit exchanged. More abstractly, phase kickback occurs because the eigendecomposition of controlled operations makes no significant distinction between controls and targets. For example, the controlled Z gate is a symmetric operation that has the same effect if its target and control are switched, and a controlled NOT gate can be decomposed into a Hadamard gate on its target, then a controlled Z gate, then a second Hadamard gate on its target. This decomposition reveals that, at the core of the apparently asymmetric controlled-NOT gate, there is a symmetric effect that does not distinguish between control and target. Phase kickback can be used to measure an operator formula_1 whose eigenvalues are +1 and -1. This is a common technique for measuring operators in quantum error-correcting codes, such as the surface code. The procedure is as follows. Initialize a control qubit formula_2 in the formula_3 state, then apply a Hadamard gate formula_4 to formula_2, then apply formula_1 controlled by formula_2, then apply another Hadamard gate formula_4 to formula_2, then measure formula_2 in the computational basis. Phase kickback results in the +1 eigenstates of formula_1 having no effect on formula_2, while -1 eigenstates apply a Pauli formula_5 to formula_2. The surrounding Hadamard gates turn the Pauli formula_5 (a phase flip) into a Pauli formula_6 (a bit flip). 
So formula_2 gets flipped from formula_3 to formula_7 when the state is in the -1 eigenstate of formula_1. The measurement operation reveals whether formula_2 is formula_3 or formula_7, which reveals whether the state was in the +1 or -1 eigenspace of formula_1. Requirements. Phase kickback requires that the control qubit be in a superposition of its basis states and that the target qubit be in an eigenstate of the applied unitary. If the control qubit is instead in a basis state, the controlled unitary only produces an unobservable global phase: formula_8 This shows that if the control qubit is not in a superposition, phase kickback will not occur and the output of the controlled operation will be equal to the input. Applications. Quantum Fourier Transform. The quantum Fourier transform is the quantum analogue of the classical discrete Fourier transform (DFT): it takes quantum states represented as superpositions of basis states and uses phase kickback to transform them into a frequency-domain representation. Phase kickback occurs in the QFT circuit when a controlled phase rotation gate is applied to a qubit in superposition: the rotation's phase is kicked back onto the control qubit. Quantum Phase Estimation. Quantum phase estimation (QPE) is a quantum algorithm that exploits phase kickback to efficiently estimate the eigenvalues of unitary operators. It is a crucial part of many quantum algorithms, including Shor’s algorithm for integer factorization. To estimate the phase angle associated with an eigenstate formula_9 of a unitary operator formula_10, the algorithm must: prepare the target register in the eigenstate formula_9, which satisfies formula_11; prepare a control qubit in the superposition formula_14; and apply formula_10 controlled on that qubit, which by phase kickback produces the state formula_12, so that the eigenvalue formula_13 appears in the relative phase of the control qubit, where it can be read out by interference (in practice, by an inverse quantum Fourier transform over a register of control qubits). Phase kickback allows a quantum setup to estimate eigenvalues exponentially quicker than classical algorithms. This is essential for quantum algorithms such as Shor’s algorithm, where quantum phase estimation is used to factor large integers efficiently. Deutsch-Jozsa Algorithm. The Deutsch-Jozsa algorithm, and by association the Bernstein-Vazirani algorithm, determines whether a given function is constant (same value for all inputs) or balanced (half 0s and half 1s) using as few queries to the black box function as possible. Phase kickback is critical; when the oracle is applied to the superposition state, it introduces phase kickback depending on whether the function is constant or balanced. If the function is constant, the oracle applies the same sign to the amplitude of every input state, leading to constructive interference in the all-zero state, which is then measured with high probability. If the function is balanced, the kicked-back signs differ across the superposition and the contributions to the all-zero state cancel, so the all-zero state is never measured. Grover's Algorithm. Grover’s algorithm is a quantum algorithm for unstructured search that finds the unique input to a black box function given its output. Phase kickback occurs in Grover's algorithm during the application of the oracle, which is typically a controlled operator that flips the sign of the target qubit's state. When this controlled operation is applied to the target qubit, the sign is flipped, and the phase of the target qubit is transferred backwards to the control qubit. In other words, the oracle can highlight certain target states by modifying the phase of the corresponding control qubit. This has applications as a problem-solving tool, in demonstrations of performance advantages in quantum computing, and in quantum cryptography. As seen, phase kickback is a crucial step in many well-known quantum algorithms and applications. 
Its ability to transfer states backwards also enables other concepts such as quantum error correction and quantum teleportation.   References. &lt;templatestyles src="Reflist/styles.css" /&gt;
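The CNOT example from the introduction can be reproduced numerically. The following sketch is an editorial illustration in plain NumPy (no quantum SDK is assumed): with the control qubit prepared in |+⟩ and the target in the state formula_0, applying a CNOT leaves the target unchanged and flips the control from |+⟩ to |−⟩, which is exactly the action of a Pauli Z on the control.

import numpy as np

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)        # the state (|0> - |1>)/sqrt(2)

# CNOT on two qubits, basis ordered |control target> = |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state_in = np.kron(plus, minus)          # control |+>, target (|0> - |1>)/sqrt(2)
state_out = CNOT @ state_in

# The phase is kicked back: the control ends in |->, the target is unchanged,
# i.e. the CNOT acted like a Z gate on the control qubit.
print(np.allclose(state_out, np.kron(minus, minus)))  # True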
[ { "math_id": 0, "text": "1/\\sqrt{2} (|0\\rangle - |1\\rangle)" }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "c" }, { "math_id": 3, "text": "|0\\rangle" }, { "math_id": 4, "text": "H" }, { "math_id": 5, "text": "Z" }, { "math_id": 6, "text": "X" }, { "math_id": 7, "text": "|1\\rangle" }, { "math_id": 8, "text": "\n|1\\rangle |\\psi\\rangle \\xrightarrow{Controlled-U}|1 \\rangle U | \\psi \\rangle = |1 \\rangle e^{i \\phi} | \\psi \\rangle \\cong |1 \\rangle | \\psi \\rangle" }, { "math_id": 9, "text": "|\\psi\\rangle" }, { "math_id": 10, "text": "U" }, { "math_id": 11, "text": "U| \\psi \\rangle = e^{i \\phi} | \\psi \\rangle" }, { "math_id": 12, "text": "\\frac{|0 \\rangle | \\psi \\rangle + | 1 \\rangle e^{i \\phi} | \\psi \\rangle} {\\sqrt{2}} = \\frac{|0 \\rangle + e^{i \\phi} | 1 \\rangle} {\\sqrt{2}}|\\psi \\rangle" }, { "math_id": 13, "text": "e^{i \\phi}" }, { "math_id": 14, "text": "|+\\rangle" } ]
https://en.wikipedia.org/wiki?curid=75581959
75584466
Bunkbed conjecture
Conjecture in probabilistic combinatorics The Bunkbed Conjecture (also spelled Bunk Bed Conjecture) is a statement in percolation theory, a branch of mathematics that studies the behavior of connected clusters in a random graph. The conjecture is named after its analogy to a bunk bed structure. It was first posited by Kasteleyn. Description. The conjecture has many equivalent formulations. In the most general formulation it involves two identical graphs, referred to as the 'upper bunk' and the 'lower bunk'. These graphs are isomorphic, meaning they share the same structure. Additional edges, termed 'posts', are added to connect each vertex in the upper bunk with the corresponding vertex in the lower bunk. Each edge in the graph is assigned a probability. The edges in the upper bunk and their corresponding edges in the lower bunk share the same probability. The probabilities assigned to the posts can be arbitrary. A random subgraph of the bunkbed graph is then formed by independently deleting each edge based on the assigned probability. Statement of the conjecture. The Bunkbed Conjecture states that in the resulting random subgraph, the probability that a vertex formula_0 in the upper bunk is connected to another vertex formula_1 in the upper bunk is greater than or equal to the probability that formula_0 is connected to formula_2, the isomorphic copy of formula_1 in the lower bunk. Interpretation and significance. The conjecture suggests that two vertices of a graph are more likely to remain connected after randomly removing some edges if the graph distance between the vertices is smaller. This is intuitive, but proving this conjecture is not straightforward and is an active area of research in percolation theory. Recently, it was resolved for particular types of graphs, such as wheels, complete graphs, complete bipartite graphs, and graphs with a local symmetry. It was also proven in the limit formula_3 for any graph. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
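The inequality can be checked numerically on small graphs. The sketch below is an editorial illustration: the choice of a triangle as the underlying graph, a common retention probability of 0.5 for every edge, and the Monte Carlo estimator are all assumptions made here rather than material from the sources.

import random
from collections import defaultdict

def connected(edges, a, b):
    """Is b reachable from a in the graph given by the kept edges?"""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {a}, [a]
    while stack:
        u = stack.pop()
        if u == b:
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

# Bunkbed over the triangle: upper bunk vertices 0,1,2; lower bunk copies 3,4,5.
upper_edges = [(0, 1), (1, 2), (0, 2)]
posts = [(0, 3), (1, 4), (2, 5)]
p = 0.5                      # every edge is kept independently with probability p
rng = random.Random(0)
trials = 100_000
hits_upper = hits_lower = 0

for _ in range(trials):
    kept = []
    for (u, v) in upper_edges:
        if rng.random() < p:              # an upper edge and its lower copy share
            kept.append((u, v))           # the same retention probability but are
        if rng.random() < p:              # kept or deleted independently
            kept.append((u + 3, v + 3))
    kept += [e for e in posts if rng.random() < p]
    hits_upper += connected(kept, 0, 1)   # x = 0 to y = 1, both in the upper bunk
    hits_lower += connected(kept, 0, 4)   # x = 0 to the lower copy of y

# The conjecture asserts the first estimate is at least the second.
print(hits_upper / trials, hits_lower / trials)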
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "y" }, { "math_id": 2, "text": "z" }, { "math_id": 3, "text": "p \\to 1" } ]
https://en.wikipedia.org/wiki?curid=75584466
75586835
Subexponential distribution (light-tailed)
Type of light-tailed probability distribution In probability theory, one definition of a subexponential distribution is as a probability distribution whose tails decay at an exponential rate, or faster: a real-valued distribution formula_0 is called subexponential if, for a random variable formula_1, formula_2, for large formula_3 and some constant formula_4. The subexponential norm, formula_5, of a random variable is defined by formula_6 where the infimum is taken to be formula_7 if no such formula_8 exists. This is an example of an Orlicz norm. An equivalent condition for a distribution formula_9 to be subexponential is then that formula_10§2.7 Subexponentiality can also be expressed in the following equivalent ways, each involving some constant formula_8 (which may differ from statement to statement):§2.7 the tails satisfy formula_11 for all formula_12; the moments satisfy formula_13 for all formula_14; the moment generating function of the absolute value satisfies formula_15 for all formula_16; if formula_17 is finite, the centered variable satisfies formula_18 for all formula_19; and formula_20 is sub-Gaussian. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
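As a worked example (an editorial addition, not drawn from the cited source): if a random variable X is exponentially distributed with rate 1, then E exp(X/K) equals K/(K − 1) for K greater than 1, so the defining bound of 2 is first reached at K = 2 and the subexponential norm of X equals 2. The short numerical check below compares a quadrature estimate with this closed form.

import numpy as np

# X ~ Exp(1) has density exp(-x) on [0, inf); here |X| = X.
xs = np.linspace(0.0, 80.0, 400_001)
density = np.exp(-xs)

def scaled_exp_moment(K):
    """Quadrature estimate of E exp(|X| / K)."""
    return np.trapz(np.exp(xs / K) * density, xs)

# Closed form for the exponential distribution: E exp(X/K) = K / (K - 1) for K greater than 1.
for K in (1.5, 2.0, 3.0):
    print(K, scaled_exp_moment(K), K / (K - 1))

# E exp(|X|/K) is at most 2 exactly when K is at least 2, so the
# subexponential norm of an Exp(1) random variable equals 2.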
[ { "math_id": 0, "text": "\\cal D " }, { "math_id": 1, "text": "X\\sim {\\cal D} " }, { "math_id": 2, "text": "{\\Bbb P}(|X|\\ge x)=O(e^{-K x}) " }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "K>0" }, { "math_id": 5, "text": "\\|\\cdot\\|_{\\psi_1}" }, { "math_id": 6, "text": "\\|X\\|_{\\psi_1}:=\\inf\\ \\{ K>0\\mid {\\Bbb E}(e^{|X|/K})\\le 2\\}," }, { "math_id": 7, "text": "+\\infty" }, { "math_id": 8, "text": "K" }, { "math_id": 9, "text": "\\cal D" }, { "math_id": 10, "text": "\\|X\\|_{\\psi_1}<\\infty." }, { "math_id": 11, "text": "{\\Bbb P}(|X|\\ge x)\\le 2 e^{-K x}," }, { "math_id": 12, "text": "x\\ge 0" }, { "math_id": 13, "text": "{\\Bbb E}(|X|^p)^{1/p}\\le K p," }, { "math_id": 14, "text": "p\\ge 1" }, { "math_id": 15, "text": "{\\Bbb E}(e^{\\lambda |X|}) \\le e^{K\\lambda}" }, { "math_id": 16, "text": "0\\le \\lambda \\le 1/K" }, { "math_id": 17, "text": "{\\Bbb E}(X)" }, { "math_id": 18, "text": "{\\Bbb E}(e^{\\lambda (X-{\\Bbb E}(X))})\\le e^{K^2 \\lambda^2}" }, { "math_id": 19, "text": "-1/K\\le \\lambda\\le 1/K" }, { "math_id": 20, "text": "\\sqrt{|X|}" } ]
https://en.wikipedia.org/wiki?curid=75586835
75599532
Quantum computational chemistry
Quantum computational chemistry is an emerging field that exploits quantum computing to simulate chemical systems. Despite quantum mechanics' foundational role in understanding chemical behaviors, traditional computational approaches face significant challenges, largely due to the complexity and computational intensity of quantum mechanical equations. This complexity arises from the exponential growth of a quantum system's wave function with each added particle, making exact simulations on classical computers inefficient. Efficient quantum algorithms for chemistry problems are expected to have run-times and resource requirements that scale polynomially with system size and desired accuracy. Experimental efforts have validated proof-of-principle chemistry calculations, though currently limited to small systems. Common Methods in Quantum Computational Chemistry. While there are several common methods in quantum chemistry, the section below lists only a few examples. Qubitization. Qubitization is a mathematical and algorithmic concept in quantum computing for the simulation of quantum systems via Hamiltonian dynamics. The core idea of qubitization is to encode the problem of Hamiltonian simulation in a way that is more efficiently processable by quantum algorithms. Qubitization involves a transformation of the Hamiltonian operator, a central object in quantum mechanics representing the total energy of a system. In classical computational terms, a Hamiltonian can be thought of as a matrix describing the energy interactions within a quantum system. The goal of qubitization is to embed this Hamiltonian into a larger unitary operator, which is a type of operator in quantum mechanics that preserves the norm of vectors upon which it acts. Mathematically, the process of qubitization constructs a unitary operator formula_0 such that a specific projection of formula_1 is proportional to the Hamiltonian formula_2 of interest. This relationship can often be represented as formula_3, where formula_4 is a specific quantum state and formula_5 is its conjugate transpose. The efficiency of this method comes from the fact that the unitary operator formula_6 can be implemented on a quantum computer with fewer resources (like qubits and quantum gates) than would be required for directly simulating formula_7 A key feature of qubitization is that it simulates Hamiltonian dynamics with high precision while reducing the quantum resource overhead. This efficiency is especially beneficial in quantum algorithms where the simulation of complex quantum systems is necessary, such as in quantum chemistry and materials science simulations. Qubitization has also been used to develop quantum algorithms that solve certain types of problems more efficiently than classical algorithms. For instance, it has implications for the Quantum Phase Estimation algorithm, which is fundamental in various quantum computing applications, including factoring and solving linear systems of equations. Applications of qubitization in chemistry. Gaussian orbital basis sets. In Gaussian orbital basis sets, phase estimation algorithms have been optimized empirically from formula_8 to formula_9, where formula_10 is the number of basis functions. Advanced Hamiltonian simulation algorithms have further reduced the scaling, with the introduction of techniques like Taylor series methods and qubitization, providing more efficient algorithms with reduced computational requirements. Plane wave basis sets. 
Plane wave basis sets, suitable for periodic systems, have also seen advancements in algorithm efficiency, with improvements in product formula-based approaches and Taylor series methods. Quantum phase estimation in chemistry. Overview. Phase estimation, as proposed by Kitaev in 1996, identifies the lowest energy eigenstate ( formula_11 ) and excited states ( formula_12 ) of a physical Hamiltonian, as detailed by Abrams and Lloyd in 1999. In quantum computational chemistry, this technique is employed to encode fermionic Hamiltonians into a qubit framework. Brief methodology. Initialization. The qubit register is initialized in a state, which has a nonzero overlap with the Full Configuration Interaction (FCI) target eigenstate of the system. This state formula_13 is expressed as a sum over the energy eigenstates of the Hamiltonian, formula_14 , where formula_15represents complex coefficients. Application of Hadamard gates. Each ancilla qubit undergoes a Hadamard gate application, placing the ancilla register in a superposed state. Subsequently, controlled gates, as shown above, modify this state. Inverse quantum fourier transform. This transform is applied to the ancilla qubits, revealing the phase information that encodes the energy eigenvalues. Measurement. The ancilla qubits are measured in the Z basis, collapsing the main register into the corresponding energy eigenstate formula_12 based on the probability formula_16. Requirements. The algorithm requires formula_17 ancilla qubits, with their number determined by the desired precision and success probability of the energy estimate. Obtaining a binary energy estimate precise to n bits with a success probability formula_18 necessitates.formula_19 ancilla qubits. This phase estimation has been validated experimentally across various quantum architectures. Applications of QPEs in chemistry. Time evolution and error analysis. The total coherent time evolution formula_20 required for the algorithm is approximately formula_21. The total evolution time is related to the binary precision formula_22, with an expected repeat of the procedure for accurate ground state estimation. Errors in the algorithm include errors in energy eigenvalue estimation (formula_23), unitary evolutions (formula_24), and circuit synthesis errors (formula_25), which can be quantified using techniques like the Solovay-Kitaev theorem. The phase estimation algorithm can be enhanced or altered in several ways, such as using a single ancilla qubit  for sequential measurements, increasing efficiency, parallelization, or enhancing noise resilience in analytical chemistry. The algorithm can also be scaled using classically obtained knowledge about energy gaps between states. Limitations. Effective state preparation is needed, as a randomly chosen state would exponentially decrease the probability of collapsing to the desired ground state. Various methods for state preparation have been proposed, including classical approaches and quantum techniques like adiabatic state preparation. Variational quantum eigensolver (VQE). Overview. The variational quantum eigensolver is an algorithm in quantum computing, crucial for near-term quantum hardware. Initially proposed by Peruzzo et al. in 2014 and further developed by McClean et al. in 2016, VQE finds the lowest eigenvalue of Hamiltonians, particularly those in chemical systems. 
It employs the variational method (quantum mechanics), which guarantees that the expectation value of the Hamiltonian for any parameterized trial wave function is at least the lowest energy eigenvalue of that Hamiltonian. VQE is a hybrid algorithm that utilizes both quantum and classical computers. The quantum computer prepares and measures the quantum state, while the classical computer processes these measurements and updates the system. This synergy allows VQE to overcome some limitations of purely quantum methods. Applications of VQEs in chemistry. 1-RDM and 2-RDM calculations. The reduced density matrices (1-RDM and 2-RDM) can be used to extrapolate the electronic structure of a system. Ground state energy extrapolation. In the Hamiltonian variational ansatz, the initial state formula_26  is prepared to represent the ground state of the molecular Hamiltonian without electron correlations. The evolution of this state under the Hamiltonian, split into commuting segments formula_27 , is given by the equation below. formula_28 where formula_29  are variational parameters optimized to minimize the energy, providing insights into the electronic structure of the molecule. Measurement scaling. McClean et al. (2016) and Romero et al. (2019) proposed a formula to estimate the number of measurements ( formula_30 ) required for energy precision. The formula is given by formula_31 , where formula_32 are coefficients of each Pauli string in the Hamiltonian. This leads to a scaling of formula_33 in a Gaussian orbital basis and formula_34 in a plane wave dual basis. Note that formula_35 is the number of basis functions in the chosen basis set. Fermionic level grouping. A method by Bonet-Monroig, Babbush, and O'Brien (2019) focuses on grouping terms at a fermionic level rather than a qubit level, leading to a measurement requirement of only formula_36 circuits with an additional gate depth of formula_37. Limitations of VQE. While VQE's application in solving the electronic Schrödinger equation for small molecules has shown success, its scalability is hindered by two main challenges: the complexity of the quantum circuits required and the intricacies involved in the classical optimization process. These challenges are significantly influenced by the choice of the variational ansatz, which is used to construct the trial wave function. Modern quantum computers face limitations in running deep quantum circuits, especially when using the existing ansatzes for problems that exceed several qubits. Jordan-Wigner encoding. Jordan-Wigner encoding is a method in quantum computing used for simulating fermionic systems like molecular orbitals and electron interactions in quantum chemistry. Overview. In quantum chemistry, electrons are modeled as fermions with antisymmetric wave functions. The Jordan-Wigner encoding maps these fermionic orbitals to qubits, preserving their antisymmetric nature. Mathematically, this is achieved by associating each fermionic creation formula_38 and annihilation formula_39 operator with corresponding qubit operators through the Jordan-Wigner transformation: formula_40 Where formula_41 , formula_42 , and formula_43 are Pauli matrices acting on the formula_44 qubit. Applications of Jordan-Wigner encoding in chemistry. Electron hopping. Electron hopping between orbitals, central to chemical bonding and reactions, is represented by terms like formula_45. 
Under Jordan-Wigner encoding, these transform as follows:formula_46This transformation captures the quantum mechanical behavior of electron movement and interaction within molecules. Computational complexity in molecular systems. The complexity of simulating a molecular system using Jordan-Wigner encoding is influenced by the structure of the molecule and the nature of electron interactions. For a molecular system with formula_47 orbitals, the number of required qubits scales linearly with formula_47 , but the complexity of gate operations depends on the specific interactions being modeled. Limitations of Jordan–Wigner encoding. The Jordan-Wigner transformation encodes fermionic operators into qubit operators, but it introduces non-local string operators that can make simulations inefficient. The FSWAP gate is used to mitigate this inefficiency by rearranging the ordering of fermions (or their qubit representations), thus simplifying the implementation of fermionic operations. Fermionic SWAP (FSWAP) network. FSWAP networks rearrange qubits to efficiently simulate electron dynamics in molecules. These networks are essential for reducing the gate complexity in simulations, especially for non-neighboring electron interactions. When two fermionic modes (represented as qubits after the Jordan-Wigner transformation) are swapped, the FSWAP gate not only exchanges their states but also correctly updates the phase of the wavefunction to maintain fermionic antisymmetry. This is in contrast to the standard SWAP gate, which does not account for the phase change required in the antisymmetric wavefunctions of fermions. The use of FSWAP gates can significantly reduce the complexity of quantum circuits for simulating fermionic systems. By intelligently rearranging the fermions, the number of gates required to simulate certain fermionic operations can be reduced, leading to more efficient simulations. This is particularly useful in simulations where fermions need to be moved across large distances within the system, as it can avoid the need for long chains of operations that would otherwise be required. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
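A compact numerical check of the encoding described above, written as an editorial sketch in plain NumPy (the three-mode example and the particular checks are choices made here, not a prescription from the sources): the Jordan-Wigner qubit operators are built explicitly, the canonical fermionic anticommutation relations are verified, and a hopping term between non-adjacent modes, which picks up the intervening Z string, is confirmed to be Hermitian.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(i, n):
    """Jordan-Wigner image of the fermionic annihilation operator a_i on n modes:
    a string of Z on qubits 0..i-1 followed by (X + iY)/2 on qubit i."""
    return kron_all([Z] * i + [(X + 1j * Y) / 2] + [I2] * (n - i - 1))

n = 3
a = [annihilation(i, n) for i in range(n)]
adag = [op.conj().T for op in a]
dim = 2 ** n

def anticomm(A, B):
    return A @ B + B @ A

# Canonical anticommutation relations: {a_i, a_j^dagger} = delta_ij, {a_i, a_j} = 0.
car_ok = all(np.allclose(anticomm(a[i], adag[j]), (i == j) * np.eye(dim)) and
             np.allclose(anticomm(a[i], a[j]), np.zeros((dim, dim)))
             for i in range(n) for j in range(n))
print(car_ok)  # True

# Hopping between the non-adjacent modes 0 and 2 carries the Z string on mode 1
# and is Hermitian, as a physical Hamiltonian term must be.
hop = adag[0] @ a[2] + adag[2] @ a[0]
print(np.allclose(hop, hop.conj().T))  # True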
[ { "math_id": 0, "text": " U " }, { "math_id": 1, "text": " U " }, { "math_id": 2, "text": "H" }, { "math_id": 3, "text": " H = \\langle G | U | G \\rangle " }, { "math_id": 4, "text": " | G \\rangle " }, { "math_id": 5, "text": " \\langle G | " }, { "math_id": 6, "text": " U " }, { "math_id": 7, "text": " H.\n\n " }, { "math_id": 8, "text": " \\mathcal{O}(M^{11}) " }, { "math_id": 9, "text": " \\mathcal{O}(M^{5}) " }, { "math_id": 10, "text": " M " }, { "math_id": 11, "text": "| E_0 \\rangle" }, { "math_id": 12, "text": "| E_i \\rangle" }, { "math_id": 13, "text": "| \\psi \\rangle " }, { "math_id": 14, "text": "|\\psi \\rangle = \\sum_{i = 1} c_i |E_i \\rangle" }, { "math_id": 15, "text": "c_i" }, { "math_id": 16, "text": " |c_i|^2" }, { "math_id": 17, "text": "\\omega" }, { "math_id": 18, "text": " p " }, { "math_id": 19, "text": "\\omega = n + \\lceil \\log_2 \\left(2 + \\frac{1}{2p}\\right) \\rceil" }, { "math_id": 20, "text": " T " }, { "math_id": 21, "text": "T = 2^{(\\omega + 1)}\\pi" }, { "math_id": 22, "text": "\\varepsilon_{\\text{PE}} = \\frac{1}{2^n}" }, { "math_id": 23, "text": "\\varepsilon_{PE}" }, { "math_id": 24, "text": "\\varepsilon_{U}" }, { "math_id": 25, "text": "\\varepsilon_{CS}" }, { "math_id": 26, "text": "|\\psi_0\\rangle" }, { "math_id": 27, "text": "H_j" }, { "math_id": 28, "text": "|\\psi(\\theta)\\rangle = \\prod_d \\prod_j e^{i\\theta_{d,j} H_j} |\\psi_0\\rangle" }, { "math_id": 29, "text": "\\theta_{d,j}" }, { "math_id": 30, "text": "N_m" }, { "math_id": 31, "text": "N_m = \\left(\\sum_i |h_i|\\right)^2/\\epsilon^2" }, { "math_id": 32, "text": "h_i" }, { "math_id": 33, "text": "\\mathcal{O}(M^6/\\epsilon^2)" }, { "math_id": 34, "text": "\\mathcal{O}(M^4/\\epsilon^2)" }, { "math_id": 35, "text": "M" }, { "math_id": 36, "text": "\\mathcal{O}(M^2)" }, { "math_id": 37, "text": "\\mathcal{O}(M)" }, { "math_id": 38, "text": " ( a^\\dagger_i ) " }, { "math_id": 39, "text": " ( a_i ) " }, { "math_id": 40, "text": "a^\\dagger_i \\rightarrow \\frac{1}{2} \\left( \\prod_{k=1}^{i-1} Z_k \\right) (X_i - iY_i)" }, { "math_id": 41, "text": " X_i " }, { "math_id": 42, "text": " Y_i " }, { "math_id": 43, "text": " Z_i " }, { "math_id": 44, "text": " i^{\\text{th}} " }, { "math_id": 45, "text": "a^{\\dagger_i} a_j + a^{\\dagger_j} a_i\n" }, { "math_id": 46, "text": "a^\\dagger_i a_j + a^\\dagger_j a_i \\rightarrow \\frac{1}{2} (X_i X_j + Y_i Y_j) Z_{i+1} \\cdots Z_{j-1} " }, { "math_id": 47, "text": " K " } ]
https://en.wikipedia.org/wiki?curid=75599532
75602409
Second continuum hypothesis
The second continuum hypothesis, also called Luzin's hypothesis or Luzin's second continuum hypothesis, is the hypothesis that formula_0. It is the negation of a weakened form, formula_1, of the Continuum Hypothesis (CH). It was discussed by Nikolai Luzin in 1935, although he did not claim to be the first to postulate it.§3 The statement formula_1 may also be called Luzin's hypothesis. The second continuum hypothesis is independent of Zermelo–Fraenkel set theory with the Axiom of Choice (ZFC): its truth is consistent with ZFC since it is true in Cohen's model of ZFC with the negation of the Continuum Hypothesis; its falsity is also consistent since it's contradicted by the Continuum Hypothesis, which follows from V=L. It is implied by Martin's Axiom together with the negation of the CH. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "2^{\\aleph_0}=2^{\\aleph_1}" }, { "math_id": 1, "text": "2^{\\aleph_0}<2^{\\aleph_1}" } ]
https://en.wikipedia.org/wiki?curid=75602409
75603109
Weak continuum hypothesis
The term weak continuum hypothesis can be used to refer to the hypothesis that formula_0, which is the negation of the second continuum hypothesis. It is equivalent to a weak form of ◊ on formula_1. F. Burton Jones proved that if it is true, then every separable normal Moore space is metrizable. "Weak continuum hypothesis" may also refer to the assertion that every uncountable set of real numbers can be placed in bijective correspondence with the set of all reals. This second assertion was Cantor's original form of the Continuum Hypothesis (CH). Given the Axiom of Choice, it is equivalent to the usual form of CH, that formula_2. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "2^{\\aleph_0}<2^{\\aleph_1}" }, { "math_id": 1, "text": "\\aleph_1" }, { "math_id": 2, "text": "2^{\\aleph_0}=\\aleph_1" } ]
https://en.wikipedia.org/wiki?curid=75603109
75604189
Zhegalkin algebra
Boolean algebra concept In mathematics, Zhegalkin algebra is a set of Boolean functions defined by the nullary operation taking the value formula_0, the binary operation of conjunction formula_1, and the binary operation of addition modulo 2 (exclusive or) formula_2. The constant formula_3 is introduced as formula_4. The negation operation is introduced by the relation formula_5. The disjunction operation follows from the identity formula_6. Using Zhegalkin algebra, any perfect disjunctive normal form can be uniquely converted into a Zhegalkin polynomial (by Zhegalkin's theorem). Basic identities. Conjunction is associative and commutative, formula_7 and formula_8, as is addition modulo 2, formula_9 and formula_10. Addition modulo 2 also satisfies formula_11 and formula_12, and conjunction distributes over it: formula_13. Thus, the basis of Boolean functions formula_14 is functionally complete. Its inverse logical basis formula_15 is also functionally complete, where formula_16 is the negation of the XOR operation (logical equivalence). For the inverse basis, the identities are dual: the constant is obtained via formula_17, negation via formula_18, and conjunction via formula_19. The functional completeness of these two bases follows from the completeness of the basis formula_20.
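The identities above can be verified exhaustively over the two truth values. The following short Python check is an editorial illustration of the definitions and of the distributive law.

from itertools import product

AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b
NOT = lambda a: a ^ 1                    # negation as x XOR 1
OR = lambda a, b: AND(a, b) ^ a ^ b      # disjunction as (x AND y) XOR x XOR y

for x, y, z in product((0, 1), repeat=3):
    assert NOT(x) == 1 - x
    assert OR(x, y) == (x | y)
    assert XOR(x, x) == 0 and XOR(x, 0) == x
    # distributivity of conjunction over addition modulo 2
    assert AND(x, XOR(y, z)) == XOR(AND(x, y), AND(x, z))

print("all identities hold on {0, 1}")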
[ { "math_id": 0, "text": "1" }, { "math_id": 1, "text": "\\land" }, { "math_id": 2, "text": "\\oplus" }, { "math_id": 3, "text": "0" }, { "math_id": 4, "text": "1 \\oplus 1 = 0" }, { "math_id": 5, "text": "\\neg x = x \\oplus 1" }, { "math_id": 6, "text": "x \\lor y = x \\land y \\oplus x \\oplus y" }, { "math_id": 7, "text": "x \\land ( y \\land z) = (x \\land y) \\land z" }, { "math_id": 8, "text": "x \\land y = y \\land x" }, { "math_id": 9, "text": "x \\oplus ( y \\oplus z) = (x \\oplus y) \\oplus z" }, { "math_id": 10, "text": "x \\oplus y = y \\oplus x" }, { "math_id": 11, "text": "x \\oplus x = 0" }, { "math_id": 12, "text": "x \\oplus 0 = x" }, { "math_id": 13, "text": "x \\land ( y \\oplus z) = x \\land y \\oplus x \\land z" }, { "math_id": 14, "text": "\\bigl\\langle \\wedge, \\oplus, 1 \\bigr\\rangle" }, { "math_id": 15, "text": "\\bigl\\langle \\lor, \\odot, 0 \\bigr\\rangle" }, { "math_id": 16, "text": "\\odot" }, { "math_id": 17, "text": "0 \\odot 0 = 1" }, { "math_id": 18, "text": "\\neg x = x \\odot 0" }, { "math_id": 19, "text": "x \\land y = x \\lor y \\odot x \\odot y" }, { "math_id": 20, "text": "\\{\\neg, \\land, \\lor\\}" } ]
https://en.wikipedia.org/wiki?curid=75604189
75605938
Trioker
Le Trioker is an corner-matching puzzle game played using 25 equilateral triangle-shaped tiles. Each corner is marked with zero, one, two, or three dots and newly placed pieces must match the values on pieces already placed on the game board, similar to the gameplay of the earlier Triominoes. History. In the 1921 book "New Mathematical Pastimes", Percy Alexander MacMahon showed there were 24 possible combinations when each of the three edges of an equilateral triangle are assigned one of four colors. In general, the number of unique pieces that can be made in this way is formula_0 and so for formula_1 there are 24 unique combinations possible. MacMahon suggested an edge-matching puzzle game could be played with these pieces on a regular hexagonal board, constraining colors to match on adjacent edges and on the borders of the board itself. The similar square tiles proposed by MacMahon in the same book have since been adopted into several commercial games. The notation used here for the MacMahon equilateral triangular tiles is a simple enumeration of each edge, in anti-clockwise order, starting from the bottom left edge. Rotating each tile by 120° may change this notation (e.g., after rotating the 314 tile, the revised notation would become 431 and 143 in turn) but the actual tile itself remains physically unchanged regardless of its orientation; in other words, these tiles are identical up to rotation despite the change in notation. For consistency, a notation may be adopted in which the count starts from the lowest-numbered edge, proceeding anti-clockwise; in that case, this tile would always be denoted 143. A separate signifier could be used to indicate orientation, for example, the clock hour position of the ones digit, so 143.08 = [314] and 143.04 = [431]. Marc Odier developed the Trioker tiles by shifting the markings from the edges to the corners, as patented and published by Robert Laffont Games in 1969. In addition to the 24 combinations, Odier introduced a tile with a wild card value in one corner, marked by a solid square. "Spirou" published a supplement in 1970 (with issue 1661), providing the game pieces and a description of how to play it, followed by a regular column through the end of the year. Like Triominoes, which also use equilateral triangle tiles with values marked in each corner, Trioker requires that adjacent tiles must have matching corners. However, Triominoes are marked from zero to five (or more) and have an additional marking restriction that values may not decrease when counted in a clockwise direction from the lowest value(s), so there are pieces in common between the two games, but neither game is a subset of the other. Gameplay. "Le Trioker" can be played either as a competitive dominoes-like game against one or more opponents, or as a puzzle game to fill a shape. General. All variants of the game require the corners of any newly-placed tile to match the corners of adjacent tile(s) that are already on the board, consistent with the placement rule of triominoes. For the competitive variants, the winner is the first to place all their tiles on the board. Original. The original competitive version of the game published in 1970 used an irregular seven-sided board shaped like a truncated triangle with several marked spaces that require an additional action when a tile is played in that space: Sid Sackson's holographic notes omit the F5 space. 
This version uses the full 25-tile set along with red and black "coin" pieces, with the red coins worth 5 and the black coins worth 2. Rapid. The rapid version of the game is intended for two, three, or four players, with each player receiving the following equipment prior to starting: The players draw from the full 25-tile set; red coins are worth 5 points and the black coins are worth 2. Each player hides their triangular tiles. The remaining tiles and coins are held for common use in the bank. The rapid variant board is an irregular ten-sided shape, taking the form of an irregular hexagon with two indented corners; similar to the original (1970) version, certain spaces are marked with additional actions that are taken when a tile is played in them: Play starts with the player holding the triple-three ('333') tile; that player places that tile in the "GO" spot, and the next turn proceeds to the player on their left (clockwise). If the '333' tile is not held, the player holding the tile with the largest sum of pips (e.g., '332' = 8; '331' or '322' = 7; etc.) uses that tile to start instead. The next player must play a tile from their hand that can be placed legally adjacent to the starting tile. If a player does not have a tile that can be placed legally, they must draw one of the remaining tiles from the bank, then either play the drawn tile or pass their turn. When there are no free tiles left in the bank, players must pass instead. Scoring. After one player empties their hand, that player receives ten points from the bank, while the other players must pay the bank two points plus the sum of the pips on each tile. For example, a player with the '000' tile will pay a total of two points to the bank for that tile, while a player with the '230' tile will pay a total of seven points to the bank (2+[2+3+0]). Because of this, it is advantageous to use higher-value tiles early in the game. The overall winner is the player with the most points. The game also may end in a draw if no player is able to empty their hand. In that case, the game should be replayed with a fresh draw. Even-Odd. The even-odd version is an advanced variant intended for two players that uses a 24-tile subset, omitting the "joker" piece. One player takes the 12 "even" pieces, while the other player takes the 12 "odd" pieces. Throughout the game, each player's pieces are kept face-up so the opposing player knows which pieces are remaining. Comic characters from "Spirou" were assigned to each piece in the version of the game reprinted in that magazine. The board has 73 triangular cells arranged in an irregular hexagon. Advanced players may choose to reduce this to a 52-cell board by mutual agreement to avoid the shaded cells, which increases the difficulty. One of the two players flips a coin, which determines whether they will play the "even" (by flipping tails) or "odd" (heads) side. The even player places the first piece in the center of the board; they may choose any piece they hold, aside from the two triples ('000' or '222'), as that would unfairly limit the opening move for the odd player. The players take turns placing tiles adjacent to placed tiles, observing the corner-matching rules. When a player completes a hexagon by adding a sixth tile using a legal placement, that player takes another turn immediately. This also applies if two hexagons are completed by a single tile, in which case that player is awarded two more turns. 
A player may choose to block their opponent by making it impossible to place one of their remaining pieces legally, which is facilitated by both players keeping the pieces face-up. If there are no legal moves for a player, that player is forced to pass their turn. However, if a player passes and their opponent notices they have a legal move available, the passing player is obligated to place that tile. The winner is the first player to place their last tile. If there are no legal moves for both players, the winner is the player with fewer tiles remaining; if both players have the same number of tiles, the game is a draw and should be replayed, switching the even and odd roles. Simple. In "Surprenants triangles" (1976), Odier proposes several simple game variants, including single, double, and triple linear paths starting from the triple-three tile. Puzzles. The puzzle variant is intended for solo players to fill a shape while observing corner-matching rules for adjacent tiles. There are many possible shapes, and at least one book has been published with additional shapes beyond those contained in the rulebook. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
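The tile count quoted in the History section can be confirmed by brute force. The sketch below is an editorial illustration: it enumerates all corner labellings drawn from a given number of values, identifies labellings that differ only by rotation, and reproduces both the 24 distinct four-value tiles and the general formula.

from itertools import product

def count_tiles(n):
    """Distinct triangular tiles with corners labelled from n values,
    counted up to rotation (rotating a tile cycles its three corners)."""
    seen = set()
    for a, b, c in product(range(n), repeat=3):
        seen.add(min((a, b, c), (b, c, a), (c, a, b)))
    return len(seen)

for n in range(1, 8):
    assert count_tiles(n) == n * (n * n + 2) // 3

print(count_tiles(4))  # 24, the Trioker set before the extra joker tile is added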
[ { "math_id": 0, "text": "\\frac{n}{3}\\cdot(n^2+2)" }, { "math_id": 1, "text": "n=4" } ]
https://en.wikipedia.org/wiki?curid=75605938
7561
Classical Kuiper belt object
Kuiper belt object, not controlled by an orbital resonance with Neptune A classical Kuiper belt object, also called a cubewano ( "QB1-o"), is a low-eccentricity Kuiper belt object (KBO) that orbits beyond Neptune and is not controlled by an orbital resonance with Neptune. Cubewanos have orbits with semi-major axes in the 40–50 AU range and, unlike Pluto, do not cross Neptune's orbit. That is, they have low-eccentricity and sometimes low-inclination orbits like the classical planets. The name "cubewano" derives from the first trans-Neptunian object (TNO) found after Pluto and Charon: 15760 Albion, which until January 2018 had only the provisional designation (15760) 1992 QB1. Similar objects found later were often called "QB1-o's", or "cubewanos", after this object, though the term "classical" is much more frequently used in the scientific literature. Objects identified as cubewanos include: 136108 Haumea was provisionally listed as a cubewano by the Minor Planet Center in 2006, but was later found to be in a resonant orbit. &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Orbits: 'hot' and 'cold' populations. There are two basic dynamical classes of classical Kuiper-belt bodies: those with relatively unperturbed ('cold') orbits, and those with markedly perturbed ('hot') orbits. Most cubewanos are found between the 2:3 orbital resonance with Neptune (populated by plutinos) and the 1:2 resonance. 50000 Quaoar, for example, has a near-circular orbit close to the ecliptic. Plutinos, on the other hand, have more eccentric orbits bringing some of them closer to the Sun than Neptune. The majority of classical objects, the so-called "cold population", have low inclinations (&lt; 5°) and near-circular orbits, lying between 42 and 47 AU. A smaller population (the "hot population") is characterised by highly inclined, more eccentric orbits. The terms 'hot' and 'cold' has nothing to do with surface or internal temperatures, but rather refer to the orbits of the objects, by analogy to molecules in a gas, which increase their relative velocity as they heat up. The Deep Ecliptic Survey reports the distributions of the two populations; one with the inclination centered at 4.6° (named "Core") and another with inclinations extending beyond 30° ("Halo"). Distribution. The vast majority of KBOs (more than two-thirds) have inclinations of less than 5° and eccentricities of less than 0.1 . Their semi-major axes show a preference for the middle of the main belt; arguably, smaller objects close to the limiting resonances have been either captured into resonance or have their orbits modified by Neptune. The 'hot' and 'cold' populations are strikingly different: more than 30% of all cubewanos are in low inclination, near-circular orbits. The parameters of the plutinos’ orbits are more evenly distributed, with a local maximum in moderate eccentricities in 0.15–0.2 range, and low inclinations 5–10°. See also the comparison with scattered disk objects. When the orbital eccentricities of cubewanos and plutinos are compared, it can be seen that the cubewanos form a clear 'belt' outside Neptune's orbit, whereas the plutinos approach, or even cross Neptune's orbit. When orbital inclinations are compared, 'hot' cubewanos can be easily distinguished by their higher inclinations, as the plutinos typically keep orbits below 20°. (No clear explanation currently exists for the inclinations of 'hot' cubewanos.) Cold and hot populations: physical characteristics. 
In addition to the distinct orbital characteristics, the two populations display different physical characteristics. The difference in colour between the red cold population, such as 486958 Arrokoth, and more heterogeneous hot population was observed as early as in 2002. Recent studies, based on a larger data set, indicate the cut-off inclination of 12° (instead of 5°) between the cold and hot populations and confirm the distinction between the homogenous red cold population and the bluish hot population. Another difference between the low-inclination (cold) and high-inclination (hot) classical objects is the observed number of binary objects. Binaries are quite common on low-inclination orbits and are typically similar-brightness systems. Binaries are less common on high-inclination orbits and their components typically differ in brightness. This correlation, together with the differences in colour, support further the suggestion that the currently observed classical objects belong to at least two different overlapping populations, with different physical properties and orbital history. Toward a formal definition. There is no official definition of 'cubewano' or 'classical KBO'. However, the terms are normally used to refer to objects free from significant perturbation from Neptune, thereby excluding KBOs in orbital resonance with Neptune (resonant trans-Neptunian objects). The Minor Planet Center (MPC) and the Deep Ecliptic Survey (DES) do not list cubewanos (classical objects) using the same criteria. Many TNOs classified as cubewanos by the MPC, such as dwarf planet Makemake, are classified as ScatNear (possibly scattered by Neptune) by the DES. may be an inner cubewano near the plutinos. Furthermore, there is evidence that the Kuiper belt has an 'edge', in that an apparent lack of low-inclination objects beyond 47–49 AU was suspected as early as 1998 and shown with more data in 2001. Consequently, the traditional usage of the terms is based on the orbit's semi-major axis, and includes objects situated between the 2:3 and 1:2 resonances, that is between 39.4 and 47.8 AU (with exclusion of these resonances and the minor ones in-between). These definitions lack precision: in particular the boundary between the classical objects and the scattered disk remains blurred. , there are 870 objects with perihelion (q) &gt; 40 AU and aphelion (Q) &lt; 48 AU. DES classification. Introduced by the report from the Deep Ecliptic Survey by J. L. Elliott et al. in 2005 uses formal criteria based on the mean orbital parameters. Put informally, the definition includes the objects that have never crossed the orbit of Neptune. According to this definition, an object qualifies as a classical KBO if: SSBN07 classification. An alternative classification, introduced by B. Gladman, B. Marsden and C. van Laerhoven in 2007, uses a 10-million-year orbit integration instead of the Tisserand's parameter. Classical objects are defined as not resonant and not being currently scattered by Neptune. Formally, this definition includes as "classical" all objects with their "current" orbits that Unlike other schemes, this definition includes the objects with major semi-axis less than 39.4 AU (2:3 resonance)—termed inner classical belt, or more than 48.7 (1:2 resonance) – termed outer classical belt, and reserves the term main classical belt for the orbits between these two resonances. Families. 
The first known collisional family in the classical Kuiper belt—a group of objects thought to be remnants from the breakup of a single body—is the Haumea family. It includes Haumea, its moons, and seven smaller bodies. The objects not only follow similar orbits but also share similar physical characteristics. Unlike many other KBOs, their surfaces contain large amounts of water ice (H2O) and little or no tholins. The surface composition is inferred from their neutral (as opposed to red) colour and deep absorption at 1.5 and 2 μm in the infrared spectrum. Several other collisional families might reside in the classical Kuiper belt. Exploration. As of January 2019, only one classical Kuiper belt object has been observed up close by spacecraft. Both Voyager spacecraft passed through the region before the discovery of the Kuiper belt. New Horizons was the first mission to visit a classical KBO. After its successful exploration of the Pluto system in 2015, the NASA spacecraft flew past the small KBO 486958 Arrokoth on 1 January 2019. List. Here is a very generic list of classical Kuiper belt objects. As of 2023, there are about 870 objects with q &gt; 40 AU and Q &lt; 48 AU. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "e < 0.240" } ]
https://en.wikipedia.org/wiki?curid=7561
75629307
Supersolvable lattice
Graded lattice with modular maximal chain In mathematics, a supersolvable lattice is a graded lattice that has a maximal chain of elements, each of which obeys a certain modularity relationship. The definition encapsulates many of the nice properties of lattices of subgroups of supersolvable groups. Motivation. A finite group formula_0 is said to be "supersolvable" if it admits a maximal chain (or "series") of subgroups so that each subgroup in the chain is normal in formula_0. A normal subgroup has been known since the 1940s to be left and (dual) right modular as an element of the lattice of subgroups. Richard Stanley noticed in the 1970s that certain geometric lattices, such as the partition lattice, obeyed similar properties, and gave a lattice-theoretic abstraction. Definition. A finite graded lattice formula_1 is supersolvable if it admits a maximal chain formula_2 of elements (called an M-chain or chief chain) obeying any of the following equivalent properties. For comparison, a finite lattice is geometric if and only if it is atomistic and the elements of the antichain of atoms are all left modular. An extension of the definition is that of a left modular lattice: a not-necessarily graded lattice with a maximal chain consisting of left modular elements. Thus, a left modular lattice requires the condition of (2), but relaxes the requirement of gradedness. Examples. A group is supersolvable if and only if its lattice of subgroups is supersolvable. A chief series of subgroups forms a chief chain in the lattice of subgroups. The partition lattice of a finite set is supersolvable. A partition is left modular in this lattice if and only if it has at most one non-singleton part. The noncrossing partition lattice is similarly supersolvable, although it is not geometric. The lattice of flats of the graphic matroid for a graph is supersolvable if and only if the graph is chordal. Working from the top, the chief chain is obtained by removing vertices in a perfect elimination ordering one by one. Every modular lattice is supersolvable, as every element in such a lattice is left modular and rank modular. Properties. A finite matroid with a supersolvable lattice of flats (equivalently, a lattice that is both geometric and supersolvable) has a real-rooted characteristic polynomial. This is a consequence of a more general factorization theorem for characteristic polynomials over modular elements. The Orlik-Solomon algebra of an arrangement of hyperplanes with a supersolvable intersection lattice is a Koszul algebra. For more information, see Supersolvable arrangement. Any finite supersolvable lattice has an edge lexicographic labeling (or EL-labeling), hence its order complex is shellable and Cohen-Macaulay. Indeed, supersolvable lattices can be characterized in terms of edge lexicographic labelings: a finite lattice of height formula_10 is supersolvable if and only if it has an edge lexicographic labeling that assigns to each maximal chain a permutation of formula_11 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
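The stated characterisation of left modular elements in the partition lattice can be checked directly against the definition on a small case. The sketch below (Python, added here purely as an illustration; the partition lattice is ordered by refinement) enumerates the 15 partitions of a four-element set and tests the identity (x ∨ m) ∧ y = x ∨ (m ∧ y) for all comparable pairs x ≤ y, confirming that a partition with a single non-singleton block is left modular while one with two non-singleton blocks is not.
```python
from itertools import product

def set_partitions(elems):
    """Recursively generate all partitions of a list of distinct elements."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):                    # put `first` into an existing block
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]                        # or into a new singleton block

def canon(p):
    return frozenset(frozenset(block) for block in p)

PARTITIONS = [canon(p) for p in set_partitions([1, 2, 3, 4])]   # the 15 partitions of a 4-set

def leq(x, y):
    """x <= y in the partition lattice: x refines y."""
    return all(any(bx <= by for by in y) for bx in x)

def meet(x, y):
    """Meet: common refinement (non-empty pairwise intersections of blocks)."""
    return canon(bx & by for bx in x for by in y if bx & by)

def join(x, y):
    """Join: finest common coarsening, obtained by merging overlapping blocks."""
    blocks = [set(b) for b in x]
    for by in y:
        touching = [b for b in blocks if b & by]
        blocks = [b for b in blocks if not (b & by)] + [set(by).union(*touching)]
    return canon(blocks)

def is_left_modular(m):
    """Check (x join m) meet y == x join (m meet y) for all pairs x <= y."""
    return all(meet(join(x, m), y) == join(x, meet(m, y))
               for x, y in product(PARTITIONS, repeat=2) if leq(x, y))

print(is_left_modular(canon([[1, 2, 3], [4]])))   # True: one non-singleton block
print(is_left_modular(canon([[1, 2], [3, 4]])))   # False: two non-singleton blocks
```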
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "\\mathbf{m}" }, { "math_id": 3, "text": "\\mathbf{c}" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": "x \\leq y" }, { "math_id": 6, "text": "(x\\vee m)\\wedge y=x\\vee(m\\wedge y)." }, { "math_id": 7, "text": "\\rho" }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "\\rho(m\\wedge x)+\\rho(m\\vee x)=\\rho(m)+\\rho(x)." }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "\\{ 1, \\dots, n\\}." } ]
https://en.wikipedia.org/wiki?curid=75629307
75634884
Non-normal modal logic
A less restrictive form of modal logic A non-normal modal logic is a variant of modal logic that deviates from the basic principles of normal modal logics. Normal modal logics adhere to the distributivity axiom (formula_0) and the necessitation principle which states that "a tautology must be necessarily true" (formula_1 entails formula_2). On the other hand, non-normal modal logics do not always have such requirements. The minimal variant of non-normal modal logics is logic E, which adds the congruence rule to the Hilbert calculus, or the "E" rule to the sequent calculus, of classical propositional logic. Additional axioms, namely axioms "M", "C" and "N", can be added to form stronger logic systems. With all three axioms added to logic E, a logic system equivalent to normal modal logic K is obtained. Whilst Kripke semantics is the most common formal semantics for normal modal logics (e.g., logic K), non-normal modal logics are often interpreted with neighbourhood semantics. Syntax. The syntax of non-normal modal logic systems resembles that of normal modal logics, which is founded upon propositional logic. An atomic statement is represented with propositional variables (e.g., formula_3); logical connectives include negation (formula_4), conjunction (formula_5), disjunction (formula_6) and implication (formula_7). The modalities are most commonly represented with the box (formula_8) and the diamond (formula_9). A formal grammar for this syntax can minimally be defined using only the negation, disjunction and box symbols. In such a language, formula_10 where formula_11 is any propositional name. The conjunction formula_12 may then be defined as equivalent to formula_13. For any modal formula formula_14, the formula formula_15 is defined by formula_16. Alternatively, if the language is first defined with the diamond, then the box can be analogously defined by formula_17. For any propositional name formula_11, the formulae formula_11 and formula_18 are considered "propositional literals" whilst formula_19 and formula_20 are considered "modal literals". Proof systems. Logic E, the minimal variant of non-normal modal logics, includes the "RE" congruence rule in its Hilbert calculus or the "E" rule in its sequent calculus. Hilbert calculus. The Hilbert calculus for logic E is built upon the one for classical propositional logic with the congruence rule ("RE"): formula_21. Alternatively, the rule can be defined by formula_22. Logics containing this rule are called "congruential". Sequent calculus. The sequent calculus for logic E, another proof system that operates on sequents, consists of the inference rules for propositional logic and the "E" rule of inference: formula_23. The sequent formula_24 means formula_25 entails formula_26, with formula_25 being the "antecedent" (a conjunction of formulae as premises) and formula_26 being the "succedent" (a disjunction of formulae as the conclusion). Resolution calculus. The resolution calculus for non-normal modal logics introduces the concept of global and local modalities. The formula formula_27 denotes the global modality of the modal formula formula_14, which means that formula_14 holds true in all worlds in a neighbourhood model. For logic E, the resolution calculus consists of the LRES, GRES, G2L, LERES and GERES rules. The LRES rule resembles the resolution rule for classical propositional logic, in which the complementary propositional literals formula_11 and formula_18 are eliminated: formula_28. 
The LERES rule states that if two propositional names formula_11 and formula_29 are equivalent, then formula_19 and formula_30 can be eliminated. The G2L rule states that any globally true formula is also locally true. The GRES and GERES inference rules, whilst variants of LRES and LERES, apply to formulae featuring the global modality. Given any modal formula, the proving process with this resolution calculus is done by recursively renaming complex modal subformulae as propositional names and using the global modality to assert their equivalence. Semantics. Whilst Kripke semantics is often applied as the semantics of normal modal logics, the semantics of non-normal modal logics are commonly defined with neighbourhood models. A standard neighbourhood model formula_31 is defined as the triple formula_32, where formula_33 is a non-empty set of worlds, formula_34 is a neighbourhood function assigning to each world a collection of subsets of worlds (formula_35 denotes the powerset operation), and formula_36 is a valuation function mapping each atomic proposition to the set of worlds in which it holds. The semantics can be further generalised as bi-neighbourhood semantics. Additional axioms. The classical cube of non-normal modal logic considers axioms M, C and N that can be added to logic E. Informally, axiom M states that the necessity of a conjunction implies the necessity of each conjunct, axiom C states conversely that the necessity of both conjuncts implies the necessity of their conjunction, and axiom N states that the tautological constant is necessary. A logic system containing axiom "M" is "monotonic". With axioms M and C, the logic system is "regular". Including all three axioms, the logic system is "normal". With these axioms, additional rules are included in their proof systems accordingly. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
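As a toy illustration of neighbourhood semantics for the minimal logic E, the sketch below (Python; the worlds, neighbourhood map and valuation are invented for this example and are not taken from the literature) evaluates formulae by computing their truth sets, with the box case implementing the usual neighbourhood condition: a formula is necessary at a world exactly when its truth set is one of that world's neighbourhoods.
```python
# Toy neighbourhood model for the minimal non-normal logic E.
worlds = {0, 1, 2}
N = {0: [{1, 2}], 1: [{0}], 2: []}     # N(w): the neighbourhoods of w (sets of worlds)
V = {"p": {1, 2}, "q": {2}}            # V(p): the worlds at which p holds

def truth_set(phi):
    """Worlds at which phi holds. Formulae are nested tuples:
    ("atom", name), ("not", phi), ("or", phi, psi), ("box", phi)."""
    kind = phi[0]
    if kind == "atom":
        return V[phi[1]]
    if kind == "not":
        return worlds - truth_set(phi[1])
    if kind == "or":
        return truth_set(phi[1]) | truth_set(phi[2])
    if kind == "box":
        # Box phi holds at w iff the truth set of phi is one of w's neighbourhoods.
        target = truth_set(phi[1])
        return {w for w in worlds if target in N[w]}
    raise ValueError(f"unknown connective: {kind}")

print(truth_set(("box", ("atom", "p"))))  # {0}: only world 0 has {1, 2} as a neighbourhood
# Congruence: p and (p or q) have the same truth set here, so their boxes agree:
print(truth_set(("box", ("or", ("atom", "p"), ("atom", "q")))))  # also {0}
```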
[ { "math_id": 0, "text": "\\Box (p \\to q) \\to (\\Box p \\to \\Box q)" }, { "math_id": 1, "text": "\\vdash A" }, { "math_id": 2, "text": "\\vdash \\Box A" }, { "math_id": 3, "text": "p, q, r" }, { "math_id": 4, "text": "\\neg" }, { "math_id": 5, "text": "\\land" }, { "math_id": 6, "text": "\\lor" }, { "math_id": 7, "text": "\\to" }, { "math_id": 8, "text": "\\Box" }, { "math_id": 9, "text": "\\Diamond" }, { "math_id": 10, "text": "\\varphi, \\psi := p\\ |\\ \\neg\\varphi\\ |\\ \\Box\\varphi\\ |\\ \\varphi \\lor \\psi" }, { "math_id": 11, "text": "p" }, { "math_id": 12, "text": "\\varphi \\land \\psi" }, { "math_id": 13, "text": "\\neg (\\neg\\varphi \\lor \\neg\\psi)" }, { "math_id": 14, "text": "\\varphi" }, { "math_id": 15, "text": "\\Diamond\\varphi" }, { "math_id": 16, "text": "\\neg\\Box\\neg\\varphi" }, { "math_id": 17, "text": "\\Box\\varphi \\equiv \\neg\\Diamond\\neg\\varphi" }, { "math_id": 18, "text": "\\neg p" }, { "math_id": 19, "text": "\\Box p" }, { "math_id": 20, "text": "\\neg\\Box p" }, { "math_id": 21, "text": "\\frac{A \\leftrightarrow B}{\\Box A \\leftrightarrow \\Box B}" }, { "math_id": 22, "text": "\\frac{A \\leftrightarrow B}{\\Diamond A \\leftrightarrow \\Diamond B}" }, { "math_id": 23, "text": "\\frac{A \\vdash B \\qquad B \\vdash A}{\\Gamma, \\Box A \\vdash \\Box B, \\Delta}" }, { "math_id": 24, "text": "\\Gamma \\vdash \\Delta" }, { "math_id": 25, "text": "\\Gamma" }, { "math_id": 26, "text": "\\Delta" }, { "math_id": 27, "text": "\\mathsf{G}(\\varphi)" }, { "math_id": 28, "text": "\\frac{D \\lor l \\qquad D' \\lor \\neg l}{D \\lor D'}" }, { "math_id": 29, "text": "p'" }, { "math_id": 30, "text": "\\neg\\Box p'" }, { "math_id": 31, "text": "\\mathcal{M}" }, { "math_id": 32, "text": "\\langle \\mathcal{W}, \\mathcal{N}, \\mathcal{V} \\rangle" }, { "math_id": 33, "text": "\\mathcal{W}" }, { "math_id": 34, "text": "\\mathcal{N}: \\mathcal{W} \\to \\mathcal{PP}(\\mathcal{W})" }, { "math_id": 35, "text": "\\mathcal{P}" }, { "math_id": 36, "text": "\\mathcal{V}: Atm \\to \\mathcal{P}(\\mathcal{W})" } ]
https://en.wikipedia.org/wiki?curid=75634884
75638893
Melanie Schmidt
German computer scientist Melanie Schmidt is a German computer scientist whose research involves algorithms for cluster analysis, including approximation algorithms, coresets, algorithmic fairness, and inapproximability. She holds the chair for Algorithms and Data Structures in the Computer Science Department at Heinrich Heine University Düsseldorf. Education and career. Schmidt earned a diploma in computer science in 2009 through study at both the Technical University of Dortmund and the University of Verona in Italy. She continued at the Technical University of Dortmund for doctoral study in computer science, and completed her doctorate (Dr. rer. nat.) in 2014 with the dissertation "Coresets and streaming algorithms for the formula_0-means problem and related clustering objectives", jointly supervised by Christian Sohler, Johannes Blömer, and Gernot Fink. After postdoctoral research at Carnegie Mellon University in the US and at the University of Bonn, she took a position at the University of Cologne in 2019 as junior professor of machine learning. She moved to her present position in Düsseldorf in 2021. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=75638893
7564265
Steinmetz solid
Intersection of cylinders In geometry, a Steinmetz solid is the solid body obtained as the intersection of two or three cylinders of equal radius at right angles. Each of the curves of the intersection of two cylinders is an ellipse. The intersection of two cylinders is called a bicylinder. Topologically, it is equivalent to a square hosohedron. The intersection of three cylinders is called a tricylinder. A bisected bicylinder is called a vault, and a cloister vault in architecture has this shape. Steinmetz solids are named after mathematician Charles Proteus Steinmetz, who solved the problem of determining the volume of the intersection. However, the same problem had been solved earlier, by Archimedes in the ancient Greek world, Zu Chongzhi in ancient China, and Piero della Francesca in the early Italian Renaissance. They appear prominently in the sculptures of Frank Smullin. Bicylinder. A bicylinder generated by two cylinders with radius r has the volume formula_0 and the surface area formula_1 The upper half of a bicylinder is the square case of a domical vault, a dome-shaped solid based on any convex polygon whose cross-sections are similar copies of the polygon, and analogous formulas calculating the volume and surface area of a domical vault as a rational multiple of the volume and surface area of its enclosing prism hold more generally. In China, the bicylinder is known as "Mou he fang gai", literally "two square umbrella"; it was described by the third-century mathematician Liu Hui. Proof of the volume formula. For deriving the volume formula it is convenient to use the common idea for calculating the volume of a sphere: collecting thin cylindric slices. In this case the thin slices are square cuboids (see diagram). This leads to formula_2 It is well known that the relations of the volumes of a right circular cone, one half of a sphere and a right circular cylinder with same radii and heights are 1 : 2 : 3. For one half of a bicylinder a similar statement is true: Using Multivariable Calculus. Consider the equations of the cylinders: formula_7 The volume will be given by: formula_8 With the limits of integration: formula_9 Substituting, we have: formula_10 Proof of the area formula. The surface area consists of two red and two blue cylindrical biangles. One red biangle is cut into halves by the yz-plane and developed into the plane such that half circle (intersection with the yz-plane) is developed onto the positive ξ-axis and the development of the biangle is bounded upwards by the sine arc formula_11 Hence the area of this development is formula_12 and the total surface area is: formula_13 Alternate proof of the volume formula. To derive the volume of a bicylinder (white), one can enclose it within a cube (red). When a plane, parallel to the axes of the cylinders, intersects the bicylinder, it forms a square. This plane’s intersection with the cube results in a larger square. The area difference between these two squares corresponds to four smaller squares (blue). As the plane traverses through the solids, these blue squares form square pyramids with isosceles faces at the cube’s corners. The apexes of these pyramids are located at the midpoints of the cube’s four edges. Moving the plane through the entire bicylinder results in a total of eight pyramids. The volume of the cube (red) minus the volume of the eight pyramids (blue) is the volume of the bicylinder (white). 
The volume of the 8 pyramids is: formula_14 and then we can calculate that the bicylinder volume is formula_15 Tricylinder. The intersection of three cylinders with perpendicularly intersecting axes generates the surface of a solid with vertices where 3 edges meet and vertices where 4 edges meet. The set of vertices can be considered as the edges of a rhombic dodecahedron. The key to determining the volume and surface area is the observation that the tricylinder can be assembled from the cube with the vertices where 3 edges meet (see diagram) and 6 curved pyramids (the triangles are parts of cylinder surfaces). The volume and the surface area of the curved triangles can be determined by considerations similar to those used for the bicylinder above. The volume of a tricylinder is formula_16 and the surface area is formula_17 More cylinders. With four cylinders, with axes connecting the vertices of a tetrahedron to the corresponding points on the other side of the solid, the volume is formula_18 With six cylinders, with axes parallel to the diagonals of the faces of a cube, the volume is: formula_19 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
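The closed-form volumes above are easy to sanity-check numerically. The sketch below (Python with NumPy, added as an illustration) estimates the bicylinder and tricylinder volumes for r = 1 by Monte Carlo sampling inside the enclosing cube and compares them with the exact values 16/3 and 8(2 - √2).
```python
import numpy as np

rng = np.random.default_rng(0)
r, n = 1.0, 2_000_000

# Sample points uniformly in the enclosing cube [-r, r]^3.
x, y, z = rng.uniform(-r, r, size=(3, n))

# Bicylinder: intersection of x^2 + z^2 <= r^2 and x^2 + y^2 <= r^2.
in_two = (x**2 + z**2 <= r**2) & (x**2 + y**2 <= r**2)
# Tricylinder: additionally y^2 + z^2 <= r^2.
in_three = in_two & (y**2 + z**2 <= r**2)

cube_volume = (2 * r) ** 3
print(in_two.mean() * cube_volume, 16 / 3 * r**3)                  # ~5.33 vs 5.333...
print(in_three.mean() * cube_volume, 8 * (2 - np.sqrt(2)) * r**3)  # ~4.69 vs 4.686...
```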
[ { "math_id": 0, "text": "V = \\frac{16}{3} r^3," }, { "math_id": 1, "text": "A = 16 r^2." }, { "math_id": 2, "text": "\\begin{align}\nV &= \\int_{-r}^{r} (2x)^2 \\ \\mathrm{d}z \\\\[2pt]\n &= 4\\cdot \\int_{-r}^{r} x^2 \\ \\mathrm{d}z \\\\[2pt]\n &= 4\\cdot \\int_{-r}^{r} (r^2-z^2) \\ \\mathrm{d}z \\\\[2pt]\n &= \\frac{16}{3} r^3.\n\\end{align}" }, { "math_id": 3, "text": "(a=2r,\\ h=r,\\ V=\\tfrac{4}{3}r^3)," }, { "math_id": 4, "text": "(V=\\tfrac{8}{3} r^3)" }, { "math_id": 5, "text": "(a= 2r,\\ h=r,\\ V=4r^3)" }, { "math_id": 6, "text": "\\begin{array}{ccccc}\n \\frac{4}{3}r^3 &:& \\frac{8}{3}r^3 &:& 4r^3 \\\\[2pt]\n 1 &:& 2 &:& 3\n\\end{array}" }, { "math_id": 7, "text": "\\begin{align}\n x^2+z^2 &= r^2 \\\\\n x^2+y^2 &= r^2\n\\end{align}" }, { "math_id": 8, "text": "V = \\iiint_V \\mathrm{d}z\\,\\mathrm{d}y\\,\\mathrm{d}x" }, { "math_id": 9, "text": "\\begin{array}{rcccl}\n -\\sqrt{r^2-x^2} &\\leqslant& z &\\leqslant& \\sqrt{r^2-x^2} \\\\[4pt]\n -\\sqrt{r^2-x^2} &\\leqslant& y &\\leqslant& \\sqrt{r^2-x^2} \\\\[4pt]\n -r &\\leqslant& x &\\leqslant& r\n\\end{array}" }, { "math_id": 10, "text": "\\begin{align}\n V &= \\int_{-r}^{r}\\int_{-\\sqrt{r^2-x^2}}^{\\sqrt{r^2-x^2}}\\int_{-\\sqrt{r^2-x^2}}^{\\sqrt{r^2-x^2}} \\mathrm{d}z\\,\\mathrm{d}y\\,\\mathrm{d}x \\\\[2pt]\n &= 8r^3-\\frac{8r^3}{3} \\\\[2pt]\n &= \\frac{16r^3}{3}\n\\end{align}" }, { "math_id": 11, "text": "\\eta=r\\sin\\tfrac{\\xi}{r}, \\ 0\\le\\xi\\le\\pi r." }, { "math_id": 12, "text": "B = \\int_{0}^{\\pi r} r\\sin\\frac{\\xi}{r} \\ \\mathrm{d}\\xi = r^2\\cos{0}-r^2\\cos{\\pi} = 2r^2" }, { "math_id": 13, "text": "A = 8B=16r^2." }, { "math_id": 14, "text": "8 \\times \\frac{1}{3} r^2 \\times r = \\frac{8}{3} r^3," }, { "math_id": 15, "text": "(2 r)^3 - \\frac{8}{3} r^3 = \\frac{16}{3} r^3." }, { "math_id": 16, "text": "V = 8(2 - \\sqrt{2}) r^3" }, { "math_id": 17, "text": "A = 24(2 - \\sqrt{2}) r^2." }, { "math_id": 18, "text": "V_4 = 12 \\left( 2\\sqrt{2} - \\sqrt{6} \\right) r^3 \\, " }, { "math_id": 19, "text": "V_6 = \\frac{16}{3} \\left( 3 + 2\\sqrt{3} - 4\\sqrt{2} \\right) r^3 \\, " } ]
https://en.wikipedia.org/wiki?curid=7564265
75647518
Counterexample-guided abstraction refinement
A technique for symbolic model checking and logic calculi Counterexample-guided abstraction refinement (CEGAR) is a technique for symbolic model checking. It is also applied in modal logic tableau calculi algorithms to optimise their efficiency. In computer-aided verification and analysis of programs, models of computation often consist of states. Models for even small programs, however, may have an enormous number of states. This is identified as the state explosion problem. CEGAR addresses this problem with two stages: "abstraction", which simplifies a model by grouping states, and "refinement", which increases the precision of the abstraction to better approximate the original model. If a desired property for a program is not satisfied in the abstract model, a counterexample is generated. The CEGAR process then checks whether the counterexample is spurious, i.e., whether it arises only in the abstract model and does not correspond to a behaviour of the actual program. If this is the case, it concludes that the counterexample is due to inadequate precision of the abstraction. Otherwise, the process has found a bug in the program. Refinement is performed when a counterexample is found to be spurious. The iterative procedure terminates either if a bug is found or when the abstraction has been refined to the extent that it is equivalent to the original model. Program verification. Abstraction. To reason about the correctness of a program, particularly programs involving the concept of time for concurrency, state transition models are used. In particular, finite-state models can be used along with temporal logic in automatic verification. The concept of abstraction is thus founded upon a mapping between two Kripke structures. Specifically, programs can be described with control flow automata (CFA). Define a Kripke structure formula_0 as formula_1, where formula_2 is a set of states, formula_3 is the initial state, formula_4 is the transition relation and formula_5 is a labelling function that assigns atomic propositions to states. An abstraction of formula_0 is defined by formula_6 where formula_7 is an abstraction mapping that maps every state in formula_2 to a state in formula_8. To preserve the critical properties of the model, the abstraction mapping maps the initial state in the original model formula_3 to its counterpart formula_9 in the abstract model. The abstraction mapping also guarantees that the transition relations between two states are preserved. Model checking. In each iteration, model checking is performed for the abstract model. Bounded model checking, for instance, generates a propositional formula that is then checked for Boolean satisfiability by a SAT solver. Refinement. When counterexamples are found, they are examined to determine whether they are spurious, i.e., unauthentic counterexamples that emerge from the imprecision of the abstraction. A non-spurious counterexample reflects the incorrectness of the program, which may be sufficient to terminate the program verification process and conclude that the program is incorrect. The main objective of the refinement process is to handle spurious counterexamples. It eliminates them by increasing the granularity of the abstraction. The refinement process ensures that the dead-end states and the bad states do not belong to the same abstract state. A dead-end state is a reachable one with no outgoing transition, whereas a bad state is one with transitions causing the counterexample. Tableau calculi. 
Since modal logic is often interpreted with Kripke semantics, where a Kripke frame resembles the structure of the state transition systems used in program verification, the CEGAR technique has also been implemented for automated theorem proving. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
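The abstraction-refinement loop described above can be summarised as a small generic driver. The sketch below (Python) is purely schematic: the three callables stand in for a concrete abstraction-based model checker, a feasibility check of abstract counterexamples against the program, and a refinement step, none of which are specified here.
```python
def cegar(model_check, is_realizable, refine, abstraction):
    """Generic counterexample-guided abstraction refinement loop.

    model_check(abstraction) -> (True, None) if the property holds on the
                                abstract model, else (False, counterexample)
    is_realizable(cex)       -> True if the abstract counterexample
                                corresponds to a real execution of the program
    refine(abstraction, cex) -> a more precise abstraction that rules out
                                the spurious counterexample
    """
    while True:
        holds, cex = model_check(abstraction)
        if holds:
            return ("property holds", None)
        if is_realizable(cex):
            return ("bug found", cex)              # genuine counterexample
        abstraction = refine(abstraction, cex)     # spurious: refine and retry
```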
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "\\langle S, s_0, R, L\\rangle" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "s_0" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "L" }, { "math_id": 6, "text": "\\langle S_\\alpha, s_0^\\alpha, R_\\alpha, L_\\alpha \\rangle" }, { "math_id": 7, "text": "\\alpha" }, { "math_id": 8, "text": "S_\\alpha" }, { "math_id": 9, "text": "s_0^\\alpha" } ]
https://en.wikipedia.org/wiki?curid=75647518
75647565
Modal clausal form
A normal form for modal logic formulae Modal clausal form, also known as separated normal form by modal levels (SNF"ml") and Mints normal form, is a normal form for modal logic formulae. Such a normal form is commonly used for automated theorem proving with tableau calculi and resolution calculi due to its benefits of better space bounds and improved decision procedures. In normal modal logic, any set of formulae can be transformed into an equisatisfiable set of formulae in this normal form. In multimodal logic where "a" represents an agent corresponding to an accessibility relation function in Kripke semantics, a formula in this normal form is a conjunction of clauses labelled by the modal level (i.e., the number of nested modalities). Each modal level consists of clauses of three forms: a literal clause formula_0 (a disjunction of propositional literals), a positive modal clause formula_1, and a negative modal clause formula_4, where formula_2 and formula_3 are literals. These three forms are also called "cpl"-clauses, "box"-clauses and "dia"-clauses respectively. Note that any clause in conjunctive normal form (CNF) is also a literal clause in this normal form. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\bigvee_{b=1}^r l_b" }, { "math_id": 1, "text": "l' \\to \\Box_a l" }, { "math_id": 2, "text": "l'" }, { "math_id": 3, "text": "l" }, { "math_id": 4, "text": "l' \\to \\Diamond_a l" } ]
https://en.wikipedia.org/wiki?curid=75647565
7564980
Gaugino condensation
Nonzero gaugino vacuum expectation value in supersymmetry In quantum field theory, gaugino condensation is the nonzero vacuum expectation value in some models of a bilinear expression constructed in theories with supersymmetry from the superpartner of a gauge boson called the gaugino. The gaugino and the bosonic gauge field and the D-term are all components of a supersymmetric vector superfield in the Wess–Zumino gauge. formula_0 where formula_1 represents the gaugino field (a spinor) and formula_2 is an energy scale, a and b represent Lie algebra indices and α and β represent van der Waerden (two component spinor) indices. The mechanism is somewhat analogous to chiral symmetry breaking and is an example of a fermionic condensate. In the superfield notation, formula_3 is the gauge field strength and is a chiral superfield. formula_4 formula_5 is also a chiral superfield and we see that what acquires a nonzero VEV is not the F-term of this chiral superfield. Because of this, gaugino condensation in and of itself does not lead to supersymmetry breaking. If we also have supersymmetry breaking, it is caused by something other than the gaugino condensate. However, a gaugino condensate definitely breaks U(1)R symmetry as formula_6 has an R-charge of 2. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\langle \\lambda^a_\\alpha \\lambda^b_\\beta\\rangle \\sim \\delta^{ab}\\epsilon_{\\alpha\\beta}\\Lambda^3 " }, { "math_id": 1, "text": "\\lambda" }, { "math_id": 2, "text": "\\Lambda" }, { "math_id": 3, "text": "W_\\alpha \\equiv \\overline{D}^2 D_\\alpha V" }, { "math_id": 4, "text": " \\langle W^a_\\alpha W^b_\\beta \\rangle = \\langle \\lambda^a_\\alpha \\lambda^b_\\beta\\rangle \\sim \\delta^{ab}\\epsilon_{\\alpha\\beta}\\Lambda^3 " }, { "math_id": 5, "text": "W_\\alpha W_\\beta" }, { "math_id": 6, "text": "\\lambda^a_\\alpha \\lambda^b_\\beta" } ]
https://en.wikipedia.org/wiki?curid=7564980
7565022
Logarithmic conformal field theory
Conformal field theory with logarithmic short distance behavior In theoretical physics, a logarithmic conformal field theory is a conformal field theory in which the correlators of the basic fields are allowed to be logarithmic at short distance, instead of being powers of the fields' distance. Equivalently, the dilation operator is not diagonalizable. Examples of logarithmic conformal field theories include critical percolation. In two dimensions. Just like conformal field theory in general, logarithmic conformal field theory has been particularly well-studied in two dimensions. Some two-dimensional logarithmic CFTs have been solved: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "c=-2" }, { "math_id": 1, "text": "GL(1|1)" }, { "math_id": 2, "text": "c=0" } ]
https://en.wikipedia.org/wiki?curid=7565022
75651790
The Eternal Flame (novel)
2012 science-fiction book The Eternal Flame is a hard science-fiction novel by Australian author Greg Egan and the second part of the "Orthogonal" trilogy. The novel was published by Night Shade Books on 26 August 2012 with cover art by Cody Tilson and by Gollancz on 8 August 2013 with cover art by Greg Egan. The novel describes the journey of the generation ship "Peerless", which has departed in "The Clockwork Rocket", and the development of new technology as well as changes in the society on board. An essential task is the construction of an engine that needs no fuel to generate thrust, but instead perfectly balances the radiation it emits against the energy this generates. To make such a process work, the universe of the novel is based on a Riemannian instead of a Lorentzian manifold (which describes our own universe, where emitting radiation instead consumes energy), changing the rules of physics. The details are described by Greg Egan on his website. The story is continued in "The Arrows of Time." Plot. Three generations after the departure of the "Peerless" from the home world, the ever increasing population (which can't be controlled due to spontaneous reproduction) overstretches its limited capacities. As a drastic solution, newborns are euthanized. The long mission of research is continued in the meantime: Tamara, an astronomer, studies an object on a course close to the "Peerless" and plans to dispatch the spacecraft "Gnat" to it. Carla, a physicist, studies light and finds a seemingly perfect ratio of five to four in one of her experiments. It has already been discovered that Nereo's equation (corresponding to the Maxwell equations) makes wrong predictions. She conducts further experiments with her students to gather more data. Patrizia, one of these students, later goes over the mathematics of collisions and comes up with the hypothesis of quantization for particles of light, later known as Patrizia's principle (corresponding to the Planck relation). She proposes "luxites" as a name for the quanta, after the luxagens causing them and after an idea already proposed, without evidence, by an ancient philosopher. Carla renames them photons to avoid confusion. At first surprised by the ingenious idea, which explains the data and in which an integer number of photons is necessary to give the luxagens enough energy to leave an energy valley, she soon notices some problems. She had studied wavelengths for which four or five photons carry enough energy and now wants to conduct new experiments for which six are necessary. Carlo, a biologist, measures signals used by animals and himself to shapeshift their bodies. Previously it had been discovered through coloring how their flesh can end up in different parts of their limbs during this process, giving rise to the question of how the ever changing communication with the brain works. Special attention is given to the signals used by males to initiate reproduction. There are two kinds of species, in which the females divide into either two or four children (biparteous and quadruparteous, the latter of which is their own species). Carlo intends to measure those signals and swap them, potentially transferring the division into only two children artificially to their own species and solving the population crisis. Their way of reproduction also poses a problem with Tamaro, the co of Tamara, who is therefore supposed to trigger her reproduction and who doesn't want her to go on board the "Gnat" and hence risk the lives of their children. 
Together with their father Erminio, he imprisons Tamara and spreads news about her having given birth on the "Peerless". Tamara persuades Tamaro with a deal to let her go free and later joins the mission on the "Gnat" with Carla and Ivo. They cause a nearly fatal detonation near the Object, pushing it onto an almost parallel course with the "Peerless". It turns out that the thermodynamic arrow and the arrow of entropy decrease of the Object point in opposite directions, and hence contact of any matter from the "Peerless" with the Object would again result in a detonation. Carlo travels through the forest section inside the "Peerless" to catch the four arborines (two pairs of cos) for experiments. He then records the light signal of Zosimo when triggering the splitting of Zosima into two children and then sends it into one half of Benigna to mimic Benigno triggering her splitting. Benigna sheds a single female child and survives the birth injured. A successful experiment on their own species could turn the next election around, but time is running out. When this achievement is made public on the "Peerless", some males, fearing their upcoming extinction, set the entire forest on fire. Tamara, not wanting them to have the last word with violence, but knowing about the fatal consequences of Carlo's last experiment, agrees to have the arborine signal transmitted into a part of her body. She also sheds a single female child, whom she names Erminia after her mother, and survives injured. News of the birth is made public and swings the vote around, making the new technology available for every female choosing it and hence solving the population crisis. Carla revisits some of the newly discovered physics and is saddened by the fact that her legacy will ultimately only be to have taught Patrizia, whose name will go down in history with Patrizia's principle. She does some sketches involving an atom with three different orbitals as well as the emission and absorption of photons, one of which is reflected by a moving mirror to change its frequency, and finally realizes that she has just come up with a process to make the eternal flame possible. Some time later, the space probe "Eternal Flame" is dispatched into space to demonstrate that the engine indeed works, solving the fuel crisis. Some others, including Tamara and her daughter Erminia, are also watching. Carlo tells Carla that he is delighted to have saved her life with his new technology, but she refuses to make use of it immediately, as she would rather wait and see what the new day will bring. Background (literature). Due to Greg Egan being very popular in Japan, the novel was released by Hayakawa Publishing in a Japanese translation ("etānaru fureimu", a direct transcription of the original English title into Katakana) in 2016. The translation was done by Makoto Yamagishi and Toru Nakamura. The novel was a Locus Award Nominee for Best SF Novel in 2013 and reached 20th place. Background (mathematics and physics). The consequences of the sign change in the metric on the laws of physics are explained in detail (with illustrations and calculations) on Greg Egan's website. The correspondence of the principles presented in the novel with those in our universe is explained in the afterword of the novel. One insight about the Riemannian universe described in the novel is the description of Dirac spinors, the solutions of the Dirac equation, by quaternions. A similar mathematical description is not possible in a Lorentzian universe like ours. 
Dirac matrices formula_0 are defined using the underlying metric formula_1 in their anticommutator relation formula_2. Switching a sign in the metric results in the corresponding Dirac matrix being multiplied by the imaginary unit formula_3. This is known as a Wick rotation, which relates Riemannian and Lorentzian geometry through the concept of imaginary time. Dirac matrices (and hence Dirac spinors) in a four-dimensional spacetime are four-dimensional, but there is no connection between these two numbers. In a five-dimensional spacetime, they would also be four-dimensional. Quaternions are composed of four real numbers and are therefore also four-dimensional. This makes it possible to formulate the Dirac equation in a Riemannian universe entirely with quaternions. The calculations are described by Greg Egan on his website. The Dirac equation also provides another important concept for the novel. As it is constructed as a square root of the Klein–Gordon equation (a relativistic generalization of the Schrödinger equation), the energies of its solutions are affected by the same problem as the square roots of positive numbers, which is the ambiguity of the sign. This led to the theoretical discovery of antimatter in 1928, before the first observation of a positron (the antiparticle of the electron) in 1932. But as negative energy poses certain problems in further calculations, the negative sign is often shifted to time using the uncertainty principle of energy and time. This interpretation in quantum field theory of antimatter traveling backward through time is known as the Feynman–Stückelberg interpretation. Ordinary matter and antimatter colliding results in their total annihilation, which happens in the novel and is explained by the opposite arrows of time. A different situation arises in the sequel "The Arrows of Time" after the "Peerless" has turned around and inverted its own arrow of time. Reception. David Brin, Hugo and Nebula Award-winning author of "Earth" and "Existence", claims that "Greg Egan is a master of 'what-if' science fiction". His "characters work out the implications and outcomes as they struggle to survive and prevail" and he presents "the most original alien race since Vernor Vinge's "Tines"". Jerry Oltion, Nebula Award-winning author of "Abandon in Place", claims that "when most people switch a minus sign for a plus, they re-do the math. Egan re-does the entire universe." Karen Burnham, writing in "Strange Horizons", says that "the physics is mind-blowing" and that "Egan develops almost all of the ideas in the story through dialogue. Some people may say that when the dialogue occurs the action grinds to a halt. However, it's clear that in these novels, the dialogue "is" the action." Concerning the struggles with reproduction, she writes that "more than any Egan story to date, the books of the "Orthogonal" trilogy place science in a broader social context". In a review of the sequel "The Arrows of Time", she adds that "in order to get there, we tour through a huge amount of speculative world building, physics, biology, and sociology." A French review by Éric Jentile was published in print in "Bifrost, #88" in October 2017. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
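The anticommutator relation and the Wick-rotation remark above can be checked numerically in the standard Dirac representation used for our own (Lorentzian) universe. The sketch below (Python with NumPy) is a generic illustration of that statement and is not specific to the novel's conventions.
```python
import numpy as np

zero2, I2 = np.zeros((2, 2)), np.eye(2)
# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Standard Dirac representation for the Lorentzian metric eta = diag(+1, -1, -1, -1):
gamma = [np.block([[I2, zero2], [zero2, -I2]])] + \
        [np.block([[zero2, s], [-s, zero2]]) for s in (sx, sy, sz)]

def anticommutator(a, b):
    return a @ b + b @ a

eta = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        assert np.allclose(anticommutator(gamma[mu], gamma[nu]), 2 * eta[mu, nu] * np.eye(4))

# Multiplying the three spatial matrices by i (a Wick rotation) yields matrices
# satisfying the same relation with the all-plus, Riemannian-style metric instead:
gamma_riemannian = [gamma[0]] + [1j * g for g in gamma[1:]]
delta = np.eye(4)
for mu in range(4):
    for nu in range(4):
        assert np.allclose(anticommutator(gamma_riemannian[mu], gamma_riemannian[nu]),
                           2 * delta[mu, nu] * np.eye(4))
```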
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": "\\eta" }, { "math_id": 2, "text": "\\{\\gamma^\\mu,\\gamma^\\nu\\}=\\gamma^\\mu\\gamma^\\nu+\\gamma^\\nu\\gamma^\\mu=2\\eta^{\\mu\\nu}" }, { "math_id": 3, "text": "i" } ]
https://en.wikipedia.org/wiki?curid=75651790
75651922
The Arrows of Time
2013 novel by Greg Egan The Arrows of Time is a hard science-fiction novel by Australian author Greg Egan and the third part of the "Orthogonal" trilogy. The novel was published by Gollancz on 21 November 2013 with cover art by Greg Egan and by Night Shade Books on 5 August 2014 with cover art by Cody Tilson. The novel describes the return journey of the generation ship "Peerless", which was launched in "The Clockwork Rocket" and traveled into the void in "The Eternal Flame", and the reversal of its arrow of time, which enables the construction of a device to receive messages from the crew's own future, as well as the journey to a world where time runs in reverse. The universe of the novel is therefore based on a Riemannian instead of a Lorentzian manifold (which describes our own universe, where time flows in only one direction, or the corresponding region is otherwise hidden behind an event horizon), changing the rules of physics. The details are described by Greg Egan on his website. Plot. Valeria watches the launch of the "Peerless" from the home world, waiting for its return. Six generations later, a suitable time has come to turn the ship around. A solution for the home world is in sight. But the crew is split into two factions, one wanting to continue the mission and one instead wanting to find a new home in the orthogonal cluster. After a vote is held, the "Peerless" turns around and (with an inverted arrow of time) begins the long voyage home. Agata talks with Medoro about the new possibility of using the inverted light of orthogonal stars to send messages back in time, but the discussion only divides the crew of the "Peerless" further. After a vote in favor of the system, a bomb kills the group responsible for its construction, including Medoro. Some who are strongly against it, like Ramiro, are imprisoned. Meanwhile, the discovery of an inhabitable world called Esilio in the orthogonal cluster seems to offer a solution to the conflict, but time is running backwards there. A journey to Esilio is planned. Examining light bent by the gravity of its sun could also verify Lila's theory (corresponding to general relativity). Agata, Ramiro, Tarquinia and Azelio fly to Esilio with the "Surveyor" using an orthogonal course just like the "Peerless". Twelve years will pass for them while only four years will pass on board the "Peerless". After reaching Esilio, Lila's theory is confirmed. On the surface, they have to deal with many odd consequences of the opposite arrow of time, including having to detonate a rock to impose their own arrow of time onto the soil so that their plants can grow. Agata finds a message carved into a stone in the crater, which is from the ancestors, expressing their gratitude from when Esilio flew past the home world. They fly back with the knowledge of the ultimate success of the grand mission of the "Peerless". When approaching the "Peerless", they learn of the successful construction of the messaging system. Everything about their journey is already common knowledge. Contrary to expectation, the system has blocked any new innovation. But it will completely shut down after a while, with no hint of a reason sent back beforehand. The crew discusses whether a manual shutdown or manipulation would be better than a complete destruction of the "Peerless" by a collision (if it could be averted at all) and builds probes to obstruct the orthogonal stars in front of the cameras. After their arrival back on board, they learn about the still ongoing division of society, with some thinking the shutdown will be a trick. 
Agata and Ramiro both learn of Giacomo's plan to destroy the cameras by detonations from space (the cameras use light from the entire orthogonal cluster, rendering mere obstruction impossible). Giacomo, despite despising the messaging system, used it to plan ahead, knowing about the probes of the "Surveyor" and about Agata's condition (her future self still doesn't agree to the current plan), and he is even prepared to abandon the plan at the last moment if she finds a less dangerous solution. Ramiro admits to Agata that he sent Tarquinia to carve the message on Esilio, which therefore isn't from the ancestors and doesn't guarantee a safe future. Shortly before the disruption, Agata realizes that flooding the cameras with ordinary light can also cause a signal loss and heads out into space to move the bombs. The disruption takes place with only minor destruction and Agata survives injured. Tarquinia admits to Agata and Ramiro that she just couldn't uncarve the message on Esilio, which therefore was in fact from the ancestors. Agata tells them that the destruction of the messaging system has lifted the block on innovation, with one recent insight explaining how their entire existence is even possible. One year after the departure of the "Peerless" from the home world, Valeria is woken up by screams as the sun has turned dark. Together with Eusebio and Silvio, she is brought to Clara, a traveler from the "Peerless" taking them into space. The "Peerless" flew another loop to arrive three years earlier (and also passed by itself on its former course). The sun (which in this universe is a burning rock) was extinguished, and gigantic thrusters based on the eternal flame are now being built on it to accelerate the entire system (through gravity) onto a parallel course with the Hurtlers, rendering them harmless. The destroyed Gemma will continue to provide light. Clara also shares insight into their society, where males and females have been fused together and which prefers to stay on the "Peerless" and the far side of the sun. Valeria plans to come for a visit, but is saddened by the fact that she wouldn't be able to express her gratitude for past generations. Clara thinks that there will surely be a way. Background (literature). Due to Greg Egan being very popular in Japan, the novel was released by Hayakawa Publishing in a Japanese translation ("arōzu obu taimu", a direct transcription of the original English title into Katakana) in 2017. The translation was done by Makoto Yamagishi and Toru Nakamura. The novel was a Locus Award Nominee for Best SF Novel in 2014 and reached 14th place. The messaging system also appears in the short story "The Hundred Light-Year Diary" by Greg Egan, published in 1992, where a time-reversed galaxy from the future phase of contraction of the universe (a scenario known as the Big Crunch) is discovered. Both stories deal with free will and the consequences of foreknowledge. In "The Arrows of Time" in particular, the main mission of scientific research would nonetheless still be necessary, since otherwise a result might be sent back in time merely because it had already been received, without there ever being proof for it. Background (mathematics and physics). The consequences of the sign change in the metric on the laws of physics are explained in detail (with illustrations and calculations) on Greg Egan's website. The correspondence of the principles presented in the novel with those in our universe is explained in the afterword of the novel. 
The novel adapts one of the most famous experiments in physics, the verification of general relativity through the deflection of light by the Sun during the solar eclipse of May 29, 1919, known as the Eddington experiment. While in our universe Newton's theory predicts light (if it had rest mass, which is not actually the case) to be bent half as much as in Einstein's theory, in the "Orthogonal" universe the roles are reversed, and Vittorio's theory, corresponding to the former, predicts light (which in this case does indeed have a rest mass) to be bent more than in Lila's theory, corresponding to the latter. Calculations and illustrations of this effect are shown on Greg Egan's website. The Dirac equation provides two important concepts for the novel. As it is constructed as a square root of the Klein–Gordon equation (a relativistic generalization of the Schrödinger equation), the energies of its solutions are affected by the same problem as the square roots of positive numbers, which is the ambiguity of the sign. This led to the theoretical discovery of antimatter in 1928, before the first observation of a positron (the antiparticle of the electron) in 1932. But as negative energy poses certain problems in further calculations, the negative sign is often shifted to time using the uncertainty principle of energy and time. This interpretation in quantum field theory of antimatter traveling backwards through time is known as the Feynman–Stückelberg interpretation. In the novel, matter and antimatter particles are called "negative" and "positive luxagens" as a result. Ordinary matter and antimatter colliding results in their total annihilation, which happened with the Object in the prequel "The Eternal Flame". It doesn't happen with matter of the orthogonal cluster any more, as the "Peerless" has turned around and hence inverted its own arrow of time. The apparent problem that the situation is now the opposite of that on Esilio, and that it should therefore actually be the other way around, is explained in the novel by distinguishing between the actual arrow of time and the arrow of entropy decrease, which are opposite to each other for the orthogonal cluster (hence entropy increases, violating the second law of thermodynamics). The Dirac equation furthermore describes particles with spin formula_0, including electrons. It is based on the metric of spacetime and is therefore different in the "Orthogonal" universe, as explained in the prequel "The Eternal Flame" and on Greg Egan's website. As a result, electrons don't exist any more and neither does electronics, which uses them in the form of electric currents to transmit information. In the novel, an alternative technology called photonics is used, in which photons are used instead, and which also exists in our universe in the form of optical fibers, for example. (Although it differs in that photons have no rest mass in our universe but do have one in the "Orthogonal" universe.) Reception. Karen Burnham, writing in the "New York Review of Science Fiction", says "the scenes of them dealing with the counter-intuitive behavior of the time-reversed planet are appropriately mind-bending". She finds that "ultimately, the plot of the trilogy is always about physics and the concerns over free will and its limitations; it is never about the threat that the Hurtlers posed to the homeworld." She also thinks that "it is a fair criticism that some of the physics dialogues in the trilogy are dry, and if the reader doesn't have a solid grounding in the physics of our own universe, it can be a challenge". 
Andy Sawyer, writing in "Strange Horizons", says the novel is "an intellectual quest which involves us, the readers" and "it is as valid an apotheosis as anything which involves the physical or the spiritual, made rarer because it celebrates curiosity, knowledge, and understanding." He writes about the diagrams in the novel, which many readers "now find alienating", that they make Egan "a rewarding writer rather than a simply difficult one". The scientific discoveries bring the "joy involved with really good hard SF: the conceptual breakthrough and the collision with the sublime". Other reviews have been published in "Interzone, #251" in March/April 2014 by John Howard and in "Analog Science Fiction and Fact" in January/February 2015 by Don Sakers. A French review by Éric Jentile was published in print in "Bifrost, #88" in October 2017. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "1/2" } ]
https://en.wikipedia.org/wiki?curid=75651922
75652065
Dichronauts
2017 science-fiction novel by Greg Egan Dichronauts is a hard science-fiction novel by Australian author Greg Egan. The novel was published by Night Shade Books on 11 July 2017. It describes a universe with two time dimensions, one of which corresponds to the time perception of the characters while the other influences their space perception; for example, rotations into that direction are impossible. Hence a symbiosis of two life forms is necessary for them to be able to see in all directions. Furthermore, many fundamental laws of physics are altered crucially: objects can roll uphill or no longer fall over when oriented suitably. There is negative kinetic energy and a fourth state of matter. Planets are no longer spherical, but hyperbolic, and therefore have three separate surfaces. Egan describes these details on his website. Plot. In the world of "Dichronauts", there are two types of beings living in symbiosis with each other: Walkers, who can only see to the west (or east when turning around), provide mobility, while Siders, leech-like creatures running through their skulls, provide additional sight to the north and south. Every city is in a permanent state of migration to follow the sun's shifting orbit and the narrow habitable zone it creates. The Walker Seth and his Sider Theo from the city of Baharabad on the river Zirona join an expedition to the edge of the habitable zone to map safe routes ahead. They encounter a river with the city of Thanton nearby, in which the Walkers seem to have used poison against their Siders. Seth talks with Theo about their symbiosis. Previously, his sister Elena had become pregnant, pushing her Sider Irina to abandon her, leaving her side-blind and with a hole in her head. Theo calls through Thanton in the language of the Siders, which is not audible to Walkers, and suspects the presence of Sleepsiders, pushing him to ask Seth about Sleepwalkers. Both agree to return to the expedition, where a vote decides against the return to Baharabad and for further exploration. Soon after, the expedition reaches a cliff with no far side or bottom visible and suspects it has reached the end of the world. Background (literature). "Dichronauts" describes the dual situation to Egan's earlier published "Orthogonal" trilogy, composed of "The Clockwork Rocket" (2011), "The Eternal Flame" (2012), and "The Arrows of Time" (2013). The trilogy is about a universe without any time dimensions at all. In the trilogy, the characters perceive a space dimension as time, while in "Dichronauts" the characters perceive a time dimension as space. Background (mathematics and physics). Mathematically, the difference between our universe and the "Dichronauts" universe is just a single sign switched in the signature of the metric of flat spacetime. Our universe has signature formula_0 and the "Dichronauts" universe has signature formula_1. A sign change in the signature can be shown in a simplified way by the restriction to two dimensions. A scalar product with signature formula_2 on formula_3 (with the canonical basis) is given by: formula_4. A scalar product with signature formula_5 on formula_3 (with the canonical basis) is given by: formula_6. The vectors formula_7 and formula_8 are orthogonal to each other (meaning their scalar product vanishes) for both signatures. But given the vector formula_9, the orthogonal direction is spanned by the vector formula_10 for the first and formula_11 for the second signature. 
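The two-dimensional example above can be reproduced numerically. In the following sketch (Python with NumPy, added here purely as an illustration), the two scalar products are encoded as diagonal metrics and the stated orthogonality relations are checked.
```python
import numpy as np

g_euclidean = np.diag([1.0, 1.0])    # signature (+,+)
g_split     = np.diag([1.0, -1.0])   # signature (+,-), as in the "Dichronauts" toy example

def dot(g, u, v):
    """Scalar product of u and v with respect to the metric g."""
    return u @ g @ v

u = np.array([2.0, 1.0])
print(dot(g_euclidean, u, np.array([-1.0, 2.0])))  # 0.0: orthogonal direction for (+,+)
print(dot(g_split,     u, np.array([ 1.0, 2.0])))  # 0.0: orthogonal direction for (+,-)
```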
Only the second signature allows for a vector like formula_12 to be orthogonal to itself. Such vectors describe the propagation of light; in this case, for example, one light-year is traveled in one year by definition. In the universe of "Dichronauts", this leads to the fact that not all of space is filled with light; instead, there are two dark cones in opposite directions. Calculations and illustrations of this effect are shown on Greg Egan's website. An interactive applet about the movement and rotation of objects in the "Dichronauts" universe is also available there. A fundamental change between our universe and the "Dichronauts" universe can be seen in mechanics, where a ramp acts upon an object resting on it with a force (to counteract gravity, so the object doesn't fall through the ramp) which is orthogonal to the ramp. When this force is combined with gravity, the resulting net force always pulls the object down the ramp in our universe, but pulls it up the ramp in the "Dichronauts" universe when the slope is below the diagonal. As a result, there is negative kinetic energy in the "Dichronauts" universe. Illustrations of this effect are shown on Greg Egan's website. In our universe with signature formula_0, a planet with radius formula_13 is described by the inequality formula_14 of a sphere, which is convex, bounded and has a surface with one connected component. In the "Dichronauts" universe with signature formula_1, a planet with radius formula_13 is described by the inequality formula_15 of a hyperboloid of revolution, which is concave, unbounded and has a surface with three connected components. In both cases, the acceleration of gravity is orthogonal to the surface. But not only is "orthogonal" different in the two universes, gravity is as well. The Laplace operator is given by formula_16 in our universe and formula_17 in the "Dichronauts" universe, which changes the form of the gravitational field given by the Poisson equation (of which the Laplace equation is the special case with no matter). Illustrations of the gravitational field are shown on Greg Egan's website. Reception. "Publishers Weekly" writes that the novel is "impressively bizarre" and that "Egan may have out-Eganed himself with this one". writes that "Egan specializes in inventing seriously strange worlds" and "this one might well be his weirdest yet", but "the problem is, it's counterintuitive, so downright odd that it's impossible to visualize the inhabitants, their surroundings, or what's going on." The symbiosis has "plenty of other issues" and the migration "is not even particularly original" when compared to "Inverted World" by Christopher Priest. Russell Letson writes in "Locus Magazine" that he tends to "wind up taking large chunks of Advanced Egan on faith" and has "found the "Orthogonal" trilogy and "Dichronauts" impenetrable", but that "still leaves plenty of Egan to work with." A French review by Éric Jentile was published in print in "Bifrost, #88" in October 2017.
[ { "math_id": 0, "text": "(-,+,+,+)" }, { "math_id": 1, "text": "(-,-,+,+)" }, { "math_id": 2, "text": "(+,+)" }, { "math_id": 3, "text": "\\mathbb{R}^2" }, { "math_id": 4, "text": "\\mathbb{R}^2\\times\\mathbb{R}^2\\rightarrow\\mathbb{R}^2,\n\\begin{pmatrix}\nx_1 \\\\\ny_1\n\\end{pmatrix}\\cdot\\begin{pmatrix}\nx_2 \\\\\ny_2\n\\end{pmatrix}\n=x_1x_2\n+y_1y_2" }, { "math_id": 5, "text": "(+,-)" }, { "math_id": 6, "text": "\\mathbb{R}^2\\times\\mathbb{R}^2\\rightarrow\\mathbb{R}^2,\n\\begin{pmatrix}\nx_1 \\\\\ny_1\n\\end{pmatrix}\\cdot\\begin{pmatrix}\nx_2 \\\\\ny_2\n\\end{pmatrix}\n=x_1x_2\n-y_1y_2" }, { "math_id": 7, "text": "(1,0)" }, { "math_id": 8, "text": "(0,1)" }, { "math_id": 9, "text": "(2,1)" }, { "math_id": 10, "text": "(-1,2)" }, { "math_id": 11, "text": "(1,2)" }, { "math_id": 12, "text": "(1,1)" }, { "math_id": 13, "text": "r" }, { "math_id": 14, "text": "x^2+y^2+z^2\\leq r^2" }, { "math_id": 15, "text": "x^2+y^2-z^2\\leq r^2" }, { "math_id": 16, "text": "\\Delta=\\partial_x^2+\\partial_y^2+\\partial_z^2" }, { "math_id": 17, "text": "\\Delta=\\partial_x^2+\\partial_y^2-\\partial_z^2" } ]
https://en.wikipedia.org/wiki?curid=75652065
75654215
Prabhakar function
The Prabhakar function is a special function in mathematics introduced by the Indian mathematician Tilak Raj Prabhakar in a paper published in 1971. The function is a three-parameter generalization of the well-known two-parameter Mittag-Leffler function. The function was originally introduced to solve certain classes of integral equations. Later the function was found to have applications in the theory of fractional calculus and also in certain areas of physics. Definition. The one-parameter and two-parameter Mittag-Leffler functions are defined first. Then the definition of the three-parameter Mittag-Leffler function, the Prabhakar function, is presented. In the following definitions, formula_0 is the well-known gamma function defined by formula_1. In the following it will be assumed that formula_2, formula_3 and formula_4 are all complex numbers. One-parameter Mittag-Leffler function. The one-parameter Mittag-Leffler function is defined as formula_5 Two-parameter Mittag-Leffler function. The two-parameter Mittag-Leffler function is defined as formula_6 Three-parameter Mittag-Leffler function (Prabhakar function). The three-parameter Mittag-Leffler function (Prabhakar function) is defined by formula_7 where formula_8 is the Pochhammer symbol (rising factorial). Elementary special cases. The following special cases immediately follow from the definition. Properties. Reduction formula. The following formula can be used to lower the value of the third parameter formula_4: formula_13 Relation with Fox–Wright function. The Prabhakar function is related to the Fox–Wright function by the following relation: formula_14 Derivatives. The derivative of the Prabhakar function is given by formula_15 There is a general expression for higher order derivatives. Let formula_16 be a positive integer. The formula_16-th derivative of the Prabhakar function is given by formula_17 The following result is useful in applications. formula_18 Integrals. The following result involving the Prabhakar function is known. formula_19 Laplace transforms. The following result involving Laplace transforms plays an important role in both physical applications and numerical computations of the Prabhakar function. formula_20 Prabhakar fractional calculus. The following function is known as the Prabhakar kernel in the literature. formula_21 Given any function formula_22, the convolution of the Prabhakar kernel and formula_22 is called the Prabhakar fractional integral: formula_23 Properties of the Prabhakar fractional integral have been extensively studied in the literature. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
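Because the definition formula_7 is a plain power series, the Prabhakar function can be evaluated numerically by truncating that series. The sketch below (the truncation length and test value are arbitrary choices made here for illustration) accumulates the series term by term using the recurrence for the Pochhammer symbol and checks the special case formula_12, i.e. that the function reduces to the exponential when all three parameters equal 1.

import math

def prabhakar(alpha, beta, gamma, z, terms=60):
    # Truncated series: sum over n of (gamma)_n / (n! * Gamma(alpha*n + beta)) * z^n
    total, poch = 0.0, 1.0              # (gamma)_0 = 1
    for n in range(terms):
        total += poch * z**n / (math.factorial(n) * math.gamma(alpha * n + beta))
        poch *= gamma + n               # (gamma)_{n+1} = (gamma)_n * (gamma + n)
    return total

print(prabhakar(1.0, 1.0, 1.0, 2.0))    # ~7.3890..., matching exp(2)
print(math.exp(2.0))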
[ { "math_id": 0, "text": "\\Gamma (z)" }, { "math_id": 1, "text": "\\Gamma(z)= \\int_0^\\infty t^{z-1}e^{-z}\\, dz, \\quad\\Re(z) > 0" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\\beta" }, { "math_id": 4, "text": "\\gamma" }, { "math_id": 5, "text": "E_\\alpha(z)=\\sum_{n=0}^\\infty \\dfrac{z^n}{\\Gamma(\\alpha n + 1)}." }, { "math_id": 6, "text": "E_{\\alpha, \\beta}(z)=\\sum_{n=0}^\\infty \\dfrac{z^n}{\\Gamma(\\alpha n + \\beta)}, \\quad \\Re(\\alpha) >0." }, { "math_id": 7, "text": "E_{\\alpha, \\beta}^\\gamma (z)=\\sum_{n=0}^\\infty \\dfrac{(\\gamma)_n}{n!\\Gamma(\\alpha n + \\beta)} z^n, \\quad \\Re(\\alpha)>0" }, { "math_id": 8, "text": "(\\gamma)_n=\\gamma(\\gamma+1)\\ldots (\\gamma+n-1)" }, { "math_id": 9, "text": "E_{\\alpha,\\beta}^0 (z)= \\frac{1}{\\Gamma(\\beta)}" }, { "math_id": 10, "text": "E_{\\alpha,\\beta}^1 (z)= E_{\\alpha,\\beta}(z)" }, { "math_id": 11, "text": "E_{\\alpha,1}^1 (z)= E_{\\alpha}(z)" }, { "math_id": 12, "text": "E_{1, 1}^1 (z) = e^z" }, { "math_id": 13, "text": " E_{\\alpha,\\beta}^{\\gamma+1} (z) = \\frac{1}{\\alpha\\gamma}\\big[ E_{\\alpha, \\beta -1}^\\gamma(z) + (1-\\beta+\\alpha\\gamma)E_{\\alpha,\\beta}^\\gamma(z)\\big]" }, { "math_id": 14, "text": "E_{\\alpha, \\beta}^\\gamma(z) = \\frac{1}{\\Gamma(\\gamma)}{}_1\\Psi_1 \\left(\\begin{matrix}\\left(\\gamma, 1\\right)\\\\(\\beta,\\alpha)\\end{matrix};z \\right)" }, { "math_id": 15, "text": " \\frac{d}{dz}\\left( E_{\\alpha,\\beta}^\\gamma(z)\\right) = \\frac{1}{\\alpha z}\\big[ E_{\\alpha, \\beta-1}^\\gamma(z) + (1-\\beta)E_{\\alpha, \\beta}^\\gamma\\big] " }, { "math_id": 16, "text": "m" }, { "math_id": 17, "text": " \\frac{d^m}{dz^m}\\left( E_{\\alpha,\\beta}^\\gamma(z)\\right) = \\frac{\\Gamma(\\gamma+m)}{\\Gamma(\\gamma)}E_{\\alpha, m\\alpha+\\beta}^{\\gamma+m}(z)" }, { "math_id": 18, "text": "\\frac{d^m}{dz^m}\\left( t^{\\beta-1} E_{\\alpha,\\beta}^\\gamma(t^\\alpha z) \\right) = t^{\\beta - m -1} E_{\\alpha, \\beta-m}^\\gamma(t^\\alpha z)" }, { "math_id": 19, "text": " \\int_0^t \\tau^{\\beta-1} E_{\\alpha,\\beta}^\\gamma(\\tau^\\alpha z) = t^\\beta E_{\\alpha, \\beta+1}^\\gamma(t^\\alpha z)" }, { "math_id": 20, "text": " L \\left[ t^{\\beta - 1} E_{\\alpha, \\beta}^\\gamma (t^\\alpha z)\\, ;\\, s\\right] =\\frac{s^{\\alpha\\gamma-\\beta}}{(s^\\alpha - z)^\\gamma},\\quad \\Re(s)>0,\\quad |s|>|z|^{1/\\alpha}" }, { "math_id": 21, "text": " e_{\\alpha,\\beta}^\\gamma (t;\\lambda) = t^{\\beta -1} E_{\\alpha,\\beta}^\\gamma(t^\\alpha z)" }, { "math_id": 22, "text": "f(t)" }, { "math_id": 23, "text": " \\int_{t_0}^t (t-u)^{\\beta -1} E_{\\alpha,\\beta}^\\gamma \\left( \\lambda (t-u)^\\alpha\\right) f(u)\\, du" } ]
https://en.wikipedia.org/wiki?curid=75654215
7566175
K-vertex-connected graph
Graph which remains connected when k or fewer nodes removed In graph theory, a connected graph G is said to be k-vertex-connected (or k-connected) if it has more than k vertices and remains connected whenever fewer than k vertices are removed. The vertex-connectivity, or just connectivity, of a graph is the largest k for which the graph is k-vertex-connected. Definitions. A graph (other than a complete graph) has connectivity "k" if "k" is the size of the smallest subset of vertices whose deletion disconnects the graph. In complete graphs, there is no subset whose removal would disconnect the graph. Some sources modify the definition of connectivity to handle this case, by defining it as the size of the smallest subset of vertices whose deletion results in either a disconnected graph or a single vertex. For this variation, the connectivity of a complete graph formula_0 is formula_1. An equivalent definition is that a graph with at least two vertices is "k"-connected if, for every pair of its vertices, it is possible to find "k" vertex-independent paths connecting these vertices; see Menger's theorem. This definition produces the same answer, "n" − 1, for the connectivity of the complete graph "K""n", since the complete graph with "n" vertices clearly has connectivity "n" − 1 under this definition. A 1-connected graph is called connected; a 2-connected graph is called biconnected. A 3-connected graph is called triconnected. Applications. Components. Every graph decomposes into a tree of 1-connected components. 1-connected graphs decompose into a tree of biconnected components. 2-connected graphs decompose into a tree of triconnected components. Polyhedral combinatorics. The 1-skeleton of any "k"-dimensional convex polytope forms a "k"-vertex-connected graph (Balinski's theorem). As a partial converse, Steinitz's theorem states that any 3-vertex-connected planar graph forms the skeleton of a convex polyhedron. Computational complexity. The vertex-connectivity of an input graph "G" can be computed in polynomial time in the following way: consider all possible pairs formula_2 of nonadjacent nodes to disconnect; use Menger's theorem to justify that the minimal-size separator for formula_2 is the number of pairwise vertex-independent paths between them; encode the input by doubling each vertex as an edge, reducing the problem to a computation of the number of pairwise edge-independent paths; and compute the maximum number of such paths by computing the maximum flow in the graph between formula_3 and formula_4 with capacity 1 on each edge, noting that a flow of formula_5 in this graph corresponds, by the integral flow theorem, to formula_5 pairwise edge-independent paths from formula_3 to formula_4. Properties. The cycle space of a formula_8-connected graph is generated by its non-separating induced cycles. "k"-linked graph. A graph with at least formula_6 vertices is called formula_5-linked if, for any sequences formula_9 and formula_10 of formula_6 distinct vertices, there are formula_5 disjoint paths, one connecting each vertex of the first sequence to the corresponding vertex of the second. Every formula_5-linked graph is formula_11-connected, but not necessarily formula_6-connected. If a graph is formula_6-connected and has average degree of at least formula_12, then it is formula_5-linked. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
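For intuition, the definition can be checked directly on small graphs by exhaustive search. The sketch below is not the polynomial-time max-flow method described above; it simply tries all vertex subsets in increasing size until the graph disconnects, using the complete-graph convention that the connectivity of formula_0 is formula_1.

from itertools import combinations

def is_connected(vertices, edges):
    vs = set(vertices)
    if len(vs) <= 1:
        return True
    adj = {v: set() for v in vs}
    for u, v in edges:
        if u in vs and v in vs:
            adj[u].add(v)
            adj[v].add(u)
    start = next(iter(vs))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return seen == vs

def vertex_connectivity(vertices, edges):
    n = len(vertices)
    for k in range(n - 1):                 # try removing k = 0, 1, ... vertices
        for removed in combinations(vertices, k):
            rest = [v for v in vertices if v not in removed]
            if not is_connected(rest, edges):
                return k
    return n - 1                            # complete-graph convention

# A 4-cycle is 2-connected: removing {0, 2} disconnects it, no single vertex does.
print(vertex_connectivity([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2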
[ { "math_id": 0, "text": "K_n" }, { "math_id": 1, "text": "n-1" }, { "math_id": 2, "text": "(s, t)" }, { "math_id": 3, "text": "s" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "k" }, { "math_id": 6, "text": "2k" }, { "math_id": 7, "text": "G" }, { "math_id": 8, "text": "3" }, { "math_id": 9, "text": "a_1,\\dots,a_k" }, { "math_id": 10, "text": "b_1,\\dots, b_k" }, { "math_id": 11, "text": "(2k-1)" }, { "math_id": 12, "text": "16k" } ]
https://en.wikipedia.org/wiki?curid=7566175
75662972
Air to ground channel
Communication link between airborne and terrestrial devices In the domain of wireless communication, air-to-ground channels (A2G) are used for linking airborne devices, such as drones and aircraft, with terrestrial communication equipment. These channels are instrumental in a wide array of applications, extending beyond commercial telecommunications — including important roles in 5G and forthcoming 6G networks, where aerial base stations are integral to Non-Terrestrial Networks — to encompass critical uses in emergency response, environmental monitoring, military communications, and the expanding domain of the internet of things (IoT). A comprehensive understanding of A2G channels, their operational mechanics, and distinct attributes is essential for the enhancement of wireless network performance (range of signal coverage, data transfer speeds, and overall connection reliability). In wireless communication networks, the channel of propagation serves as the medium between the transmitter and the receiver. The characteristics of this channel largely dictate the operational limits of wireless networks in terms of range, throughput, and latency, thereby significantly influencing technological design decisions. Consequently, the characterization and modeling of these channels are of paramount importance. A2G channels are notably characterized by a high probability of line-of-sight (LOS) propagation, a critical factor for higher frequency transmissions like mmWaves and THz. This feature leads to enhanced reliability of links and a reduction in the necessary transmission power to meet the desired link budget. Moreover, for non-line-of-sight (NLOS) links, especially at lower frequencies, the variations in power are less pronounced compared to terrestrial communication networks, attributed to the fact that only the ground-based elements of the link encounter obstacles affecting propagation. Basics of signal propagation. Electromagnetic waves emitted by the transmitter propagate in multiple directions. These waves interact with the environment via different propagation phenomena before reaching the receiver. The figure below demonstrates how processes like specular reflection, diffraction, scattering, and penetration, or a combination thereof, can play a role in wave propagation. It's also important to consider the potential obstructions in the signal's path. The signal received is essentially a combination of multiple versions of the original signal, known as Multipath Components (MPCs), each arriving with varying amplitude, delay (phase), and direction. This results in a coherent aggregate of all these signal copies, which may enhance or weaken the overall signal, depending on the random phases of these components. Radio channels are typically characterized as a superposition of various fading phenomena: formula_0 Here, formula_1 refers to the distance-dependent Pathloss (PL), formula_2 denotes Shadow fading, which accounts for large-scale power variations due to environmental factors, and formula_3 represents Small-Scale or fast fading. The following sections detail the modeling of these components. Channel modeling. There exist several channel models not drawing an explicit distinction between LOS and NLOS channels. However, the most common channel modeling approach consists of the four following steps: Line-of-sight modeling. In cases where the distinction between LOS (Line-of-Sight) and NLOS (Non-Line-of-Sight) links is made, modeling the LOS probability formula_4 becomes critical. 
The most popular approach to deriving these statistics is based on creating a geometrical model (e.g., Manhattan grid) of the propagation environment. Simplified 2D model: a popular approach suggested by the International Telecommunication Union (ITU). According to the ITU, the LOS probability is given by: formula_5 where formula_6, formula_7 is the horizontal distance between the UAV and the ground node, formula_8 and formula_9 are the terminal heights, formula_10 is the ratio of land area covered by buildings compared to the total land area, formula_11 is the mean number of buildings per km², and formula_12 is the scale parameter of the building height distribution (assumed to follow a Rayleigh distribution). In some cases, it is more convenient to express the LOS probability as a function of incident or elevation angle. Note that the expression is independent of the azimuth angle; consequently, the orientation over the city layout is not taken into account, resulting in a 2D model even though the terminal heights are used. The NLOS probability is computed from the LOS probability by the following equation: formula_13 Pathloss modeling. Path loss represents the reduction in power density of an electromagnetic wave as it propagates through space. This attenuation is a critical factor in wireless communication, including A2G channels. A basic path loss model considers a Line-of-Sight (LOS) scenario where the signal travels freely without obstructions between the transmitter and receiver. The formula for calculating the received signal power under these conditions is as follows: formula_14 Here, formula_15 denotes the power of the transmitted signal, formula_16 and formula_17 are the gains of the transmitting and receiving antennas, respectively, formula_18 is the wavelength of the carrier signal, and formula_19 represents the distance between the transmitter and receiver. The Path Loss Exponent (PLE), formula_20, typically has a value of 2 in a free-space environment, indicative of free-space propagation. However, PLE can take other values depending on the propagation environment. Therefore, the general expression for path loss can be represented as: formula_21 However, real-world A2G communication scenarios often differ from ideal free-space conditions. The log-distance path loss model, which considers a reference point for free-space propagation, is frequently utilized to estimate path loss in more complex environments (expressed in decibels): formula_22 where formula_23 is the path loss at a reference distance formula_24, which can be calculated or predetermined based on free-space path loss (formula_25). Incorporating both LOS and Non-Line-of-Sight (NLOS) conditions, the average path loss can be estimated by combining the path loss values for these two scenarios: formula_26 In this formula, formula_27 and formula_28 refer to the path loss values for LOS and NLOS conditions, respectively, while formula_4 indicates the likelihood of an LOS link between the UAV and the ground station. The respective PLE values for formula_27 and formula_28 are detailed in various studies. Additionally, atmospheric absorption and rain attenuation can also lead to significant power loss for mmWaves and THz frequency bands. Modeling of shadowing and small-scale fading. Beyond path loss, the presence of large structures like buildings, trees, and vehicles introduces specific, random variations in the power of received signals.
These changes, known as Shadow fading, generally evolve at a slower pace, spanning tens to hundreds of wavelengths. Shadow fading at a given distance formula_19 is typically represented as a normal random variable formula_2 in decibels (dB), with a variance formula_29. This variance reflects the deviations in received power around the mean path loss. On a smaller scale, fading involves rapid changes in received signal strength over shorter distances, typically within the span of a few wavelengths. These fluctuations arise from the interference of Multipath Components (MPCs) that converge at the receiver. To quantify this behavior, statistical models such as the Rayleigh and Rice distributions are frequently used. Both are founded on complex Gaussian statistics. In environments with many MPCs, each with distinct amplitudes and random phases, small-scale fading often adheres to a Rayleigh distribution. Particularly in Air-to-Air (A2A) and Air-to-Ground (A2G) channels, where Line-of-Sight (LOS) propagation predominates, a Ricean distribution is a more appropriate model. Additionally, other models like Nakagami, chi-squared (formula_30), and non-central formula_30 distributions are also considered relevant in certain scenarios. Notably, the formula_30 family of distributions encompasses many of these models. For the modeling of small-scale fading, Geometry-Based Stochastic Channel Models (GBSCM) are among the most widely used methodologies. These models are developed through empirical measurements or geometric analysis and simulation, accommodating the inherently stochastic nature of signal variation. GBSCM is particularly effective in modeling narrow-band channels or the taps of wideband models that employ a tapped delay line approach. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
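To make the LOS and path-loss models above concrete, the sketch below implements the ITU-style product formula_5 and the LOS/NLOS-weighted log-distance path loss formula_26. All numerical inputs (building coverage ratio, building density, height scale, terminal heights, reference loss and the two path-loss exponents) are arbitrary illustrative values chosen here, not recommended parameters; the horizontal distance is assumed to be in kilometres so that it is consistent with a building density given per square kilometre.

import math

def p_los(d_h_km, h_uav, h_g, varsigma, xi_per_km2, omega):
    # m = floor(d_h * sqrt(varsigma * xi) - 1): number of potentially blocking buildings
    m = math.floor(d_h_km * math.sqrt(varsigma * xi_per_km2) - 1)
    p = 1.0
    for n in range(m + 1):
        barrier = h_uav - (n + 0.5) * (h_uav - h_g) / (m + 1)
        p *= 1.0 - math.exp(-barrier**2 / (2.0 * omega**2))
    return p

def path_loss_db(d_m, d0_m, pl0_db, eta):
    # Log-distance model: PL(d) = PL0 + 10 * eta * log10(d / d0)
    return pl0_db + 10.0 * eta * math.log10(d_m / d0_m)

p = p_los(d_h_km=0.5, h_uav=100.0, h_g=1.5, varsigma=0.3, xi_per_km2=300.0, omega=20.0)
# Weighted average of a LOS and an NLOS path loss at 520 m (PL0 = 40 dB at 1 m is
# roughly the free-space loss near 2.4 GHz; exponents 2.0 and 3.5 are illustrative).
avg_loss = p * path_loss_db(520.0, 1.0, 40.0, 2.0) + (1.0 - p) * path_loss_db(520.0, 1.0, 40.0, 3.5)
print(round(p, 3), round(avg_loss, 1))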
[ { "math_id": 0, "text": "H=\\Lambda+X_{sh}+X_{SS}," }, { "math_id": 1, "text": "\\Lambda" }, { "math_id": 2, "text": "X_{sh}" }, { "math_id": 3, "text": "X_{SS}" }, { "math_id": 4, "text": "P_{\\text{LOS}}" }, { "math_id": 5, "text": "P_{\\text{LOS}} = \\prod_{n=0}^{m} \\left[1 - \\exp \\left(-\\frac{\\left[h_{\\text{UAV}} - \\frac{(n + \\frac{1}{2})(h_{\\text{UAV}} - h_{\\text{G}})}{m + 1}\\right]^2}{2\\Omega^2}\\right)\\right]," }, { "math_id": 6, "text": "m = \\text{floor} \\left(d_h \\sqrt{\\varsigma \\xi} - 1\\right)" }, { "math_id": 7, "text": "d_h" }, { "math_id": 8, "text": "h_{\\text{UAV}}" }, { "math_id": 9, "text": "h_{\\text{G}}" }, { "math_id": 10, "text": "\\varsigma" }, { "math_id": 11, "text": "\\xi" }, { "math_id": 12, "text": "\\Omega" }, { "math_id": 13, "text": "P_{\\text{NLOS}} = 1 - P_{\\text{LOS}}." }, { "math_id": 14, "text": "P_{\\text{R}} = P_{\\text{T}} \\times G_{\\text{T}} \\times G_{\\text{R}} \\times \\left(\\frac{\\lambda}{4 \\pi d}\\right)^\\eta," }, { "math_id": 15, "text": "P_{\\text{T}}" }, { "math_id": 16, "text": "G_{\\text{T}}" }, { "math_id": 17, "text": "G_{\\text{R}}" }, { "math_id": 18, "text": "\\lambda" }, { "math_id": 19, "text": "d" }, { "math_id": 20, "text": "\\eta" }, { "math_id": 21, "text": "\\Lambda = \\left(\\frac{4\\pi d}{\\lambda}\\right)^\\eta." }, { "math_id": 22, "text": "\\Lambda(d) = \\Lambda_0 + 10 \\eta \\log \\left(\\frac{d}{d_0}\\right)," }, { "math_id": 23, "text": "\\Lambda_0" }, { "math_id": 24, "text": "d_0" }, { "math_id": 25, "text": "\\Lambda_0 = 20 \\log \\left[\\frac{4\\pi d_0}{\\lambda}\\right]" }, { "math_id": 26, "text": "\\Lambda = P_{\\text{LOS}} \\times \\Lambda_{\\text{LOS}} + (1 - P_{\\text{LOS}}) \\times \\Lambda_{\\text{NLOS}}," }, { "math_id": 27, "text": "\\Lambda_{\\text{LOS}}" }, { "math_id": 28, "text": "\\Lambda_{\\text{NLOS}}" }, { "math_id": 29, "text": "\\sigma" }, { "math_id": 30, "text": "\\chi^2" } ]
https://en.wikipedia.org/wiki?curid=75662972
75668874
Absolutely and completely monotonic functions and sequences
In mathematics, the notions of an absolutely monotonic function and a completely monotonic function are two very closely related concepts. Both imply very strong monotonicity properties. Both types of functions have derivatives of all orders. In the case of an absolutely monotonic function, the function as well as its derivatives of all orders must be non-negative in its domain of definition, which implies that the function as well as its derivatives of all orders are monotonically increasing functions in the domain of definition. In the case of a completely monotonic function, the function and its derivatives must be alternately non-negative and non-positive in its domain of definition, which implies that the function and its derivatives are alternately monotonically increasing and monotonically decreasing functions. Such functions were first studied by S. Bernshtein in 1914 and the terminology is also due to him. There are several other related notions like the concepts of almost completely monotonic function, logarithmically completely monotonic function, strongly logarithmically completely monotonic function, strongly completely monotonic function and almost strongly completely monotonic function. Another related concept is that of a completely/absolutely monotonic sequence. This notion was introduced by Hausdorff in 1921. The notions of completely and absolutely monotone function/sequence play an important role in several areas of mathematics. For example, in classical analysis they occur in the proof of the positivity of integrals involving Bessel functions or the positivity of Cesàro means of certain Jacobi series. Such functions occur in other areas of mathematics such as probability theory, numerical analysis, and elasticity. Definitions. Functions. A real valued function formula_0 defined over an interval formula_1 in the real line is called an absolutely monotonic function if it has derivatives formula_2 of all orders formula_3 and formula_4 for all formula_5 in formula_1. The function formula_0 is called a completely monotonic function if formula_6 for all formula_5 in formula_1. The two notions are mutually related. The function formula_0 is completely monotonic if and only if formula_7 is absolutely monotonic on formula_8 where formula_9 is the interval obtained by reflecting formula_1 with respect to the origin. (Thus, if formula_1 is the interval formula_10 then formula_9 is the interval formula_11.) In applications, the interval on the real line that is usually considered is the closed-open right half of the real line, that is, the interval formula_12. Examples. The following functions are absolutely monotonic in the specified regions: the constant function formula_13, where formula_14 is a non-negative constant, in the region formula_15; any power series formula_16 with formula_17 for all formula_18, in the region formula_19 (wherever the series converges); the function formula_20 in the region formula_21; and the function formula_22 in the region formula_23. Sequences. A sequence formula_24 is called an absolutely monotonic sequence if its elements are non-negative and its successive differences are all non-negative, that is, if formula_25 where formula_26. A sequence formula_24 is called a completely monotonic sequence if its elements are non-negative and its successive differences are alternately non-positive and non-negative, that is, if formula_27 Examples. The sequences formula_28 and formula_29 for formula_30 are completely monotonic sequences. Some important properties. Both the extensions and applications of the theory of absolutely monotonic functions derive from the following theorems. A function that is absolutely monotonic on a closed interval formula_31 can be extended to an analytic function on the region defined by formula_32. A function that is absolutely monotonic on formula_34 can be represented there as the integral formula_35 where formula_36 is non-decreasing and bounded on formula_33. A sequence formula_37 is a completely monotonic sequence if and only if there exists a non-decreasing bounded function formula_38 on formula_39 such that formula_40 The determination of this function from the sequence is referred to as the Hausdorff moment problem. Further reading.
The following is a selection from the large body of literature on absolutely/completely monotonic functions/sequences. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
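As a small numerical illustration of the sequence definition above, the sketch below verifies, in exact rational arithmetic, that the alternating differences formula_27 are non-negative for the sequence formula_28 (only over a finite range of indices, which is all a direct check can do).

from fractions import Fraction
from math import comb

def alternating_difference(mu, n, k):
    # (-1)^k * Delta^k mu_n, with Delta^k mu_n = sum_m (-1)^m * C(k, m) * mu_{n+k-m}
    return (-1) ** k * sum((-1) ** m * comb(k, m) * mu(n + k - m) for m in range(k + 1))

mu = lambda n: Fraction(1, n + 1)
print(all(alternating_difference(mu, n, k) >= 0 for n in range(12) for k in range(12)))  # True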
[ { "math_id": 0, "text": "f(x)" }, { "math_id": 1, "text": "I" }, { "math_id": 2, "text": "f^{(n)}(x)" }, { "math_id": 3, "text": "n=0,1,2,\\ldots" }, { "math_id": 4, "text": "f^{(n)}(x) \\ge 0" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "(-1)^nf^{(n)}(x) \\ge 0" }, { "math_id": 7, "text": "f(-x)" }, { "math_id": 8, "text": "-I " }, { "math_id": 9, "text": "-I" }, { "math_id": 10, "text": "(a,b)" }, { "math_id": 11, "text": "(-b,-a)" }, { "math_id": 12, "text": "[0, \\infty)" }, { "math_id": 13, "text": "f(x)=c" }, { "math_id": 14, "text": " c" }, { "math_id": 15, "text": " -\\infty <x < \\infty " }, { "math_id": 16, "text": "f(x) = \\sum_{k=0}^\\infty a_k x^k " }, { "math_id": 17, "text": "a_k\\ge 0 " }, { "math_id": 18, "text": " k " }, { "math_id": 19, "text": "0\\le x < \\infty " }, { "math_id": 20, "text": " f(x) = -\\log (-x)" }, { "math_id": 21, "text": "-1 \\le x <0 " }, { "math_id": 22, "text": " f(x)=\\sin^{-1}x" }, { "math_id": 23, "text": "0\\le x\\le 1 " }, { "math_id": 24, "text": "\\{\\mu_n\\}_{n=0}^\\infty" }, { "math_id": 25, "text": "\\Delta^k\\mu_n\\ge 0, \\quad n,k = 0,1,2,\\ldots " }, { "math_id": 26, "text": "\\Delta^k\\mu_n = \\sum_{m=0}^k (-1)^m {k \\choose m}\\mu_{n+k-m}" }, { "math_id": 27, "text": "(-1)^k\\Delta^k\\mu_n\\ge 0, \\quad n,k = 0,1,2,\\ldots " }, { "math_id": 28, "text": "\\left\\{\\frac{1}{n+1}\\right\\}_0^\\infty" }, { "math_id": 29, "text": "\\{c^n\\}_0^\\infty" }, { "math_id": 30, "text": " 0\\le c \\le 1 " }, { "math_id": 31, "text": "[a,b]" }, { "math_id": 32, "text": "|x-a| < b-a" }, { "math_id": 33, "text": "[0,\\infty)" }, { "math_id": 34, "text": "(-\\infty,0]" }, { "math_id": 35, "text": " f(x) = \\int_0^\\infty e^{xt}\\, d\\mu(t)" }, { "math_id": 36, "text": "\\mu(t)" }, { "math_id": 37, "text": " \\{\\mu_n\\}_0^\\infty" }, { "math_id": 38, "text": "\\alpha(t)" }, { "math_id": 39, "text": "[0,1]" }, { "math_id": 40, "text": " \\mu_n = \\int_0^1 t^n \\, d\\alpha(t), \\quad n=0,1,2,\\ldots" } ]
https://en.wikipedia.org/wiki?curid=75668874
75675151
Regular estimator
Class of statistical estimators Regular estimators are a class of statistical estimators that satisfy certain regularity conditions which make them amenable to asymptotic analysis. The convergence of a regular estimator's distribution is, in a sense, locally uniform. This is often considered desirable and leads to the convenient property that a small change in the parameter does not dramatically change the distribution of the estimator. Definition. An estimator formula_0 of formula_1 based on a sample of size formula_2 is said to be regular if, for every formula_3, formula_4 where the convergence is in distribution under the law of formula_5. Examples of non-regular estimators. Both Hodges' estimator and the James-Stein estimator are non-regular when the population parameter formula_6 is exactly 0. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
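The non-regularity of Hodges' estimator at 0 can be seen in a small simulation. The sketch below uses one common form of the estimator, which replaces the sample mean by zero whenever its absolute value falls below n^(-1/4); the sample size, the local parameter h and the number of replications are arbitrary choices made here. Under the local parameter value h/sqrt(n), almost all of the rescaled error sqrt(n)*(estimator - h/sqrt(n)) ends up at exactly -h, so the limiting distribution depends on h, contradicting the displayed regularity requirement.

import math, random, statistics

def hodges(sample):
    # Truncated-mean form of Hodges' estimator (threshold n**(-1/4) is one common choice)
    n = len(sample)
    m = statistics.fmean(sample)
    return m if abs(m) >= n ** -0.25 else 0.0

n, h, reps = 10_000, 5.0, 200
theta_n = h / math.sqrt(n)          # local parameter theta + h/sqrt(n), with theta = 0
rng = random.Random(0)
errors = []
for _ in range(reps):
    x = [rng.gauss(theta_n, 1.0) for _ in range(n)]
    errors.append(math.sqrt(n) * (hodges(x) - theta_n))
share_at_minus_h = sum(abs(e + h) < 1e-9 for e in errors) / reps
print(share_at_minus_h)             # close to 1.0: the limit law depends on h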
[ { "math_id": 0, "text": " \\hat{\\theta}_n " }, { "math_id": 1, "text": "\\psi(\\theta)" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "h" }, { "math_id": 4, "text": " \\sqrt n \\left ( \\hat{\\theta}_n - \\psi (\\theta + h/\\sqrt n) \\right ) \\stackrel{\\theta+h/\\sqrt n} {\\rightarrow} L_\\theta" }, { "math_id": 5, "text": " \\theta + h/\\sqrt n" }, { "math_id": 6, "text": "\\theta" } ]
https://en.wikipedia.org/wiki?curid=75675151
75679597
Indium arsenide antimonide
Indium arsenide antimonide, also known as indium antimonide arsenide or InAsSb (InAs1-xSbx), is a ternary III-V semiconductor compound. It can be considered as an alloy between indium arsenide (InAs) and indium antimonide (InSb). The alloy can contain any ratio between arsenic and antimony. InAsSb refers generally to any composition of the alloy. Preparation. InAsSb films have been grown by molecular beam epitaxy (MBE), metalorganic vapor phase epitaxy (MOVPE) and liquid phase epitaxy (LPE) on gallium arsenide and gallium antimonide substrates. It is often incorporated into layered heterostructures with other III-V compounds. Thermodynamic stability. Between 524 °C and 942 °C (the melting points of pure InSb and InAs, respectively), InAsSb can exist at a two-phase liquid-solid equilibrium, depending on temperature and average composition of the alloy. InAsSb possesses an additional miscibility gap at temperatures below approximately 503 °C. This means that intermediate compositions of the alloy below this temperature are thermodynamically unstable and can spontaneously separate into two phases: one InAs-rich and one InSb-rich. This limits the compositions of InAsSb that can be obtained by near-equilibrium growth techniques, such as LPE, to those outside of the miscibility gap. However, compositions of InAsSb within the miscibility gap can be obtained with non-equilibrium growth techniques, such as MBE and MOVPE. By carefully selecting the growth conditions and maintaining relatively low temperatures during and after growth, it is possible to obtain compositions of InAsSb within the miscibility gap that are kinetically stable. Electronic properties. The bandgap and lattice constant of InAsSb alloys are between those of pure InAs (a = 0.606 nm, Eg = 0.35 eV) and InSb (a = 0.648 nm, Eg = 0.17 eV). Over all compositions, the band gap is direct, like in InAs and InSb. The direct bandgap displays strong bowing, reaching a minimum with respect to composition at approximately x = 0.62 at room temperature and lower temperatures. The following empirical relationship has been suggested for the direct bandgap of InAsSb in eV as a function of composition (0 &lt; x &lt; 1) and temperature (in Kelvin): formula_0 This equation is plotted in the figures, using a suggested bowing parameter of C = 0.75 eV. Slightly different relations have also been suggested for Eg as a function of composition and temperature, depending on the material quality, strain, and defect density. Applications. Because of its small direct bandgap, InAsSb has been extensively studied over the last few decades, predominantly for use in mid- to long-wave infrared photodetectors that operate at room temperature and cryogenic temperatures. InAsSb is used as the active material in some commercially available infrared photodetectors. Depending on the heterostructure and detector configuration that is used, InAsSb-based detectors can operate at wavelengths ranging from approximately 2 μm to 11 μm. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
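The empirical relation quoted above is easy to evaluate directly. The following sketch transcribes it with the suggested bowing parameter C = 0.75 eV and prints the room-temperature band gap at the two end compositions and near the composition of the band-gap minimum (x of about 0.62) mentioned above.

def band_gap_ev(x, T, C=0.75):
    # Empirical InAs(1-x)Sb(x) direct band gap in eV; T in kelvin, x is the Sb fraction
    return (0.417
            - 1.28e-4 * T
            - 2.6e-7 * T**2
            - x * (C + 0.182 + 1e-9 * T**2)
            + x**2 * (C - 5.8e-4 + 1e-7 * T**2))

for x in (0.0, 0.62, 1.0):
    print(x, round(band_gap_ev(x, 300.0), 3))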
[ { "math_id": 0, "text": "E_{g}(x,T) = 0.417 - (1.28\\cdot 10^{-4})T - (2.6\\cdot 10^{-7})T^{2} - x(C + 0.182 + (10^{-9})T^{2}) + x^{2}(C - (5.8\\cdot 10^{-4}) + (10^{-7})T^{2})" } ]
https://en.wikipedia.org/wiki?curid=75679597
75689664
Donsker classes
Classes of functions A class of functions is considered a Donsker class if it satisfies Donsker's theorem, a functional generalization of the central limit theorem. Definition. A class of functions formula_0 is called a Donsker class if the empirical process indexed by formula_0, formula_1, converges in distribution to a Gaussian process in the space formula_2. In particular, for every finite set of functions formula_3, the random vector formula_5 (computed from a sample of size formula_4) converges in distribution to a multivariate normal vector as formula_6. The empirical process formula_7 is defined by formula_8 where formula_9 is the empirical measure based on an iid sample formula_10 and formula_11 is the probability measure from which the sample is drawn. Examples and Sufficient Conditions. Classes of functions which have finite Dudley's entropy integral are Donsker classes. This includes the class of indicator functions formula_12 underlying empirical distribution functions, as well as parametric classes over bounded parameter spaces. More generally, any VC class is also a Donsker class. Properties. Classes of functions formed by taking infima or suprema of functions in a Donsker class also form a Donsker class. Donsker's Theorem. Donsker's theorem states that the empirical distribution function, when properly normalized, converges weakly to a Brownian bridge—a continuous Gaussian process. This is significant as it assures that results analogous to the central limit theorem hold for empirical processes, thereby enabling asymptotic inference for a wide range of statistical applications. The concept of the Donsker class is influential in the field of asymptotic statistics. Knowing whether a function class is a Donsker class helps in understanding the limiting distribution of empirical processes, which in turn facilitates the construction of confidence bands for function estimators and hypothesis testing. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
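For the classical Donsker class of indicators formula_12, the empirical process evaluated at these indicators is sqrt(n)*(F_n(t) - F(t)), and its supremum over t is the scaled Kolmogorov-Smirnov statistic, whose limit is the supremum of a Brownian bridge. The sketch below simulates a few values of this supremum for uniform(0,1) samples; the sample size and number of repetitions are arbitrary choices.

import math, random

def scaled_ks_statistic(n, rng):
    # sqrt(n) * sup_t |F_n(t) - t| for a uniform(0,1) sample, via the order statistics
    x = sorted(rng.random() for _ in range(n))
    d = max(max((i + 1) / n - x[i], x[i] - i / n) for i in range(n))
    return math.sqrt(n) * d

rng = random.Random(1)
print([round(scaled_ks_statistic(2000, rng), 3) for _ in range(5)])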
[ { "math_id": 0, "text": " \\mathcal{F} " }, { "math_id": 1, "text": " \\{\\mathbb{G}_n(f): f \\in \\mathcal{F}\\} " }, { "math_id": 2, "text": " l^{\\infty}(\\mathcal{F}) " }, { "math_id": 3, "text": " f_1, f_2, \\dots, f_k \\in \\mathcal{F} " }, { "math_id": 4, "text": " n " }, { "math_id": 5, "text": " (\\mathbb{G}_n(f_1), \\mathbb{G}_n(f_2), \\dots, \\mathbb{G}_n(f_k)) " }, { "math_id": 6, "text": " n \\rightarrow \\infty " }, { "math_id": 7, "text": " \\mathbb{G}_n(f) " }, { "math_id": 8, "text": " \\mathbb{G}_n(f) = \\sqrt{n}(\\mathbb{P}_n - P)(f) " }, { "math_id": 9, "text": " \\mathbb{P}_n " }, { "math_id": 10, "text": " X_1, \\dots, X_n " }, { "math_id": 11, "text": " P " }, { "math_id": 12, "text": "\\mathbb I_{(-\\infty, t]}" } ]
https://en.wikipedia.org/wiki?curid=75689664
75689762
Dudley's entropy integral
Dudley's entropy integral is a mathematical concept in the field of probability theory that describes a relationship involving the entropy of certain metric spaces and the concentration of measure phenomenon. It is named after the mathematician R. M. Dudley, who introduced the integral as part of his work on the uniform central limit theorem. Definition. Dudley's entropy integral is defined for a metric space formula_0 equipped with a probability measure formula_1. Given a set formula_2 and a radius formula_3, the formula_3-entropy of formula_2 is the logarithm of the minimum number of balls of radius formula_3 required to cover formula_2. Dudley's entropy integral is then given by the formula: formula_4 where formula_5 is the covering number, i.e. the minimum number of balls of radius formula_3 with respect to the metric formula_6 that cover the space formula_2. Mathematical background. Dudley's entropy integral arises in the context of empirical processes and Gaussian processes, where it is used to bound the supremum of a stochastic process. Its significance lies in providing a metric entropy measure to assess the complexity of a space with respect to a given probability distribution. More specifically, the expected supremum of a sub-gaussian process is bounded, up to a constant factor, by the entropy integral. Additionally, function classes with a finite entropy integral satisfy a uniform central limit theorem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
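As a toy illustration of the definition, take formula_2 to be the unit interval with the usual distance: a ball of radius eps is an interval of length 2*eps, so the covering number is ceil(1/(2*eps)) for eps below 1/2 and 1 afterwards, where the integrand vanishes. The crude Riemann sum below (step size chosen arbitrarily) shows that the entropy integral is finite even though the covering number blows up as eps tends to 0.

import math

def covering_number_unit_interval(eps):
    # N([0,1], |.|, eps): balls are intervals of length 2*eps
    return 1 if eps >= 0.5 else math.ceil(1.0 / (2.0 * eps))

steps = 100_000
h = 0.5 / steps
integral = sum(math.sqrt(math.log(covering_number_unit_interval((i + 0.5) * h))) * h
               for i in range(steps))
print(round(integral, 3))   # a finite value: the integral converges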
[ { "math_id": 0, "text": "(T, d)" }, { "math_id": 1, "text": "\\mu" }, { "math_id": 2, "text": "T" }, { "math_id": 3, "text": "\\epsilon" }, { "math_id": 4, "text": "\n\\int_0^\\infty \\sqrt{\\log N(T, d, \\epsilon)} \\, d\\epsilon\n" }, { "math_id": 5, "text": "N(T, d, \\epsilon)" }, { "math_id": 6, "text": "d" } ]
https://en.wikipedia.org/wiki?curid=75689762
75694692
Square-root sum problem
Problem in computer science &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in computer science: What is the Turing run-time complexity of the square-root sum problem? The square-root sum problem (SRS) is a computational decision problem from the field of numerical analysis, with applications to computational geometry. Definitions. SRS is defined as follows: given positive integers formula_0 and an integer "t", decide whether formula_1. An alternative definition is: given positive integers formula_0 and formula_2, decide whether formula_3. The problem was posed in 1981, and likely earlier. Run-time complexity. SRS can be solved in polynomial time in the Real RAM model. However, its run-time complexity in the Turing machine model is open, as of 1997. The main difficulty is that, in order to solve the problem, the square-roots should be computed to a high accuracy, which may require a large number of bits. The problem is mentioned in the Open Problems Garden. Blomer presents a polynomial-time Monte Carlo algorithm for deciding whether a sum of square roots equals zero. The algorithm applies more generally, to any sum of radicals. Allender, Burgisser, Pedersen and Miltersen prove that SRS lies in the counting hierarchy (which is contained in PSPACE). Separation bounds. One way to solve SRS is to prove a lower bound on the absolute difference formula_4 or formula_5. Such a lower bound is called a "separation bound" since it separates the difference from 0. For example, if the absolute difference is at least 2^(-"d"), it means that we can round all numbers to "d" bits of accuracy, and solve SRS in time polynomial in "d". This leads to the mathematical problem of proving bounds on this difference. Define "r"("n","k") as the smallest positive value of the difference formula_6, where "ai" and "bi" are integers between 1 and "n"; define "R"("n","k") as -log "r"("n","k"), which is the number of accuracy digits required to solve SRS. Computing "r"("n","k") is open problem 33 in the open problem project. In particular, it is interesting whether "R"("n","k") is in O(poly("k", log "n")). A positive answer would imply that SRS can be solved in polynomial time in the Turing Machine model. Some currently known bounds are: Applications. SRS is important in computational geometry, as Euclidean distances are given by square-roots, and many geometric problems (e.g. Minimum spanning tree in the plane and Euclidean traveling salesman problem) require computing sums of distances. Etessami and Yannakakis show a reduction from SRS to the problem of termination of recursive concurrent stochastic games. Relation to semidefinite programming. SRS also has theoretical importance, as it is a simple special case of a semidefinite programming feasibility problem. Consider the matrix formula_16. This matrix is positive semidefinite iff formula_17, iff formula_18. Therefore, to solve SRS, we can construct a feasibility problem with "n" constraints of the form formula_19, and additional linear constraints formula_20. The resulting SDP is feasible if and only if SRS is feasible. As the runtime complexity of SRS in the Turing machine model is open, the same is true for SDP feasibility (as of 1997). Extensions. Kayal and Saha extend the problem from integers to polynomials. Their results imply a solution to SRS for a special class of integers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
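The obvious numerical approach, and the reason the problem is interesting, can be sketched in a few lines: compute every square root to some fixed precision and compare the sums. The snippet below does exactly that with Python's decimal module; the number of digits is a user-supplied guess, and the open question described above is essentially how many digits are needed in the worst case for such a comparison to be provably correct. The example inputs were chosen here because the two sums agree to about three decimal places.

from decimal import Decimal, getcontext

def srs_leq(a, b, digits=50):
    # Heuristically decide whether sum(sqrt(a_i)) <= sum(sqrt(b_i)),
    # working with `digits` significant decimal digits.
    getcontext().prec = digits
    return sum(Decimal(x).sqrt() for x in a) <= sum(Decimal(y).sqrt() for y in b)

# sqrt(5) + sqrt(18) = 6.4787...  while  sqrt(10) + sqrt(11) = 6.4789...
print(srs_leq([5, 18], [10, 11]))   # True
print(srs_leq([10, 11], [5, 18]))   # False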
[ { "math_id": 0, "text": "a_1,\\ldots,a_k" }, { "math_id": 1, "text": "\\sum_{i=1}^k \\sqrt{a_i} \\leq t" }, { "math_id": 2, "text": "b_1,\\ldots,b_k" }, { "math_id": 3, "text": "\\sum_{i=1}^k \\sqrt{a_i} \\leq \\sum_{i=1}^k \\sqrt{b_i}" }, { "math_id": 4, "text": "\\left|t - \\sum_{i=1}^k \\sqrt{a_i} \\right|" }, { "math_id": 5, "text": "\\left|\\sum_{i=1}^k \\sqrt{a_i} - \\sum_{i=1}^k \\sqrt{b_i}\\right|" }, { "math_id": 6, "text": "\\sum_{i=1}^k \\sqrt{a_i} - \\sum_{i=1}^k \\sqrt{b_i}" }, { "math_id": 7, "text": "r(n,k)\\in O(n^{-2k+3/2})" }, { "math_id": 8, "text": "R(n,k)\\geq (2k-3/2)\\cdot \\log{n}" }, { "math_id": 9, "text": "R(n,k)\\in O(2^{2 k}\\cdot \\log{n})" }, { "math_id": 10, "text": "R(n,k)\\in 2^{O(n/\\log{n})}\\cdot \\log{n}" }, { "math_id": 11, "text": "R(n,k)\\in 2^{O(n/\\log{n})}" }, { "math_id": 12, "text": "2^{o(k)} \\cdot (\\log{n})^{O(1)}" }, { "math_id": 13, "text": "n^{k+o(k)}" }, { "math_id": 14, "text": "r(n,k)\\geq \\gamma\\cdot n^{-2n}" }, { "math_id": 15, "text": "r(n,k)\\geq \\left( n\\cdot \\max_i (\\sqrt{a_i})\\right)^{-2^n}" }, { "math_id": 16, "text": "\\left(\n\\begin{matrix}\n1 & x\n\\\\ \nx & a\n\\end{matrix}\n\\right)\n" }, { "math_id": 17, "text": "a - x^2 \\geq 0" }, { "math_id": 18, "text": "|x|\\leq \\sqrt{a}" }, { "math_id": 19, "text": "\\left(\n\\begin{matrix}\n1 & x_i\n\\\\ \nx_i & a_i\n\\end{matrix}\n\\right) \\succeq 0\n" }, { "math_id": 20, "text": "x_i\\geq 0, \\sum_{i=1}^n x_i \\geq k" } ]
https://en.wikipedia.org/wiki?curid=75694692
75705
Poker probability
Chances of card combinations in poker In poker, the probability of each type of 5-card hand can be computed by calculating the proportion of hands of that type among all possible hands. History. Probability and gambling have been ideas since long before the invention of poker. The development of probability theory in the late 1400s was attributed to gambling; when playing a game with high stakes, players wanted to know what the chance of winning would be. In 1494, Fra Luca Paccioli released his work which was the first written text on probability. Motivated by Paccioli's work, Girolamo Cardano (1501-1576) made further developments in probability theory. His work from 1550, titled "Liber de Ludo Aleae", discussed the concepts of probability and how they were directly related to gambling. However, his work did not receive any immediate recognition since it was not published until after his death. Blaise Pascal (1623-1662) also contributed to probability theory. His friend, Chevalier de Méré, was an avid gambler with the goal to become wealthy from it. De Méré tried a new mathematical approach to a gambling game but did not get the desired results. Determined to know why his strategy was unsuccessful, he consulted with Pascal. Pascal's work on this problem began an important correspondence between him and fellow mathematician Pierre de Fermat (1601-1665). Communicating through letters, the two continued to exchange their ideas and thoughts. These interactions led to the conception of basic probability theory. To this day, many gamblers still rely on the basic concepts of probability theory in order to make informed decisions while gambling. Frequencies. 5-card poker hands. In straight poker and five-card draw, where there are no hole cards, players are simply dealt five cards from a deck of 52. The following chart enumerates the (absolute) frequency of each hand, given all combinations of five cards randomly drawn from a full deck of 52 without replacement. Wild cards are not considered. In this chart: The "nCr" function on most scientific calculators can be used to calculate hand frequencies; entering with and , for example, yields formula_0 as above. The royal flush is a case of the straight flush. It can be formed 4 ways (one for each suit), giving it a probability of 0.000154% and odds of 649,739 : 1. When ace-low straights and ace-low straight flushes are not counted, the probabilities of each are reduced: straights and straight flushes each become 9/10 as common as they otherwise would be. The 4 missed straight flushes become flushes and the 1,020 missed straights become no pair. Note that since suits have no relative value in poker, two hands can be considered identical if one hand can be transformed into the other by swapping suits. For example, the hand 3♣ 7♣ 8♣ Q♠ A♠ is identical to 3♦ 7♦ 8♦ Q♥ A♥ because replacing all of the clubs in the first hand with diamonds and all of the spades with hearts produces the second hand. So eliminating identical hands that ignore relative suit values, there are only 134,459 distinct hands. The number of distinct poker hands is even smaller. For example, 3♣ 7♣ 8♣ Q♠ A♠ and 3♦ 7♣ 8♦ Q♥ A♥ are not identical hands when just ignoring suit assignments because one hand has three suits, while the other hand has only two—that difference could affect the relative value of each hand when there are more cards to come. 
However, even though the hands are not identical from that perspective, they still form equivalent poker hands because each hand is an A-Q-8-7-3 high card hand. There are 7,462 distinct poker hands. 7-card poker hands. In some popular variations of poker such as Texas hold 'em, the most widespread poker variant overall, a player uses the best five-card poker hand out of seven cards. The frequencies are calculated in a manner similar to that shown for 5-card hands, except additional complications arise due to the extra two cards in the 7-card poker hand. The total number of distinct 7-card hands is formula_1. It is notable that the probability of a no-pair hand is "lower" than the probability of a one-pair or two-pair hand. The Ace-high straight flush or royal flush is slightly more frequent (4324) than the lower straight flushes (4140 each) because the remaining two cards can have any value; a King-high straight flush, for example, cannot have the Ace of its suit in the hand (as that would make it ace-high instead). Since suits have no relative value in poker, two hands can be considered identical if one hand can be transformed into the other by swapping suits. Eliminating identical hands that ignore relative suit values leaves 6,009,159 distinct 7-card hands. The number of distinct 5-card poker hands that are possible from 7 cards is 4,824. Perhaps surprisingly, this is fewer than the number of 5-card poker hands from 5 cards, as some 5-card hands are impossible with 7 cards (e.g. 7-high and 8-high). 5-card lowball poker hands. Some variants of poker, called lowball, use a low hand to determine the winning hand. In most variants of lowball, the ace is counted as the lowest card and straights and flushes don't count against a low hand, so the lowest hand is the five-high hand A-2-3-4-5, also called a "wheel". The probability is calculated based on formula_0, the total number of 5-card combinations. (The frequencies given are exact; the probabilities and odds are approximate.) As can be seen from the table, just over half the time a player gets a hand that has no pairs, threes- or fours-of-a-kind (50.7%). If aces are not low, simply rotate the hand descriptions so that 6-high replaces 5-high for the best hand and ace-high replaces king-high as the worst hand. Some players do not ignore straights and flushes when computing the low hand in lowball. In this case, the lowest hand is A-2-3-4-6 with at least two suits. Probabilities are adjusted in the above table such that "5-high" is not listed, "6-high" has one distinct hand, and "King-high" has 330 distinct hands, respectively. The Total line also needs adjusting. 7-card lowball poker hands. In some variants of poker a player uses the best five-card low hand selected from seven cards. In most variants of lowball, the ace is counted as the lowest card and straights and flushes don't count against a low hand, so the lowest hand is the five-high hand A-2-3-4-5, also called a "wheel". The probability is calculated based on formula_2, the total number of 7-card combinations. The table does not extend to include five-card hands with at least one pair. Its "Total" represents the 95.4% of the time that a player can select a 5-card low hand without any pair. If aces are not low, simply rotate the hand descriptions so that 6-high replaces 5-high for the best hand and ace-high replaces king-high as the worst hand. Some players do not ignore straights and flushes when computing the low hand in lowball.
In this case, the lowest hand is A-2-3-4-6 with at least two suits. Probabilities are adjusted in the above table such that "5-high" is not listed, "6-high" has 781,824 distinct hands, and "King-high" has 21,457,920 distinct hands, respectively. The Total line also needs adjusting. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
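The frequencies discussed above come from straightforward binomial-coefficient counts. The sketch below reproduces a few of the 5-card counts (total number of hands, four of a kind, full house, flush and straight, with ace-low straights included as in the tables above).

from math import comb

total          = comb(52, 5)                        # 2,598,960
four_of_a_kind = 13 * comb(4, 4) * comb(48, 1)      # 624
full_house     = 13 * comb(4, 3) * 12 * comb(4, 2)  # 3,744
straight_flush = 10 * 4                             # 40, including the 4 royal flushes
flush          = 4 * comb(13, 5) - straight_flush   # 5,108
straight       = 10 * 4**5 - straight_flush         # 10,200

for name, n in [("total", total), ("four of a kind", four_of_a_kind),
                ("full house", full_house), ("flush", flush), ("straight", straight)]:
    print(f"{name:15s} {n:10d}  {n / total:.6%}")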
[ { "math_id": 0, "text": "{52 \\choose 5} = 2,598,960" }, { "math_id": 1, "text": "{52 \\choose 7} = 133{,}784{,}560" }, { "math_id": 2, "text": "{52 \\choose 7} = 133,784,560" } ]
https://en.wikipedia.org/wiki?curid=75705
7570573
Interval scheduling
Class of problems in computer science Interval scheduling is a class of problems in computer science, particularly in the area of algorithm design. The problems consider a set of tasks. Each task is represented by an "interval" describing the time in which it needs to be processed by some machine (or, equivalently, scheduled on some resource). For instance, task A might run from 2:00 to 5:00, task B might run from 4:00 to 10:00 and task C might run from 9:00 to 11:00. A subset of intervals is "compatible" if no two intervals overlap on the machine/resource. For example, the subset {A,C} is compatible, as is the subset {B}; but neither {A,B} nor {B,C} are compatible subsets, because the corresponding intervals within each subset overlap. The "interval scheduling maximization problem" (ISMP) is to find a largest compatible set, i.e., a set of non-overlapping intervals of maximum size. The goal here is to execute as many tasks as possible, that is, to maximize the throughput. It is equivalent to finding a maximum independent set in an interval graph. A generalization of the problem considers formula_0 machines/resources. Here the goal is to find formula_1 compatible subsets whose union is the largest. In an upgraded version of the problem, the intervals are partitioned into groups. A subset of intervals is "compatible" if no two intervals overlap, and moreover, no two intervals belong to the same group (i.e., the subset contains at most a single representative of each group). Each group of intervals corresponds to a single task, and represents several alternative intervals in which it can be executed. The "group interval scheduling decision problem" (GISDP) is to decide whether there exists a compatible set in which all groups are represented. The goal here is to execute a single representative task from each group. GISDPk is a restricted version of GISDP in which the number of intervals in each group is at most "k". The "group interval scheduling maximization problem" (GISMP) is to find a largest compatible set - a set of non-overlapping representatives of maximum size. The goal here is to execute a representative task from as many groups as possible. GISMPk is a restricted version of GISMP in which the number of intervals in each group is at most "k". This problem is often called JISPk, where J stands for Job. GISMP is the most general problem; the other two problems can be seen as special cases of it: ISMP is the special case in which every interval forms its own group, and GISDP amounts to asking whether the optimal value of GISMP equals the number of groups. All these problems can be generalized by adding a "weight" for each interval, representing the profit from executing the task in that interval. Then, the goal is to maximize the total weight. All these problems are special cases of single-machine scheduling, since they assume that all tasks must run on a single processor. Single-machine scheduling is a special case of optimal job scheduling. Single-Interval Scheduling Maximization. Single-interval scheduling refers to creating an interval schedule in which no intervals overlap. Unweighted. Several algorithms that may look promising at first sight, such as selecting the intervals that start earliest, the shortest intervals, or the intervals with the fewest conflicts, actually do not find the optimal solution. The following greedy algorithm, called Earliest deadline first scheduling, does find the optimal solution for unweighted single-interval scheduling: (1) select the interval "x" with the earliest finishing time; (2) remove "x", and all intervals intersecting "x", from the set of candidate intervals; (3) repeat until the set of candidate intervals is empty. Whenever we select an interval at step 1, we may have to remove many intervals in step 2. However, all these intervals necessarily cross the finishing time of "x", and thus they all cross each other. Hence, at most 1 of these intervals can be in the optimal solution.
Hence, for every interval in the optimal solution, there is an interval in the greedy solution. This proves that the greedy algorithm indeed finds an optimal solution. A more formal explanation is given by a Charging argument. The greedy algorithm can be executed in time O("n" log "n"), where "n" is the number of tasks, using a preprocessing step in which the tasks are sorted by their finishing times. Weighted. Problems involving weighted interval scheduling are equivalent to finding a maximum-weight independent set in an interval graph. Such problems can be solved in polynomial time. Assuming the vectors are sorted from earliest to latest finish time, the following pseudocode determines the maximum weight of a single-interval schedule in Θ(n) time:

// The vectors are already sorted from earliest to latest finish time.
int v[numOfVectors + 1]; // list of interval vectors
int w[numOfVectors + 1]; // w[j] is the weight for v[j].
int p[numOfVectors + 1]; // p[j] is the # of vectors that end before v[j] begins.
int M[numOfVectors + 1];
int finalSchedule[];

// v[0] does not exist, and the first interval vector is assigned to v[1].
w[0] = 0; p[0] = 0; M[0] = 0;

// The following loop determines the value of M for each vector.
// The maximum weight of the schedule is equal to M[numOfVectors].
for (int i = 1; i < numOfVectors + 1; i++) {
    M[i] = max(w[i] + M[p[i]], M[i - 1]);
}

// Recursive function to construct the optimal schedule from M.
schedule(j) {
    if (j == 0) {
        return;
    }
    else if (w[j] + M[p[j]] >= M[j - 1]) {
        prepend(v[j], finalSchedule); // prepends v[j] to the schedule.
        schedule(p[j]);
    }
    else {
        schedule(j - 1);
    }
}

Example. If we have the following 9 vectors sorted by finish time, with the weights above each corresponding interval, we can determine which of these vectors are included in our maximum weight schedule, which only contains a subset of the following vectors. Here, we input our final vector (where j=9 in this example) into our schedule function from the code block above. We perform the actions in the table below until j is set to 0, at which point, we only include into our final schedule the encountered intervals which met the formula_2 requirement. This final schedule is the schedule with the maximum weight. Group Interval Scheduling Decision. NP-complete when some groups contain 3 or more intervals. GISDPk is NP-complete when formula_3, even when all intervals have the same length. This can be shown by a reduction from the following version of the Boolean satisfiability problem, which was shown to be NP-complete, like the unrestricted version. Let formula_4 be a set of Boolean variables. Let formula_5 be a set of clauses over "X" such that (1) each clause in "C" has at most three literals and (2) each variable is restricted to appear once or twice positively and once negatively overall in "C". Decide whether there is an assignment to variables of "X" such that each clause in "C" has at least one true literal. Given an instance of this satisfiability problem, construct the following instance of GISDP. All intervals have a length of 3, so it is sufficient to represent each interval by its starting time: for every variable formula_6 there is a group with two intervals, one starting at formula_7 (representing the assignment formula_8) and one starting at formula_9 (representing formula_10); for every clause formula_11 there is a group with one interval per literal, with starting times formula_12 and formula_13 for the (at most two) positive occurrences of a variable formula_6 and formula_14 for a negative occurrence, so that an interval of a clause group overlaps exactly the variable interval representing the assignment that falsifies its literal (for example, the interval of a positive occurrence of formula_6 can be selected only when formula_15 is chosen). Note that there is no overlap between intervals in groups associated with different clauses. This is ensured since a variable appears at most twice positively and once negatively. The constructed GISDP has a feasible solution (i.e. a scheduling in which each group is represented), if and only if the given set of boolean clauses has a satisfying assignment. Hence GISDP3 is NP-complete, and so is GISDPk for every formula_3.
Polynomial when all groups contain at most 2 intervals. GISDP2 can be solved in polynomial time by the following reduction to the 2-satisfiability problem: for every group, associate a Boolean variable formula_6 with one of its intervals and a Boolean variable formula_16 with the other, and add the clauses formula_17 and formula_18, which together force exactly one interval of the group to be selected; in addition, for every two intersecting intervals of different groups, say one associated with formula_6 and one associated with formula_19, add the clause formula_20, which forbids selecting both. This construction contains at most O("n"²) clauses (one for each intersection between intervals, plus two for each group). Each clause contains 2 literals. The satisfiability of such formulas can be decided in time linear in the number of clauses (see 2-SAT). Therefore, the GISDP2 can be solved in polynomial time. Group Interval Scheduling Maximization. MaxSNP-complete when some groups contain 2 or more intervals. GISMPk is NP-complete even when formula_21. Moreover, GISMPk is MaxSNP-complete, i.e., it does not have a PTAS unless P=NP. This can be proved by showing an approximation-preserving reduction from MAX 3-SAT-3 to GISMP2. Polynomial 2-approximation. The following greedy algorithm finds a solution that contains at least 1/2 of the optimal number of intervals: select an interval "x" with the earliest finishing time, add it to the solution, and remove "x" together with all intervals that intersect "x" and all intervals that belong to the same group as "x"; repeat until no candidate intervals remain. A formal explanation is given by a Charging argument. The approximation factor of 2 is tight. For example, in the following instance of GISMP2, in which group #1 contains the intervals [0..2] and [4..6] and group #2 contains the single interval [1..3], the greedy algorithm selects only 1 interval [0..2] from group #1, while an optimal scheduling is to select [1..3] from group #2 and then [4..6] from group #1. A more general approximation algorithm attains a 2-factor approximation for the weighted case. LP-based approximation algorithms. Using the technique of Linear programming relaxation, it is possible to approximate the optimal scheduling with slightly better approximation factors. The approximation ratio of the first such algorithm is asymptotically 2 when "k" is large, but when "k=2" the algorithm achieves an approximation ratio of 5/3. The approximation factor for arbitrary "k" was later improved to 1.582. Related problems. An interval scheduling problem can be described by an intersection graph, where each vertex is an interval, and there is an edge between two vertices if and only if their intervals overlap. In this representation, the interval scheduling problem is equivalent to finding the maximum independent set in this intersection graph. Finding a maximum independent set is NP-hard in general graphs, but it can be done in polynomial time in the special case of intersection graphs (ISMP). A group-interval scheduling problem (GISMPk) can be described by a similar interval-intersection graph, with additional edges between each two intervals of the same group, i.e., this is the edge union of an interval graph and a graph consisting of n disjoint cliques of size "k". Variations. An important class of scheduling algorithms is the class of dynamic priority algorithms. When none of the intervals overlap the optimum solution is trivial. The optimum for the non-weighted version can be found with the earliest deadline first scheduling. Weighted interval scheduling is a generalization where a value is assigned to each executed task and the goal is to maximize the total value. The solution need not be unique. The interval scheduling problem is 1-dimensional – only the time dimension is relevant. The Maximum disjoint set problem is a generalization to 2 or more dimensions. This generalization, too, is NP-complete. Another variation is resource allocation, in which a set of intervals "s" is scheduled using "k" resources such that "k" is minimized. That is, all the intervals must be scheduled, but the objective is to minimize the usage of resources. Another variation is when there are "m" processors instead of a single processor. I.e., "m" different tasks can run in parallel.
See identical-machines scheduling. Single-machine scheduling is also a very similar problem.
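A natural reading of the greedy 1/2-approximation for GISMP described above is the earliest-finish-time rule: repeatedly select the remaining interval with the earliest finish time, then discard every interval that overlaps it or belongs to the same group. This reading is consistent with the worked example and can be sketched in Python (the instance in the final line is the one from that example):

def greedy_group_schedule(groups):
    """Earliest-finish-time greedy for group interval scheduling (GISMP).

    `groups` maps a group identifier to a list of (start, finish) intervals.
    At most one interval per group is selected and all selected intervals
    are pairwise disjoint (touching endpoints are allowed).
    """
    remaining = sorted((f, s, g) for g, ivs in groups.items() for s, f in ivs)
    chosen, used_groups, last_finish = [], set(), float("-inf")
    for f, s, g in remaining:                  # earliest finish time first
        if g in used_groups or s < last_finish:
            continue                           # same group, or overlaps a chosen interval
        chosen.append((s, f, g))
        used_groups.add(g)
        last_finish = f
    return chosen

# The instance from the example above: greedy keeps only [0, 2] from group 1,
# while the optimum ([1, 3] from group 2 and [4, 6] from group 1) has two intervals.
print(greedy_group_schedule({1: [(0, 2), (4, 6)], 2: [(1, 3)]}))   # -> [(0, 2, 1)]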
[ { "math_id": 0, "text": "k>1" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "w[j]+M[p[j]] \\ge M[j-1]" }, { "math_id": 3, "text": "k\\geq 3" }, { "math_id": 4, "text": "X = \\{x_1, x_2, \\dots, x_p\\}" }, { "math_id": 5, "text": "C = \\{c_1, c_2, \\dots, c_q\\}" }, { "math_id": 6, "text": "x_i" }, { "math_id": 7, "text": "50i-10" }, { "math_id": 8, "text": "x_i = \\mathrm{false}" }, { "math_id": 9, "text": "50i+10" }, { "math_id": 10, "text": "x_i = \\mathrm{true}" }, { "math_id": 11, "text": "c_j" }, { "math_id": 12, "text": "50i-12" }, { "math_id": 13, "text": "50i-8" }, { "math_id": 14, "text": "50i+8" }, { "math_id": 15, "text": "x_i=\\text{true}" }, { "math_id": 16, "text": "y_i" }, { "math_id": 17, "text": "x_i \\cup y_i" }, { "math_id": 18, "text": "\\neg{x_i} \\cup \\neg{y_i}" }, { "math_id": 19, "text": "y_j" }, { "math_id": 20, "text": "\\neg{x_i} \\cup \\neg{y_j}" }, { "math_id": 21, "text": "k\\geq 2" } ]
https://en.wikipedia.org/wiki?curid=7570573
75709429
Gelman-Rubin statistic
The Gelman-Rubin statistic is a diagnostic for assessing the convergence of Markov chain Monte Carlo simulations. Definition. formula_0 Monte Carlo simulations (chains) are started from different initial values, and the samples from the respective burn-in phases are discarded. From the samples formula_1 (of the j-th chain), the variance between the chains and the variance within the chains are estimated:
formula_2 Mean value of chain j
formula_3 Mean of the means of all chains
formula_4 Variance of the means of the chains
formula_5 Averaged variances of the individual chains across all chains
An estimate of the Gelman-Rubin statistic formula_6 is then given by formula_7. When L tends to infinity and B tends to zero, R tends to 1; values of R close to 1 therefore indicate convergence. A different formula is given by Vats and Knudson. Alternatives. The Geweke diagnostic compares whether the mean of the first x percent of a chain and the mean of the last y percent of a chain match.
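A minimal NumPy sketch of the statistic defined above, taking J chains of equal length L with burn-in already removed (the normally distributed chains at the end are only an illustration):

import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin statistic R for J chains of equal length L (burn-in removed).

    `chains` has shape (J, L); row j holds the samples of chain j.
    """
    chains = np.asarray(chains, dtype=float)
    J, L = chains.shape
    chain_means = chains.mean(axis=1)
    B = L * chain_means.var(ddof=1)              # L/(J-1) * sum_j (mean_j - grand_mean)^2
    W = chains.var(axis=1, ddof=1).mean()        # average within-chain variance
    return ((L - 1) / L * W + B / L) / W

# Four independent, well-mixed chains should give a value close to 1.
rng = np.random.default_rng(1)
print(gelman_rubin(rng.normal(size=(4, 1000))))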
[ { "math_id": 0, "text": "J" }, { "math_id": 1, "text": "x_{1}^{(j)},\\dots, x_{L}^{(j)}" }, { "math_id": 2, "text": "\\overline{x}_j=\\frac{1}{L}\\sum_{i=1}^L x_i^{(j)}" }, { "math_id": 3, "text": "\\overline{x}_*=\\frac{1}{J}\\sum_{j=1}^J \\overline{x}_j" }, { "math_id": 4, "text": "B=\\frac{L}{J-1}\\sum_{j=1}^J (\\overline{x}_j-\\overline{x}_*)^2" }, { "math_id": 5, "text": "W=\\frac{1}{J} \\sum_{j=1}^J \\left(\\frac{1}{L-1} \\sum_{i=1}^L (x^{(j)}_i-\\overline{x}_j)^2\\right)" }, { "math_id": 6, "text": "R" }, { "math_id": 7, "text": "R=\\frac{\\frac{L-1}{L}W+\\frac{1}{L}B}{W}" } ]
https://en.wikipedia.org/wiki?curid=75709429
75712890
Neyman Type A distribution
Compound Poisson-family discrete probability distribution In statistics and probability, the Neyman Type A distribution is a discrete probability distribution from the family of Compound Poisson distribution. First of all, to easily understand this distribution we will demonstrate it with the following example explained in Univariate Discret Distributions; we have a statistical model of the distribution of larvae in a unit area of field (in a unit of habitat) by assuming that the variation in the number of clusters of eggs per unit area (per unit of habitat) could be represented by a Poisson distribution with parameter formula_0, while the number of larvae developing per cluster of eggs are assumed to have independent Poisson distribution all with the same parameter formula_1. If we want to know how many larvae there are, we define a random variable "Y" as the sum of the number of larvae hatched in each group (given "j" groups). Therefore, "Y" = "X"1 + "X"2 + ... "X" j, where "X"1...,"X"j are independent Poisson variables with parameter formula_2 and formula_3. History. Jerzy Neyman was born in Russia in April 16 of 1894, he was a Polish statistician who spent the first part of his career in Europe. In 1939 he developed the Neyman Type A distribution to describe the distribution of larvae in experimental field plots. Above all, it is used to describe populations based on contagion, e.g., entomology (Beall[1940], Evans[1953]), accidents (Creswell i Froggatt [1963]), and bacteriology. The original derivation of this distribution was on the basis of a biological model and, presumably, it was expected that a good fit to the data would justify the hypothesized model. However, it is now known that it is possible to derive this distribution from different models (William Feller[1943]), and in view of this, Neyman's distribution derive as Compound Poisson distribution. This interpretation makes them suitable for modelling heterogeneous populations and renders them examples of apparent contagion. Despite this, the difficulties in dealing with Neyman's Type A arise from the fact that its expressions for probabilities are highly complex. Even estimations of parameters through efficient methods, such as maximum likelihood, are tedious and not easy to understand equations. Definition. Probability generating function. The probability generating function (pgf) "G"1("z"), which creates "N" independent Xj random variables, is used to a branching process. Each Xj produces a random number of individuals, where X1, X2... have the same distribution as "X", which is that of "X" with pgf "G"2("z"). The total number of  individuals is then the random variable, formula_4 The p.g.f. of the distribution of "SN" is : formula_5 One of the notations, which is particularly helpful, allows us to use a symbolic representation to refer to  an F1 distribution that has been generalized by an F2 distribution is, formula_6 In this instance, it is written as, formula_7 Finally, the probability generating function is, formula_8 From the generating function of probabilities we can calculate the probability mass function explained below. Probability mass function. Let "X"1,"X"2..."X"j be Poisson independent variables. The probability distribution of the random variable "Y" = "X"1 +"X"2+..."X"j is the Neyman's Type A distribution with parameters formula_0 and formula_9. 
formula_10formula_11 Alternatively, formula_12 In order to see how the previous expression develops, we must bear in mind that the probability mass function is calculated from the probability generating function, and use the property of Stirling Numbers. Let's see the development formula_13 formula_14 formula_15 Another form to estimate the probabilities is with recurring successions, formula_16,formula_17 Although its length varies directly with "n", this recurrence relation is only employed for numerical computation and is particularly useful for computer applications. where formula_20 formula_21 Properties. Moment and cumulant generating functions. The moment generating function of a random variable "X" is defined as the expected value of "e""t", as a function of the real parameter "t". For an formula_22, the moment generating function exists and is equal to formula_23 The cumulant generating function is the logarithm of the moment generating function and is equal to formula_24 In the following table we can see the moments of the order from 1 to 4 Skewness. The skewness is the third moment centered around the mean divided by the 3/2 power of the standard deviation, and for the formula_26 distribution is, formula_27 Kurtosis. The kurtosis is the fourth moment centered around the mean, divided by the square of the variance, and for the formula_26 distribution is, formula_28 The excess kurtosis is just a correction to make the kurtosis of the normal distribution equal to zero, and it is the following, formula_29 Characteristic function. In a discrete distribution the characteristic function of any real-valued random variable is defined as the expected value of formula_32, where "i" is the imaginary unit and "t" ∈ "R" formula_33 This function is related to the moment generating function via formula_34. Hence for this distribution the characteristic function is, formula_35 Cumulative distribution function. The cumulative distribution function is, formula_37 formula_40 formula_41 where formula_25 is the poblational mean of formula_42 formula_45 Parameter estimation. Method of moments. The mean and the variance of the NA(formula_46) are formula_47 and formula_48, respectively. So we have these two equations, formula_49 Solving these two equations we get the moment estimators formula_52 and formula_53 of formula_2 and formula_3. formula_54 formula_55 Maximum likelihood. Calculating the maximum likelihood estimator of formula_2 and formula_3 involves multiplying all the probabilities in the probability mass function to obtain the expression formula_56. When we apply the parameterization adjustment defined in "Other Properties," we get formula_57. We may define the Maximum likelihood estimation based on a single parameter if we estimate the formula_25 as the formula_42 (sample mean) given a sample "X" of size "N". We can see it below. formula_58 Testing Poisson assumption. When formula_22 is used to simulate a data sample it is important to see if the Poisson distribution fits the data well. For this, the following Hypothesis test is used: formula_59 Likelihood-ratio test. The likelihood-ratio test statistic for formula_26 is, formula_60 Where likelihood formula_61 is the log-likelihood function. "W" does not have an asymptotic formula_62 distribution as expected under the null hypothesis since "d" = 1 is at the parameter domain's edge. In the asymptotic distribution of "W," it can be demonstrated that the constant 0 and formula_62 have a 50:50 mixture. 
For this mixture, the formula_63 upper-tail percentage points are the same as the formula_64 upper-tail percentage points for a formula_62 Related distributions. The poisson distribution (on { 0, 1, 2, 3, ... }) is a special case of the Neyman Type A distribution, with formula_65 From moments of order 1 and 2 we can write the population mean and variance based on the parameters formula_2 and formula_3. formula_47 formula_66 In the dispersion index "d" we observe that by substituting formula_43 for the parametrized equation of order 1 and formula_67 for the one of order 2, we obtain formula_68. Our variable "Y" is therefore distributed as a Poisson of parameter formula_0 when "d" approaches to 1. Then we have that, formula_69 Applications. Usage History. When the Neyman's type A species' reproduction results in clusters, a distribution has been used to characterize the dispersion of plants. This typically occurs when a species develops from offspring of parent plants or from seeds that fall close to the parent plant. However, Archibald (1948] observed that there is insufficient data to infer the kind of reproduction from the type of fitted distribution. While Neyman type A produced positive results for plant distributions, Evans (1953] showed that Negative binomial distribution produced positive results for insect distributions. Neyman type A distributions have also been studied in the context of ecology, and the results that unless the plant clusters are so compact as to not lie across the edge of the square used to pick sample locations, the distribution is unlikely to be applicable to plant populations. The compactness of the clusters is a hidden assumption in Neyman's original derivation of the distribution, according to Skellam (1958). The results were shown to be significantly impacted by the square size selection. In the context of bus driver accidents, Cresswell and Froggatt (1963) derived the Neyman type A based on the following hypotheses: These assumptions lead to a Neyman type A distribution via the formula_70 and formula_71 model. In contrast to their "short distribution," Cresswell and Froggatt called this one the "long distribution" because of its lengthy tail. According to Irwin(1964), a type A distribution can also be obtained by assuming that various drivers have different levels of proneness, or "K". with probability: formula_72 and that a driver with proneness "k"formula_3 has "X" accidents where: formula_74 This is the formula_75 model with mixing over the values taken by "K". Distribution was also suggested in the application for the grouping of minority tents for food from 1965 to 1969. In this regard, it was predicted that only the clustering rates or the average number of entities per grouping needed to be approximated, rather than adjusting distributions on d very big data bases. Calculating Neyman Type A probabilities in R. rNeymanA &lt;- function(n,lambda, phi){ r &lt;- numeric() for (j in 1:n) { k = rpois(1,lambda) r[j] &lt;- sum(rpois(k,phi)) return(r) dNeyman.rec &lt;- function(x, lambda, phi){ p &lt;- numeric() p[1]&lt;- exp(-lambda + lambda*exp(-phi)) c &lt;- lambda*phi*exp(-phi) if(x == 0){ return(p[1]) else{ for (i in 1:x) { suma = 0 for (r in 0:(i-1)) { suma = suma + (phi^r)/(factorial(r))*p[i-r] p[i+1] = (c/(i))*suma # +1 per l'R res &lt;- p[i+1] return(res) We compare results between the relative frequencies obtained by the simulation and the probabilities computed by the p.m.f. . Given two values for the parameters formula_76 and formula_77. 
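The R listing above can be mirrored in Python. The sketch below draws samples by the compound Poisson construction and evaluates the probability mass function through the recurrence, using lambda = 2 and phi = 1 as in the example; the function names are illustrative and not part of any package:

import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)

def rneyman(n, lam, phi):
    """Draw n values from Neyman Type A(lambda, phi): a Poisson(lambda) number
    of clusters, each contributing an independent Poisson(phi) count."""
    clusters = rng.poisson(lam, size=n)
    return np.array([rng.poisson(phi, size=k).sum() for k in clusters])

def dneyman(x, lam, phi):
    """P(Y = x) via the recurrence
    p_0 = exp(-lambda + lambda*exp(-phi)),
    p_x = (lambda*phi*exp(-phi)/x) * sum_{r=0}^{x-1} phi^r/r! * p_{x-r-1}."""
    p = [exp(-lam + lam * exp(-phi))]
    c = lam * phi * exp(-phi)
    for i in range(1, x + 1):
        p.append(c / i * sum(phi**r / factorial(r) * p[i - r - 1] for r in range(i)))
    return p[x]

# Relative frequencies from simulation versus the recursive pmf, for lambda=2, phi=1.
sample = rneyman(100_000, 2.0, 1.0)
for x in range(6):
    print(x, round((sample == x).mean(), 4), round(dneyman(x, 2.0, 1.0), 4))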
[ { "math_id": 0, "text": " \\lambda " }, { "math_id": 1, "text": " \\phi " }, { "math_id": 2, "text": "\\lambda" }, { "math_id": 3, "text": "\\phi" }, { "math_id": 4, "text": " Y = SN = X_1 + X_2 + ... + X_N" }, { "math_id": 5, "text": " E[z^{SN}] = E_N[E[z^{SN}|N]] = E_N[G_2(z)] = G_1(G_2(z)) " }, { "math_id": 6, "text": " Y\\sim F_1\\bigwedge F_N" }, { "math_id": 7, "text": " Y\\sim \\operatorname{Pois(\\lambda)}\\bigwedge \\operatorname{Pois(\\phi)}" }, { "math_id": 8, "text": " G_Y(z) = \\exp(\\lambda(e^{\\phi (z-1)}-1))" }, { "math_id": 9, "text": " \\phi" }, { "math_id": 10, "text": " p_x = P(Y=x) = \\frac{e^{-\\lambda}\\phi^x}{x!} \\sum_{j=0}^{\\infty} \\frac{(\\lambda e^{-\\phi})^j j^x}{j!}~~" }, { "math_id": 11, "text": "~~x = 1,2,..." }, { "math_id": 12, "text": " p_x = P(Y=x) = \\frac{e^{-\\lambda + \\lambda e^{-\\phi}}\\phi^x}{x!} \\sum_{j=0}^{x}S(x,j)\\lambda^j e^{-\\phi^j} " }, { "math_id": 13, "text": " G(z) = e^{-\\lambda + \\lambda e^{-\\phi}}\\sum_{j=0}^{\\infty}\\frac{\\lambda^j e^{-j\\phi}(e^{\\phi z}-1)^j}{j|} " }, { "math_id": 14, "text": " = e^{-\\lambda + \\lambda e^{-\\phi}}\\sum_{j=0}^{\\infty} (\\lambda e^{-\\phi})^j \\sum_{x=j}^{\\infty}\\frac{S(x,j)\\phi^x z^x}{x!}" }, { "math_id": 15, "text": " = \\frac{e^{-\\lambda + \\lambda e^{-\\phi}}\\phi^x}{x!} \\sum_{j=0}^{x}S(x,j)\\lambda^j e^{-\\phi^j}" }, { "math_id": 16, "text": "p_x = P(Y=x) = \\frac{\\lambda\\phi e^{- \\phi}}{x} \\sum_{r=0}^{x-1} \\frac{\\phi^r}{r!} p_{x-r-1}~~" }, { "math_id": 17, "text": "~~p_0 = \\exp(-\\lambda + \\lambda e ^{- \\phi})" }, { "math_id": 18, "text": " j \\leq x " }, { "math_id": 19, "text": " \\phi>0 " }, { "math_id": 20, "text": " (e^{\\phi z} - 1)^j = j! \\sum_{x=j}^\\infty \\frac{S(x,j){\\phi z}^x}{x!}" }, { "math_id": 21, "text": "Y\\ \\sim \\operatorname{NA}(\\lambda,\\phi)\\," }, { "math_id": 22, "text": "\\operatorname{NA(\\lambda,\\phi)}" }, { "math_id": 23, "text": " M(t) = G_Y(e^t) = \\exp(\\lambda(e^{\\phi (e^t-1)}-1))" }, { "math_id": 24, "text": " K(t) = \\log(M(t)) = \\lambda(e^{\\phi (e^t-1)}-1)" }, { "math_id": 25, "text": "\\mu" }, { "math_id": 26, "text": "\\operatorname{NA}" }, { "math_id": 27, "text": "\\gamma_1 = \\frac{\\mu_3}{\\mu_2^{3/2}} = \\frac{\\lambda \\phi(1 + 3\\phi + \\phi^2)}{(\\lambda \\phi(1 + \\phi))^{3/2}}" }, { "math_id": 28, "text": "\\beta_2= \\frac{\\mu_4}{\\mu_2^2} = \\frac{\\lambda \\phi(1 + 7\\phi + 6\\phi^2 + \\phi^3) + 3\\lambda^2\\phi^2(1 + \\phi)^2}{\\lambda^2 \\phi^2(1 + \\phi)^2} = \\frac{1+7\\phi+6\\phi^2+\\phi^3}{\\phi\\lambda (1+\\phi)^2} + 3 " }, { "math_id": 29, "text": "\\gamma_2= \\frac{\\mu_4}{\\mu_2^2}-3 = \\frac{1+7\\phi+6\\phi^2+\\phi^3}{\\phi\\lambda (1+\\phi)^2} " }, { "math_id": 30, "text": "\\beta_2 >3" }, { "math_id": 31, "text": "\\gamma_2 >0" }, { "math_id": 32, "text": "e^{itX}" }, { "math_id": 33, "text": "\\phi(t)= E[e^{itX}] = \\sum_{j=0}^\\infty e^{ijt}P[X=j]" }, { "math_id": 34, "text": "\\phi_x(t) = M_X(it)" }, { "math_id": 35, "text": "\\phi_x(t) = \\exp(\\lambda(e^{\\phi (e^{it}-1)}-1))" }, { "math_id": 36, "text": "\\phi_x" }, { "math_id": 37, "text": " \\begin{align}\nF(x;\\lambda,\\phi)& = P(Y \\leq x)\\\\\n&= e^{-\\lambda} \\sum_{i=0}^{x} \\frac{\\phi^i}{i!} \\sum_{j=0}^{\\infty} \\frac{(\\lambda e^{-\\phi})^j j^i}{j!}\\\\\n&= e^{-\\lambda} \\sum_{i=0}^{x} \\sum_{j=0}^{\\infty} \\frac{\\phi^i(\\lambda e^{-\\phi})^j j^i}{i!j!} \\end{align}" }, { "math_id": 38, "text": " \\sigma^2" }, { "math_id": 39, "text": " \\mu" }, { "math_id": 40, "text": " d = \\frac{\\sigma^2}{\\mu} = 1 + \\phi " }, { 
"math_id": 41, "text": " \\sum_{i=1}^{N} \\frac{x_i}{N} = \\bar{x} = \\bar{\\lambda}\\bar{\\phi} ~~~" }, { "math_id": 42, "text": "\\bar{x}" }, { "math_id": 43, "text": "\\mu " }, { "math_id": 44, "text": " d " }, { "math_id": 45, "text": "\n \\begin{cases}\n \\mu = \\lambda\\phi \\\\\n d = 1 + \\phi\n \\end{cases}\n \\longrightarrow\n \\quad\\!\n \\begin{aligned}\n \\lambda = \\frac{\\mu}{d-1} \\\\\n \\phi = d -1\n \\end{aligned} \n " }, { "math_id": 46, "text": "\\lambda,\\phi" }, { "math_id": 47, "text": "\\mu = \\lambda\\phi" }, { "math_id": 48, "text": "\\lambda\\phi(1 + \\phi)" }, { "math_id": 49, "text": "\n \\begin{cases}\n \\bar{x} = \\lambda\\phi \\\\\n s^2 = \\lambda\\phi(1 + \\phi)\n \\end{cases}\n " }, { "math_id": 50, "text": " s^2" }, { "math_id": 51, "text": " \\bar{x}" }, { "math_id": 52, "text": "\\hat{\\lambda}" }, { "math_id": 53, "text": "\\hat{\\phi}" }, { "math_id": 54, "text": "\\bar{\\lambda} = \\frac{\\bar{x}^2}{s^2 - \\bar{x}}" }, { "math_id": 55, "text": "\\bar{\\phi} = \\frac{s^2 - \\bar{x}}{\\bar{x}}" }, { "math_id": 56, "text": "\\mathcal{L}(\\lambda,\\phi;x_1,\\ldots,x_n)" }, { "math_id": 57, "text": "\\mathcal{L}(\\mu, d;X)" }, { "math_id": 58, "text": "\\mathcal{L}(d;X)= \\prod_{i=1}^n P(x_i;d) " }, { "math_id": 59, "text": "\n \\begin{cases}\n H_0: d=1 \\\\\n H_1: d> 1\n \\end{cases}\n " }, { "math_id": 60, "text": "W = 2(\\mathcal{L}(X;\\mu,d)-\\mathcal{L}(X;\\mu,1))" }, { "math_id": 61, "text": "\\mathcal{L}()" }, { "math_id": 62, "text": "\\chi_1^2" }, { "math_id": 63, "text": "\\alpha" }, { "math_id": 64, "text": "2\\alpha" }, { "math_id": 65, "text": "\\operatorname{Pois}(\\lambda) = \\operatorname{NA}(\\lambda,\\, 0).\\," }, { "math_id": 66, "text": "\\sigma^2 = \\lambda\\phi (1 + \\phi)" }, { "math_id": 67, "text": "\\sigma " }, { "math_id": 68, "text": " d = 1 + \\phi " }, { "math_id": 69, "text": " \\lim_{d\\rightarrow 1}\\operatorname{NA}(\\lambda,\\, \\phi) \\rightarrow \\operatorname{Pois}(\\lambda)" }, { "math_id": 70, "text": " \\operatorname{Pois(\\lambda)} " }, { "math_id": 71, "text": " \\operatorname{Pois(\\phi)} " }, { "math_id": 72, "text": "\\operatorname{P(K = k\\phi)} = \\frac{e^{-\\lambda} \\lambda^k}{k!} " }, { "math_id": 73, "text": " ~ 0,~ \\phi, ~ 2\\phi, ~ 3\\phi, ~ ..." }, { "math_id": 74, "text": "~\\operatorname{P(X = x|k\\phi)} = \\frac{e^{-k\\phi}(k\\phi)^x}{x!} " }, { "math_id": 75, "text": " ~\\operatorname{Pois(k\\phi)} \\wedge \\operatorname{Pois(\\lambda)}" }, { "math_id": 76, "text": "\\lambda = 2" }, { "math_id": 77, "text": "\\phi= 1" } ]
https://en.wikipedia.org/wiki?curid=75712890
75715006
Triangular Dominoes
Triangular Dominoes is a variant of dominoes using equilateral triangle tiles, patented by Franklin H. Richards in 1885. Two versions were made: a starter set of 35 unique tiles, with each side numbered from zero to four pips, and an advanced set of 56 unique tiles, with each side numbered from zero to five pips. In both versions, a wild card "boss" tile was included, making 36 and 57 tiles in each complete set, respectively. Equipment. In his patent, Richards used a three-digit notation, referring to the pips in clockwise order from the side(s) with the lowest value. Richards illustrated the tiles as two unique sets, with pip values subject to the following restrictions: In addition to this marking scheme, Richards added the sum of all pips to the center of the tile. Percy Alexander MacMahon showed there were 24 possible combinations when each of the three edges of an equilateral triangle are assigned one of four values, and showed the number of unique pieces that can be made in this way is formula_0 for formula_1 unique values. For formula_2, there are 45 unique combinations possible, and for formula_3, there are 76 unique combinations; the reduced set of 35 and 56 in "Triangular Dominoes", for 0–4 and 0–5 pips, respectively, result from the additional restriction for increasing values around each side of the tiles when counting clockwise. This can be demonstrated by examination of the "singles" tiles: where 012 is a valid sequence in "Triangular Dominoes", 021 is not, and so the mirror image of each "singles" pattern is excluded; there are ten excluded patterns for the set of 0–4 pips and twenty for the set of 0–5 pips. By examination, mirror images of the triples and doubles are identical to the original tiles and so these patterns already adhere to the counting-up restriction. These restrictions and resulting tile set of "Triangular Dominoes" were retained, with markings moved to the corners using Arabic numerals for "Triominoes", which was published in 1965. Gameplay. Richards proposed several games that could be played in the patent. Points. For this variant, the "boss" tile may be included or left out. The tiles are distributed evenly between the players. Play is led by the player holding the highest triple tile. Each player takes a turn, placing one tile on the table; each tile must be added next to the tile that was placed in the preceding turn, matching the number of pips on adjacent sides. Once one player exhausts their hand, the game is over and the winner's score is determined by the sum of the pips on the tiles remaining in their opponents' hands. Muggins. This variant is similar to "points", except the matching criterion is the sum of pips on adjacent sides must be a multiple of five. Star. This variant allows players to lay tiles side-to-side or corner-to-corner. Corner-to-corner plays are allowed when the player is able to match the number on both sides of the corner. If a corner-to-corner match is created, that player can take another turn. Scoring in this variant is accomplished when the sum of all the pips on both dominoes (whether matched side-to-side or corner-to-corner) is a multiple of five; for example, if the 233 and 334 tiles are laid next to each other, the total sum is (2+3+3)+(3+3+4)=18, not divisible by five and hence no score is awarded. Alternatively, if the 233 and 133 tiles are laid next to each other, the total sum is 15, divisible by five, and the player is awarded 15 points. 
When the "boss" tile is played, the tile is assumed to have enough pips to bring the sum of it and adjacent tile(s) to a multiple of five. Subsequent tiles played next to the "boss" tile assume the value is zero.
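The tile counts in the Equipment section can be checked by direct enumeration. The short Python sketch below (function names are illustrative) counts both MacMahon's rotation-distinct triangles and the reduced set obtained from the clockwise non-decreasing restriction, for a given number of distinct pip values:

from itertools import product

def rotation_distinct_tiles(values):
    """Count triangular tiles with edges labelled by one of `values` pip counts,
    where tiles are identified up to rotation (MacMahon's n(n^2+2)/3)."""
    seen = set()
    for a, b, c in product(range(values), repeat=3):
        seen.add(min((a, b, c), (b, c, a), (c, a, b)))   # canonical rotation
    return len(seen)

def richards_tiles(values):
    """Count tiles whose pip values never decrease when read clockwise from a
    lowest-valued side, i.e. multisets of three values (mirror images excluded)."""
    return sum(1 for a in range(values)
                 for b in range(a, values)
                 for c in range(b, values))

for n in (4, 5, 6):
    print(n, rotation_distinct_tiles(n), richards_tiles(n))
# 4 values -> 24 rotation-distinct tiles; 5 values (pips 0-4) -> 45 and 35; 6 values (pips 0-5) -> 76 and 56.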
[ { "math_id": 0, "text": "\\frac{n}{3}\\cdot(n^2+2)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "n=5" }, { "math_id": 3, "text": "n=6" } ]
https://en.wikipedia.org/wiki?curid=75715006
7571557
Myogenic mechanism
The myogenic mechanism is how arteries and arterioles react to an increase or decrease of blood pressure to keep the blood flow constant within the blood vessel. Myogenic response refers to a contraction initiated by the myocyte itself instead of an outside occurrence or stimulus such as nerve innervation. Most often observed in (although not necessarily restricted to) smaller resistance arteries, this 'basal' myogenic tone may be useful in the regulation of organ blood flow and peripheral resistance, as it positions a vessel in a preconstricted state that allows other factors to induce additional constriction or dilation to increase or decrease blood flow. The smooth muscle of the blood vessels reacts to the stretching of the muscle by opening ion channels, which cause the muscle to depolarize, leading to muscle contraction. This significantly reduces the volume of blood able to pass through the lumen, which reduces blood flow through the blood vessel. Alternatively when the smooth muscle in the blood vessel relaxes, the ion channels close, resulting in vasodilation of the blood vessel; this increases the rate of flow through the lumen. This system is especially significant in the kidneys, where the glomerular filtration rate (the rate of blood filtration by the nephron) is particularly sensitive to changes in blood pressure. However, with the aid of the myogenic mechanism, the glomerular filtration rate remains very insensitive to changes in human blood pressure. Myogenic mechanisms in the kidney are part of the autoregulation mechanism which maintains a constant renal blood flow at varying arterial pressure. Concomitant autoregulation of glomerular pressure and filtration indicates regulation of preglomerular resistance. Model and experimental studies were performed to evaluate two mechanisms in the kidney, myogenic response and tubuloglomerular feedback. A mathematical model showed good autoregulation through a myogenic response, aimed at maintaining a constant wall tension in each segment of the preglomerular vessels. Tubuloglomerular feedback gave rather poor autoregulation. The myogenic mechanism showed 'descending' resistance changes, starting in the larger arteries, and successively affecting downstream preglomerular vessels at increasing arterial pressures. This finding was supported by micropuncture measurements of pressure in the terminal interlobular arteries. Evidence that the mechanism was myogenic was obtained by exposing the kidney to a subatmospheric pressure of 40 mmHg; this led to an immediate increase in renal resistance, which could not be prevented by denervation or various blocking agents. Bayliss effect. Bayliss effect or Bayliss myogenic response is a special manifestation of the myogenic tone in the vasculature. The Bayliss effect in vascular smooth muscles cells is a response to stretch. This is especially relevant in arterioles of the body. When blood pressure is increased in the blood vessels and the blood vessels distend, they react with a constriction; this is the Bayliss effect. Stretch of the muscle membrane opens a stretch-activated ion channel. The cells then become depolarized and this results in a Ca2+ signal and triggers muscle contraction. No action potential is necessary here; the level of entered calcium affects the level of contraction proportionally and causes tonic contraction. The contracted state of the smooth muscle depends on the grade of stretch and plays an important part in the regulation of blood flow. 
Increased contraction increases the total peripheral resistance (TPR) and this further increases the mean arterial pressure (MAP). This is explained by the following equation: formula_0, where CO is the cardiac output, which is the volume of blood pumped by the heart in one minute. This effect is independent of nervous mechanisms, which is controlled by the sympathetic nervous system. The overall effect of the myogenic response (Bayliss effect) is to decrease blood flow across a vessel after an increase in blood pressure. History. The Bayliss effect was discovered by physiologist Sir William Bayliss in 1902. Proposed mechanism. When the endothelial cell in the tunica intima of an artery is stretched it is likely that the endothelial cell may signal constriction to the muscle cell layer in a paracrine fashion. Increase in blood pressure may cause depolarisation of the affected myocytes as well or endothelial cells alone. The mechanism is not yet completely understood, but studies have shown that volume regulated chloride channels and stretch sensitive non-selective cation channels lead to an increased probability in opening of L-type (voltage-dependent) Ca2+ channels, thus raising the cytosolic concentration of Ca2+ leading to a contraction of the myocyte, and this may involve other channels in the endothelia. Unstable Membrane Potentials. Many cells have resting membrane potentials that are unstable. It is usually due to ion channels in the cell membrane that spontaneously open and close (e.g. If channels in cardiac pacemaker cells). When the membrane potential reaches depolarization threshold an action potential (AP) is fired, excitation-contraction coupling initiates and the myocyte contracts. Slow wave potentials. Slow-wave potentials are unstable resting membrane potentials that continuously cycle through depolarization- and repolarization phases. However, not every cycle reaches depolarization threshold and thus an action potential (AP) will not always fire. Owing to temporal summation (depolarization potentials spaced closely together in time so that they summate), however, cell membrane depolarization will periodically reach depolarization threshold and an action potential will fire, triggering contraction of the myocyte. Pacemaker potentials. Pacemaker potentials are unstable cell membrane potentials that reach depolarization threshold with every depolarization/repolarization cycle. This results in AP's being fired according to a set rhythm. Cardiac pacemaker cells, a type of cardiac myocyte in the SA node of heart, are an example of cells with a pacemaker potential. Stretch. This mechanism involves the opening of mechanically gated Ca2+ channels when some myocytes are stretched. The resulting influx of Ca2+ ions lead to the initiation of excitation-contraction coupling and thus contraction of the myocyte. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "MAP = CO * TPR " } ]
https://en.wikipedia.org/wiki?curid=7571557
7571758
David E. Pritchard
American physicist David Edward Pritchard (born October 15, 1941) is a professor at the Massachusetts Institute of Technology (MIT) who specializes in atomic physics and educational research. Career. Early work. Pritchard completed his PhD in 1968 at Harvard University under the supervision of Daniel Kleppner. His thesis involved building the first atomic scattering machine with polarized atoms to study differential spin exchange scattering, a process by which the 21 cm hydrogen line manifests. Pritchard was an early adopter of tunable lasers in physics and chemistry, demonstrating high-resolution spectroscopy through the simultaneous absorption of two laser photons. He employed both laser and radio-frequency spectroscopy to study weakly bound van der Waals molecules, such as NaNe and KAr, in cold supersonic molecular beams. Atom optics, atom traps, and atom interferometers. Pritchard made use of tunable lasers' ability to transfer momentum to atoms, leading to demonstrations of the diffraction of atoms from a standing wave of light (denoted Kapitza-Dirac or Raman-Nath regimes) and Bragg scattering of atoms from light gratings, founding the field of coherent atom optics. This led to the creation of the first atom interferometer, where matter waves would propagate on both sides of a metal foil before recombining, so that different interactions on the two sides would result in a fringe shift of the atomic interference pattern. This allowed for precise measurements of atomic polarizability, the refractive index of gaseous matter waves, and fundamental testing of quantum decoherence, as well as the first demonstration of the ability of atom interferometers to measure angular velocity like a gyroscope and to work for complex particles like Na2 molecules in the gaseous phase. A singularly important development from atom optics is Pritchard’s invention of the magneto-optical trap which captures and cools atoms to sub-millikelvin temperatures and of the Dark SPOT MOT, in which atoms are confined in a way such that they do not interact with trapping light. Together with a magnetic atom trap, it can compress ~ 1010 cold atoms into the same small volume (This is sometimes called the Ioffe-Pritchard trap to honor its plasma physics origin). These traps are commonly used in the field of cold atom research and are the foundational tools for the MIT-Harvard Center for Ultracold Atoms"." In 1990, Pritchard brought Wolfgang Ketterle to MIT as a postdoctoral researcher to work on atom cooling. To encourage Ketterle to stay at MIT, in 1993 Ketterle was given his own experimental cold atom program (with two students and two grants) while Pritchard himself stepped aside from the field to allow Ketterle to be appointed to the faculty. Ketterle pursued atom cooling to achieve Bose–Einstein condensation in 1995, a discovery for which Ketterle was awarded the Nobel Prize in Physics in 2001, alongside Pritchard’s former graduate student, Eric Allin Cornell, and Carl Wieman, who was an informal Pritchard mentee while an undergraduate at MIT. Ketterle and Pritchard then partnered to study atom optics and interferometry with Bose condensates, demonstrating coherent amplification of matter waves, superradiant Rayleigh scattering, and the power of Bragg spectroscopy to probe the condensate and used laser light to establish coherence between two condensates that never touch. Precise measurements of atomic masses. 
Pritchard is a pioneer in the precise measurement of atomic and molecular masses using ion traps, an advance enabled by his group’s developing highly sensitive radio-frequency detectors based on SQUIDs (superconducting quantum interference devices) and techniques to coherently cross-couple the motion of different modes of an ion’s oscillation in the trap. These advances culminated in an ion balance in which one each of two different ions were simultaneously confined while their cyclotron frequencies were inter-compared to better than one part in 1011. This led to the discovery of a new type of systematic shift of the cyclotron frequency due to the polarizability of the ion, providing the most accurate measurement of ionic molecule polarizability. It also resulted in a fifty-fold improvement of experimental tests of Albert Einstein’s mass–energy equivalence that formula_0 (where "E" is the energy, "m" is the mass and "c" the speed of light)– now at ½ part per million. Precise measurements of the masses of rubidium and caesium (Cesium) atoms made with the MIT apparatus have been combined with others’ high-precision atom interferometric measurements of "h"/"m" (Planck’s constant divided by the atom mass) to give the most accurate value of the fine structure constant at 0.2 ppb (parts per billion), differing by ~ 2.5 combined errors from measurement based on quantum electrodynamics. This is the most precise comparison of measurements made using entirely different theoretical bases. Teaching and education software. In 1998, David Pritchard and his son Alex developed an online Socratic tutor, mycybertutor.com, which provides specific critiques of incorrect symbolic answers, hints upon request, and follow-up comments and questions. This tool has been shown to significantly improve students’ ability to answer traditional MIT examination problems, increasing their performance by approximately 2 standard deviations. The software is now marketed as Mastering Physics, Mastering Chemistry, and Mastering Astronomy by Pearson Education. It has become a widely used homework tutor in Science and Engineering, with approximately 2.5 million. Pritchard’s education research group, RELATE was started in 2000 with the goal to "Apply the principles and techniques of science and engineering to study and improve learning, especially of expertise""". They conduct research using all components in the acronym RELATE - Research in Learning, Assessing, and Tutoring Effectively. They showed that copying online homework is by far the best predictor of a low final exam grade in MIT residential physics, and is the dominant contributor to ~ 5% of the certificates given by edX. They explored new types of instruction (e.g. deliberate practice of critical problem-solving skills) or variations in instruction (adding a diagram, replacing multiple choice questions with more interactive drag and drop questions, etc.) compared with traditional instruction (the control). These experiments, along with other relevant research, indicated an important principle that students were struggling with – strategic thinking – the ability to determine which concepts and procedures are helpful in solving an unfamiliar problem. For this purpose, RELATE developed a Mechanics Reasoning Inventory that measures strategic ability; it served as a benchmark of progress for their new pedagogy: Modeling Approach to Problem-Solving. 
This pedagogy was shown to greatly improve students’ attitudes towards learning science, raise their scores on the Physics 1 final exam retake, and subsequently help them improve their Physics 2 grade by ~ ½ standard deviation relative to students who didn’t benefit from this intervention.
[ { "math_id": 0, "text": "E=mc^2" } ]
https://en.wikipedia.org/wiki?curid=7571758
75737969
Parabolic subgroup of a reflection group
In the mathematical theory of reflection groups, the parabolic subgroups are a special kind of subgroup. The precise definition of which subgroups are parabolic depends on context—for example, whether one is discussing general Coxeter groups or complex reflection groups—but in all cases the collection of parabolic subgroups exhibits important good behaviors. For example, the parabolic subgroups of a reflection group have a natural indexing set and form a lattice when ordered by inclusion. The different definitions of parabolic subgroups essentially coincide in the case of finite real reflection groups. Parabolic subgroups arise in the theory of algebraic groups, through their connection with Weyl groups. Background: reflection groups. In a Euclidean space (such as the Euclidean plane, ordinary three-dimensional space, or higher-dimensional analogues), a "reflection" is a symmetry of the space across a mirror (technically, across a subspace of dimension one smaller than the whole space) that fixes the vectors that lie on the mirror and send the vectors orthogonal to the mirror to their negatives. A "finite real reflection group" W is a finite group generated by reflections (that is, every linear transformation in W is a composition of some of the reflections in W). For example, the symmetries of a regular polygon in the plane form a reflection group (called the dihedral group), because each rotation symmetry of the polygon is a composition of two reflections. Finite real reflection groups can be generalized in various ways, and the definition of parabolic subgroup depends on the choice of definition. Each finite real reflection group W has the structure of a "Coxeter group": this means that W contains a subset S of reflections (called "simple reflections") such that S generates W, subject to relations of the form formula_0 where 1 denotes the identity in W and the formula_1 are numbers that satisfy formula_2 for formula_3 and formula_4 for formula_5. Thus, the Coxeter groups form one generalization of finite real reflection groups. A separate generalization is to consider the geometric action on vector spaces whose underlying field is not the real numbers. Especially, if one replaces the real numbers with the complex numbers, with a corresponding generalization of the notion of a reflection, one arrives at the definition of a "complex reflection group". Every real reflection group can be complexified to give a complex reflection group, so the complex reflection groups form another generalization of finite real reflection groups. In Coxeter groups. Suppose that W is a Coxeter group with a finite set S of simple reflections. For each subset I of S, let formula_6 denote the subgroup of W generated by formula_7. Such subgroups are called "standard parabolic subgroups" of W. In the extreme cases, formula_8 is the trivial subgroup (containing just the identity element of W) and formula_9. The pair formula_10 is again a Coxeter group. Moreover, the Coxeter group structure on formula_6 is compatible with that on W, in the following sense: if formula_11 denotes the length function on W with respect to S (so that formula_12 if the element w of W can be written as a product of k elements of S and not fewer), then for every element w of formula_6, one has that formula_13. That is, the length of w is the same whether it is viewed as an element of W or of formula_6. 
The same is true of the Bruhat order: if u and w are elements of formula_6, then formula_14 in the Bruhat order on formula_6 if and only if formula_14 in the Bruhat order on W. If I and J are two subsets of S, then formula_15 if and only if formula_16, formula_17, and the smallest group formula_18 that contains both formula_6 and formula_19 is formula_20. Consequently, the lattice of standard parabolic subgroups of W is a Boolean lattice. Given a standard parabolic subgroup formula_6 of a Coxeter group W, the cosets of formula_6 in W have a particularly nice system of representatives: let formula_21 denote the set formula_22 of elements in W that do not have any element of I as a right descent. Then for each formula_23, there are unique elements formula_24 and formula_25 such that formula_26. Moreover, this is a length-additive product, that is, formula_27. Furthermore, u is the element of minimum length in the coset formula_28. An analogous construction is valid for right cosets. The collection of all left cosets of standard parabolic subgroups is one possible construction of the Coxeter complex. In terms of the Coxeter–Dynkin diagram, the standard parabolic subgroups arise by taking a subset of the nodes of the diagram and the edges induced between those nodes, erasing all others. The only normal parabolic subgroups arise by taking a union of connected components of the diagram, and the whole group W is the direct product of the irreducible Coxeter groups that correspond to the components. In complex reflection groups. Suppose that W is a complex reflection group acting on a complex vector space V. For any subset formula_29, let formula_30 be the subset of W consisting of those elements in W that fix each element of A. Such a subgroup is called a "parabolic subgroup" of W. In the extreme cases, formula_31 and formula_32 is the trivial subgroup of W that contains only the identity element. It follows from a theorem of that each parabolic subgroup formula_33 of a complex reflection group W is a reflection group, generated by the reflections in W that fix every point in A. Since W acts linearly on V, formula_34 where formula_35 is the span of A (that is, the smallest linear subspace of V that contains A). In fact, there is a simple choice of subspaces A that index the parabolic subgroups: each reflection in W fixes a hyperplane (that is, a subspace of V whose dimension is 1 less than that of V) pointwise, and the collection of all these hyperplanes is the "reflection arrangement" of W. The collection of all intersections of subsets of these hyperplanes, partially ordered by inclusion, is a lattice formula_36. The elements of the lattice are precisely the fixed spaces of the elements of W (that is, for each intersection I of reflecting hyperplanes, there is an element formula_23 such that formula_37). The map that sends formula_38 for formula_39 is an order-reversing bijection between subspaces in formula_36 and parabolic subgroups of W. Concordance of definitions in finite real reflection groups. Let W be a finite real reflection group; that is, W is a finite group of linear transformations on a finite-dimensional real Euclidean space that is generated by orthogonal reflections. As mentioned above (see ), W may be viewed as both a Coxeter group and as a complex reflection group. 
For a real reflection group W, the parabolic subgroups of W (viewed as a complex reflection group) are not all standard parabolic subgroups of W (when viewed as a Coxeter group, after specifying a fixed Coxeter generating set S), as there are many more subspaces in the intersection lattice of its reflection arrangement than subsets of S. However, in a finite real reflection group W, every parabolic subgroup is "conjugate" to a standard parabolic subgroup with respect to S. Examples. The symmetric group formula_40, which consists of all permutations of formula_41, is a Coxeter group with respect to the set of adjacent transpositions formula_42, ..., formula_43. The standard parabolic subgroups of formula_40 (which are also known as Young subgroups) are the subgroups of the form formula_44, where formula_45 are positive integers with sum n, in which the first factor in the direct product permutes the elements formula_46 among themselves, the second factor permutes the elements formula_47 among themselves, and so on. The hyperoctahedral group formula_48, which consists of all signed permutations of formula_49 (that is, the bijections w on that set such that formula_50 for all i), has as its maximal standard parabolic subgroups the stabilizers of formula_51 for formula_52. More general definitions in Coxeter theory. In a Coxeter group generated by a finite set S of simple reflections, one may define a "parabolic subgroup" to be any conjugate of a standard parabolic subgroup. Under this definition, it is still true that the intersection of any two parabolic subgroups is a parabolic subgroup. The same does "not" hold in general for Coxeter groups of infinite rank. If W is a group and T is a subset of W, the pair formula_53 is called a "dual Coxeter system" if there exists a subset S of T such that formula_54 is a Coxeter system and formula_55 so that T is the set of all reflections (conjugates of the simple reflections) in W. For a dual Coxeter system formula_53, a subgroup of W is said to be a "parabolic subgroup" if it is a standard parabolic (as in ) of formula_54 for some choice of simple reflections S for formula_53. In some dual Coxeter systems, all sets of simple reflections are conjugate to each other; in this case, the parabolic subgroups with respect to one simple system (that is, the conjugates of the standard parabolic subgroups) coincide with the parabolic subgroups with respect to any other simple system. However, even in finite examples, this may not hold: for example, if W is the dihedral group with 10 elements, viewed as symmetries of a regular pentagon, and T is the set of reflection symmetries of the polygon, then any pair of reflections in T forms a simple system for formula_53, but not all pairs of reflections are conjugate to each other. Nevertheless, if W is finite, then the parabolic subgroups (in the sense above) coincide with the parabolic subgroups in the classical sense (that is, the conjugates of the standard parabolic subgroups with respect to a single, fixed, choice of simple reflections S). The same result does "not" hold in general for infinite Coxeter groups. Affine and crystallographic Coxeter groups. When W is an affine Coxeter group, the associated finite Weyl group is always a maximal parabolic subgroup, whose Coxeter–Dynkin diagram is the result of removing one node from the diagram of W. In particular, the length functions on the finite and affine groups coincide. In fact, every standard parabolic subgroup of an affine Coxeter group is finite. 
As in the case of finite real reflection groups, when we consider the action of an affine Coxeter group W on a Euclidean space V, the conjugates of the standard parabolic subgroups of W are precisely the subgroups of the form formula_56 for some subset A of V. If W is a crystallographic Coxeter group, then every parabolic subgroup of W is also crystallographic. Connection with the theory of algebraic groups. If G is an algebraic group and B is a Borel subgroup for G, then a "parabolic subgroup" of G is any subgroup that contains B. If furthermore G has a ("B", "N") pair, then the associated quotient group formula_57 is a Coxeter group, called the "Weyl group" of G. Then the group G has a Bruhat decomposition formula_58 into double cosets (where formula_59 is the disjoint union), and the parabolic subgroups of G containing B are precisely the subgroups of the form formula_60 where formula_19 is a standard parabolic subgroup of W. Parabolic closures. Suppose W is a Coxeter group of finite rank (that is, the set S of simple generators is finite). Given any subset X of W, one may define the "parabolic closure" of X to be the intersection of all parabolic subgroups containing X. As mentioned above, in this case the intersection of any two parabolic subgroups of W is again a parabolic subgroup of W, and consequently the parabolic closure of X is a parabolic subgroup of W; in particular, it is the (unique) minimal parabolic subgroup of W containing X. The same analysis applies to complex reflection groups, where the parabolic closure of X is also the pointwise stabiliser of the space of fixed points of X. The same does "not" hold for Coxeter groups of infinite rank. Braid groups. Each Coxeter group is associated to another group called its "Artin–Tits group" or "generalized braid group", which is defined by omitting the relations formula_61 for each generator formula_3 from its Coxeter presentation. Although generalized braid groups are not reflection groups, they inherit a notion of parabolic subgroups: a "standard parabolic subgroup" of a generalized braid group is a subgroup generated by a subset of the standard generating set S, and a "parabolic subgroup" is any subgroup conjugate to a standard parabolic. A generalized braid group is said to be of "spherical type" if the associated Coxeter group is finite. If B is a generalized braid group of spherical type, then the intersection of any two parabolic subgroups of B is also a parabolic subgroup. Consequently, the parabolic subgroups of B form a lattice under inclusion. For a finite real reflection group W, the associated generalized braid group may be defined in purely topological language, without referring to a particular group presentation. This definition naturally extends to finite complex reflection groups. Parabolic subgroups can also be defined in this setting. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "W = \\langle S \\mid (s s')^{m_{s, s'}} = 1 \\rangle," }, { "math_id": 1, "text": "m_{s, s'}" }, { "math_id": 2, "text": "m_{s, s} = 1" }, { "math_id": 3, "text": "s \\in S" }, { "math_id": 4, "text": "m_{s, s'} \\in \\{2, 3, \\ldots\\}" }, { "math_id": 5, "text": "s \\neq s' \\in S" }, { "math_id": 6, "text": "W_I" }, { "math_id": 7, "text": "I" }, { "math_id": 8, "text": "W_\\varnothing" }, { "math_id": 9, "text": "W_S = W" }, { "math_id": 10, "text": "(W_I, I)" }, { "math_id": 11, "text": "\\ell_S" }, { "math_id": 12, "text": "\\ell_S(w) = k" }, { "math_id": 13, "text": "\\ell_S(w) = \\ell_I(w)" }, { "math_id": 14, "text": "u \\leq w" }, { "math_id": 15, "text": "W_I = W_J" }, { "math_id": 16, "text": "I = J" }, { "math_id": 17, "text": "W_I \\cap W_J = W_{I\\cap J}" }, { "math_id": 18, "text": "\\langle W_I, W_J \\rangle" }, { "math_id": 19, "text": "W_J" }, { "math_id": 20, "text": "W_{I \\cup J}" }, { "math_id": 21, "text": "W^I" }, { "math_id": 22, "text": " W^I = \\{ w \\in W \\colon \\ell_S(ws) > \\ell_S(w) \\text{ for all } s \\in I\\}" }, { "math_id": 23, "text": "w \\in W" }, { "math_id": 24, "text": "u \\in W^I" }, { "math_id": 25, "text": "v \\in W_I" }, { "math_id": 26, "text": "w = uv" }, { "math_id": 27, "text": "\\ell_S(w) = \\ell_S(u) + \\ell_S(v)" }, { "math_id": 28, "text": "w W_I" }, { "math_id": 29, "text": "A \\subseteq V" }, { "math_id": 30, "text": " W_A = \\{ w \\in W \\colon w(a) = a \\text{ for all } a \\in A\\}" }, { "math_id": 31, "text": "W_{\\varnothing} = W_{\\{0\\}} = W" }, { "math_id": 32, "text": "W_V" }, { "math_id": 33, "text": "W_A" }, { "math_id": 34, "text": "W_A = W_{\\overline{A}}" }, { "math_id": 35, "text": "\\overline{A}" }, { "math_id": 36, "text": "L_W" }, { "math_id": 37, "text": "\\{v \\in V \\colon w(v) = v\\} = I" }, { "math_id": 38, "text": "I \\mapsto W_I" }, { "math_id": 39, "text": "I \\in L_W" }, { "math_id": 40, "text": "S_n" }, { "math_id": 41, "text": "\\{1, \\ldots, n\\}" }, { "math_id": 42, "text": "(1\\ 2)" }, { "math_id": 43, "text": "(n - 1\\ n)" }, { "math_id": 44, "text": "S_{a_1} \\times \\cdots \\times S_{a_k}" }, { "math_id": 45, "text": "a_1, \\ldots, a_k" }, { "math_id": 46, "text": "\\{1, \\ldots, a_1\\}" }, { "math_id": 47, "text": "\\{a_1 + 1, \\ldots, a_1 + a_2\\}" }, { "math_id": 48, "text": "S^B_n" }, { "math_id": 49, "text": "\\{\\pm 1, \\ldots, \\pm n\\}" }, { "math_id": 50, "text": "w(-i) = - w(i)" }, { "math_id": 51, "text": "\\{i + 1, \\ldots, n\\}" }, { "math_id": 52, "text": "i \\in \\{1, \\ldots, n\\}" }, { "math_id": 53, "text": "(W, T)" }, { "math_id": 54, "text": "(W, S)" }, { "math_id": 55, "text": " T = \\{ w s w^{-1} \\colon w \\in W, s \\in S \\}," }, { "math_id": 56, "text": " \\{w \\in W \\colon w(a) = a \\text{ for all } a \\in A\\}" }, { "math_id": 57, "text": "W = B / (B \\cap N)" }, { "math_id": 58, "text": "G = \\bigsqcup_{w \\in W} BwB" }, { "math_id": 59, "text": "\\sqcup" }, { "math_id": 60, "text": "P_J = BW_JB" }, { "math_id": 61, "text": "s^2 = 1" } ]
https://en.wikipedia.org/wiki?curid=75737969
75739302
Graduated majority judgment
Single-winner electoral system Graduated majority judgment (GMJ), sometimes called the usual judgment or continuous Bucklin voting, is a single-winner electoral system. It was invented independently three times in the early 21st century. It was first suggested as an improvement on majority judgment by Andrew Jennings in 2010, then by Jameson Quinn, and later independently by the French social scientist Adrien Fabre in 2019. In 2024, the latter coined the name "median judgment" for the rule, arguing it was the best highest median voting rule. It is a highest median voting rule, a system of cardinal voting in which the winner is decided by the median rating rather than the mean. GMJ begins by counting all ballots for their first choice. If no candidate has a majority then later (second, third, etc.) preferences are added to first preferences until one candidate reaches 50% of the vote. The first candidate to reach a majority of the vote is the winner. Highest medians. Votes should be cast using a cardinal (rated) ballot, which ask voters to give each candidate a separate grade, such as : When counting the votes, we calculate the share of each grade for each of the votes cast. This is the candidate's "merit profile": For each candidate, we determine the "median" or "majority" grade as the grade where a majority of voters would oppose giving the candidate a higher grade, but a majority would also oppose giving a lower grade. This rule means that the absolute majority of the electors judge that a candidates merits "at least" its median grade, while half the electors judge that he deserves "at most" its median grade. If only one candidate has the highest median grade, they are elected (as in all highest median voting rules). Otherwise, the election uses a tie-breaking procedure. Tie-breaking. Graduated majority judgment uses a simple line-drawing method to break ties. This rule is easier to explain than others such as majority judgment, and also guarantees continuity. Graphically, we can represent this by drawing a plot showing the share of voters who assign an approval less than the given score, then draw lines connecting the points on this graph. The place where this plot intersects 50% is each candidate's score. Example. Consider the same election as before, but relabeling the verbal grades as numbers on a scale from 0 to 6: Candidates A and B both cross the 50% threshold between 2 or 3, so we must invoke the tiebreaking procedure. When we do, we find that the median grades for candidates A, B, and C are 3.4, 3.1, and 2.0 respectively. Thus, Candidate A is declared the winner. Race analogy. The tiebreaking rule can be explained using an analogy where every candidate is in a race. Each candidate takes 1 minute to run from one grade to the next, and they run at a constant speed when moving from one grade to the next. The winner is the first candidate to cross the finish line at 50% of the vote. Mathematical formula. Say the median grade of a candidate formula_0 is formula_1 (when there is a tie, we define the median as halfway between the neighboring grades). Let formula_2 (the share of "proponents") refer to the share of electors giving formula_0 a score strictly better than the median grade. The share of "opponents" of formula_0, written formula_3, is the share of grades falling below the median. Then the complete score for GMJ is given by the following formula:formula_4 Additional tie-breaking. 
In the unusual case of a tie where the formula above does not determine a single winner (if several candidates have exactly the same score), ties can be broken by binning together the 3 grades closest to the median, then repeating the tie-breaking procedure. In the example above, we would combine all "Good," "Fair," and "Passable" grades into a new "Passable to Good" grade, then apply the same tie-breaking formula as before. This process can be repeated multiple times (binning more and more grades) until a winner is found. Properties and advantages. Advantages and disadvantages common to highest-median rules. As an electoral system, the graduated majority judgment shares most of its advantages with other highest-median voting rules such as majority judgment, including its resistance to tactical voting. It also shares most of its disadvantages (for example, it fails the participation criterion, and can fail the majority criterion arbitrarily badly). Specific advantages of graduated majority judgment. The tie-breaking formula of the graduated majority judgment presents specific advantages over the other highest-median voting rules. Continuity. The function defined by the graduated majority judgment tie-breaking formula is a continuous function (as well as being almost-everywhere differentiable), whereas the functions of majority judgment and typical judgment are discontinuous. In other words, a small change in the number of votes for each candidate is unlikely to change the winner of the election, because small changes in vote shares result in only small changes in the overall rating. This property makes the graduated majority judgment a more robust voting method in the face of accusations of fraud or demands of a recount of all votes. As small differences of votes are less likely to change the outcome of the election, candidates are less likely to contest results. Rare ties. The additional tie-breaking procedures of graduated majority judgment mean that tied elections become extremely unlikely (far less likely than systems such as plurality). Whereas plurality votes, ranked voting, and approval voting can result in ties when working with small elections, the only way for two candidates to tie with usual judgment is to have all candidates receive exactly the same number of votes in "every" grade category, implying the chances of an undetermined election fall exponentially with the number of grades.
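To make the adjusted-median rule concrete, the following short Python sketch evaluates the score described above for a few candidates. The grade tallies, candidate names and the 0-6 scale are invented for illustration, and the sketch uses the lower median (it ignores the halfway convention for exact ties mentioned above); only the formula itself is taken from this article.

```python
from fractions import Fraction

def gmj_score(tally):
    """Adjusted-median (graduated majority judgment) score for one candidate.

    `tally` maps each numeric grade to the number of ballots giving that grade.
    Returns alpha + (1/2) * (p - q) / (1 - p - q), where alpha is the median
    grade and p, q are the shares of ballots strictly above / strictly below it.
    """
    total = sum(tally.values())
    cumulative = 0
    for grade in sorted(tally):            # find the median grade alpha
        cumulative += tally[grade]
        if 2 * cumulative >= total:
            alpha = grade
            break
    p = Fraction(sum(n for g, n in tally.items() if g > alpha), total)  # proponents
    q = Fraction(sum(n for g, n in tally.items() if g < alpha), total)  # opponents
    return alpha + Fraction(1, 2) * (p - q) / (1 - p - q)

# Invented three-candidate election on a 0-6 grade scale (100 ballots each).
ballots = {
    "A": {6: 20, 5: 15, 4: 10, 3: 20, 2: 15, 1: 10, 0: 10},
    "B": {6: 15, 5: 20, 4: 10, 3: 15, 2: 20, 1: 10, 0: 10},
    "C": {6: 10, 5: 10, 4: 15, 3: 20, 2: 15, 1: 15, 0: 15},
}
scores = {name: gmj_score(t) for name, t in ballots.items()}
print({name: round(float(s), 3) for name, s in scores.items()})
print("winner:", max(scores, key=scores.get))
```

With these invented tallies all three candidates share the median grade 3, and the correction term separates them (A scores 3.25, B about 3.17, C 2.75), illustrating how the tie among identical medians is resolved.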
[ { "math_id": 0, "text": "c" }, { "math_id": 1, "text": "\\alpha _c" }, { "math_id": 2, "text": "p_c" }, { "math_id": 3, "text": "q_c" }, { "math_id": 4, "text": "n_c = \\alpha_c + \\frac{1}{2} \\frac{p_c-q_c}{1-p_c-q_c}" } ]
https://en.wikipedia.org/wiki?curid=75739302
75745524
Éamon Hanrahan
Irish geotechnical engineer and professor of soil mechanics Edward (Éamon) T. Hanrahan (1917 – 30 November 2012) was an Irish civil engineer, Associate Professor of Civil Engineering, and Head of department in the School of Civil, Structural and Environmental Engineering at University College Dublin (UCD). Owing to his contributions to geotechnical engineering education and practice in Ireland, a biennial lecture at UCD's Geotechnical Society is named in his honour. Hanrahan undertook studies and research on soil mechanics and foundation engineering, particularly on soft soils such as peat. In 1955, he created the first postgraduate soil mechanics course for students in Ireland. He published work in Irish and British journals including "Géotechnique", and several of his works on peat and glacial tills continue to be cited in soil mechanics and geotechnical engineering research. Early life and education. Hanrahan was born in Limerick City in 1917. He attended local primary and secondary schools before studying engineering at UCD. He was awarded a BE degree in Civil Engineering in 1939. Career. After working in England during the 1940s, Hanrahan returned to Ireland and joined the faculty at UCD in 1948, where he remained until his retirement in 1987. He pursued interests in soil mechanics and foundation engineering from early in his career. Hanrahan was awarded an ME degree in 1946, a PhD in 1954, and a DSc in 1983. Hanrahan played a role in establishing geotechnical engineering research at UCD by developing the first postgraduate course in soil mechanics there in 1955. His research focused on the mechanical properties and deformation characteristics of soft soils, especially peat. Hanrahan developed laboratory facilities to support his research at UCD. In 1953 and 1954, he designed and supervised remediation works to a road between Edenderry and Rathangan which had begun to disintegrate in the late 1940s. An initial repair strategy had involved the uncontrolled use of gravel for consolidation, leading to uneven settlements and road deformations. To rectify this, Hanrahan proposed reconstruction using bales of horticultural peat as a lightweight fill, topped with gravel, which successfully stabilised the road. Hanrahan undertook experiments to measure pore-water pressure during shearing and consolidation, and examined the strength of peat under various conditions. In 1954, he published work discussing the effects of load and time on peat's permeability and investigated its compressibility and rate of consolidation. He built upon foundational studies on soft soils by Albert Sybrandus Keverling Buisman and others, conducting laboratory tests on peat to explore its physical properties such as permeability, shear, and consolidation characteristics. Hanrahan's work extended the theoretical framework provided by Keverling Buisman by offering practical insights and empirical data, particularly focusing on the unique behaviour of peat under various conditions. This progression signified a development from theoretical and generalized concepts of soft soil mechanics towards a more specific and detailed understanding of peat's behavior in engineering applications. His work was published in several national and international journals and is still cited, including his papers in "Géotechnique" in 1954 and 1964. His book, an analysis of soil behaviour under applied loading, entitled "The Geotechnics of Real Materials: The formula_0, formula_1 Method", was published by Elsevier in 1985.
Hanrahan's contributions to soil mechanics were not limited to peat and soft soils; he also contributed to an improved understanding of the behaviour of glacial tills in Ireland. Personal life. Hanrahan had an interest in the Irish language and Gaelic games, and he was a lifelong member of the St Vincent de Paul Society. Legacy. Hanrahan's work in geotechnical engineering, especially his research on peat and soft soils, has been widely cited. A biennial lecture at the Geotechnical Society of UCD is named in his honour. References.
[ { "math_id": 0, "text": "E_g" }, { "math_id": 1, "text": "E_k" } ]
https://en.wikipedia.org/wiki?curid=75745524
75746300
Biodiversity Impact Credit
A metric of biodiversity designed for commercial use A Biodiversity Impact Credit (BIC) is a transferable biodiversity credit designed to reduce global species extinction risk. The underlying BIC metric, developed by academics working at Queen Mary University of London and Bar-Ilan University, is given by a simple formula that quantifies the positive and negative effects that interventions in nature have on the mean long-term survival probability of species. In particular, an organisation's global footprint in terms of BICs can be computed from PDF-based biodiversity footprints. The metric is broadly applicable across taxa (taxonomic groups) and ecosystems. Organisations whose overall biodiversity impact is positive in terms of the BIC metric contribute to achieving the objective of the Global Biodiversity Framework to "significantly reduce extinction risk". Use of BICs by businesses has been recommended by the Task Force on Nature-related Financial Disclosures and the first provider of BICs for sale is Botanic Gardens Conservation International (BGCI). The credits are generated by BGCI's international member organisations by rebuilding the populations of tree species at high risk of extinction under the IUCN Red List methodology. Theory. Definition. Users of BICs distinguish between the metric's scientific definition and how metric values are estimated through methodologies and approximations suitable for particular contexts. This mirrors the situation with carbon credits, which are designed to quantify avoidance or reductions of atmospheric carbon dioxide load but in practice are estimated using a broad variety of context-specific methodologies. For a given taxonomic or functional group of formula_0 species, let formula_1 be a measure of the current global population size of the formula_2th species. This can be measured, e.g., by the number of mature individuals or population biomass, in some cases even by the number of colonies, whichever approximates total reproductive value well. Denote by formula_3 the change in the global population of species formula_2 resulting from a specific intervention in nature. The corresponding Biodiversity Impact Credits are then given by formula_4 where formula_5 denotes the population size of species formula_2 at which environmental and demographic stochasticity are of the same magnitude. Calculation. Depending on the kind of intervention, the system affected and the available data, a variety of methods is available to estimate BICs. Since typical values of formula_5 lie in the range of 1 to 100 adult individuals, the contribution of formula_5 in the definition above is often negligibly small compared to formula_1. The formula then simplifies to formula_6 In projects that aim to rebuild the population of a single endangered species formula_2, the term associated with that species will often dominate the sum in the formula above so that it simplifies further to formula_7 When a species restoration project has increased the population of a species by an amount that is much larger than the original population (and formula_5) and no comparable increases in the population of that species have occurred elsewhere, then the species' current population formula_1 is nearly identical to the increase formula_8 of the population achieved. 
In this case, the formula above simplifies to formula_9 For use over large areas, approximations expressing BICs in terms of Range Size Rarity, Potentially Disappearing Fraction (PDF) of species, or combinations thereof are available. In particular, an organisation's global footprint in terms of BICs can be computed from PDF-based biodiversity footprints. Interpretation. As a simple interpretation, the BIC metric measures the equivalent number of endangered species whose populations have been restored or (for negative BIC) the number of species that should be restored to achieve net zero biodiversity impact. This follows from the above approximation that BIC = 1 for the restoration of a single threatened species. However, the BIC metric goes beyond simply counting the number of threatened species that have been restored. It takes into account that decline or recovery of a species can be the result of many small impacts by different actors and attributes both positive and negative credits accordingly. Specifically, it is constructed such that, according to a simple model, BIC > 0 implies that the underlying intervention or combination of interventions leads to a reduction of mean long-term global species extinction risk for the taxonomic or functional group considered. According to the same model, a perfect market for BICs would lead to near-optimal allocation of resources to long-term species conservation. Compatibility with other standards. The BIC metric aligns with other globally-recognised biodiversity measures such as the Range Size Rarity, the Species Threat Abatement and Recovery Metric (START) by IUCN/TNFD, and the Ecosystem Damage Metric underlying the Biodiversity Footprint for Financial Institutions (BFFI). Biodiversity Impact Credits in practice. Rationale. The search for standardised systems to quantify biodiversity impacts has gained momentum in light of the accelerating rates of biodiversity loss worldwide. Traditional biodiversity conservation efforts can lack scalability and are hard to measure: improving one area of land or river has a different impact on local biodiversity from improving another, so their impacts are difficult to compare. BICs were developed with the aim to simplify assessments of biodiversity change by focusing on reducing species' extinction risks. The 2022 United Nations Biodiversity Conference emphasised the importance of global collaboration to halt biodiversity loss, marking the adoption of the Kunming-Montreal Global Biodiversity Framework (GBF). BICs are designed to address Target 4 of this framework ("to halt extinction of known threatened species ... and significantly reduce extinction risk") and Target 15 ("[Take measures] to ensure that large transnational companies and financial institutions [...] transparently disclose their risks, dependencies and impacts on biodiversity ... in order to progressively reduce negative impacts"). The Task Force on Nature-related Financial Disclosures via their LEAP methodology recommends use of BICs to quantify impacts on species extinction risk in version 1.1 of their disclosure recommendations. The BIC methodology was one of four recognised metrics for assessing extinction risk. Trees are at the base of the ecological pyramid. Countless species rely on native trees for survival, including fungi, lichen, insects, birds and other vertebrates. Repopulating native tree species improves local biodiversity, helps prevent soil erosion, conserves water and helps cool the planet as well as being a carbon store.
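Returning to the Calculation section above, the following Python sketch evaluates the BIC sum for a hypothetical intervention. All population figures, population changes and stochasticity thresholds below are invented for illustration; only the formula itself comes from this article.

```python
def biodiversity_impact_credits(species):
    """Evaluate BIC = sum_i dN_i / (N*_i + N_i) over a list of species.

    Each entry is a tuple (N_i, dN_i, N_star_i): the current global population,
    the change in population caused by the intervention, and the stochasticity
    threshold N*_i (typically in the range of 1 to 100 mature individuals).
    """
    return sum(dN / (n_star + n) for n, dN, n_star in species)

# Hypothetical restoration project touching three threatened tree species.
project = [
    (250.0,  200.0, 20.0),   # population nearly doubled from a small base
    (5000.0,  50.0, 20.0),   # modest gain for a larger population
    (800.0,  -10.0, 20.0),   # small incidental loss for a third species
]
print(round(biodiversity_impact_credits(project), 3))   # about 0.74
```

The dominant contribution comes from the species with the small starting population, which matches the interpretation above that restoring one highly threatened species is worth roughly one credit.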
BGCI developed the GlobalTreeSearch database which is the only comprehensive, geo-referenced list of all the world's c.60,000 tree species. Working with the International Union for Conservation of Nature (IUCN) they then produced the Global Tree Assessment which concluded that more than 17,500 tree species (c.30%) are threatened with extinction. Finally, BGCI's Global Tree Conservation Program is the only global programme dedicated to saving the world's threatened tree species. Even before BICs were launched, over 400 rare and threatened tree species had already been conserved in over 50 countries. Implementation. One of the critical components of the BIC system is that it is being driven by conservation organisations like BGCI and their international network of members, and backed by theoretical analyses by several Queen Mary University of London academics. These organisations provide the practical know-how and decades of experience in species conservation, focusing particularly on native trees which play a pivotal role in local ecosystems. BGCI is now mediating issuance of transferable BIC certificates to organisations who sponsor tree conservation projects by BGCI member organisations. The BIC system has been designed for easy adoption and scalability. This is crucial for engaging financial institutions and other large corporations that require streamlined, global, comparable, and straightforward metrics to set their sustainability goals. BGCI unveiled their Global Biodiversity Standard at the 2021 United Nations Climate Change Conference – a global biodiversity accreditation framework. BICs are due to be formally launched in early 2024. Critique. Biodiversity credits have been criticised by some who say that putting a monetary value on nature is wrong or regard it as impossible because of the complexity of biodiversity. Others say that they are always bought to offset damage to nature. Biodiversity credits have also been criticised as a way for companies to make false sustainability claims, a practice called greenwashing. Since February 2024, a Biodiversity Net Gain policy has been in place in England. Under this policy, developers must buy biodiversity credits from the government as a last resort if they cannot achieve net gain in biodiversity in other ways. It is not yet known how successful these requirements for builders to compensate for nature loss will be. References.
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "N_i" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "\\Delta N_i" }, { "math_id": 4, "text": "\n\\text{BIC}=\\sum_i^S \\frac{\\Delta N_{i}}{N_i^*+N_i},\n" }, { "math_id": 5, "text": "N^*_i" }, { "math_id": 6, "text": "\n\\text{BIC}=\\sum_i^S \\frac{\\Delta N_{i}}{N_i}.\n" }, { "math_id": 7, "text": "\n\\text{BIC}=\\frac{\\Delta N_{i}}{N_i}.\n" }, { "math_id": 8, "text": "\\Delta N_{i}" }, { "math_id": 9, "text": "\\text{BIC}=1." } ]
https://en.wikipedia.org/wiki?curid=75746300
75767512
Beryllocene
Chemical compound Beryllocene is an organoberyllium compound with the chemical formula Be(C5H5)2. It was first prepared in 1959. The colorless substance can be crystallized from petroleum ether in the form of white needles at −60 °C and decomposes quickly upon contact with atmospheric oxygen and water. Preparation. Beryllocene can be prepared by reacting beryllium chloride and sodium cyclopentadienide in benzene or diethyl ether: formula_0 Properties. Physical. In contrast to the uncharged metallocenes of the transition metals V, Cr, Fe, Co, Ni, Ru and Os, which have a strictly symmetrical and therefore dipoleless structure, beryllocene has an electric dipole moment of 2.46 Debye (in benzene), or 2.24 Debye (in cyclohexane), indicating asymmetry of the molecule. In the IR spectrum there are signals at 1524, 1610, 1669, 1715 and 1733 cm−1, which also indicate that the structure does not correspond to that of ferrocene. In contrast, the nuclear magnetic resonance spectrum shows only one signal down to −135 °C, indicating either a symmetrical structure or a rapid fluctuation of the rings. Structure. Beryllocene shows different molecular geometries depending on the physical state. The low-temperature X-ray structure analysis shows a slipped sandwich structure, i.e. the rings are offset from each other – one ring is η5 coordinated with a Be-Cp distance of 152 pm, the second only η1 coordinated (Be-Cp distance: 181 pm). The reason for the η5, η1 structure is that the orbitals of beryllocene can only be occupied with a maximum of 8 valence electrons. In the gas phase both rings appear to be η5 coordinated. In fact, one ring is significantly further from the central atom than the other (190 and 147 pm) and the apparent η5 coordination is due to a rapid fluctuation of the bond. Based on gas-phase electron diffraction studies at 120 °C, Arne Haaland concluded in 1979 that the two rings are only about 80 pm shifted from each other and are not coordinated η5,η1, but rather η5,η3. Like beryllocene, the octamethyl derivative Be(C5Me4H)2 has a slipped sandwich structure with η5,η1 coordination. In contrast, Be(C5Me5)2 shows the classic η5,η5 coordination. In the crystal, however, the Be-C distances vary between 196.9(1) and 211.4(1) pm. Chemical. Beryllocene decomposes relatively quickly in tetrahydrofuran, forming a yellowish gel. It reacts violently in water to produce beryllium hydroxide and cyclopentadiene: Like magnesocene, beryllocene also forms ferrocene with iron(II) chloride. The driving force is the formation of the very stable ferrocene molecule. It is predicted to react with beryllium to generate C5H5BeBeC5H5. Safety. Beryllocene is toxic and carcinogenic. References.
[ { "math_id": 0, "text": "\\mathrm{BeCl_2 +\\ 2\\ Na(C_5H_5)\\ \\xrightarrow[]{Et_2O} Be(C_5H_5)_2 + 2\\ NaCl }" } ]
https://en.wikipedia.org/wiki?curid=75767512
75768013
Cost-sensitive machine learning
Cost-sensitive machine learning is an approach within machine learning that considers varying costs associated with different types of errors. This method diverges from traditional approaches by introducing a cost matrix, explicitly specifying the penalties or benefits for each type of prediction error. The inherent difficulty that cost-sensitive machine learning tackles is that minimizing different kinds of classification errors is a multi-objective optimization problem. Overview. Cost-sensitive machine learning optimizes models based on the specific consequences of misclassifications, making it a valuable tool in various applications. It is especially useful in problems with a high imbalance in class distribution and a high imbalance in associated costs. Cost-sensitive machine learning introduces a scalar cost function in order to find one (of multiple) Pareto optimal points in this multi-objective optimization problem. Cost Matrix. The cost matrix is a crucial element within cost-sensitive modeling, explicitly defining the costs or benefits associated with different prediction errors in classification tasks. Represented as a table, the matrix aligns true and predicted classes, assigning a cost value to each combination. For instance, in binary classification, it may distinguish costs for false positives and false negatives. The utility of the cost matrix lies in its application to calculate the expected cost or loss. The formula, expressed as a double summation, utilizes joint probabilities: formula_0 Here, formula_1 denotes the joint probability of actual class formula_2 and predicted class formula_3, providing a nuanced measure that considers both the probabilities and associated costs. This approach allows practitioners to fine-tune models based on the specific consequences of misclassifications, adapting to scenarios where the impact of prediction errors varies across classes. Applications. Fraud Detection. In the realm of data science, particularly in finance, cost-sensitive machine learning is applied to fraud detection. By assigning different costs to false positives and false negatives, models can be fine-tuned to minimize the overall financial impact of misclassifications. Medical Diagnostics. In healthcare, cost-sensitive machine learning plays a role in medical diagnostics. The approach allows for customization of models based on the potential harm associated with misdiagnoses, ensuring a more patient-centric application of machine learning algorithms. Challenges. A typical challenge in cost-sensitive machine learning is the reliable determination of the cost matrix, which may evolve over time.
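The expected-loss formula above is straightforward to evaluate once the joint probabilities and the cost matrix are laid out as arrays. The following Python sketch does this for an invented binary-classification example and also shows the kind of cost-sensitive decision rule such a matrix implies; all numbers are illustrative assumptions, not results from any particular study.

```python
import numpy as np

# Joint probabilities P(actual=i, predicted=j), rows = actual class,
# columns = predicted class (values invented; they sum to 1).
joint = np.array([[0.80, 0.05],    # actual negative: TN, FP
                  [0.03, 0.12]])   # actual positive: FN, TP

# Cost matrix aligned the same way: correct predictions cost nothing,
# a false positive costs 1 unit and a false negative costs 10 units.
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])

# Expected loss = sum_i sum_j P(actual_i, predicted_j) * Cost_ij
expected_loss = float(np.sum(joint * cost))
print(expected_loss)   # 0.05*1 + 0.03*10 = 0.35

# A cost-sensitive decision rule: predict the class that minimises the
# conditional expected cost given the model's probability p of the positive class.
def predict(p_positive):
    cost_if_pred_neg = p_positive * cost[1, 0]          # risk of a false negative
    cost_if_pred_pos = (1 - p_positive) * cost[0, 1]    # risk of a false positive
    return int(cost_if_pred_pos < cost_if_pred_neg)     # 1 = predict positive

print([predict(p) for p in (0.05, 0.2, 0.6)])  # [0, 1, 1]: threshold drops below 0.5
```

With these costs the optimal decision threshold moves from 0.5 down to 1/11, which is exactly the kind of behaviour the cost matrix is meant to induce in imbalanced-cost problems.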
[ { "math_id": 0, "text": " \\text{Expected Loss} = \\sum_{i} \\sum_{j} P(\\text{Actual}_i, \\text{Predicted}_j) \\cdot \\text{Cost}_{\\text{Actual}_i, \\text{Predicted}_j} " }, { "math_id": 1, "text": "P(\\text{Actual}_i, \\text{Predicted}_j)" }, { "math_id": 2, "text": "i " }, { "math_id": 3, "text": "j" } ]
https://en.wikipedia.org/wiki?curid=75768013
7577556
Oort constants
Parameters characterizing properties of our galaxy The Oort constants (discovered by Jan Oort) formula_0 and formula_1 are empirically derived parameters that characterize the local rotational properties of our galaxy, the Milky Way, in the following manner: formula_2 where formula_3 and formula_4 are the rotational velocity and distance to the Galactic Center, respectively, measured at the position of the Sun, and v and r are the velocities and distances at other positions in our part of the galaxy. As derived below, A and B depend only on the motions and positions of stars in the solar neighborhood. As of 2018, the most accurate values of these constants are formula_0 15.3 ± 0.4 km s−1 kpc−1, formula_1 −11.9 ± 0.4 km s−1 kpc−1. From the Oort constants, it is possible to determine the orbital properties of the Sun, such as the orbital velocity and period, and infer local properties of the Galactic disk, such as the mass density and how the rotational velocity changes as a function of radius from the Galactic Center. Historical significance and background. By the 1920s, a large fraction of the astronomical community had recognized that some of the diffuse, cloud-like objects, or nebulae, seen in the night sky were collections of stars located beyond our own, local collection of star clusters. These "galaxies" had diverse morphologies, ranging from ellipsoids to disks. The concentrated band of starlight that is the visible signature of the Milky Way was indicative of a disk structure for our galaxy; however, our location within our galaxy made structural determinations from observations difficult. Classical mechanics predicted that a collection of stars could be supported against gravitational collapse by either random velocities of the stars or their rotation about its center of mass. For a disk-shaped collection, the support should be mainly rotational. Depending on the mass density, or distribution of the mass in the disk, the rotation velocity may be different at each radius from the center of the disk to the outer edge. A plot of these rotational velocities against the radii at which they are measured is called a rotation curve. For external disk galaxies, one can measure the rotation curve by observing the Doppler shifts of spectral features measured along different galactic radii, since one side of the galaxy will be moving towards our line of sight and one side away. However, our position in the Galactic midplane of the Milky Way, where dust in molecular clouds obscures most optical light in many directions, made obtaining our own rotation curve technically difficult until the discovery of the 21 cm hydrogen line in the 1930s. To confirm the rotation of our galaxy prior to this, in 1927 Jan Oort derived a way to measure the Galactic rotation from just a small fraction of stars in the local neighborhood. As described below, the values he found for formula_0 and formula_1 proved not only that the Galaxy was rotating but also that it rotates differentially, or as a fluid rather than a solid body. Derivation. Consider a star in the midplane of the Galactic disk with Galactic longitude formula_5 at a distance formula_6 from the Sun. Assume that both the star and the Sun have circular orbits around the center of the Galaxy at radii of formula_7 and formula_8 from the Galactic Center and rotational velocities of formula_9 and formula_10, respectively. 
The motion of the star along our line of sight, or "radial velocity", and motion of the star across the plane of the sky, or "transverse velocity", as observed from the position of the Sun are then: formula_11 With the assumption of circular motion, the rotational velocity is related to the angular velocity by formula_12 and we can substitute this into the velocity expressions: formula_13 From the geometry in Figure 1, one can see that the triangles formed between the Galactic Center, the Sun, and the star share a side or portions of sides, so the following relationships hold and substitutions can be made: formula_14 and with these we get formula_15 To put these expressions only in terms of the known quantities formula_5 and formula_6, we take a Taylor expansion of formula_16 about formula_8. formula_17 Additionally, we take advantage of the assumption that the stars used for this analysis are "local", i.e. formula_18 is small, and the distance d to the star is smaller than formula_7 or formula_8, and we take: formula_19. So: formula_20 Using the sine and cosine half angle formulae, these velocities may be rewritten as: formula_21 Writing the velocities in terms of our known quantities and two coefficients formula_0 and formula_1 yields: formula_22 where formula_23 At this stage, the observable velocities are related to these coefficients and the position of the star. It is now possible to relate these coefficients to the rotation properties of the galaxy. For a star in a circular orbit, we can express the derivative of the angular velocity with respect to radius in terms of the rotation velocity and radius and evaluate this at the location of the Sun: formula_24 so formula_25 formula_0 is the Oort constant describing the shearing motion and formula_1 is the Oort constant describing the rotation of the Galaxy. As described below, one can measure formula_0 and formula_1 from plotting these velocities, measured for many stars, against the galactic longitudes of these stars. Measurements. As mentioned in an intermediate step in the derivation above: formula_26 Therefore, we can write the Oort constants formula_0 and formula_1 as: formula_27 Thus, the Oort constants can be expressed in terms of the radial and transverse velocities, distances, and galactic longitudes of objects in our Galaxy - all of which are, in principle, observable quantities. However, there are a number of complications. The simple derivation above assumed that both the Sun and the object in question are traveling on circular orbits about the Galactic center. This is not true for the Sun (the Sun's velocity relative to the local standard of rest is approximately 13.4 km/s), and not necessarily true for other objects in the Milky Way either. The derivation also implicitly assumes that the gravitational potential of the Milky Way is axisymmetric and always directed towards the center. This ignores the effects of spiral arms and the Galaxy's bar. Finally, both transverse velocity and distance are notoriously difficult to measure for objects which are not relatively nearby. Since the non-circular component of the Sun's velocity is known, it can be subtracted out from our observations to compensate. We do not know, however, the non-circular components of the velocity of each individual star we observe, so they cannot be compensated for in this way. 
But, if we plot transverse velocity divided by distance against galactic longitude for a large sample of stars, we know from the equations above that they will follow a sine function. The non-circular velocities will introduce scatter around this line, but with a large enough sample the true function can be fit for and the values of the Oort constants measured, as shown in figure 2. formula_0 is simply the amplitude of the sinusoid and formula_1 is the vertical offset from zero. Measuring transverse velocities and distances accurately and without biases remains challenging, though, and sets of derived values for formula_0 and formula_1 frequently disagree. Most methods of measuring formula_0 and formula_1 are fundamentally similar, following the above patterns. The major differences usually lie in what sorts of objects are used and details of how distance or proper motion are measured. Oort, in his original 1927 paper deriving the constants, obtained formula_0 31.0 ± 3.7 km s−1 kpc−1. He did not explicitly obtain a value for formula_1, but from his conclusion that the Galaxy was nearly in Keplerian rotation (as in example 2 below), we can presume he would have gotten a value of around −10 km s−1 kpc−1. These differ significantly from modern values, which is indicative of the difficulty of measuring these constants. Measurements of formula_0 and formula_1 since that time have varied widely; in 1964 the IAU adopted formula_0 15 km s−1 kpc−1 and formula_1 −10 km s−1 kpc−1 as standard values. Although more recent measurements continue to vary, they tend to lie near these values. The Hipparcos satellite, launched in 1989, was the first space-based astrometric mission, and its precise measurements of parallax and proper motion have enabled much better measurements of the Oort constants. In 1997 Hipparcos data were used to derive the values formula_0 14.82 ± 0.84 km s−1 kpc−1 and formula_1 −12.37 ± 0.64 km s−1 kpc−1. The Gaia spacecraft, launched in 2013, is an updated successor to Hipparcos; which allowed new and improved levels of accuracy in measuring four Oort constants formula_0 15.3 ± 0.4 km s−1 kpc−1, formula_1 -11.9 ± 0.4 km s−1 kpc−1, formula_28 −3.2 ± 0.4 km s−1 kpc−1 and formula_29 −3.3 ± 0.6 km s−1 kpc−1. With the Gaia values, we find formula_30 This value of Ω corresponds to a period of 226 million years for the sun's present neighborhood to go around the Milky Way. However, the time it takes for the Sun to go around the Milky Way (a galactic year) may be longer because (in a simple model) it is circulating around a point further from the centre of the galaxy where Ω is smaller (see Sun#Orbit in Milky Way). The values in km s−1 kpc−1 can be converted into milliarcseconds per year by dividing by 4.740. This gives the following values for the average proper motion of stars in our neighborhood at different galactic longitudes, after correction for the effect due to the Sun's velocity with respect to the local standard of rest: The motion of the sun towards the solar apex in Hercules adds a generally westward component to the observed proper motions of stars around Vela or Centaurus and a generally eastward component for stars around Cygnus or Cassiopeia. This effect falls off with distance, so the values in the table are more representative for stars that are further away. On the other hand, more distant stars or objects will not follow the table, which is for objects in our neighborhood. 
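The fitting procedure described above can be sketched in a few lines of Python: generate synthetic "local" stars from the model velocities v_r = A d sin(2l) and v_t = A d cos(2l) + B d plus random peculiar motions, then recover A (the amplitude) and B (the offset) by least squares. The assumed flat rotation curve with V0 = 220 km/s and R0 = 8 kpc, the 10 km/s velocity scatter and the star count are all invented for illustration, not taken from any survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed local values: a flat rotation curve with V0 = 220 km/s at
# R0 = 8 kpc gives A = -B = V0 / (2 R0).
V0, R0 = 220.0, 8.0                    # km/s, kpc
A_true, B_true = V0 / (2 * R0), -V0 / (2 * R0)

# Synthetic local stars: longitudes, distances and the model velocities
# plus random peculiar motions standing in for non-circular components.
l = rng.uniform(0.0, 2 * np.pi, 2000)          # galactic longitude (rad)
d = rng.uniform(0.1, 1.0, 2000)                # distance (kpc)
v_r = A_true * d * np.sin(2 * l) + rng.normal(0.0, 10.0, l.size)
v_t = A_true * d * np.cos(2 * l) + B_true * d + rng.normal(0.0, 10.0, l.size)

# Least-squares fit of v_t/d = A cos(2l) + B, and of v_r/d = A sin(2l)
# as an independent check on A.
M = np.column_stack([np.cos(2 * l), np.ones_like(l)])
(A_fit, B_fit), *_ = np.linalg.lstsq(M, v_t / d, rcond=None)
(A_fit_r,), *_ = np.linalg.lstsq(np.sin(2 * l)[:, None], v_r / d, rcond=None)

print(f"true A, B      : {A_true:6.2f}, {B_true:6.2f}  km/s/kpc")
print(f"fit from v_t/d : {A_fit:6.2f}, {B_fit:6.2f}")
print(f"A from v_r/d   : {A_fit_r:6.2f}")
```

The recovered values scatter around the true A = 13.75 and B = -13.75 km/s/kpc; the scatter mimics how non-circular stellar motions limit the precision of real determinations.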
For example, Sagittarius A*, the radio source at the centre of the galaxy, will have a proper motion of approximately Ω or 5.7 mas/y southwestward (with a small adjustment due to the Sun's motion toward the solar apex) even though it is in Sagittarius. Note that these proper motions cannot be measured against "background stars" (because the background stars will have similar proper motions), but must be measured against more stationary references such as quasars. Meaning. The Oort constants can greatly enlighten one as to how the Galaxy rotates. As one can see, formula_0 and formula_1 are both functions of the Sun's orbital velocity as well as the first derivative of the Sun's velocity. As a result, formula_0 describes the shearing motion in the disk surrounding the Sun, while formula_1 describes the angular momentum gradient in the solar neighborhood, also referred to as vorticity. To illuminate this point, one can look at three examples that describe how stars and gas orbit within the Galaxy, giving intuition as to the meaning of formula_0 and formula_1. These three examples are solid body rotation, Keplerian rotation and constant rotation over different annuli. These three types of rotation are plotted as a function of radius (formula_7), and are shown in Figure 3 as the green, blue and red curves respectively. The grey curve is approximately the rotation curve of the Milky Way. Solid body rotation. To begin, let one assume that the rotation of the Milky Way can be described by solid body rotation, as shown by the green curve in Figure 3. Solid body rotation assumes that the entire system is moving as a rigid body with no differential rotation. This results in a constant angular velocity, formula_31, which is independent of formula_7. Following this we can see that velocity scales linearly with formula_7, formula_32, thus formula_33 Using the two Oort constant identities, one then can determine what the formula_0 and formula_1 constants would be, formula_34 This demonstrates that in solid body rotation, there is no shear motion, i.e. formula_35, and the vorticity is just the angular rotation, formula_36. This is what one would expect because there is no difference in orbital velocity as radius increases, thus no stress between the annuli. Also, in solid body rotation, the only rotation is about the center, so it is reasonable that the resulting vorticity in the system is described by the only rotation in the system. One can actually measure formula_0 and find that it is non-zero (formula_37 km s−1 kpc−1). Thus the galaxy does not rotate as a solid body in our local neighborhood, but may in the inner regions of the Galaxy. Keplerian rotation. The second illuminating example is to assume that the orbits in the local neighborhood follow a Keplerian orbit, as shown by the blue line in Figure 3. The orbital motion in a Keplerian orbit is described by, formula_38 where formula_39 is the gravitational constant, and formula_40 is the mass enclosed within radius formula_41. The derivative of the velocity with respect to the radius is, formula_42 The Oort constants can then be written as follows, formula_43 For values of Solar velocity, formula_44 km/s, and radius to the Galactic Center, formula_45 kpc, the Oort constants are formula_46 km s−1 kpc−1, and formula_47 km s−1 kpc−1. However, the observed values are formula_37 km s−1 kpc−1 and formula_48 km s−1 kpc−1. Thus, Keplerian rotation is not the best description of the Milky Way rotation.
Furthermore, although this example does not describe the local rotation, it can be thought of as the limiting case that describes the minimum velocity an object can have in a stable orbit. Flat rotation curve. The final example is to assume that the rotation curve of the Galaxy is flat, i.e. formula_49 is constant and independent of radius, formula_41. The rotation velocity is in between that of a solid body and of Keplerian rotation, and is the red dotted line in Figure 3. With a constant velocity, it follows that the radial derivative of formula_49 is 0, formula_50 and therefore the Oort constants are, formula_51 Using the local velocity and radius given in the last example, one finds formula_52 km s−1 kpc−1 and formula_53 km s−1 kpc−1. This is close to the actual measured Oort constants and tells us that the constant-speed model is the closest of these three to reality in the solar neighborhood. But in fact, as mentioned above, formula_54 is negative, meaning that at our distance, speed decreases with distance from the centre of the galaxy. What one should take away from these three examples is that with a remarkably simple model, the rotation of the Milky Way can be described by these two constants. The first two examples are used as constraints on the Galactic rotation, for they show the fastest and slowest the Galaxy can rotate at a given radius. The flat rotation curve serves as an intermediate step between the two rotation curves, and in fact gives the most reasonable Oort constants as compared to current measurements. Uses. One of the major uses of the Oort constants is to calibrate the galactic rotation curve. A relative curve can be derived from studying the motions of gas clouds in the Milky Way, but to calibrate the actual absolute speeds involved requires knowledge of V0. We know that: formula_55 Since R0 can be determined by other means (such as by carefully tracking the motions of stars near the Milky Way's central supermassive black hole), knowing formula_0 and formula_1 allows us to determine V0. It can also be shown that the mass density formula_56 can be given by: formula_57 So the Oort constants can tell us something about the mass density at a given radius in the disk. They are also useful to constrain mass distribution models for the Galaxy. As well, in the epicyclic approximation for nearly circular stellar orbits in a disk, the epicyclic frequency formula_58 is given by formula_59, where formula_31 is the angular velocity. Therefore, the Oort constants can tell us a great deal about motions in the galaxy. References.
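As a small worked example of the Meaning and Uses sections, the following Python sketch takes the Gaia-based values of A and B quoted earlier, together with an assumed R0 = 8 kpc (R0 is not determined by A and B themselves), and derives the local angular velocity, circular speed, rotation-curve slope, epicyclic frequency and orbital period. It reproduces the −3.4 km s−1 kpc−1 slope and the 226-million-year period quoted above.

```python
import math

# Gaia-based Oort constants quoted above; R0 = 8 kpc is an assumed value.
A, B = 15.3, -11.9            # km/s/kpc
R0 = 8.0                      # kpc

omega = A - B                 # local angular velocity, km/s/kpc
V0 = R0 * omega               # local circular speed, km/s
dvdr = -A - B                 # local slope of the rotation curve, km/s/kpc
kappa = math.sqrt(-4.0 * B * omega)   # epicyclic frequency, km/s/kpc

KM_PER_KPC = 3.0857e16        # kilometres in one kiloparsec
SEC_PER_YEAR = 3.156e7        # seconds in one year
period_myr = 2.0 * math.pi / (omega / KM_PER_KPC) / SEC_PER_YEAR / 1e6

print(f"Omega  = {omega:.1f} km/s/kpc")
print(f"V0     = {V0:.0f} km/s (for the assumed R0)")
print(f"dv/dr  = {dvdr:.1f} km/s/kpc")
print(f"kappa  = {kappa:.1f} km/s/kpc (kappa/Omega = {kappa / omega:.2f})")
print(f"period = {period_myr:.0f} million years")
```

The ratio kappa/Omega of about 1.32 falls between the Keplerian value of 1 and the flat-rotation-curve value of about 1.41, consistent with the discussion of the three model rotation curves above.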
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "\n\\begin{align} \n& A=\\frac{1}{2}\\left(\\frac{V_{0}}{R_{0}}-\\frac{dv}{dr}\\Bigg\\vert_{R_{0}}\\right) \\\\\n& B= - \\frac{1}{2}\\left(\\frac{V_{0}}{R_{0}}+\\frac{dv}{dr}\\Bigg\\vert_{R_{0}}\\right) \\\\\n\\end{align}\n" }, { "math_id": 3, "text": "V_0" }, { "math_id": 4, "text": "R_0" }, { "math_id": 5, "text": "l" }, { "math_id": 6, "text": "d" }, { "math_id": 7, "text": "R" }, { "math_id": 8, "text": "R_{0}" }, { "math_id": 9, "text": "V" }, { "math_id": 10, "text": "V_{0}" }, { "math_id": 11, "text": "\n\\begin{align} \n& V_{\\text{obs, r}}=V_{\\text{star, r}}-V_{\\text{sun, r}}=V\\cos\\left(\\alpha\\right)-V_{0}\\sin\\left(l\\right) \\\\\n& V_{\\text{obs, t}}=V_{\\text{star, t}}-V_{\\text{sun, t}}=V\\sin\\left(\\alpha\\right)-V_{0}\\cos\\left(l\\right) \\\\\n\\end{align}\n" }, { "math_id": 12, "text": "v=\\Omega r" }, { "math_id": 13, "text": "\n\\begin{align} \n& V_{\\text{obs, r}}=\\Omega R\\cos\\left(\\alpha\\right)-\\Omega_{0}R_{0}\\sin\\left(l\\right) \\\\\n& V_{\\text{obs, t}}=\\Omega R\\sin\\left(\\alpha\\right)-\\Omega_{0}R_{0}\\cos\\left(l\\right) \\\\\n\\end{align}\n" }, { "math_id": 14, "text": "\n\\begin{align} \n& R\\cos\\left(\\alpha\\right)=R_{0}\\sin\\left(l\\right) \\\\\n& R\\sin\\left(\\alpha\\right)=R_{0}\\cos\\left(l\\right)-d \\\\\n\\end{align}\n" }, { "math_id": 15, "text": "\n\\begin{align} \n& V_{\\text{obs, r}}=\\left(\\Omega-\\Omega_{0}\\right)R_{0}\\sin\\left(l\\right) \\\\\n& V_{\\text{obs, t}}=\\left(\\Omega-\\Omega_{0}\\right)R_{0}\\cos\\left(l\\right)-\\Omega d \\\\\n\\end{align}\n" }, { "math_id": 16, "text": "\\Omega-\\Omega_{0}" }, { "math_id": 17, "text": "\\left(\\Omega-\\Omega_{0}\\right)=\\left(R-R_{0}\\right)\\frac{d\\Omega}{dr}\\Bigg\\vert_{R_{0}}+..." 
}, { "math_id": 18, "text": "R-R_{0}" }, { "math_id": 19, "text": "R-R_{0}=-d \\cdot \\cos\\left(l\\right)" }, { "math_id": 20, "text": "\n\\begin{align} \n& V_{\\text{obs, r}}=-R_{0}\\frac{d\\Omega}{dr}\\Bigg\\vert_{R_{0}}d \\cdot \\cos\\left(l\\right)\\sin\\left(l\\right) \\\\\n& V_{\\text{obs, t}}=-R_{0}\\frac{d\\Omega}{dr}\\Bigg\\vert_{R_{0}}d \\cdot \\cos^{2}\\left(l\\right)-\\Omega d \\\\\n\\end{align}\n" }, { "math_id": 21, "text": "\n\\begin{align} \n& V_{\\text{obs, r}}=-R_{0}\\frac{d\\Omega}{dr}\\Bigg\\vert_{R_{0}}d\\frac{\\sin\\left(2l\\right)}{2} \\\\\n& V_{\\text{obs, t}}=-R_{0}\\frac{d\\Omega}{dr}\\Bigg\\vert_{R_{0}}d\\frac{\\left(\\cos\\left(2l\\right)+1\\right)}{2}-\\Omega d=-R_{0}\\frac{d\\Omega}{dr}\\Bigg\\vert_{R_{0}}d\\frac{\\cos\\left(2l\\right)}{2}+\\left(-\\frac{1}{2}R_{0}\\frac{d\\Omega}{dr}\\Bigg\\vert_{R_{0}}-\\Omega\\right)d \\\\\n\\end{align}\n" }, { "math_id": 22, "text": "\n\\begin{align} \n& V_{\\text{obs, r}}=Ad\\sin\\left(2l\\right) \\\\\n& V_{\\text{obs, t}}=Ad\\cos\\left(2l\\right)+Bd \\\\\n\\end{align}\n" }, { "math_id": 23, "text": "\n\\begin{align} \n& A=-\\frac{1}{2}R_{0}\\frac{d\\Omega}{dr}\\Bigg\\vert_{R_{0}} \\\\\n& B=-\\frac{1}{2}R_{0}\\frac{d\\Omega}{dr}\\Bigg\\vert_{R_{0}}-\\Omega \\\\\n\\end{align}\n" }, { "math_id": 24, "text": "\n\\begin{align} \n& \\Omega=\\frac{v}{r} \\\\\n& \\frac{d\\Omega}{dr}\\Bigg\\vert_{R_{0}}=\\frac{d(v/r)}{dr}\\Bigg\\vert_{R_{0}}=-\\frac{V_{0}}{R_{0}^{2}}+\\frac{1}{R_{0}}\\frac{dv}{dr}\\Bigg\\vert_{R_{0}} \\\\\n\\end{align}\n" }, { "math_id": 25, "text": "\n\\begin{align} \n& A=\\frac{1}{2}\\left(\\frac{V_{0}}{R_{0}}-\\frac{dv}{dr}\\Bigg\\vert_{R_{0}}\\right) \\\\\n& B=-\\frac{1}{2}\\left(\\frac{V_{0}}{R_{0}}+\\frac{dv}{dr}\\Bigg\\vert_{R_{0}}\\right) \\\\\n\\end{align}\n" }, { "math_id": 26, "text": "\n\\begin{align}\n& V_{\\text{obs, r}}=A\\,d\\,\\sin\\left(2l\\right) \\\\\n& V_{\\text{obs, t}}=A\\,d\\,\\cos\\left(2l\\right)+B\\,d \\\\\n\\end{align}\n" }, { "math_id": 27, "text": "\n\\begin{align}\n& A=\\frac{V_{\\text{obs, r}}}{d\\,\\sin\\left(2l\\right)} \\\\\n& B=\\frac{V_{\\text{obs, t}}}{d}-A\\,\\cos\\left(2l\\right) \\\\\n\\end{align}\n" }, { "math_id": 28, "text": "C" }, { "math_id": 29, "text": "K" }, { "math_id": 30, "text": "\n\\begin{align}\n& \\frac{dv}{dr}\\Bigg\\vert_{R_{0}}=-A-B=-3.4\\text{ km/s/kpc} \\\\\n& \\frac{V_{0}}{R_{0}}=\\Omega=A-B=27.2\\text{ km/s/kpc} \\\\\n\\end{align}\n" }, { "math_id": 31, "text": "\\Omega" }, { "math_id": 32, "text": " v \\propto r" }, { "math_id": 33, "text": "\n\\begin{align}\n&\\frac{d v}{dr} =\\frac{v}{r}= \\Omega \\\\\n\\end{align}\n" }, { "math_id": 34, "text": "\n\\begin{align}\n& A= \\frac{1}{2}\\left(\\frac{\\Omega_{0} R_{0}}{R_{0}}-{\\Omega}\\Bigg\\vert_{R_{0}}\\right)=0 \\\\\n& B=-\\frac{1}{2}\\left(\\frac{\\Omega_{0}R_{0}}{R_{0}}+{\\Omega}\\Bigg\\vert_{R_{0}}\\right)=-\\Omega_{0} \\\\\n\\end{align}\n" }, { "math_id": 35, "text": "A=0" }, { "math_id": 36, "text": "B=-\\Omega" }, { "math_id": 37, "text": "A=14" }, { "math_id": 38, "text": " v=\\sqrt{ \\frac{G M}{r} } ," }, { "math_id": 39, "text": "G" }, { "math_id": 40, "text": "M" }, { "math_id": 41, "text": "r" }, { "math_id": 42, "text": "\\frac{d v}{dr} = -\\frac{1}{2} \\sqrt{ \\frac{G M}{R^3} }=-\\frac{1}{2} \\frac{v}{r}" }, { "math_id": 43, "text": "\n\\begin{align} \n& A= \\frac{1}{2}\\left(\\frac{V_{0}}{R_{0}}+\\frac{v}{2r}\\Bigg\\vert_{R_{0}}\\right)= \\frac{3V_{0}}{4R_{0}} \\\\\n& B=-\\frac{1}{2}\\left(\\frac{V_{0}}{R_{0}}-\\frac{v}{2r}\\Bigg\\vert_{R_{0}}\\right)=-\\frac{1V_{0}}{4R_{0}} 
\\\\\n\\end{align}\n" }, { "math_id": 44, "text": "V_{0}=218" }, { "math_id": 45, "text": "R_{0}=8" }, { "math_id": 46, "text": "A= 20" }, { "math_id": 47, "text": "B=-7" }, { "math_id": 48, "text": "B = -12" }, { "math_id": 49, "text": "v" }, { "math_id": 50, "text": "\\frac{dv}{dr}=0" }, { "math_id": 51, "text": "\n\\begin{align} \n& A=\\frac{1}{2}\\left(\\frac{V_{0}}{R_{0}}-0\\Bigg\\vert_{R_{0}}\\right)=\\frac{1}{2}\\left(\\frac{V_{0}}{R_{0}}\\right) \\\\\n& B=-\\frac{1}{2}\\left(\\frac{V_{0}}{R_{0}}+0\\Bigg\\vert_{R_{0}}\\right)=-\\frac{1}{2}\\left(\\frac{V_{0}}{R_{0}}\\right) \\\\\n\\end{align}\n" }, { "math_id": 52, "text": "A= 13.6" }, { "math_id": 53, "text": "B=-13.6" }, { "math_id": 54, "text": "-A-B" }, { "math_id": 55, "text": "V_0 = R_0(A-B)\\,\\!" }, { "math_id": 56, "text": "\\rho_R" }, { "math_id": 57, "text": " \\rho_R = \\frac{B^2 - A^2}{2 \\pi G}" }, { "math_id": 58, "text": "\\kappa" }, { "math_id": 59, "text": "\\kappa^2 = -4B\\Omega" } ]
https://en.wikipedia.org/wiki?curid=7577556
7578186
Orbit portrait
In mathematics, an orbit portrait is a combinatorial tool used in complex dynamics for understanding the behavior of one-complex dimensional quadratic maps. In simple words one can say that it is : Definition. Given a quadratic map formula_0 from the complex plane to itself formula_1 and a repelling or parabolic periodic orbit formula_2 of formula_3, so that formula_4 (where subscripts are taken 1 + modulo formula_5), let formula_6 be the set of angles whose corresponding external rays land at formula_7. Then the set formula_8 is called the orbit portrait of the periodic orbit formula_9. All of the sets formula_6 must have the same number of elements, which is called the valence of the portrait. Examples. Parabolic or repelling orbit portrait. valence 2. formula_10 formula_11 formula_12 formula_13 valence 3. Valence is 3 so rays land on each orbit point. formula_14 For complex quadratic polynomial with c= -0.03111+0.79111*i portrait of parabolic period 3 orbit is : formula_15 Rays for above angles land on points of that orbit . Parameter c is a center of period 9 hyperbolic component of Mandelbrot set. For parabolic julia set c = -1.125 + 0.21650635094611*i. It is a root point between period 2 and period 6 components of Mandelbrot set. Orbit portrait of period 2 orbit with valence 3 is : formula_16 valence 4. formula_17 Formal orbit portraits. Every orbit portrait formula_18 has the following properties: Any collection formula_25 of subsets of the circle which satisfy these four properties above is called a formal orbit portrait. It is a theorem of John Milnor that every formal orbit portrait is realized by the actual orbit portrait of a periodic orbit of some quadratic one-complex-dimensional map. Orbit portraits contain dynamical information about how external rays and their landing points map in the plane, but formal orbit portraits are no more than combinatorial objects. Milnor's theorem states that, in truth, there is no distinction between the two. Trivial orbit portraits. Orbit portrait where all of the sets formula_6 have only a single element are called trivial, except for orbit portrait formula_26. An alternative definition is that an orbit portrait is nontrivial if it is maximal, which in this case means that there is no orbit portrait that strictly contains it (i.e. there does not exist an orbit portrait formula_27 such that formula_28). It is easy to see that every trivial formal orbit portrait is realized as the orbit portrait of some orbit of the map formula_29, since every external ray of this map lands, and they all land at distinct points of the Julia set. Trivial orbit portraits are pathological in some respects, and in the sequel we will refer only to nontrivial orbit portraits. Arcs. In an orbit portrait formula_25, each formula_6 is a finite subset of the circle formula_30, so each formula_6 divides the circle into a number of disjoint intervals, called complementary arcs based at the point formula_7. The length of each interval is referred to as its angular width. Each formula_7 has a unique largest arc based at it, which is called its critical arc. The critical arc always has length greater than formula_31 These arcs have the property that every arc based at formula_7, except for the critical arc, maps diffeomorphically to an arc based formula_32, and the critical arc covers every arc based at formula_32 once, except for a single arc, which it covers twice. The arc that it covers twice is called the critical value arc for formula_32. 
This is not necessarily distinct from the critical arc. When formula_33 escapes to infinity under iteration of formula_34, or when formula_33 is in the Julia set, then formula_33 has a well-defined external angle. Call this angle formula_35. formula_35 is in every critical value arc. Also, the two inverse images of formula_33 under the doubling map (formula_36 and formula_37) are both in every critical arc. Among all of the critical value arcs for all of the formula_6's, there is a unique smallest critical value arc formula_38, called the characteristic arc, which is strictly contained within every other critical value arc. The characteristic arc is a complete invariant of an orbit portrait, in the sense that two orbit portraits are identical if and only if they have the same characteristic arc. Sectors. Much as the rays landing on the orbit divide up the circle, they divide up the complex plane. For every point formula_7 of the orbit, the external rays landing at formula_7 divide the plane into formula_39 open sets called sectors based at formula_7. Sectors are naturally identified with the complementary arcs based at the same point. The angular width of a sector is defined as the length of its corresponding complementary arc. Sectors are called critical sectors or critical value sectors when the corresponding arcs are, respectively, critical arcs and critical value arcs. Sectors also have the interesting property that formula_40 is in the critical sector of every point, and formula_33, the critical value of formula_34, is in the critical value sector. Parameter wakes. Two parameter rays with angles formula_41 and formula_42 land at the same point of the Mandelbrot set in parameter space if and only if there exists an orbit portrait formula_43 with the interval formula_44 as its characteristic arc. For any orbit portrait formula_43 let formula_45 be the common landing point of the two external angles in parameter space corresponding to the characteristic arc of formula_43. These two parameter rays, along with their common landing point, split the parameter space into two open components. Let the component that does not contain the point formula_40 be called the formula_43-wake and denoted as formula_46. A quadratic polynomial formula_47 realizes the orbit portrait formula_18 with a repelling orbit exactly when formula_48. formula_18 is realized with a parabolic orbit only for the single value formula_49. Primitive and satellite orbit portraits. Other than the zero portrait, there are two types of orbit portraits: primitive and satellite. If formula_39 is the valence of an orbit portrait formula_43 and formula_23 is the recurrent ray period, then these two types may be characterized as follows: Generalizations. Orbit portraits turn out to be useful combinatorial objects in studying the connection between the dynamics and the parameter spaces of other families of maps as well. In particular, they have been used to study the patterns of all periodic dynamical rays landing on a periodic cycle of a unicritical anti-holomorphic polynomial. References.
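The combinatorial conditions on a formal orbit portrait are easy to check with exact rational arithmetic. The following Python sketch verifies, for two of the example portraits given earlier in this article, that the angle-doubling map carries each angle set onto the next one in the cycle (the second of the four conditions listed above); the remaining conditions are not checked here.

```python
from fractions import Fraction as F

def double(t):
    """The angle doubling map t -> 2t (mod 1) acting on external angles."""
    return (2 * t) % 1

def maps_cyclically(portrait):
    """Check that doubling carries each A_j bijectively onto A_{j+1}.

    `portrait` is a list of sets of external angles (as Fractions) in cyclic order.
    """
    n = len(portrait)
    return all(
        {double(t) for t in portrait[j]} == portrait[(j + 1) % n]
        for j in range(n)
    )

# Period-1 portrait with valence 3 from the examples above: {1/7, 2/7, 4/7}.
p1 = [{F(1, 7), F(2, 7), F(4, 7)}]

# Period-2 portrait with valence 3 (the parabolic c = -1.125 + 0.2165...i example).
p2 = [{F(22, 63), F(25, 63), F(37, 63)},
      {F(11, 63), F(44, 63), F(50, 63)}]

print(maps_cyclically(p1), maps_cyclically(p2))   # True True
```

For instance, doubling 4/7 gives 8/7 ≡ 1/7, so the single set of p1 maps onto itself, while the two sets of p2 are exchanged, as the definition requires.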
[ { "math_id": 0, "text": "f_c : z \\mapsto z^2 + c. " }, { "math_id": 1, "text": "f_c : \\mathbb{\\Complex} \\to \\mathbb{\\Complex} " }, { "math_id": 2, "text": "{\\mathcal O} = \\{z_1, \\ldots z_n\\}" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "f(z_j) = z_{j+1}" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "A_j" }, { "math_id": 7, "text": "z_j" }, { "math_id": 8, "text": "{\\mathcal P} = {\\mathcal P}({\\mathcal O}) = \\{A_1, \\ldots A_n\\}" }, { "math_id": 9, "text": "{\\mathcal O}" }, { "math_id": 10, "text": "\n{\\mathcal P} = \\left \\{\\left(\n\\frac{1}{3},\\frac{2}{3}\n\\right) \\right \\rbrace\n" }, { "math_id": 11, "text": "\n{\\mathcal P} = \\left \\{\n\n\\left( \\frac{3}{7} , \\frac{4}{7} \\right), \n\\left( \\frac{6}{7} , \\frac{1}{7} \\right), \n\\left( \\frac{5}{7} , \\frac{2}{7} \\right) \n \\right \\rbrace\n" }, { "math_id": 12, "text": "\n{\\mathcal P} = \\left \\{\n\n\\left( \\frac{4}{9} , \\frac{5}{9} \\right), \n\\left( \\frac{8}{9} , \\frac{1}{9} \\right), \n\\left( \\frac{7}{9} , \\frac{2}{9} \\right) \n \\right \\rbrace\n" }, { "math_id": 13, "text": "\n{\\mathcal P} = \\left \\{\n\n\\left( \\frac{11}{31} , \\frac{12}{31} \\right), \n\\left( \\frac{22}{31} , \\frac{24}{31} \\right), \n\\left( \\frac{13}{31} , \\frac{17}{31} \\right), \n\\left( \\frac{26}{31} , \\frac{3}{31} \\right), \n\\left( \\frac{21}{31} , \\frac{6}{31} \\right) \n \\right \\rbrace\n" }, { "math_id": 14, "text": "{\\mathcal P} = \\left \\{ \n\\left(\\frac{1}{7},\\frac{2}{7} ,\\frac{4}{7}\\right)\\right \\rbrace" }, { "math_id": 15, "text": "{\\mathcal P} = \\left \\{ \n\\left(\\frac{74}{511},\\frac{81}{511},\\frac{137}{511} \\right) , \n \\left(\\frac{148}{511},\\frac{162}{511},\\frac{274}{511} \\right) ,\n \\left(\\frac{296}{511},\\frac{324}{511},\\frac{37}{511} \\right) \n\\right \\rbrace" }, { "math_id": 16, "text": "{\\mathcal P} = \\left \\{ \n\\left(\\frac{22}{63},\\frac{25}{63},\\frac{37}{63} \\right) , \n\\left(\\frac{11}{63},\\frac{44}{63},\\frac{50}{63} \\right) \n\\right \\rbrace" }, { "math_id": 17, "text": "{\\mathcal P} = \\left \\{ \n\\left(\\frac{1}{15},\\frac{2}{15} ,\\frac{4}{15} ,\\frac{8}{15} \\right) \\right \\rbrace" }, { "math_id": 18, "text": "{\\mathcal P}" }, { "math_id": 19, "text": "{\\mathbb R} / {\\mathbb Z}" }, { "math_id": 20, "text": "A_{j+1}" }, { "math_id": 21, "text": "A_1, \\ldots, A_n" }, { "math_id": 22, "text": "rn" }, { "math_id": 23, "text": "r" }, { "math_id": 24, "text": " {\\mathbb R }/ {\\mathbb Z}" }, { "math_id": 25, "text": "\\{A_1, \\ldots, A_n\\}" }, { "math_id": 26, "text": "{{0}}" }, { "math_id": 27, "text": "\\{A^\\prime_1,\\ldots,A^\\prime_n\\}" }, { "math_id": 28, "text": "A_j \\subsetneq A^\\prime_j" }, { "math_id": 29, "text": "f_0(z) = z^2" }, { "math_id": 30, "text": "\\mathbb R / \\mathbb Z" }, { "math_id": 31, "text": "\\frac 1 2" }, { "math_id": 32, "text": "z_{j+1}" }, { "math_id": 33, "text": "c" }, { "math_id": 34, "text": "f_c" }, { "math_id": 35, "text": "\\theta_c" }, { "math_id": 36, "text": "\\frac {\\theta_c} 2" }, { "math_id": 37, "text": "\\frac {\\theta_c + 1} 2" }, { "math_id": 38, "text": "{\\mathcal I}_{\\mathcal P}" }, { "math_id": 39, "text": "v" }, { "math_id": 40, "text": "0" }, { "math_id": 41, "text": "t_-" }, { "math_id": 42, "text": "t_+" }, { "math_id": 43, "text": "\\mathcal P" }, { "math_id": 44, "text": "[t_-, t_+]" }, { "math_id": 45, "text": "r_{\\mathcal P}" }, { "math_id": 46, "text": "{\\mathcal W}_{\\mathcal P}" }, { "math_id": 47, "text": "f_c(z) = z^2 + c" }, { 
"math_id": 48, "text": "c \\in {\\mathcal W}_{\\mathcal P}" }, { "math_id": 49, "text": " c= r_{\\mathcal P}" }, { "math_id": 50, "text": "r = 1" }, { "math_id": 51, "text": "v = 2" }, { "math_id": 52, "text": "f^n" }, { "math_id": 53, "text": "r = v \\ge 2" } ]
https://en.wikipedia.org/wiki?curid=7578186
75783859
Samarium(III) antimonide
Chemical compound Samarium antimonide is a binary inorganic compound of samarium and antimony with the formula SmSb. It forms crystals. Preparation. Samarium antimonide can be prepared by heating samarium and antimony in a vacuum: formula_0 Physical properties. Samarium antimonide forms cubic crystals, space group Fm3m, with cell parameters a = 0.6271 nm, Z = 4, and a sodium chloride-type structure. The compound melts congruently at a temperature of ≈2000 °C or 1922 °C. References.
[ { "math_id": 0, "text": "\\mathsf{ Sm + Sb \\ \\xrightarrow{1922^oC}\\ SmSb }" } ]
https://en.wikipedia.org/wiki?curid=75783859
7578564
Power supply unit (computer)
Internal computer component that provides power to other components A power supply unit (PSU) converts mains AC to low-voltage regulated DC power for the internal components of a desktop computer. Modern personal computers universally use switched-mode power supplies. Some power supplies have a manual switch for selecting input voltage, while others automatically adapt to the mains voltage. Most modern desktop personal computer power supplies conform to the ATX specification, which includes form factor and voltage tolerances. While an ATX power supply is connected to the mains supply, it always provides 5-volt standby (5VSB) power so that the standby functions on the computer and certain peripherals are powered. ATX power supplies are turned on and off by a signal from the motherboard. They also provide a signal to the motherboard to indicate when the DC voltages are in spec, so that the computer is able to safely power up and boot. The most recent ATX PSU standard is version 3.0 as of mid-2024. Functions. The desktop computer power supply converts the alternating current (AC) from a wall socket of mains electricity to a low-voltage direct current (DC) to operate the motherboard, processor and peripheral devices. Several direct-current voltages are required, and they must be regulated with some accuracy to provide stable operation of the computer. A "power supply rail" or "voltage rail" refers to a single voltage provided by a PSU. Some PSUs can also supply a standby voltage, so that most of the computer system can be powered off after preparing for hibernation or shutdown, and powered back on by an event. Standby power allows a computer to be started remotely via Wake-on-LAN and Wake-on-Ring or locally via Keyboard Power ON (KBPO) if the motherboard supports it. This standby voltage may be generated by a small linear power supply inside the unit or a switching power supply, sharing some components with the main unit to save cost and energy. History. First-generation microcomputer and home computer power supply units used a heavy step-down transformer and a linear power supply, as used in, for example, the Commodore PET introduced in 1977. The Apple II, also introduced in 1977, was noted for its switched-mode power supply, which was lighter and smaller than an equivalent linear power supply would have been, and which had no cooling fan. The switched-mode supply uses a ferrite-cored high frequency transformer and power transistors that switch thousands of times per second. By adjusting the switching time of the transistor, the output voltage can be closely controlled without dissipating energy as heat in a linear regulator. The development of high-power and high-voltage transistors at economical prices made it practical to introduce switched-mode supplies that had been used in aerospace, mainframes, minicomputers and color television into desktop personal computers. The Apple II design by Atari engineer Rod Holt was awarded a patent, and was in the vanguard of modern computer power supply design. Now all modern computers use switched-mode power supplies, which are lighter, less costly, and more efficient than equivalent linear power supplies. Computer power supplies may have short circuit protection, overpower (overload) protection, over-voltage protection, under-voltage protection, over-current protection, and over-temperature protection. Input voltage switch. 
Power supplies designed for worldwide use were once equipped with an input voltage selector switch that allowed the user to configure the unit for use on local power grid. In the lower voltage range, around 115 V, this switch is turned on changing the power grid voltage rectifier into a voltage doubler in Delon circuit design. As a result, the large primary filter capacitor behind that rectifier was split up into two capacitors wired in series, balanced with bleeder resistors and varistors that were necessary in the upper input voltage range, around 230 V. Connecting the unit configured for the lower range to a higher-voltage grid usually resulted in immediate permanent damage. When a power-factor correction (PFC) was required, those filter capacitors were replaced with higher-capacity ones, together with a coil installed in series to delay the inrush current. This is the simple design of a passive PFC. Active PFC is more complex and can achieve higher PF, up to 99%. The first active PFC circuits just delayed the inrush. Newer ones work as an input and output condition-controlled step-up converter, supplying a single 400 V filter capacitor from a wide-range input source, usually between 80 and 240 V. Newer PFC circuits also replace the NTC-based inrush current limiter, which is an expensive part previously located next to the fuse. Development. Original IBM PC, XT and AT standard. The first IBM PC power supply unit (PSU) supplied two main voltages: +5 V and +12 V. It supplied two other voltages, −5 V and −12 V, but with limited amounts of power. Most microchips of the time operated on 5 V power. Of the 63.5 W these PSUs could deliver, most of it was on this +5 V rail. The +12 V supply was used primarily to operate motors such as in disk drives and cooling fans. As more peripherals were added, more power was delivered on the 12 V rail. However, since most of the power is consumed by chips, the 5 V rail still delivered most of the power. The −12 V rail was used primarily to provide the negative supply voltage to the RS-232 serial ports. A −5 V rail was provided for peripherals on the ISA bus (such as soundcards), but was not used by any motherboard other than the original IBM PC motherboard. An additional wire referred to as 'Power Good' is used to prevent digital circuitry operation during the initial milliseconds of power supply turn-on, where output voltages and currents are rising but not yet sufficient or stable for proper device operation. Once the output power is ready to use, the Power Good signal tells the digital circuitry that it can begin to operate. Original IBM power supplies for the PC (model 5150), XT and AT included a line-voltage power switch that extended through the side of the computer case. In a common variant found in tower cases, the line-voltage switch was connected to the power supply with a short cable, allowing it to be mounted apart from the power supply. An early microcomputer power supply was either fully on or off, controlled by the mechanical line-voltage switch, and energy saving low-power idle modes were not a design consideration of early computer power supplies. These power supplies were generally not capable of power saving modes such as standby or "soft off", or scheduled turn-on power controls. Due to the always-on design, in the event of a short circuit, either a fuse would blow, or a switched-mode supply would repeatedly cut the power, wait a brief period of time, and attempt to restart. 
For some power supplies the repeated restarting is audible as a quiet rapid chirping or ticking emitted from the device. ATX standard. When Intel developed the ATX standard power supply connector (published in 1995), microchips operating on 3.3 V were becoming more popular, beginning with the Intel 80486DX4 microprocessor in 1994, and the ATX standard supplies three positive rails: +3.3 V, +5 V, and +12 V. Earlier computers requiring 3.3 V typically derived that from a simple but inefficient linear regulator connected to the +5 V rail. The ATX connector provides multiple wires and power connections for the 3.3 V supply, because it is most sensitive to voltage drop in the supply connections. Another ATX addition was the +5 V SB (standby) rail for providing a small amount of standby power, even when the computer was nominally "off". When a computer is in ACPI S3 sleep mode, only the +5 V SB rail is used. There are two basic differences between AT and ATX power supplies: the connectors that provide power to the motherboard, and the soft switch. In ATX-style systems, the front-panel power switch provides only a control signal to the power supply and does not switch the mains AC voltage. This low-voltage control allows other computer hardware or software to turn the system on and off. Since ATX power supplies share the same dimensions and the same mounting layout (four screws arranged on the back side of the unit) with the prior format, there is no major physical difference preventing an AT case from accepting an ATX PSU (or vice versa, if the case can host the power switch needed by an AT PSU), provided that the specific PSU is not too long for the specific case. ATX12V standard. As transistors become smaller on chips, it becomes preferable to operate them on lower supply voltages, and the lowest supply voltage is often desired by the densest chip, the central processing unit. In order to supply large amounts of low-voltage power to the Pentium and subsequent microprocessors, a special power supply, the voltage regulator module, began to be included on motherboards. Newer processors require up to 100 A at 2 V or less, which is impractical to deliver from off-board power supplies. Initially, this was supplied by the main +5 V supply, but as power demands increased, the high currents required to supply sufficient power became problematic. To reduce the power losses in the 5 V supply, with the introduction of the Pentium 4 microprocessor, Intel changed the processor power supply to operate on +12 V, and added the separate four-pin P4 connector to the new ATX12V 1.0 standard to supply that power. Modern high-powered graphics processing units do the same thing, resulting in most of the power requirement of a modern personal computer being on the +12 V rail. When high-powered GPUs were first introduced, typical ATX power supplies were "5 V-heavy", and could only supply 50–60% of their output in the form of 12 V power. Thus, GPU manufacturers, to ensure 200–250 W of 12 V power (peak load, CPU+GPU), recommended power supplies of 500–600 W or higher. More modern ATX power supplies can deliver almost all (typically 80–90%) of their total rated capacity in the form of +12 V power. Because of this change, it is important to consider the +12 V supply capacity, rather than the overall power capacity, when using an older ATX power supply with a more recent computer. 
Low-quality power supply manufacturers sometimes take advantage of this overspecification by assigning unrealistically high power supply ratings, knowing that very few customers fully understand power supply ratings. +3.3 V and +5 V rails. +3.3 V and +5 V rail voltage supplies are rarely a limiting factor; generally, any supply with a sufficient +12 V rating will have adequate capacity at lower voltages. However, most hard drives or PCI cards will create a greater load on the +5 V rail. Older CPUs and logic devices on the motherboard were designed for 5 V operating voltage. Power supplies for those computers regulate the 5 V output precisely, and supply the 12 V rail in a specified voltage window depending on the load ratio of both rails. The +12 V supply was used for computer fan motors, disk drive motors and serial interfaces (which also used the −12 V supply). A further use of the 12 V came with sound cards, which used linear chip audio power amplifiers, sometimes filtered by a 9 V linear regulator on the card to cut the noise of the motors. Since certain i386 variant CPUs used lower operating voltages such as 3.3 or 3.45 V, motherboards had linear voltage regulators supplied by the 5 V rail. Jumpers or dip switches set the output voltages to the installed CPU's specification. When newer CPUs required higher currents, switching mode voltage regulators like buck converters replaced linear regulators for efficiency. Since the first revision of the ATX standard, PSUs have been required to have a 3.3 V output voltage rail. Rarely, a linear regulator generated this rail, supplied from the 5 V rail and converting the product of voltage drop and current to heat. Later regulators managed all the 3.3, 5 and 12 V rails. As CPUs increased in current consumption (due to higher static current from larger transistor counts and much higher dynamic current from both the larger count and higher switching frequencies) in CPU generations after the i386, it became necessary to place voltage regulators close to the CPU. In order to reduce power consumption of regulation (and thus to remain thermally feasible), these regulators are of switch-mode power supply design. To keep conduction losses at bay, it is desirable to transport the same power on the higher-voltage +12 V rail at lower current, instead of on +5 V at higher current. Thus, Pentium-era power supplies tend to have their highest current capacity on these rails. Entry-Level Power Supply Specification. "Entry-Level Power Supply Specification" (EPS) is a power supply unit meant for high-power-consumption computers and entry-level servers. Developed by the Server System Infrastructure (SSI) forum, a group of companies including Intel, Dell, Hewlett-Packard and others that works on server standards, the EPS form factor is a derivative of the ATX form factor. The latest specification is v2.93. The EPS standard provides a more powerful and stable environment for critical server-based systems and applications. EPS power supplies have a 24-pin motherboard power connector and an eight-pin +12 V connector. The standard also specifies two additional four-pin 12 V connectors for more power-hungry boards (one required on 700–800 W PSUs, both required on 850 W+ PSUs). EPS power supplies are in principle compatible with standard ATX or ATX12V motherboards found in homes and offices, but there may be mechanical issues where the 12 V connector (and, in the case of older boards, the main connector) overhangs the sockets. 
Many PSU vendors use connectors where the extra sections can be unclipped to avoid this issue. As with later versions of the ATX PSU standard, there is also no −5 V rail. Single vs. multiple +12 V rail. As power supply capacity increased, the ATX power supply standard was amended (beginning with version 2.0) to include: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;3.2.4. Power Limit / Hazardous Energy Levels Under normal or overload conditions, no output shall continuously provide more than 240 VA under any conditions of load including output short circuit, per the requirement of UL 1950 / CSA 950 / EN 60950 / IEC 950. The requirement was later deleted from version 2.3 (March 2007) of the ATX12V power supply specifications, but led to a distinction in modern ATX power supplies between single and multiple rails. The rule was intended to set a safe limit on the current able to pass through any single output wire. A sufficiently large current can cause serious damage in the event of a short circuit, or can melt the wire or its insulation in the case of a fault, or potentially start a fire or damage other components. The rule limits each output to below 20 amps, with typical supplies guaranteeing 18 A availability. Power supplies capable of delivering more than 18 A at 12 V would provide their output in groups of cables (called "rails"). Each rail delivers up to a limited amount of current through one or more cables, and each rail is independently controlled by its own current sensor which shuts down the supply upon excess current. Unlike a fuse or circuit breaker, these limits reset as soon as the overload is removed. Typically, a power supply will guarantee at least 17 A at 12 V by having a current limit of 18.5 A ± 8%. Thus, it is guaranteed to supply at least 17 A, and guaranteed to cut off before 20 A. The current limits for each group of cables is then documented so the user can avoid placing too many high-current loads in the same group. Originally at the time of ATX 2.0, a power supply featuring "multiple +12 V rails" implied one able to deliver more than 20 A of +12 V power, and was seen as a good thing. However, people found the need to balance loads across many +12 V rails inconvenient, especially as higher-end PSUs began to deliver far greater currents up to around 2000 W, or more than 150 A at 12 V (compared to the 240 or 500 W of earlier times). When the assignment of connectors to rails is done at manufacturing time it is not always possible to move a given load to a different rail or manage the allocation of current across devices. Rather than add more current limit circuits, many manufacturers chose to ignore the requirement and increase the current limits above 20 A per rail, or provided "single-rail" power supplies that omit the current limit circuitry. (In some cases, in violation of their own advertising claims to include it.) Because of the above standards, almost all high-power supplies claimed to implement separate rails, however this claim was often false; many omitted the necessary current-limit circuitry, both for cost reasons and because it is an irritation to customers. (The lack was, and is, sometimes advertised as a feature under names like "rail fusion" or "current sharing".) The requirement was withdrawn as a result, however, the issue left its mark on PSU designs, which can be categorized into single rail and multiple rail designs. Both may (and often do) contain current limiting controllers. 
As of ATX 2.31, a single rail design's output current can be drawn through any combination of output cables, and the management and safe allocation of that load is left for the user. A multiple rail design does the same, but limits the current supplied to each individual connector (or group of connectors), and the limits it imposes are the manufacturer's choice rather than set by the ATX standard. 12 V-only supplies. Since 2011, Fujitsu and other tier-1 manufacturers have been manufacturing systems containing motherboard variants that require only a 12 V supply from a custom-made PSU, which is typically rated at 250–300 W. DC-to-DC conversion, providing 5 V and 3.3 V, is done on the motherboard; the proposal is that 5 V and 12 V supply for other devices, such as HDDs, will be picked up at the motherboard rather than from the PSU itself, although this does not appear to be fully implemented as of 2012. The reasons given for this approach to power supply are that it eliminates cross-load problems, simplifies and reduces internal wiring that can affect airflow and cooling, reduces costs, increases power supply efficiency, and reduces noise by bringing the power supply fan speed under the control of the motherboard. At least two of Dell's business PCs introduced in 2013, the OptiPlex 9020 and Precision T1700, ship with 12 V–only power supplies and implement 5 V and 3.3 V conversion exclusively on the motherboard. Afterwards, the Lenovo ThinkCentre M93P adopted a 12 V–only PSU and performs 5 V and 3.3 V conversion exclusively on the IS8XM motherboard. In 2019 Intel released a new standard based on an all-12 V design: ATX12VO. The power supply provides only a 12 V output; the 5 V and 3.3 V supplies needed by USB, hard disk drives and other devices are generated on the motherboard; and the ATX motherboard connector is reduced from 24-pin to 10-pin. Called ATX12VO, it is not expected to replace current standards but to exist alongside them. At CES 2020, FSP Group showed the first prototype based on the new ATX12VO standard. The Single Rail Power Supply ATX12VO design guide officially published by Intel in May 2020 lists the details of the 12 V-only design and its major benefits, which include higher efficiency and lower electrical interruption. Power rating. The overall power draw on a PSU is limited by the fact that all of the supply rails come through one transformer and any of its primary side circuitry, like switching components. Total power requirements for a personal computer may range from 250 W to more than 1000 W for a high-performance computer with multiple graphics cards. Personal computers without especially high performing CPUs or graphics cards usually require 300 to 500 W. Power supplies are designed with a capacity around 40% greater than the calculated "system power consumption". This protects against system performance degradation, and against power supply overloading. Power supplies label their total power output, and label how this is determined by the electric current limits for each of the voltages supplied. Some power supplies have no overload protection. The system power consumption is the sum of the power ratings for all of the components of the computer system that draw on the power supply. Some graphics cards (especially multiple cards) and large groups of hard drives can place very heavy demands on the 12 V lines of the PSU, and for these loads, the PSU's 12 V rating is crucial. 
The total 12 V rating on the power supply must be higher than the current required by such devices so that the PSU can fully serve the system when its other 12 V system components are taken into account. The manufacturers of these computer system components, especially graphics cards, tend to over-rate their power requirements, to minimize support issues due to an undersized power supply. Efficiency. Various initiatives exist to improve the efficiency of computer power supplies. The Climate Savers Computing Initiative promotes energy saving and reduction of greenhouse gas emissions by encouraging development and use of more efficient power supplies. 80 Plus certifies a variety of efficiency levels for power supplies and encourages their use via financial incentives. Efficient power supplies also save money by wasting less power; as a result, they use less electricity to power the same computer, and they emit less waste heat, which results in significant energy savings on central air conditioning in the summer. The gains of using an efficient power supply are more substantial in computers that use a lot of power. Although a power supply with a larger than needed power rating will have an extra margin of safety against overloading, such a unit is often less efficient and wastes more electricity at lower loads than a more appropriately sized unit. For example, a 900-watt power supply with the 80 Plus Silver efficiency rating (which means that such a power supply is designed to be at least 85% efficient for loads above 180 W) may only be 73% efficient when the load is lower than 100 W, which is a typical idle power for a desktop computer. Thus, for a 100 W load, losses for this supply would be about 37 W; if the same power supply was put under a 450 W load, for which the supply's efficiency peaks at 89%, the loss would be only 56 W despite supplying 4.5 times the useful power. For a comparison, a 500-watt power supply carrying the 80 Plus Bronze efficiency rating (which means that such a power supply is designed to be at least 82% efficient for loads above 100 W) may provide an 84% efficiency for a 100 W load, wasting only 19 W. Higher tiers such as 80 Plus Gold, 80 Plus Platinum and 80 Plus Titanium certify progressively higher efficiency, with 80 Plus Gold requiring 87% efficiency at full load, 80 Plus Platinum about 90%, and 80 Plus Titanium the highest, at about 94%. A power supply that is self-certified by its manufacturer may claim output ratings double or more than what is actually provided. To further complicate this possibility, when there are two rails that share power through down-regulating, it also happens that either the 12 V rail or the 5 V rail overloads "at well below the total rating" of the power supply. Many power supplies create their 3.3 V output by down-regulating their 5 V rail, or create 5 V output by down-regulating their 12 V rails. The two rails involved are labeled on the power supply with a combined current limit. For example, the 3.3 V and 5 V rails are rated with a combined total current limit. For a description of the potential problem, a 3.3 V rail may have a 10 A rating by itself (33 W), and the 5 V rail may have its own rating, but the two together may only be able to output 110 W. In this case, loading the 3.3 V rail to maximum (33 W) would leave the 5 V rail only able to output 77 W. A test in 2005 revealed computer power supplies are generally about 70–80% efficient. 
For a 75% efficient power supply to produce 75 W of DC output it would require 100 W of AC input and dissipate the remaining 25 W in heat. Higher-quality power supplies can be over 80% efficient; as a result, energy-efficient PSUs waste less energy in heat and require less airflow to cool, resulting in quieter operation. As of 2012 some high-end consumer PSUs can exceed 90% efficiency at optimal load levels, though will fall to 87–89% efficiency during heavy or light loads. Google's server power supplies are more than 90% efficient. HP's server power supplies have reached 94% efficiency. Standard PSUs sold for server workstations have around 90% efficiency, as of 2010. The energy efficiency of a power supply drops significantly at low loads. Therefore, it is important to match the capacity of a power supply to the power needs of the computer. Efficiency generally peaks at about 50–75% load. The curve varies from model to model (examples of how this curve looks can be seen on test reports of energy-efficient models found on the 80 Plus website ). Appearance. Most desktop personal computer power supplies are a square metal box, and have a large bundle of wires emerging from one end. Opposite the wire bundle is the back face of the power supply, with an air vent and an IEC 60320 C14 connector to supply AC power. There may be a power switch and/or a voltage selector switch. Historically they were mounted on the upper part of the computer case, and had two fans: one, inside the case, pulling air towards the power supply, and another, extracting air from the power supply to the outside. Many power supplies have a single large fan inside the case, and are mounted on the bottom part of the case. The fan may be always on, or turn on and vary its speed depending on the load. Some have no fans, hence are cooled passively. A label on one side of the box lists technical information about the power supply, including safety certifications and maximum output power. Common certification marks for safety are the UL mark, GS mark, TÜV, NEMKO, SEMKO, DEMKO, FIMKO, CCC, CSA, VDE, GOST R mark and BSMI. Common certificate marks for EMI/RFI are the CE mark, FCC and C-tick. The CE mark is required for power supplies sold in Europe and India. A RoHS or 80 Plus can also sometimes be seen. Dimensions of an ATX power supply are 150 mm width, 86 mm height, and typically 140 mm depth, although the depth can vary from brand to brand. Some power supplies come with sleeved cables, which besides being more aesthetically pleasing, also make wiring easier and have a less detrimental effect on airflow. Connectors. Typically, power supplies have the following connectors (all are Molex (USA) Inc Mini-Fit Jr, unless otherwise indicated): Modular power supplies. A modular power supply provides a detachable cable system, offering the ability to remove unused connections at the expense of a small amount of extra electrical resistance introduced by the additional connector. This reduces clutter, removes the risk of dangling cables interfering with other components, and can improve case airflow. Many semi modular supplies have some permanent multi-wire cables with connectors at the ends, such as ATX motherboard and 8-pin EPS, though newer supplies marketed as "fully modular" allow even these to be disconnected. The pin assignment of the detachable cables is only standardized on the output end and not on the end that is to be connected to the power supply. 
Thus, the cables of a modular power supply must only be used with this particular modular power supply model. Usage with another modular power supply, even if the cable prima facie appears compatible, might result in a wrong pin assignment and thus can lead to damage of connected components by supplying 12 V to a 5 V or 3.3 V pin. Other form factors. The Small Form Factor with a 12 V connector (SFX12V) configuration has been optimized for small form factor (SFF) system layouts such as microATX. The low profile of the power supply fits easily into these systems. The Thin Form Factor with a 12 V connector (TFX12V) configuration has been optimized for small and low profile Mini ITX and Mini DTX system layouts. The long narrow profile of the power supply fits easily into low profile systems. The cooling fan placement can be used to efficiently exhaust air from the processor and core area of the motherboard, making possible smaller, more efficient systems using common industry components. Most portable computers have power supplies that provide 25 to 200 W. In portable computers (such as laptops) there is usually an external power supply (sometimes referred to as a "power brick" due to its similarity, in size, shape and weight, to a real brick) which converts AC power to one DC voltage (most commonly 19 V), and further DC-DC conversion occurs within the laptop to supply the various DC voltages required by the other components of the portable computer. An external power supply can send data about itself (power, current and voltage ratings) to the computer. For example, a genuine Dell power source uses the 1-Wire protocol to send data over a third wire to the laptop. The laptop then refuses a non-matching adapter. Some computers use a single-voltage 12 V power supply. All other voltages are generated by voltage regulator modules on the motherboard. Life span. Life span is usually specified in mean time between failures (MTBF), where higher MTBF ratings indicate longer device life and better reliability. Using higher quality electrical components at less than their maximum ratings or providing better cooling can contribute to a higher MTBF rating because lower stress and lower operating temperatures decrease component failure rates. An estimated MTBF value of 100,000 hours (roughly 140 months) at 25 °C and under full load is fairly common. Such a rating expects that, under the described conditions, 77% of the PSUs will be operating failure-free over three years (36 months); equivalently, 23% of the units are expected to fail within three years of operation. For the same example, only 37% of the units (fewer than half) are expected to last 100,000 hours without failing. The formula for calculating predicted reliability, R(t), is formula_0 where t is the time of operation in the same time units as the MTBF specification, e is the mathematical constant approximately equal to 2.71828, and t_MTBF is the MTBF value as specified by the manufacturer (a brief numerical check of this formula is sketched below). Power supplies for servers, industrial control equipment, or other places where reliability is important may be hot swappable, and may incorporate "N"+1 redundancy and an uninterruptible power supply; if N power supplies are required to meet the load requirement, one extra is installed to provide redundancy and allow for a faulty power supply to be replaced without downtime. Wiring diagrams. &lt;templatestyles src="Col-begin/styles.css"/&gt; Testing. A 'power supply tester' is a tool used to test the functionality of a computer's power supply. 
Testers can confirm the presence of the correct voltages at each power supply connector. Testing under load is recommended for the most accurate readings. Monitoring. The voltage of the PSU can be monitored by the system monitor of most modern motherboards. This can often be done through a section within the BIOS, or, once an operating system is running, through a system monitor software like lm_sensors on Linux, envstat on NetBSD, sysctl hw.sensors on OpenBSD and DragonFly BSD, or SpeedFan on Windows. Most power supply fans are not connected to the speed sensor on the motherboard and so cannot be monitored, but some high-end PSUs can provide digital control and monitoring, which requires a connection to the fan-speed sensor or USB port on the motherboard. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
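The reliability figures quoted in the Life span section can be reproduced directly from the formula given there. The following short Python sketch is purely illustrative; the MTBF value and time spans are the same example figures used above, not data about any particular product:

```python
import math

def reliability(t, mtbf):
    """Predicted probability R(t) = exp(-t / MTBF) that a unit is still failure-free after time t."""
    return math.exp(-t / mtbf)

mtbf_hours = 100_000                                      # the example MTBF rating from the text
print(round(reliability(3 * 365 * 24, mtbf_hours), 2))    # ~0.77 after three years of continuous use
print(round(reliability(100_000, mtbf_hours), 2))         # ~0.37 at t = MTBF, i.e. exp(-1)
```

Both outputs match the 77% and 37% figures quoted above.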
[ { "math_id": 0, "text": "\nR(t) = e^{-\\frac{t}{t_{MTBF}}}\n" } ]
https://en.wikipedia.org/wiki?curid=7578564
7578809
Version space learning
Version space learning is a logical approach to machine learning, specifically binary classification. Version space learning algorithms search a predefined space of hypotheses, viewed as a set of logical sentences. Formally, the hypothesis space is a disjunction formula_0 (i.e., either hypothesis 1 is true, or hypothesis 2, or any subset of the hypotheses 1 through n). A version space learning algorithm is presented with examples, which it will use to restrict its hypothesis space; for each example x, the hypotheses that are inconsistent with x are removed from the space. This iterative refining of the hypothesis space is called the candidate elimination algorithm, and the hypothesis space maintained inside the algorithm is called its "version space". The version space algorithm. In settings where there is a generality-ordering on hypotheses, it is possible to represent the version space by two sets of hypotheses: (1) the most specific consistent hypotheses, and (2) the most general consistent hypotheses, where "consistent" indicates agreement with observed data. The most specific hypotheses (i.e., the specific boundary SB) cover the observed positive training examples, and as little of the remaining feature space as possible. These hypotheses, if reduced any further, "exclude" a "positive" training example, and hence become inconsistent. These minimal hypotheses essentially constitute a (pessimistic) claim that the true concept is defined just by the "positive" data already observed: Thus, if a novel (never-before-seen) data point is observed, it should be assumed to be negative. (I.e., if data has not previously been ruled in, then it's ruled out.) The most general hypotheses (i.e., the general boundary GB) cover the observed positive training examples, but also cover as much of the remaining feature space as possible without including any negative training examples. These, if enlarged any further, "include" a "negative" training example, and hence become inconsistent. These maximal hypotheses essentially constitute an (optimistic) claim that the true concept is defined just by the "negative" data already observed: Thus, if a novel (never-before-seen) data point is observed, it should be assumed to be positive. (I.e., if data has not previously been ruled out, then it's ruled in.) Thus, during learning, the version space (which itself is a set – possibly infinite – containing "all" consistent hypotheses) can be represented by just its lower and upper bounds (maximally general and maximally specific hypothesis sets), and learning operations can be performed just on these representative sets. After learning, classification can be performed on unseen examples by testing the hypothesis learned by the algorithm. If the example is consistent with multiple hypotheses, a majority vote rule can be applied. Historical background. The notion of version spaces was introduced by Mitchell in the early 1980s as a framework for understanding the basic problem of supervised learning within the context of solution search. Although the basic "candidate elimination" search method that accompanies the version space framework is not a popular learning algorithm, there are some practical implementations that have been developed (e.g., Sverdlik &amp; Reynolds 1992, Hong &amp; Tsang 1997, Dubois &amp; Quafafou 2002). A major drawback of version space learning is its inability to deal with noise: any pair of inconsistent examples can cause the version space to "collapse", i.e., become empty, so that classification becomes impossible. 
One solution to this problem was proposed by Dubois and Quafafou, who introduced the Rough Version Space, in which rough-set-based approximations are used to learn certain and possible hypotheses in the presence of inconsistent data.
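To make the idea of a version space concrete, the following Python sketch enumerates a tiny hypothesis space of conjunctive rules and keeps only those consistent with the training examples (the brute-force "list-then-eliminate" view of a version space, rather than the boundary-set candidate elimination algorithm described above). The attribute domains and examples are invented for illustration only:

```python
from itertools import product

# Toy attribute domains; a hypothesis fixes a value for each attribute or uses "?" for "any value".
domains = [("sunny", "rainy"), ("warm", "cold")]

def all_hypotheses(domains):
    return list(product(*[values + ("?",) for values in domains]))

def covers(hypothesis, instance):
    """A hypothesis classifies an instance as positive when every non-"?" attribute matches."""
    return all(h == "?" or h == x for h, x in zip(hypothesis, instance))

def version_space(examples, domains):
    """Keep every hypothesis that agrees with all labelled examples (list-then-eliminate)."""
    return [h for h in all_hypotheses(domains)
            if all(covers(h, x) == label for x, label in examples)]

examples = [(("sunny", "warm"), True),    # positive example
            (("rainy", "cold"), False)]   # negative example
print(version_space(examples, domains))
# -> [('sunny', 'warm'), ('sunny', '?'), ('?', 'warm')]
```

In this toy run the most specific surviving hypothesis, ('sunny', 'warm'), and the maximally general ones, ('sunny', '?') and ('?', 'warm'), correspond to the specific and general boundaries discussed above.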
[ { "math_id": 0, "text": "H_1 \\lor H_2 \\lor ... \\lor H_n" } ]
https://en.wikipedia.org/wiki?curid=7578809
7579959
Seasonal adjustment
Statistical technique Seasonal adjustment or deseasonalization is a statistical method for removing the seasonal component of a time series. It is usually done when wanting to analyse the trend, and cyclical deviations from trend, of a time series independently of the seasonal components. Many economic phenomena have seasonal cycles, such as agricultural production (crop yields fluctuate with the seasons) and consumer consumption (increased personal spending leading up to Christmas). It is necessary to adjust for this component in order to understand underlying trends in the economy, so official statistics are often adjusted to remove seasonal components. Typically, seasonally adjusted data is reported for unemployment rates to reveal the underlying trends and cycles in labor markets. Time series components. The investigation of many economic time series becomes problematic due to seasonal fluctuations. Time series are made up of four components: the seasonal component formula_0, the trend component formula_1, the cyclical component formula_2, and the error or irregular component formula_3. Seasonal patterns recur with a fixed and known period tied to the calendar, whereas cyclic patterns rise and fall without a fixed period, usually over longer time spans. In an additive decomposition the observed series formula_5 at time formula_6 is related to its components by formula_4, while a multiplicative decomposition takes the form formula_7; taking logarithms turns the multiplicative relationship into an additive one, formula_8. Adjustment methods. Unlike the trend and cyclical components, seasonal components, theoretically, happen with similar magnitude during the same time period each year. The seasonal components of a series are sometimes considered to be uninteresting and to hinder the interpretation of a series. Removing the seasonal component directs focus on other components and will allow better analysis. Different statistical research groups have developed different methods of seasonal adjustment, for example X-13-ARIMA and X-12-ARIMA developed by the United States Census Bureau; TRAMO/SEATS developed by the Bank of Spain; MoveReg (for weekly data) developed by the United States Bureau of Labor Statistics; STAMP developed by a group led by S. J. Koopman; and “Seasonal and Trend decomposition using Loess” (STL) developed by Cleveland et al. (1990). While X-12/13-ARIMA can only be applied to monthly or quarterly data, STL decomposition can be used on data with any type of seasonality. Furthermore, unlike X-12-ARIMA, STL allows the user to control the degree of smoothness of the trend cycle and how much the seasonal component changes over time. X-12-ARIMA can handle both additive and multiplicative decomposition whereas STL can only be used for additive decomposition. In order to achieve a multiplicative decomposition using STL, the user can take the log of the data before decomposing, and then back-transform after the decomposition. Software. Each group provides software supporting their methods. Some versions are also included as parts of larger products, and some are commercially available. For example, SAS includes X-12-ARIMA, while Oxmetrics includes STAMP. A recent move by public organisations to harmonise seasonal adjustment practices has resulted in the development of Demetra+ by Eurostat and the National Bank of Belgium, which currently includes both X-12-ARIMA and TRAMO/SEATS. R includes STL decomposition. The X-12-ARIMA method can be utilized via the R package "X12". EViews supports X-12, X-13, Tramo/Seats, STL and MoveReg. Example. One well-known example is the rate of unemployment, which is represented by a time series. This rate depends particularly on seasonal influences, which is why it is important to free the unemployment rate of its seasonal component. Such seasonal influences can be due to school graduates or dropouts looking to enter into the workforce and regular fluctuations during holiday periods. 
Once the seasonal influence is removed from this time series, the unemployment rate data can be meaningfully compared across different months and predictions for the future can be made. When seasonal adjustment is not performed with monthly data, year-on-year changes are utilised in an attempt to avoid contamination with seasonality. Indirect seasonal adjustment. When time series data has seasonality removed from it, it is said to be "directly seasonally adjusted". If it is made up of a sum or index aggregation of time series which have been seasonally adjusted, it is said to have been "indirectly seasonally adjusted". Indirect seasonal adjustment is used for large components of GDP which are made up of many industries, which may have different seasonal patterns and which are therefore analyzed and seasonally adjusted separately. Indirect seasonal adjustment also has the advantage that the aggregate series is the exact sum of the component series. Seasonality can appear in an indirectly adjusted series; this is sometimes called "residual seasonality". Moves to standardise seasonal adjustment processes. Due to the various seasonal adjustment practices by different institutions, a group was created by Eurostat and the European Central Bank to promote standard processes. In 2009 a small group composed of experts from European Union statistical institutions and central banks produced the ESS Guidelines on Seasonal Adjustment, which is being implemented in all the European Union statistical institutions. It is also being adopted voluntarily by other public statistical institutions outside the European Union. Use of seasonally adjusted data in regressions. By the Frisch–Waugh–Lovell theorem it does not matter whether dummy variables for all but one of the seasons are introduced into the regression equation, or if the independent variable is first seasonally adjusted (by the same dummy variable method), and the regression then run. Since seasonal adjustment introduces a "non-invertible" moving average (MA) component into time series data, unit root tests (such as the Phillips–Perron test) will be biased towards non-rejection of the unit root null. Shortcomings of using seasonally adjusted data. Use of seasonally adjusted time series data can be misleading because a seasonally adjusted series contains both the trend-cycle component and the error component. As such, what appear to be "downturns" or "upturns" may actually be randomness in the data. For this reason, if the purpose is finding turning points in a series, using the trend-cycle component is recommended rather than the seasonally adjusted data. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
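As a concrete illustration, the following Python sketch performs a seasonal adjustment with the STL method mentioned above, via the statsmodels library; the synthetic monthly series, its parameters and the library choice are illustrative assumptions, not part of any official adjustment procedure:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Build a synthetic monthly series: trend + yearly seasonal pattern + noise.
index = pd.date_range("2015-01", periods=96, freq="MS")
trend = np.linspace(100, 160, 96)
seasonal = 10 * np.sin(2 * np.pi * index.month / 12)
noise = np.random.default_rng(0).normal(scale=2, size=96)
series = pd.Series(trend + seasonal + noise, index=index)

# Decompose with STL and subtract the seasonal component to get the adjusted series.
result = STL(series, period=12).fit()
seasonally_adjusted = series - result.seasonal
print(seasonally_adjusted.head())
```

Because STL is additive, a multiplicative series would first be log-transformed, adjusted, and then exponentiated back, as noted above.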
[ { "math_id": 0, "text": "S_t" }, { "math_id": 1, "text": "T_t" }, { "math_id": 2, "text": "C_t" }, { "math_id": 3, "text": "E_t" }, { "math_id": 4, "text": "Y_t = S_t + T_t + C_t + E_t" }, { "math_id": 5, "text": "Y_t" }, { "math_id": 6, "text": "t" }, { "math_id": 7, "text": "Y_t = S_t \\cdot T_t \\cdot C_t \\cdot E_t" }, { "math_id": 8, "text": "Y_t = S_t \\cdot T_t \\cdot C_t \\cdot E_t \\rightarrow \\log Y_t = \\log S_t + \\log T_t + \\log C_t + \\log E_t" } ]
https://en.wikipedia.org/wiki?curid=7579959
7579995
Decomposition of time series
Statistical task that deconstructs a time series into several components The decomposition of time series is a statistical task that deconstructs a time series into several components, each representing one of the underlying categories of patterns. There are two principal types of decomposition, which are outlined below. Decomposition based on rates of change. This is an important technique for all types of time series analysis, especially for seasonal adjustment. It seeks to construct, from an observed time series, a number of component series (that could be used to reconstruct the original by additions or multiplications) where each of these has a certain characteristic or type of behavior. For example, time series are usually decomposed into: a trend component formula_0, reflecting the long-term progression of the series; a cyclical component formula_1, reflecting repeated but non-periodic fluctuations; a seasonal component formula_2, reflecting seasonality of a fixed and known period; and an irregular component formula_3, describing random, residual fluctuations. Hence a time series using an additive model can be thought of as formula_4, whereas a multiplicative model would be formula_5. An additive model would be used when the variations around the trend do not vary with the level of the time series whereas a multiplicative model would be appropriate if the trend is proportional to the level of the time series. Sometimes the trend and cyclical components are grouped into one, called the trend-cycle component. The trend-cycle component can just be referred to as the "trend" component, even though it may contain cyclical behavior. For example, a seasonal decomposition of time series by Loess (STL) plot decomposes a time series into seasonal, trend and irregular components using loess and plots the components separately, whereby the cyclical component (if present in the data) is included in the "trend" component plot. Decomposition based on predictability. The theory of time series analysis makes use of the idea of decomposing a time series into deterministic and non-deterministic components (or predictable and unpredictable components). See Wold's theorem and Wold decomposition. Examples. Kendall shows an example of a decomposition into smooth, seasonal and irregular factors for a set of data containing values of the monthly aircraft miles flown by UK airlines. In policy analysis, forecasts of future biofuel production provide key data for making better decisions, and statistical time series models have recently been developed to forecast renewable energy sources; a multiplicative decomposition method was designed to forecast future production of biohydrogen. The optimum length of the moving average (the seasonal length) and the start point where the averages are placed were chosen based on the best agreement between the forecast and actual values. Software. An example of statistical software for this type of decomposition is the program BV4.1, which is based on the Berlin procedure. The R statistical software also includes many packages for time series decomposition, such as seasonal, stl, stlplus, and bfast. Bayesian methods are also available; one example is the BEAST method in the package Rbeast in R, Matlab, and Python. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
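As an illustration of the additive model described above, the following Python sketch performs a naive classical decomposition with a centred moving average. It is a simplified sketch under invented quarterly data, not a substitute for the dedicated packages listed in the Software section:

```python
import numpy as np

def classical_additive_decompose(y, period):
    """Rough additive decomposition y_t = T_t + C_t + S_t + I_t, with trend and cycle merged."""
    y = np.asarray(y, dtype=float)
    n = len(y)

    # Trend-cycle: centred moving average over one full period
    # (a weighted 2 x period average when the period is even, so the window stays centred).
    if period % 2 == 0:
        kernel = np.concatenate(([0.5], np.ones(period - 1), [0.5])) / period
    else:
        kernel = np.ones(period) / period
    half = len(kernel) // 2
    trend = np.full(n, np.nan)
    trend[half:n - half] = np.convolve(y, kernel, mode="valid")

    # Seasonal component: average the detrended values at each position in the cycle,
    # then centre them so the seasonal effects sum to zero over one period.
    detrended = y - trend
    pattern = np.array([np.nanmean(detrended[i::period]) for i in range(period)])
    pattern -= pattern.mean()
    seasonal = np.resize(pattern, n)

    irregular = y - trend - seasonal          # what is left over
    return trend, seasonal, irregular

# Invented quarterly data (period = 4); a multiplicative series would instead be
# decomposed additively on the log scale and transformed back.
y = [10, 14, 8, 12, 11, 15, 9, 13, 12, 16, 10, 14]
trend, seasonal, irregular = classical_additive_decompose(y, period=4)
print(np.round(seasonal[:4], 2))
```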
[ { "math_id": 0, "text": "T_t" }, { "math_id": 1, "text": "C_t" }, { "math_id": 2, "text": "S_t" }, { "math_id": 3, "text": "I_t" }, { "math_id": 4, "text": "y_t = T_t + C_t + S_t + I_t, " }, { "math_id": 5, "text": "y_t = T_t \\times C_t \\times S_t \\times I_t. \\, " } ]
https://en.wikipedia.org/wiki?curid=7579995
75802
Consistency
Non-contradiction of a theory In classical, deductive logic, a consistent theory is one that does not lead to a logical contradiction. A theory formula_0 is consistent if there is no formula formula_1 such that both formula_1 and its negation formula_2 are elements of the set of consequences of formula_0. Let formula_3 be a set of closed sentences (informally "axioms") and formula_4 the set of closed sentences provable from formula_3 under some (specified, possibly implicitly) formal deductive system. The set of axioms formula_3 is consistent when there is no formula formula_1 such that formula_5 and formula_6. A "trivial" theory (i.e., one which proves every sentence in the language of the theory) is clearly inconsistent. Conversely, in an explosive formal system (e.g., classical or intuitionistic propositional or first-order logics) every inconsistent theory is trivial. Consistency of a theory is a syntactic notion, whose semantic counterpart is satisfiability. A theory is satisfiable if it has a model, i.e., there exists an interpretation under which all axioms in the theory are true. This is what "consistent" meant in traditional Aristotelian logic, although in contemporary mathematical logic the term "satisfiable" is used instead. In a sound formal system, every satisfiable theory is consistent, but the converse does not hold. If there exists a deductive system for which these semantic and syntactic definitions are equivalent for any theory formulated in a particular deductive logic, the logic is called complete. The completeness of the propositional calculus was proved by Paul Bernays in 1918 and Emil Post in 1921, while the completeness of (first order) predicate calculus was proved by Kurt Gödel in 1930, and consistency proofs for arithmetics restricted with respect to the induction axiom schema were proved by Ackermann (1924), von Neumann (1927) and Herbrand (1931). Stronger logics, such as second-order logic, are not complete. A consistency proof is a mathematical proof that a particular theory is consistent. The early development of mathematical proof theory was driven by the desire to provide finitary consistency proofs for all of mathematics as part of Hilbert's program. Hilbert's program was strongly impacted by the incompleteness theorems, which showed that sufficiently strong proof theories cannot prove their consistency (provided that they are consistent). Although consistency can be proved using model theory, it is often done in a purely syntactical way, without any need to reference some model of the logic. The cut-elimination (or equivalently the normalization of the underlying calculus if there is one) implies the consistency of the calculus: since there is no cut-free proof of falsity, there is no contradiction in general. Consistency and completeness in arithmetic and set theory. In theories of arithmetic, such as Peano arithmetic, there is an intricate relationship between the consistency of the theory and its completeness. A theory is complete if, for every formula φ in its language, at least one of φ or ¬φ is a logical consequence of the theory. Presburger arithmetic is an axiom system for the natural numbers under addition. It is both consistent and complete. Gödel's incompleteness theorems show that any sufficiently strong recursively enumerable theory of arithmetic cannot be both complete and consistent. Gödel's theorem applies to the theories of Peano arithmetic (PA) and primitive recursive arithmetic (PRA), but not to Presburger arithmetic. 
Moreover, Gödel's second incompleteness theorem shows that the consistency of sufficiently strong recursively enumerable theories of arithmetic can be tested in a particular way. Such a theory is consistent if and only if it does "not" prove a particular sentence, called the Gödel sentence of the theory, which is a formalized statement of the claim that the theory is indeed consistent. Thus the consistency of a sufficiently strong, recursively enumerable, consistent theory of arithmetic can never be proven in that system itself. The same result is true for recursively enumerable theories that can describe a strong enough fragment of arithmetic—including set theories such as Zermelo–Fraenkel set theory (ZF). These set theories cannot prove their own Gödel sentence—provided that they are consistent, which is generally believed. Because the consistency of ZF is not provable in ZF, the weaker notion &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;relative consistency is interesting in set theory (and in other sufficiently expressive axiomatic systems). If "T" is a theory and "A" is an additional axiom, "T" + "A" is said to be consistent relative to "T" (or simply that "A" is consistent with "T") if it can be proved that if "T" is consistent then "T" + "A" is consistent. If both "A" and ¬"A" are consistent with "T", then "A" is said to be independent of "T". First-order logic. Notation. In the following context of mathematical logic, the turnstile symbol formula_7 means "provable from". That is, formula_8 reads: "b" is provable from "a" (in some specified formal system). Henkin's theorem. Let formula_38 be a set of symbols. Let formula_9 be a maximally consistent set of formula_38-formulas containing witnesses. Define an equivalence relation formula_39 on the set of formula_38-terms by formula_40 if formula_41, where formula_42 denotes equality. Let formula_43 denote the equivalence class of terms containing formula_44; and let formula_45 where formula_46 is the set of terms based on the set of symbols formula_38. Define the formula_38-structure formula_47 over formula_48, also called the term-structure corresponding to formula_9, by: (1) for each formula_49-ary relation symbol formula_50, let formula_51 hold if and only if formula_52; (2) for each formula_49-ary function symbol formula_53, let formula_54; (3) for each constant symbol formula_55, let formula_56. Define a variable assignment formula_57 by formula_58 for each variable formula_20. Let formula_59 be the term interpretation associated with formula_9. Then for each formula_38-formula formula_1: formula_60 if and only if formula_61 Sketch of proof. There are several things to verify. First, that formula_39 is in fact an equivalence relation. Then, it needs to be verified that (1), (2), and (3) are well defined. This falls out of the fact that formula_39 is an equivalence relation and also requires a proof that (1) and (2) are independent of the choice of formula_62 class representatives. Finally, formula_63 can be verified by induction on formulas. Model theory. In ZFC set theory with classical first-order logic, an inconsistent theory formula_0 is one such that there exists a closed sentence formula_1 such that formula_0 contains both formula_1 and its negation formula_64. A consistent theory is one such that the following logically equivalent conditions hold: formula_65; formula_66. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
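In the propositional case, where a sound and complete proof calculus exists, consistency of a finite set of axioms coincides with satisfiability, and the latter can be checked by brute force over truth assignments. The following Python sketch illustrates this semantic test on two invented axiom sets; it is only an illustration of the satisfiability notion discussed above, not a theorem prover:

```python
from itertools import product

def is_satisfiable(axioms, variables):
    """Return True if some truth assignment makes every axiom true, i.e. the set has a model."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(axiom(assignment) for axiom in axioms):
            return True
    return False

variables = ["p", "q"]
consistent_axioms = [lambda v: v["p"] or v["q"],   # p or q
                     lambda v: not v["p"]]         # not p
inconsistent_axioms = [lambda v: v["p"],           # p
                       lambda v: not v["p"]]       # not p

print(is_satisfiable(consistent_axioms, variables))    # True: a model exists (p false, q true)
print(is_satisfiable(inconsistent_axioms, variables))  # False: no model, hence inconsistent
```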
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "\\varphi" }, { "math_id": 2, "text": "\\lnot\\varphi" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "\\langle A\\rangle" }, { "math_id": 5, "text": "\\varphi \\in \\langle A \\rangle" }, { "math_id": 6, "text": " \\lnot \\varphi \\in \\langle A \\rangle" }, { "math_id": 7, "text": "\\vdash" }, { "math_id": 8, "text": "a\\vdash b" }, { "math_id": 9, "text": "\\Phi" }, { "math_id": 10, "text": "\\operatorname{Con} \\Phi" }, { "math_id": 11, "text": "\\Phi \\vdash \\varphi" }, { "math_id": 12, "text": "\\Phi \\vdash \\lnot\\varphi" }, { "math_id": 13, "text": "\\operatorname{Inc}\\Phi" }, { "math_id": 14, "text": "\\operatorname{Con} (\\Phi \\cup \\{\\varphi\\})" }, { "math_id": 15, "text": "\\varphi \\in \\Phi" }, { "math_id": 16, "text": "\\exists x \\,\\varphi" }, { "math_id": 17, "text": "t" }, { "math_id": 18, "text": "(\\exists x \\, \\varphi \\to \\varphi {t \\over x}) \\in \\Phi" }, { "math_id": 19, "text": "\\varphi {t \\over x}" }, { "math_id": 20, "text": "x" }, { "math_id": 21, "text": "\\varphi,\\; \\Phi \\vdash \\varphi." }, { "math_id": 22, "text": "\\mathfrak{I}" }, { "math_id": 23, "text": "\\mathfrak{I} \\vDash \\Phi " }, { "math_id": 24, "text": " \\Phi \\vdash \\varphi" }, { "math_id": 25, "text": "\\operatorname{Con}\\left( \\Phi \\cup \\{\\lnot\\varphi\\}\\right)" }, { "math_id": 26, "text": "\\operatorname{Con}\\Phi" }, { "math_id": 27, "text": " \\operatorname{Con} \\left(\\Phi \\cup \\{\\varphi\\}\\right)" }, { "math_id": 28, "text": "\\operatorname{Con}\\left( \\Phi \\cup \\{\\varphi\\}\\right)" }, { "math_id": 29, "text": "\\operatorname{Con}\\left( \\Phi \\cup \\{\\lnot \\varphi\\}\\right)" }, { "math_id": 30, "text": " \\psi " }, { "math_id": 31, "text": "\\lnot \\varphi \\in \\Phi" }, { "math_id": 32, "text": "(\\varphi \\lor \\psi) \\in \\Phi" }, { "math_id": 33, "text": "\\psi \\in \\Phi" }, { "math_id": 34, "text": "(\\varphi\\to\\psi) \\in \\Phi" }, { "math_id": 35, "text": "\\varphi \\in \\Phi " }, { "math_id": 36, "text": "\\exists x \\, \\varphi \\in \\Phi" }, { "math_id": 37, "text": "\\varphi{t \\over x}\\in\\Phi" }, { "math_id": 38, "text": "S" }, { "math_id": 39, "text": "\\sim" }, { "math_id": 40, "text": "t_0 \\sim t_1" }, { "math_id": 41, "text": "\\; t_0 \\equiv t_1 \\in \\Phi" }, { "math_id": 42, "text": "\\equiv" }, { "math_id": 43, "text": "\\overline t" }, { "math_id": 44, "text": "t " }, { "math_id": 45, "text": "T_\\Phi := \\{ \\; \\overline t \\mid t \\in T^S \\} " }, { "math_id": 46, "text": "T^S " }, { "math_id": 47, "text": "\\mathfrak T_\\Phi " }, { "math_id": 48, "text": " T_\\Phi " }, { "math_id": 49, "text": "n" }, { "math_id": 50, "text": "R \\in S" }, { "math_id": 51, "text": "R^{\\mathfrak T_\\Phi} \\overline {t_0} \\ldots \\overline {t_{n-1}}" }, { "math_id": 52, "text": "\\; R t_0 \\ldots t_{n-1} \\in \\Phi;" }, { "math_id": 53, "text": "f \\in S" }, { "math_id": 54, "text": "f^{\\mathfrak T_\\Phi} (\\overline {t_0} \\ldots \\overline {t_{n-1}}) := \\overline {f t_0 \\ldots t_{n-1}};" }, { "math_id": 55, "text": "c \\in S" }, { "math_id": 56, "text": "c^{\\mathfrak T_\\Phi}:= \\overline c." }, { "math_id": 57, "text": "\\beta_\\Phi" }, { "math_id": 58, "text": "\\beta_\\Phi (x) := \\bar x" }, { "math_id": 59, "text": "\\mathfrak I_\\Phi := (\\mathfrak T_\\Phi,\\beta_\\Phi)" }, { "math_id": 60, "text": "\\mathfrak I_\\Phi \\vDash \\varphi" }, { "math_id": 61, "text": " \\; \\varphi \\in \\Phi." 
}, { "math_id": 62, "text": " t_0, \\ldots ,t_{n-1} " }, { "math_id": 63, "text": " \\mathfrak I_\\Phi \\vDash \\varphi " }, { "math_id": 64, "text": "\\varphi'" }, { "math_id": 65, "text": "\\{\\varphi,\\varphi'\\}\\not\\subseteq T" }, { "math_id": 66, "text": "\\varphi'\\not\\in T \\lor \\varphi\\not\\in T" } ]
https://en.wikipedia.org/wiki?curid=75802
7580572
Beckman–Quarles theorem
Unit-distance-preserving maps are isometries In geometry, the Beckman–Quarles theorem states that if a transformation of the Euclidean plane or a higher-dimensional Euclidean space preserves unit distances, then it preserves all Euclidean distances. Equivalently, every homomorphism from the unit distance graph of the plane to itself must be an isometry of the plane. The theorem is named after Frank S. Beckman and Donald A. Quarles Jr., who published this result in 1953; it was later rediscovered by other authors and re-proved in multiple ways. Analogous theorems for rational subsets of Euclidean spaces, or for non-Euclidean geometry, are also known. Statement and proof idea. Formally, the result is as follows. Let formula_0 be a function or multivalued function from a formula_1-dimensional Euclidean space to itself, and suppose that, for every pair of points formula_2 and formula_3 that are at unit distance from each other, every pair of images formula_4 and formula_5 are also at unit distance from each other. Then formula_0 must be an isometry: it is a one-to-one function that preserves distances between all pairs of points. One way of rephrasing the Beckman–Quarles theorem involves graph homomorphisms, mappings between undirected graphs that take vertices to vertices and edges to edges. For the unit distance graph whose vertices are all of the points in the plane, with an edge between any two points at unit distance, a homomorphism from this graph to itself is the same thing as a unit-distance-preserving transformation of the plane. Thus, the Beckman–Quarles theorem states that the only homomorphisms from this graph to itself are the obvious ones coming from isometries of the plane. For this graph, all homomorphisms are symmetries of the graph, the defining property of a class of graphs called cores. As well as the original proofs of Beckman and Quarles of the theorem, and the proofs in later papers rediscovering the result, several alternative proofs have been published. If formula_6 is the set of distances preserved by a mapping formula_0, then it follows from the triangle inequality that certain comparisons of other distances with members of formula_6 are preserved by formula_0. Therefore, if formula_6 can be shown to be a dense set, then all distances must be preserved. The main idea of several proofs of the Beckman–Quarles theorem is to use the structural rigidity of certain unit distance graphs, such as the graph of a regular simplex, to show that a mapping that preserves unit distances must preserve enough other distances to form a dense set. Counterexamples for other spaces. Beckman and Quarles observe that the theorem is not true for the real line (one-dimensional Euclidean space). As an example, consider the function formula_7 that returns formula_8 if formula_9 is an integer and returns formula_9 otherwise. This function obeys the preconditions of the theorem: it preserves unit distances. However, it does not preserve the distances between integers and non-integers. Beckman and Quarles provide another counterexample showing that their theorem cannot be generalized to an infinite-dimensional space, the Hilbert space of square-summable sequences of real numbers. "Square-summable" means that the sum of the squares of the values in a sequence from this space must be finite. 
The distance between any two such sequences can be defined in the same way as the Euclidean distance for finite-dimensional spaces, by summing the squares of the differences of coordinates and then taking the square root. To construct a function that preserves unit distances but not other distances, Beckman and Quarles compose two discontinuous functions: the first maps every point of the Hilbert space to a nearby point in a countable dense subspace, moving each point by a distance of less than formula_10 so that no two points at unit distance from each other are mapped to the same point; the second maps this countable dense subspace one-to-one onto an infinite regular simplex with unit edge lengths, such as the set of sequences that take the value formula_11 in a single position and are zero everywhere else. When these two transformations are combined, they map any two points at unit distance from each other to two different points in the dense subspace, and from there map them to two different points of the simplex, which are necessarily at unit distance apart. Therefore, their composition preserves unit distances. However, it is not an isometry, because it maps every pair of points, no matter their original distance, either to the same point or to a unit distance. Related results. Every Euclidean space can be mapped to a space of sufficiently higher dimension in a way that preserves unit distances but is not an isometry. To do so, following known results on the Hadwiger–Nelson problem, color the points of the given space with a finite number of colors so that no two points at unit distance have the same color. Then, map each color to a vertex of a higher-dimensional regular simplex with unit edge lengths. For instance, the Euclidean plane can be colored with seven colors, using a tiling by hexagons of slightly less than unit diameter, so that no two points of the same color are a unit distance apart. Then the points of the plane can be mapped by their colors to the seven vertices of a six-dimensional regular simplex. It is not known whether six is the smallest dimension for which this is possible, and improved results on the Hadwiger–Nelson problem could improve this bound. For transformations of the points with rational number coordinates, the situation is more complicated than for the full Euclidean plane. There exist unit-distance-preserving maps of rational points to rational points that do not preserve other distances for dimensions up to four, but none for dimensions five and above. Similar results hold also for mappings of the rational points that preserve other distances, such as the square root of two, in addition to the unit distances. For pairs of points whose distance is an algebraic number formula_12, there is a finite version of this theorem: Maehara showed that, for every algebraic number formula_12, there is a finite rigid unit distance graph formula_13 in which some two vertices formula_2 and formula_3 must be at distance formula_12 from each other. It follows from this that any transformation of the plane that preserves the unit distances in formula_13 must also preserve the distance between formula_2 and formula_3. A. D. Alexandrov asked which metric spaces have the same property, that unit-distance-preserving mappings are isometries, and following this question several authors have studied analogous results for other types of geometries. This is known as the Aleksandrov–Rassias problem. For instance, it is possible to replace Euclidean distance by the value of a quadratic form. Beckman–Quarles theorems have been proven for non-Euclidean spaces such as Minkowski space, inversive distance in the Möbius plane, finite Desarguesian planes, and spaces defined over fields with nonzero characteristic. Additionally, theorems of this type have been used to characterize transformations other than the isometries, such as Lorentz transformations. History. The Beckman–Quarles theorem was first published by Frank S.
Beckman and Donald A. Quarles Jr. in 1953. It was already named as "a theorem of Beckman and Quarles" as early as 1960, by Victor Klee. It was later rediscovered by other authors, through the 1960s and 1970s. Quarles was the son of communications engineer and defense executive Donald A. Quarles. He was educated at the Phillips Academy, Yale University, and the United States Naval Academy. He served as a meteorologist in the US Navy during World War II, and became an engineer for IBM. His work there included projects for tracking Sputnik, the development of a supercomputer, inkjet printing, and magnetic resonance imaging; he completed a Ph.D. in 1964 at the Courant Institute of Mathematical Sciences on the computer simulation of shock waves, jointly supervised by Robert D. Richtmyer and Peter Lax. Beckman studied at the City College of New York and served in the US Army during the war. Like Quarles, he worked for IBM, beginning in 1951. He earned a Ph.D. in 1965, under the supervision of Louis Nirenberg at Columbia University, on partial differential equations. In 1971, he left IBM to become the founding chair of the Computer and Information Science Department at Brooklyn College, and he later directed the graduate program in computer science at the Graduate Center, CUNY. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "q" }, { "math_id": 4, "text": "f(p)" }, { "math_id": 5, "text": "f(q)" }, { "math_id": 6, "text": "F" }, { "math_id": 7, "text": "f(x)" }, { "math_id": 8, "text": "x+1" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "\\tfrac12" }, { "math_id": 11, "text": "1/\\sqrt2" }, { "math_id": 12, "text": "A" }, { "math_id": 13, "text": "G" } ]
https://en.wikipedia.org/wiki?curid=7580572
75807
Contradiction
Logical incompatibility between two or more propositions In traditional logic, a contradiction occurs when a proposition conflicts either with itself or established fact. It is often used as a tool to detect disingenuous beliefs and bias. Illustrating a general tendency in applied logic, Aristotle's law of noncontradiction states that "It is impossible that the same thing can at the same time both belong and not belong to the same object and in the same respect." In modern formal logic and type theory, the term is mainly used instead for a "single" proposition, often denoted by the falsum symbol formula_0; a proposition is a contradiction if false can be derived from it, using the rules of the logic. It is a proposition that is unconditionally false (i.e., a self-contradictory proposition). This can be generalized to a collection of propositions, which is then said to "contain" a contradiction. History. By creation of a paradox, Plato's "Euthydemus" dialogue demonstrates the need for the notion of "contradiction". In the ensuing dialogue, Dionysodorus denies the existence of "contradiction", all the while that Socrates is contradicting him: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;... I in my astonishment said: What do you mean Dionysodorus? I have often heard, and have been amazed to hear, this thesis of yours, which is maintained and employed by the disciples of Protagoras and others before them, and which to me appears to be quite wonderful, and suicidal as well as destructive, and I think that I am most likely to hear the truth about it from you. The dictum is that there is no such thing as a falsehood; a man must either say what is true or say nothing. Is not that your position? Indeed, Dionysodorus agrees that "there is no such thing as false opinion ... there is no such thing as ignorance", and demands of Socrates to "Refute me." Socrates responds "But how can I refute you, if, as you say, to tell a falsehood is impossible?". In formal logic. In classical logic, particularly in propositional and first-order logic, a proposition formula_1 is a contradiction if and only if formula_2. Since for contradictory formula_1 it is true that formula_3 for all formula_4 (because formula_5), one may prove any proposition from a set of axioms which contains contradictions. This is called the "principle of explosion", or "ex falso quodlibet" ("from falsity, anything follows"). In a complete logic, a formula is contradictory if and only if it is unsatisfiable. Proof by contradiction. For a set of consistent premises formula_6 and a proposition formula_1, it is true in classical logic that formula_7 (i.e., formula_6 proves formula_1) if and only if formula_8 (i.e., formula_6 and formula_9 leads to a contradiction). Therefore, a proof that formula_8 also proves that formula_1 is true under the premises formula_6. The use of this fact forms the basis of a proof technique called proof by contradiction, which mathematicians use extensively to establish the validity of a wide range of theorems. This applies only in a logic where the law of excluded middle formula_10 is accepted as an axiom. Using minimal logic, a logic with similar axioms to classical logic but without "ex falso quodlibet" and proof by contradiction, we can investigate the axiomatic strength and properties of various rules that treat contradiction by considering theorems of classical logic that are not theorems of minimal logic. Each of these extensions leads to an intermediate logic: Symbolic representation. 
In mathematics, the symbol used to represent a contradiction within a proof varies. Some symbols that may be used to represent a contradiction include ↯, Opq, formula_20, ⊥, formula_21/ , and ※; in any symbolism, a contradiction may be substituted for the truth value "false", as symbolized, for instance, by "0" (as is common in Boolean algebra). It is not uncommon to see Q.E.D., or some of its variants, immediately after a contradiction symbol. In fact, this often occurs in a proof by contradiction to indicate that the original assumption was proved false—and hence that its negation must be true. The notion of contradiction in an axiomatic system and a proof of its consistency. In general, a consistency proof requires the following two things: But by whatever method one goes about it, all consistency proofs would "seem" to necessitate the primitive notion of "contradiction." Moreover, it "seems" as if this notion would simultaneously have to be "outside" the formal system in the definition of tautology. When Emil Post, in his 1921 "Introduction to a General Theory of Elementary Propositions", extended his proof of the consistency of the propositional calculus (i.e. the logic) beyond that of "Principia Mathematica" (PM), he observed that with respect to a "generalized" set of postulates (i.e. axioms), he would no longer be able to automatically invoke the notion of "contradiction"—such a notion might not be contained in the postulates: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The prime requisite of a set of postulates is that it be consistent. Since the ordinary notion of consistency involves that of contradiction, which again involves negation, and since this function does not appear in general as a primitive in [the "generalized" set of postulates] a new definition must be given. Post's solution to the problem is described in the demonstration "An Example of a Successful Absolute Proof of Consistency", offered by Ernest Nagel and James R. Newman in their 1958 "Gödel's Proof". They too observed a problem with respect to the notion of "contradiction" with its usual "truth values" of "truth" and "falsity". They observed that: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The property of being a tautology has been defined in notions of truth and falsity. Yet these notions obviously involve a reference to something "outside" the formula calculus. Therefore, the procedure mentioned in the text in effect offers an "interpretation" of the calculus, by supplying a model for the system. This being so, the authors have not done what they promised, namely, "to define a property of formulas in terms of purely structural features of the formulas themselves". [Indeed] ... proofs of consistency which are based on models, and which argue from the truth of axioms to their consistency, merely shift the problem. Given some "primitive formulas" such as PM's primitives S1 V S2 [inclusive OR] and ~S (negation), one is forced to define the axioms in terms of these primitive notions. In a thorough manner, Post demonstrates in PM, and defines (as do Nagel and Newman, see below) that the property of "tautologous" – as yet to be defined – is "inherited": if one begins with a set of tautologous axioms (postulates) and a deduction system that contains substitution and modus ponens, then a "consistent" system will yield only tautologous formulas. 
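The truth-table notions of tautology and contradiction used in this discussion are easy to make concrete. The following is a minimal sketch in Python (the formula encoding and function names are our own, introduced only for illustration): it decides, by brute force over all truth assignments, whether a propositional formula is a tautology or a contradiction (that is, unsatisfiable), and it checks that modus ponens, written as the single formula (A ∧ (A → B)) → B, is a tautology, which reflects why deduction by modus ponens from tautologies yields only tautologies.

```python
from itertools import product

# A propositional formula is a nested tuple:
#   ("var", name), ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g)

def variables(f):
    """Collect the variable names occurring in a formula."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(sub) for sub in f[1:]))

def evaluate(f, assignment):
    """Evaluate a formula under a truth assignment (dict: name -> bool)."""
    op = f[0]
    if op == "var":
        return assignment[f[1]]
    if op == "not":
        return not evaluate(f[1], assignment)
    if op == "and":
        return evaluate(f[1], assignment) and evaluate(f[2], assignment)
    if op == "or":
        return evaluate(f[1], assignment) or evaluate(f[2], assignment)
    if op == "imp":
        return (not evaluate(f[1], assignment)) or evaluate(f[2], assignment)
    raise ValueError(f"unknown connective: {op}")

def truth_values(f):
    names = sorted(variables(f))
    for values in product([True, False], repeat=len(names)):
        yield evaluate(f, dict(zip(names, values)))

def is_tautology(f):
    return all(truth_values(f))

def is_contradiction(f):
    """A formula is a contradiction iff it is unsatisfiable."""
    return not any(truth_values(f))

p = ("var", "p")
q = ("var", "q")

# p and not-p is a contradiction; its negation is a tautology.
print(is_contradiction(("and", p, ("not", p))))        # True
print(is_tautology(("not", ("and", p, ("not", p)))))   # True

# Modus ponens, written as a single formula, is a tautology.
modus_ponens = ("imp", ("and", p, ("imp", p, q)), q)
print(is_tautology(modus_ponens))                      # True
```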
On the topic of the definition of "tautologous", Nagel and Newman create two mutually exclusive and exhaustive classes K1 and K2, into which fall (the outcome of) the axioms when their variables (e.g. S1 and S2 are assigned from these classes). This also applies to the primitive formulas. For example: "A formula having the form S1 V S2 is placed into class K2, if both S1 and S2 are in K2; otherwise it is placed in K1", and "A formula having the form ~S is placed in K2, if S is in K1; otherwise it is placed in K1". Hence Nagel and Newman can now define the notion of "tautologous": "a formula is a tautology if and only if it falls in the class K1, no matter in which of the two classes its elements are placed". This way, the property of "being tautologous" is described—without reference to a model or an interpretation. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;For example, given a formula such as ~S1 V S2 and an assignment of K1 to S1 and K2 to S2 one can evaluate the formula and place its outcome in one or the other of the classes. The assignment of K1 to S1 places ~S1 in K2, and now we can see that our assignment causes the formula to fall into class K2. Thus by definition our formula is not a tautology. Post observed that, if the system were inconsistent, a deduction in it (that is, the last formula in a sequence of formulas derived from the tautologies) could ultimately yield S itself. As an assignment to variable S can come from either class K1 or K2, the deduction violates the inheritance characteristic of tautology (i.e., the derivation must yield an evaluation of a formula that will fall into class K1). From this, Post was able to derive the following definition of inconsistency—"without the use of the notion of contradiction": &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Definition. "A system will be said to be inconsistent if it yields the assertion of the unmodified variable p [S in the Newman and Nagel examples]." In other words, the notion of "contradiction" can be dispensed when constructing a proof of consistency; what replaces it is the notion of "mutually exclusive and exhaustive" classes. An axiomatic system need not include the notion of "contradiction". Philosophy. Adherents of the epistemological theory of coherentism typically claim that as a necessary condition of the justification of a belief, that belief must form a part of a logically non-contradictory system of beliefs. Some dialetheists, including Graham Priest, have argued that coherence may not require consistency. Pragmatic contradictions. A pragmatic contradiction occurs when the very statement of the argument contradicts the claims it purports. An inconsistency arises, in this case, because the act of utterance, rather than the content of what is said, undermines its conclusion. Dialectical materialism. In dialectical materialism: Contradiction—as derived from Hegelianism—usually refers to an opposition inherently existing within one realm, one unified force or object. This contradiction, as opposed to metaphysical thinking, is not an objectively impossible thing, because these contradicting forces exist in objective reality, not cancelling each other out, but actually defining each other's existence. According to Marxist theory, such a contradiction can be found, for example, in the fact that: Hegelian and Marxist theories stipulate that the dialectic nature of history will lead to the sublation, or synthesis, of its contradictions. 
Marx therefore postulated that history would logically make capitalism evolve into a socialist society where the means of production would equally serve the working and producing class of society, thus resolving the prior contradiction between (a) and (b). Outside formal logic. Colloquial usage can label actions or statements as contradicting each other when due (or perceived as due) to presuppositions which are contradictory in the logical sense. Proof by contradiction is used in mathematics to construct proofs. The scientific method uses contradiction to falsify bad theory. Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\bot" }, { "math_id": 1, "text": "\\varphi" }, { "math_id": 2, "text": "\\varphi\\vdash\\bot" }, { "math_id": 3, "text": "\\vdash\\varphi\\rightarrow\\psi" }, { "math_id": 4, "text": "\\psi" }, { "math_id": 5, "text": "\\bot\\vdash\\psi" }, { "math_id": 6, "text": "\\Sigma" }, { "math_id": 7, "text": "\\Sigma \\vdash\\varphi" }, { "math_id": 8, "text": "\\Sigma \\cup \\{\\neg\\varphi\\} \\vdash \\bot" }, { "math_id": 9, "text": "\\neg\\varphi" }, { "math_id": 10, "text": "A\\vee\\neg A" }, { "math_id": 11, "text": "\\neg\\neg A \\implies A" }, { "math_id": 12, "text": "\\bot \\implies A" }, { "math_id": 13, "text": "A \\land \\neg A \\implies B" }, { "math_id": 14, "text": "((A \\implies B) \\implies A) \\implies A" }, { "math_id": 15, "text": "A \\implies B \\vee B \\implies A" }, { "math_id": 16, "text": "A \\vee \\neg A" }, { "math_id": 17, "text": "(\\neg A \\implies A) \\implies A" }, { "math_id": 18, "text": "\\neg A \\vee \\neg\\neg A" }, { "math_id": 19, "text": "\\neg(A \\land B) \\iff (\\neg A) \\vee (\\neg B)" }, { "math_id": 20, "text": "\\Rightarrow \\Leftarrow" }, { "math_id": 21, "text": "\\leftrightarrow \\ \\!\\!\\!\\!\\!\\!\\!" } ]
https://en.wikipedia.org/wiki?curid=75807
7583
Cauchy–Riemann equations
Characteristic property of holomorphic functions In the field of complex analysis in mathematics, the Cauchy–Riemann equations, named after Augustin Cauchy and Bernhard Riemann, consist of a system of two partial differential equations which form a necessary and sufficient condition for a complex function of a complex variable to be complex differentiable. These equations are ∂"u"/∂"x" = ∂"v"/∂"y" (1a) and ∂"u"/∂"y" = -∂"v"/∂"x" (1b), where "u"("x", "y") and "v"("x", "y") are real differentiable bivariate functions. Typically, "u" and "v" are respectively the real and imaginary parts of a complex-valued function "f"("x" + "iy") = "f"("x", "y") = "u"("x", "y") + "iv"("x", "y") of a single complex variable "z" = "x" + "iy" where "x" and "y" are real variables; "u" and "v" are real differentiable functions of the real variables. Then "f" is complex differentiable at a complex point if and only if the partial derivatives of "u" and "v" satisfy the Cauchy–Riemann equations at that point. A holomorphic function is a complex function that is differentiable at every point of some open subset of the complex plane C. It has been proved that holomorphic functions are analytic and analytic complex functions are complex-differentiable. In particular, holomorphic functions are infinitely complex-differentiable. This equivalence between differentiability and analyticity is the starting point of all complex analysis. History. The Cauchy–Riemann equations first appeared in the work of Jean le Rond d'Alembert. Later, Leonhard Euler connected this system to the analytic functions. Cauchy then used these equations to construct his theory of functions. Riemann's dissertation on the theory of functions appeared in 1851. Simple example. Suppose that formula_0. The complex-valued function formula_1 is differentiable at any point z in the complex plane. formula_2 The real part formula_3 and the imaginary part formula_4 are formula_5 and their partial derivatives are formula_6 We see that indeed the Cauchy–Riemann equations are satisfied, formula_7 and formula_8. Interpretation and reformulation. The Cauchy–Riemann equations are one way of looking at the condition for a function to be differentiable in the sense of complex analysis: in other words, they encapsulate the notion of a function of a complex variable by means of conventional differential calculus. In the theory there are several other major ways of looking at this notion, and the translation of the condition into other languages is often needed. Conformal mappings. First, the Cauchy–Riemann equations may be written in complex form as "i" ∂"f"/∂"x" = ∂"f"/∂"y". In this form, the equations correspond structurally to the condition that the Jacobian matrix is of the form formula_9 where formula_10 and formula_11. A matrix of this form is the matrix representation of a complex number. Geometrically, such a matrix is always the composition of a rotation with a scaling, and in particular preserves angles. The Jacobian of a function "f"("z") takes infinitesimal line segments at the intersection of two curves in z and rotates them to the corresponding segments in "f"("z"). Consequently, a function satisfying the Cauchy–Riemann equations, with a nonzero derivative, preserves the angle between curves in the plane. That is, the Cauchy–Riemann equations are the conditions for a function to be conformal. Moreover, because the composition of a conformal transformation with another conformal transformation is also conformal, the composition of a solution of the Cauchy–Riemann equations with a conformal map must itself solve the Cauchy–Riemann equations.
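The Jacobian structure just described can be checked numerically. The sketch below (plain Python with central finite differences; the sample point, step size, and tolerances are arbitrary choices made for this sketch) verifies, for the function f(z) = z² from the simple example above, that the partial derivatives satisfy the Cauchy–Riemann equations and that the Jacobian has the rotation–scaling form with a = ∂u/∂x and b = ∂v/∂x:

```python
# Finite-difference check of the Cauchy-Riemann equations and of the
# rotation-scaling form of the Jacobian for f(z) = z**2.
def f(z: complex) -> complex:
    return z * z

def u(x, y):  # real part of f
    return f(complex(x, y)).real

def v(x, y):  # imaginary part of f
    return f(complex(x, y)).imag

def partial(g, x, y, wrt, h=1e-6):
    """Central finite-difference approximation of dg/dx or dg/dy."""
    if wrt == "x":
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 1.3, -0.7   # arbitrary sample point
ux = partial(u, x0, y0, "x")
uy = partial(u, x0, y0, "y")
vx = partial(v, x0, y0, "x")
vy = partial(v, x0, y0, "y")

# Cauchy-Riemann: u_x = v_y and u_y = -v_x (up to finite-difference error).
print(abs(ux - vy) < 1e-6, abs(uy + vx) < 1e-6)           # True True

# The Jacobian [[u_x, u_y], [v_x, v_y]] has the form [[a, -b], [b, a]],
# a rotation composed with a scaling by |f'(z)| = sqrt(a*a + b*b).
a, b = ux, vx
print(abs(uy + b) < 1e-6, abs(vy - a) < 1e-6)             # True True
print((a * a + b * b) ** 0.5, abs(2 * complex(x0, y0)))   # both approx |2z|
```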
Thus the Cauchy–Riemann equations are conformally invariant. Complex differentiability. Let formula_12 where formula_13 and formula_14 are real-valued functions, be a complex-valued function of a complex variable formula_15 where formula_16 and formula_17 are real variables. formula_18 so the function can also be regarded as a function of real variables formula_16 and formula_17. Then, the "complex-derivative" of formula_19 at a point formula_20 is defined by formula_21 provided this limit exists (that is, the limit exists along every path approaching formula_22, and does not depend on the chosen path). A fundamental result of complex analysis is that formula_23 is complex differentiable at formula_24 (that is, it has a complex-derivative), if and only if the bivariate real functions formula_25 and formula_26 are differentiable at formula_27 and satisfy the Cauchy–Riemann equations at this point. In fact, if the complex derivative exists at formula_28, then it may be computed by taking the limit at formula_28 along the real axis and the imaginary axis, and the two limits must be equal. Along the real axis, the limit is formula_29 and along the imaginary axis, the limit is formula_30 So, the equality of the derivatives implies formula_31 which is the complex form of Cauchy–Riemann equations at formula_28. Conversely, if f is differentiable at formula_22 (in the real sense) and satisfies the Cauchy-Riemann equations there, then it is complex-differentiable at this point. Assume that f as a function of two real variables x and "y" is differentiable at "z"0 (real differentiable). This is equivalent to the existence of the following linear approximation formula_36where formula_37, formula_38, "z" = "x" + "iy", and formula_39 as Δ"z" → 0. Since formula_40 and formula_41, the above can be re-written as formula_42formula_43 Now, if formula_44 is real, formula_45, while if it is imaginary, then formula_46. Therefore, the second term is independent of the path of the limit formula_47 when (and only when) it vanishes identically: formula_48, which is precisely the Cauchy–Riemann equations in the complex form. This proof also shows that, in that case, formula_49 Note that the hypothesis of real differentiability at the point formula_24 is essential and cannot be dispensed with. For example, the function formula_50, regarded as a complex function with imaginary part identically zero, has both partial derivatives at formula_51, and it moreover satisfies the Cauchy–Riemann equations at that point, but it is not differentiable in the sense of real functions (of several variables), and so the first condition, that of real differentiability, is not met. Therefore, this function is not complex differentiable. Some sources state a sufficient condition for the complex differentiability at a point formula_24 as, in addition to the Cauchy–Riemann equations, the partial derivatives of formula_52 and formula_14 be continuous at the point because this continuity condition ensures the existence of the aforementioned linear approximation. Note that it is not a necessary condition for the complex differentiability. For example, the function formula_53 is complex differentiable at 0, but its real and imaginary parts have discontinuous partial derivatives there. Since complex differentiability is usually considered in an open set, where it in fact implies continuity of all partial derivatives (see below), this distinction is often elided in the literature. Independence of the complex conjugate. 
The above proof suggests another interpretation of the Cauchy–Riemann equations. The complex conjugate of formula_54, denoted formula_55, is defined by formula_56 for real variables "formula_57" and formula_58. Defining the two Wirtinger derivatives asformula_59 the Cauchy–Riemann equations can then be written as a single equation formula_60 and the complex derivative of "formula_61" in that case is formula_62 In this form, the Cauchy–Riemann equations can be interpreted as the statement that a complex function "formula_61" of a complex variable "formula_63" is independent of the variable formula_55. As such, we can view analytic functions as true functions of "one" complex variable ("formula_63") instead of complex functions of "two" real variables ("formula_64" and "formula_65"). Physical interpretation. A standard physical interpretation of the Cauchy–Riemann equations going back to Riemann's work on function theory is that "u" represents a velocity potential of an incompressible steady fluid flow in the plane, and "v" is its stream function. Suppose that the pair of (twice continuously differentiable) functions "u" and "v" satisfies the Cauchy–Riemann equations. We will take "u" to be a velocity potential, meaning that we imagine a flow of fluid in the plane such that the velocity vector of the fluid at each point of the plane is equal to the gradient of "u", defined by formula_66 By differentiating the Cauchy–Riemann equations for the functions "u" and "v", with the symmetry of second derivatives, one shows that "u" solves Laplace's equation: formula_67 That is, "u" is a harmonic function. This means that the divergence of the gradient is zero, and so the fluid is incompressible. The function "v" also satisfies the Laplace equation, by a similar analysis. Also, the Cauchy–Riemann equations imply that the dot product formula_68 (formula_69), i.e., the direction of the maximum slope of "u" and that of "v" are orthogonal to each other. This implies that the gradient of "u" must point along the formula_70 curves; so these are the streamlines of the flow. The formula_71 curves are the equipotential curves of the flow. A holomorphic function can therefore be visualized by plotting the two families of level curves formula_72 and formula_73. Near points where the gradient of "u" (or, equivalently, "v") is not zero, these families form an orthogonal family of curves. At the points where formula_74, the stationary points of the flow, the equipotential curves of formula_72 intersect. The streamlines also intersect at the same point, bisecting the angles formed by the equipotential curves. Harmonic vector field. Another interpretation of the Cauchy–Riemann equations can be found in Pólya &amp; Szegő. Suppose that "u" and "v" satisfy the Cauchy–Riemann equations in an open subset of R2, and consider the vector field formula_75 regarded as a (real) two-component vector. Then the second Cauchy–Riemann equation (1b) asserts that formula_76 is irrotational (its curl is 0): formula_77 The first Cauchy–Riemann equation (1a) asserts that the vector field is solenoidal (or divergence-free): formula_78 Owing respectively to Green's theorem and the divergence theorem, such a field is necessarily a conservative one, and it is free from sources or sinks, having net flux equal to zero through any open domain without holes. (These two observations combine as real and imaginary parts in Cauchy's integral theorem.) In fluid dynamics, such a vector field is a potential flow. 
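These statements can be verified symbolically for the running example f(z) = z². The short sketch below uses SymPy (the choice of tool is ours, not part of the original exposition) to check that u and v are harmonic, that their gradients are orthogonal, and that the field (u, -v) is irrotational and divergence-free:

```python
# Symbolic check, for f(z) = z**2, of the harmonicity, orthogonality,
# and harmonic-vector-field properties described above.
import sympy as sp

x, y = sp.symbols("x y", real=True)
u = x**2 - y**2          # real part of (x + i*y)**2
v = 2 * x * y            # imaginary part

laplacian_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)
laplacian_v = sp.diff(v, x, 2) + sp.diff(v, y, 2)
grad_dot = sp.diff(u, x) * sp.diff(v, x) + sp.diff(u, y) * sp.diff(v, y)

print(sp.simplify(laplacian_u), sp.simplify(laplacian_v))  # 0 0
print(sp.simplify(grad_dot))                               # 0

# The field (u, -v) is both irrotational and solenoidal:
curl = sp.diff(-v, x) - sp.diff(u, y)   # should be 0
div = sp.diff(u, x) + sp.diff(-v, y)    # should be 0
print(sp.simplify(curl), sp.simplify(div))                 # 0 0
```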
In magnetostatics, such vector fields model static magnetic fields on a region of the plane containing no current. In electrostatics, they model static electric fields in a region of the plane containing no electric charge. This interpretation can equivalently be restated in the language of differential forms. The pair "u" and "v" satisfy the Cauchy–Riemann equations if and only if the one-form formula_79 is both closed and coclosed (a harmonic differential form). Preservation of complex structure. Another formulation of the Cauchy–Riemann equations involves the complex structure in the plane, given by formula_80 This is a complex structure in the sense that the square of "J" is the negative of the 2×2 identity matrix: formula_81. As above, if "u"("x","y") and "v"("x","y") are two functions in the plane, put formula_82 The Jacobian matrix of "f" is the matrix of partial derivatives formula_83 Then the pair of functions "u", "v" satisfies the Cauchy–Riemann equations if and only if the 2×2 matrix "Df" commutes with "J". This interpretation is useful in symplectic geometry, where it is the starting point for the study of pseudoholomorphic curves. Other representations. Other representations of the Cauchy–Riemann equations occasionally arise in other coordinate systems. If (1a) and (1b) hold for a differentiable pair of functions "u" and "v", then so do formula_84 for any coordinate system ("n"("x", "y"), "s"("x", "y")) such that the pair formula_85 is orthonormal and positively oriented. As a consequence, in particular, in the system of coordinates given by the polar representation formula_86, the equations then take the form formula_87 Combining these into one equation for "f" gives formula_88 The inhomogeneous Cauchy–Riemann equations consist of the two equations for a pair of unknown functions "u"("x", "y") and "v"("x", "y") of two real variables formula_89 for some given functions α("x", "y") and β("x", "y") defined in an open subset of R2. These equations are usually combined into a single equation formula_90 where "f" = "u" + i"v" and "𝜑" = ("α" + i"β")/2. If "𝜑" is "C""k", then the inhomogeneous equation is explicitly solvable in any bounded domain "D", provided "𝜑" is continuous on the closure of "D". Indeed, by the Cauchy integral formula, formula_91 for all "ζ" ∈ "D". Generalizations. Goursat's theorem and its generalizations. Suppose that "f" = "u" + i"v" is a complex-valued function which is differentiable as a function "f" : R2 → R2. Then Goursat's theorem asserts that "f" is analytic in an open complex domain Ω if and only if it satisfies the Cauchy–Riemann equation in the domain. In particular, continuous differentiability of "f" need not be assumed. The hypotheses of Goursat's theorem can be weakened significantly. If "f" = "u" + i"v" is continuous in an open set Ω and the partial derivatives of "f" with respect to "x" and "y" exist in Ω, and satisfy the Cauchy–Riemann equations throughout Ω, then "f" is holomorphic (and thus analytic). This result is the Looman–Menchoff theorem. The hypothesis that "f" obey the Cauchy–Riemann equations throughout the domain Ω is essential. It is possible to construct a continuous function satisfying the Cauchy–Riemann equations at a point, but which is not analytic at the point (e.g., "f"("z") = "z"5/|z|4). 
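The function f(z) = z⁵/|z|⁴ just mentioned can be probed numerically: the difference quotients along the real and imaginary axes agree at the origin (so the Cauchy–Riemann equations hold there), yet the quotient along a diagonal direction gives a different value, so no complex derivative exists. A small sketch (the step size and the particular diagonal direction are arbitrary choices for illustration):

```python
# Numerical probe of f(z) = z**5 / |z|**4 at the origin: the Cauchy-Riemann
# equations hold there, but the complex difference quotient depends on the
# direction of approach, so f is not complex differentiable at 0.
import cmath

def f(z: complex) -> complex:
    return 0 if z == 0 else z**5 / abs(z)**4

def quotient(direction: complex, t: float = 1e-8) -> complex:
    """Difference quotient (f(h) - f(0)) / h for h = t * direction."""
    h = t * direction
    return (f(h) - f(0)) / h

along_real = quotient(1)                              # approach along the real axis
along_imag = quotient(1j)                             # approach along the imaginary axis
along_diag = quotient(cmath.exp(1j * cmath.pi / 4))   # approach along a diagonal

# Equal axis limits mean u_x = v_y and u_y = -v_x at the origin.
print(along_real, along_imag)   # both 1 (up to rounding)
print(along_diag)               # -1: a different limit, so f'(0) does not exist
```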
Similarly, some additional assumption is needed besides the Cauchy–Riemann equations (such as continuity), as the following example illustrates formula_92 which satisfies the Cauchy–Riemann equations everywhere, but fails to be continuous at "z" = 0. Nevertheless, if a function satisfies the Cauchy–Riemann equations in an open set in a weak sense, then the function is analytic. More precisely: If "f"("z") is locally integrable in an open domain Ω ⊂ C, and satisfies the Cauchy–Riemann equations weakly, then "f" agrees almost everywhere with an analytic function in Ω. This is in fact a special case of a more general result on the regularity of solutions of hypoelliptic partial differential equations. Several variables. There are Cauchy–Riemann equations, appropriately generalized, in the theory of several complex variables. They form a significant overdetermined system of PDEs. This is done using a straightforward generalization of the Wirtinger derivative, where the function in question is required to have the (partial) Wirtinger derivative with respect to each complex variable vanish. Complex differential forms. As often formulated, the "d-bar operator" formula_93 annihilates holomorphic functions. This generalizes most directly the formulation formula_94 where formula_95 Bäcklund transform. Viewed as conjugate harmonic functions, the Cauchy–Riemann equations are a simple example of a Bäcklund transform. More complicated, generally non-linear Bäcklund transforms, such as in the sine-Gordon equation, are of great interest in the theory of solitons and integrable systems. Definition in Clifford algebra. In the Clifford algebra formula_96, the complex number formula_97 is represented as formula_98 where formula_99, (formula_100, so formula_101). The Dirac operator in this Clifford algebra is defined as formula_102. The function formula_103 is considered analytic if and only if formula_104, which can be calculated in the following way: formula_105 Grouping by formula_106 and formula_107: formula_108 Hence, in traditional notation: formula_109 Conformal mappings in higher dimensions. Let Ω be an open set in the Euclidean space R"n". The equation for an orientation-preserving mapping formula_110 to be a conformal mapping (that is, angle-preserving) is that formula_111 where "Df" is the Jacobian matrix, with transpose formula_112, and "I" denotes the identity matrix. For "n" = 2, this system is equivalent to the standard Cauchy–Riemann equations of complex variables, and the solutions are holomorphic functions. In dimension "n" &gt; 2, this is still sometimes called the Cauchy–Riemann system, and Liouville's theorem implies, under suitable smoothness assumptions, that any such mapping is a Möbius transformation. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "z = x + iy" }, { "math_id": 1, "text": "f(z) = z^2" }, { "math_id": 2, "text": "f(z) = (x + iy)^2 = x^2 - y^2 + 2ixy" }, { "math_id": 3, "text": "u(x,y)" }, { "math_id": 4, "text": "v(x, y)" }, { "math_id": 5, "text": "\\begin{align}\n u(x, y) &= x^2 - y^2 \\\\\n v(x, y) &= 2xy\n\\end{align}" }, { "math_id": 6, "text": "u_x = 2x;\\quad u_y = -2y;\\quad v_x = 2y;\\quad v_y = 2x" }, { "math_id": 7, "text": "u_x = v_y" }, { "math_id": 8, "text": "u_y = -v_x" }, { "math_id": 9, "text": "\\begin{pmatrix}\n a & -b \\\\\n b & a \n\\end{pmatrix}," }, { "math_id": 10, "text": " a = \\partial u/\\partial x = \\partial v/\\partial y" }, { "math_id": 11, "text": " b = \\partial v/\\partial x = -\\partial u/\\partial y" }, { "math_id": 12, "text": " f(z) = u(z) + i \\cdot v(z) " }, { "math_id": 13, "text": "u" }, { "math_id": 14, "text": "v" }, { "math_id": 15, "text": " z = x + i y" }, { "math_id": 16, "text": " x" }, { "math_id": 17, "text": " y" }, { "math_id": 18, "text": " f(z) = f(x + iy) = f(x,y)" }, { "math_id": 19, "text": " f " }, { "math_id": 20, "text": " z_0=x_0+iy_0 " }, { "math_id": 21, "text": " f'(z_0) =\\lim_{\\underset{h\\in\\Complex}{h\\to 0}} \\frac{f(z_0+h)-f(z_0)}{h} " }, { "math_id": 22, "text": " z_{0} " }, { "math_id": 23, "text": "f" }, { "math_id": 24, "text": "z_0" }, { "math_id": 25, "text": "u(x+iy)" }, { "math_id": 26, "text": "v(x+iy)" }, { "math_id": 27, "text": "(x_0,y_0)," }, { "math_id": 28, "text": " z_0" }, { "math_id": 29, "text": "\\lim_{\\underset{h\\in\\Reals}{h\\to 0}} \\frac{f(z_0+h)-f(z_0)}{h} = \\left. \\frac{\\partial f}{\\partial x} \\right \\vert_{z_0}" }, { "math_id": 30, "text": "\\lim_{\\underset{h\\in \\Reals}{h\\to 0}} \\frac{f(z_0+ih)-f(z_0)}{ih} = \\left. \\frac{1}{i}\\frac{\\partial f}{\\partial y} \\right \\vert _{z_0}." }, { "math_id": 31, "text": "i \\left. \\frac{\\partial f}{\\partial x} \\right \\vert _{z_0} = \\left. \\frac{\\partial f}{\\partial y} \\right \\vert _{z_0}" }, { "math_id": 32, "text": "f'(z_0)" }, { "math_id": 33, "text": "\\mathbb C" }, { "math_id": 34, "text": "|f(z)-f(z_0)-f'(z_0)(z-z_0)|/|z-z_0|\\to 0" }, { "math_id": 35, "text": "z\\to z_0" }, { "math_id": 36, "text": " \\Delta f(z_0) = f(z_0 + \\Delta z) - f(z_0) = f_x \\,\\Delta x + f_y \\,\\Delta y + \\eta(\\Delta z)" }, { "math_id": 37, "text": " f_x = \\left. \\frac{\\partial f}{\\partial x}\\right \\vert _{z_0} " }, { "math_id": 38, "text": " f_y = \\left. \\frac{\\partial f}{\\partial y} \\right \\vert _{z_0} " }, { "math_id": 39, "text": "\\eta(\\Delta z) / |\\Delta z| \\to 0" }, { "math_id": 40, "text": " \\Delta z + \\Delta \\bar{z}= 2 \\, \\Delta x " }, { "math_id": 41, "text": " \\Delta z - \\Delta \\bar{z}=2i \\, \\Delta y " }, { "math_id": 42, "text": " \\Delta f(z_0) = \\frac{f_x - if_y}{2} \\, \\Delta z + \\frac{f_x + if_y}{2} \\, \\Delta \\bar{z} + \\eta(\\Delta z)\\, " }, { "math_id": 43, "text": "\\frac{\\Delta f}{\\Delta z} = \\frac{f_x -i f_y}{2}+ \\frac{f_x + i f_y}{2}\\cdot \\frac{\\Delta\\bar{z}}{\\Delta z} + \\frac{\\eta(\\Delta z)}{\\Delta z}, \\;\\;\\;\\;(\\Delta z \\neq 0). " }, { "math_id": 44, "text": "\\Delta z" }, { "math_id": 45, "text": "\\Delta\\bar z/\\Delta z = 1" }, { "math_id": 46, "text": "\\Delta\\bar z/\\Delta z=-1" }, { "math_id": 47, "text": "\\Delta z\\to 0" }, { "math_id": 48, "text": "f_x + i f_y=0" }, { "math_id": 49, "text": "\\left.\\frac{df}{dz}\\right|_{z_0} = \\lim_{\\Delta z\\to 0}\\frac{\\Delta f}{\\Delta z} = \\frac{f_x - i f_y}{2}." 
}, { "math_id": 50, "text": "f(x,y) = \\sqrt{|xy|}" }, { "math_id": 51, "text": "(x_0,y_0)=(0,0)" }, { "math_id": 52, "text": "u" }, { "math_id": 53, "text": "f(z) = z^2e^{i/|z|}" }, { "math_id": 54, "text": "z" }, { "math_id": 55, "text": "\\bar{z}" }, { "math_id": 56, "text": "\\overline{x + iy} := x - iy" }, { "math_id": 57, "text": "x" }, { "math_id": 58, "text": "y" }, { "math_id": 59, "text": " \\frac{\\partial}{\\partial z}\n = \\frac{1}{2} \\left( \\frac{\\partial}{\\partial x} - i \\frac{\\partial}{\\partial y} \\right), \\;\\;\\; \\frac{\\partial}{\\partial\\bar{z}}\n = \\frac{1}{2} \\left( \\frac{\\partial}{\\partial x} + i \\frac{\\partial}{\\partial y} \\right),\n" }, { "math_id": 60, "text": "\\frac{\\partial f}{\\partial\\bar{z}} = 0," }, { "math_id": 61, "text": "f" }, { "math_id": 62, "text": "\\frac{df}{dz}=\\frac{\\partial f}{\\partial z}." }, { "math_id": 63, "text": "z" }, { "math_id": 64, "text": "x" }, { "math_id": 65, "text": "y" }, { "math_id": 66, "text": "\\nabla u = \\frac{\\partial u}{\\partial x}\\mathbf i + \\frac{\\partial u}{\\partial y}\\mathbf j." }, { "math_id": 67, "text": "\\frac{\\partial^2u}{\\partial x^2} + \\frac{\\partial^2u}{\\partial y^2} = 0." }, { "math_id": 68, "text": "\\nabla u\\cdot\\nabla v = 0" }, { "math_id": 69, "text": "\\nabla u\\cdot\\nabla v = \\frac{\\partial u}{\\partial x} \\cdot \\frac{\\partial v}{\\partial x} + \\frac{\\partial u}{\\partial y} \\cdot \\frac{\\partial v}{\\partial y} = \\frac{\\partial u}{\\partial x} \\cdot \\frac{\\partial v}{\\partial x} - \\frac{\\partial u}{\\partial x} \\cdot \\frac{\\partial v}{\\partial x} = 0" }, { "math_id": 70, "text": "v = \\text{const}" }, { "math_id": 71, "text": "u = \\text{const}" }, { "math_id": 72, "text": "u=\\text{const}" }, { "math_id": 73, "text": "v=\\text{const}" }, { "math_id": 74, "text": "\\nabla u=0" }, { "math_id": 75, "text": "\\bar{f} = \\begin{bmatrix} u\\\\ -v \\end{bmatrix}" }, { "math_id": 76, "text": "\\bar{f}" }, { "math_id": 77, "text": "\\frac{\\partial (-v)}{\\partial x} - \\frac{\\partial u}{\\partial y} = 0." }, { "math_id": 78, "text": "\\frac{\\partial u}{\\partial x} + \\frac{\\partial (-v)}{\\partial y}=0." }, { "math_id": 79, "text": "v\\,dx + u\\, dy" }, { "math_id": 80, "text": "J = \\begin{bmatrix} 0 & -1 \\\\ 1 & 0 \\end{bmatrix}." }, { "math_id": 81, "text": "J^2 = -I" }, { "math_id": 82, "text": "f(x,y) = \\begin{bmatrix}u(x,y)\\\\v(x,y)\\end{bmatrix}." }, { "math_id": 83, "text": "Df(x,y) = \\begin{bmatrix}\n\\dfrac{\\partial u}{\\partial x} & \\dfrac{\\partial u}{\\partial y} \\\\[5pt]\n\\dfrac{\\partial v}{\\partial x} & \\dfrac{\\partial v}{\\partial y}\n\\end{bmatrix}" }, { "math_id": 84, "text": "\n \\frac{\\partial u}{\\partial n} = \\frac{\\partial v}{\\partial s},\\quad\n \\frac{\\partial v}{\\partial n} = -\\frac{\\partial u}{\\partial s}\n" }, { "math_id": 85, "text": "(\\nabla n,\\nabla s)" }, { "math_id": 86, "text": "z = r e^{i\\theta}" }, { "math_id": 87, "text": "\n {\\partial u \\over \\partial r} = {1 \\over r}{\\partial v \\over \\partial\\theta},\\quad\n {\\partial v \\over \\partial r} = -{1 \\over r}{\\partial u \\over \\partial\\theta}.\n" }, { "math_id": 88, "text": "{\\partial f \\over \\partial r} = {1 \\over ir}{\\partial f \\over \\partial\\theta}." 
}, { "math_id": 89, "text": "\\begin{align}\n \\frac{\\partial u}{\\partial x} - \\frac{\\partial v}{\\partial y} &= \\alpha(x, y) \\\\[4pt]\n \\frac{\\partial u}{\\partial y} + \\frac{\\partial v}{\\partial x} &= \\beta(x, y)\n\\end{align}" }, { "math_id": 90, "text": "\\frac{\\partial f}{\\partial\\bar{z}} = \\varphi(z,\\bar{z})" }, { "math_id": 91, "text": "f\\left(\\zeta, \\bar{\\zeta}\\right) = \\frac{1}{2\\pi i} \\iint_D \\varphi\\left(z, \\bar{z}\\right) \\, \\frac{dz\\wedge d\\bar{z}}{z - \\zeta}" }, { "math_id": 92, "text": "f(z) = \\begin{cases}\n \\exp\\left(-z^{-4}\\right) & \\text{if }z \\not= 0\\\\\n 0 & \\text{if }z = 0\n\\end{cases}" }, { "math_id": 93, "text": "\\bar{\\partial}" }, { "math_id": 94, "text": "{\\partial f \\over \\partial \\bar z} = 0," }, { "math_id": 95, "text": "{\\partial f \\over \\partial \\bar z} = {1 \\over 2}\\left({\\partial f \\over \\partial x} + i{\\partial f \\over \\partial y}\\right)." }, { "math_id": 96, "text": "C\\ell(2)" }, { "math_id": 97, "text": "z = x+iy " }, { "math_id": 98, "text": "z \\equiv x + J y" }, { "math_id": 99, "text": "J \\equiv \\sigma_1 \\sigma_2" }, { "math_id": 100, "text": "\\sigma_1^2=\\sigma_2^2=1, \\sigma_1 \\sigma_2 + \\sigma_2 \\sigma_1 = 0" }, { "math_id": 101, "text": "J^2=-1" }, { "math_id": 102, "text": "\\nabla \\equiv \\sigma_1 \\partial_x + \\sigma_2\\partial_y" }, { "math_id": 103, "text": "f=u + J v" }, { "math_id": 104, "text": "\\nabla f = 0" }, { "math_id": 105, "text": "\n\\begin{align}\n0 & =\\nabla f= ( \\sigma_1 \\partial_x + \\sigma_2 \\partial_y )(u + \\sigma_1 \\sigma_2 v) \\\\[4pt]\n& =\\sigma_1 \\partial_x u + \\underbrace{\\sigma_1 \\sigma_1 \\sigma_2}_{=\\sigma_2} \\partial_x v + \\sigma_2 \\partial_y u + \\underbrace{\\sigma_2 \\sigma_1 \\sigma_2}_{=-\\sigma_1} \\partial_y v =0\n\\end{align}\n" }, { "math_id": 106, "text": "\\sigma_1" }, { "math_id": 107, "text": "\\sigma_2" }, { "math_id": 108, "text": "\\nabla f = \\sigma_1 ( \\partial_x u - \\partial_y v) + \\sigma_2 ( \\partial_x v + \\partial_y u) = 0 \\Leftrightarrow \\begin{cases}\n\\partial_x u - \\partial_y v = 0\\\\[4pt]\n\\partial_x v + \\partial_y u = 0\n\\end{cases}" }, { "math_id": 109, "text": "\\begin{cases}\n\\dfrac{ \\partial u }{ \\partial x } = \\dfrac{ \\partial v }{ \\partial y }\\\\[12pt]\n\\dfrac{ \\partial u }{ \\partial y } = -\\dfrac{ \\partial v }{ \\partial x }\n\\end{cases}" }, { "math_id": 110, "text": "f:\\Omega\\to\\mathbb{R}^n" }, { "math_id": 111, "text": "Df^\\mathsf{T} Df = (\\det(Df))^{2/n}I" }, { "math_id": 112, "text": "Df^\\mathsf{T}" } ]
https://en.wikipedia.org/wiki?curid=7583
7583397
Tortuosity
Parameter for diffusion and fluid flow in porous media Tortuosity is widely used as a critical parameter to predict transport properties of porous media, such as rocks and soils. But unlike other standard microstructural properties, the concept of tortuosity is vague with multiple definitions and various evaluation methods introduced in different contexts. Hydraulic, electrical, diffusional, and thermal tortuosities are defined to describe different transport processes in porous media, while geometrical tortuosity is introduced to characterize the morphological property of porous microstructures. Tortuosity in 2-D. Subjective estimation (sometimes aided by optometric grading scales) is often used. The simplest mathematical method to estimate tortuosity is the arc-chord ratio: the ratio of the length of the curve ("C") to the distance between its ends ("L"): formula_0 Arc-chord ratio equals 1 for a straight line and is infinite for a circle. Another method, proposed in 1999, is to estimate the tortuosity as the integral of the square (or module) of the curvature. Dividing the result by length of curve or chord has also been tried. In 2002 several Italian scientists proposed one more method. At first, the curve is divided into several ("N") parts with constant sign of curvature (using hysteresis to decrease sensitivity to noise). Then the arc-chord ratio for each part is found and the tortuosity is estimated by: formula_1 In this case tortuosity of both straight line and circle is estimated to be 0. In 1993 Swiss mathematician Martin Mächler proposed an analogy: it’s relatively easy to drive a bicycle or a car in a trajectory with a constant curvature (an arc of a circle), but it’s much harder to drive where curvature changes. This would imply that roughness (or tortuosity) could be measured by relative change of curvature. In this case the proposed "local" measure was derivative of logarithm of curvature: formula_2 However, in this case tortuosity of a straight line is left undefined. In 2005 it was proposed to measure tortuosity by an integral of square of derivative of curvature, divided by the length of a curve: formula_3 In this case tortuosity of both straight line and circle is estimated to be 0. Fractal dimension has been used to quantify tortuosity. The fractal dimension in 2D for a straight line is 1 (the minimal value), and ranges up to 2 for a plane-filling curve or Brownian motion. In most of these methods digital filters and approximation by splines can be used to decrease sensitivity to noise. Tortuosity in 3-D. Usually subjective estimation is used. However, several ways to adapt methods estimating tortuosity in 2-D have also been tried. The methods include arc-chord ratio, arc-chord ratio divided by number of inflection points and integral of square of curvature, divided by length of the curve (curvature is estimated assuming that small segments of curve are planar). Another method used for quantifying tortuosity in 3D has been applied in 3D reconstructions of solid oxide fuel cell cathodes where the Euclidean distance sums of the centroids of a pore were divided by the length of the pore. Applications of tortuosity. Tortuosity of blood vessels (for example, retinal and cerebral blood vessels) is known to be used as a medical sign. In mathematics, cubic splines minimize the functional, equivalent to integral of square of curvature (approximating the curvature as the second derivative). 
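For a curve given as a sequence of sampled points, the simplest of the 2-D measures above, the arc-chord ratio, reduces to a few lines of code. The sketch below (plain Python; the example polylines are made up for illustration) computes τ = C/L for a sampled curve and reproduces the limiting cases of a straight line and a circle-like closed curve:

```python
import math

def arc_chord_tortuosity(points):
    """Arc-chord ratio of a sampled 2-D curve: curve length / endpoint distance."""
    if len(points) < 2:
        raise ValueError("need at least two points")
    dist = lambda p, q: math.hypot(q[0] - p[0], q[1] - p[1])
    curve_length = sum(dist(p, q) for p, q in zip(points, points[1:]))
    chord_length = dist(points[0], points[-1])
    if chord_length == 0:
        return math.inf   # closed curve, e.g. a full circle
    return curve_length / chord_length

# A straight line has tortuosity 1 ...
line = [(x, 2 * x) for x in range(11)]
print(arc_chord_tortuosity(line))          # 1.0 (up to rounding)

# ... while a half circle sampled from (1, 0) to (-1, 0) has ratio pi/2.
half_circle = [(math.cos(t), math.sin(t))
               for t in [math.pi * k / 100 for k in range(101)]]
print(arc_chord_tortuosity(half_circle))   # approx 1.5707, i.e. pi / 2
```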
In many engineering domains dealing with mass transfer in porous materials, such as hydrogeology or heterogeneous catalysis, the tortuosity refers to the ratio of the diffusivity in the free space to the diffusivity in the porous medium (analogous to the arc-chord ratio of a path). Strictly speaking, however, the effective diffusivity is proportional to the reciprocal of the square of the geometrical tortuosity. Because of the porous materials found in several layers of fuel cells, the tortuosity is an important variable to be analyzed. It is important to notice that there are different kinds of tortuosity, i.e., gas-phase, ionic and electronic tortuosity. In acoustics, and following initial works by Maurice Anthony Biot in 1956, the tortuosity is used to describe sound propagation in fluid-saturated porous media. In such media, when the frequency of the sound wave is high enough, the effect of the viscous drag force between the solid and the fluid can be ignored. In this case, the velocity of sound propagation in the fluid in the pores is non-dispersive and, compared with the value of the velocity of sound in the free fluid, is reduced by a ratio equal to the square root of the tortuosity. This has been used for a number of applications, including the study of materials for acoustic isolation, and for oil prospection using acoustic means. In analytical chemistry applied to polymers and sometimes small molecules, tortuosity is applied in gel permeation chromatography (GPC), also known as size exclusion chromatography (SEC). As with any chromatography, it is used to separate mixtures. In the case of GPC the separation is based on molecular size, and it works by the use of stationary media with an appropriate porous microstructure and adequate pore dimensions and distribution. The separation occurs because larger molecules cannot enter the smaller porosity for steric hindrance reasons (constrictivity of the narrow pores) and remain in the macropores, eluting more quickly, while smaller molecules can pass into smaller pores and take a longer, more tortuous path and elute later. In pharmaceutical sciences, tortuosity is used in relation to diffusion-controlled release from solid dosage forms. Insoluble matrix formers, such as ethyl cellulose, certain vinyl polymers, starch acetate and others, control the permeation of the drug from the preparation and into the surrounding liquid. The rate of mass transfer per area unit is, among other factors, related to the shape of polymeric chains within the dosage form. Higher tortuosity or curviness retards mass transfer, as it acts obstructively on the drug particles within the formulation. HVAC makes extensive use of tortuosity in evaporator and condenser coils for heat exchangers, whereas ultra-high vacuum makes use of the inverse of tortuosity, which is conductivity, with short, straight, voluminous paths. Tortuosity has been used in ecology to describe the movement paths of animals.
[ { "math_id": 0, "text": "\\tau = \\frac{C}{L}" }, { "math_id": 1, "text": "\\tau = \\frac{{N - 1}}{L} \\cdot \\sum\\limits_{i = 1}^N {\\left( {\\frac{{L_i }}{{S_i }} - 1} \\right)}" }, { "math_id": 2, "text": "\\frac{d}{{dx}}\\log \\left( \\kappa \\right) = \\frac{{\\kappa'}}{\\kappa}" }, { "math_id": 3, "text": "\\tau = \\frac{{\\int\\limits_{t_1 }^{t_2 } {\\left( {\\kappa'\\left( t \\right)} \\right)^2 } dt}}{L}" } ]
https://en.wikipedia.org/wiki?curid=7583397
7583543
Counter machine
Abstract machine used in formal logic and theoretical computer science A counter machine or counter automaton is an abstract machine used in formal logic and theoretical computer science to model computation. It is the most primitive of the four types of register machines. A counter machine comprises a set of one or more unbounded "registers", each of which can hold a single non-negative integer, and a list of (usually sequential) arithmetic and control instructions for the machine to follow. The counter machine is typically used in the process of designing parallel algorithms in relation to the mutual exclusion principle. When used in this manner, the counter machine models the discrete time-steps of a computational system in relation to memory accesses. By modeling computations in relation to the memory accesses for each respective computational step, parallel algorithms may be designed in such a manner as to avoid interlocking, the simultaneous writing operation by two (or more) threads to the same memory address. Counter machines with three counters can compute any partial recursive function of a single variable. Counter machines with two counters are Turing complete: they can simulate any appropriately-encoded Turing machine, but there are some simple functions that they cannot compute. Counter machines with only a single counter can recognize a proper superset of the regular languages and a subset of the deterministic context-free languages. Basic features. For a given counter machine model the instruction set is tiny: from just one to six or seven instructions. Most models contain a few arithmetic operations and at least one conditional operation (if "condition" is true, then jump). Three "base models", each using three instructions, are drawn from the following collection. (The abbreviations are arbitrary.) In addition, a machine usually has a HALT instruction, which stops the machine (normally after the result has been computed). Using the instructions mentioned above, various authors have discussed certain counter machines: The three counter machine base models have the same computational power, since the instructions of one model can be derived from those of another. All are equivalent to the computational power of Turing machines. Due to their unary processing style, counter machines are typically exponentially slower than comparable Turing machines. Alternative names, alternative models. The counter machine models go by a number of different names that may help to distinguish them by their peculiarities. In the following, the instruction "JZDEC ( r )" is a compound instruction that tests to see if a register r is empty; if so, it jumps to instruction Iz, and if not, it DECrements the contents of r: Formal definition. A counter machine consists of: { Increment (r), Decrement (r), Clear (r); Copy (rj,rk), conditional Jump if contents of r=0, conditional Jump if rj=rk, unconditional Jump, HALT } Some models have either further atomized some of the above into no-parameter instructions, or combined them into a single instruction such as "Decrement" preceded by conditional jump-if-zero "JZ ( r, z )". The atomization of instructions or the inclusion of convenience instructions causes no change in conceptual power, as any program in one variant can be straightforwardly translated to the other. "Alternative instruction-sets are discussed in the supplement Register-machine models." Example: COPY the count from register #2 to #3.
This example shows how to create three more useful instructions: "clear", "unconditional jump", and "copy". Afterward "rs" will contain its original count (unlike MOVE which empties the source register, i.e., clears it to zero). The basic set (1) is used as defined here: Initial conditions. Initially, register #2 contains "2". Registers #0, #1 and #3 are empty (contain "0"). Register #0 remains unchanged throughout calculations because it is used for the unconditional jump. Register #1 is a scratch pad. The program begins with instruction 1. Final conditions. The program HALTs with the contents of register #2 at its original count and the contents of register #3 equal to the original contents of register #2, i.e., [2] = [3]. Program high-level description. The program COPY ( #2, #3) has two parts. In the first part the program "moves" the contents of source register #2 to both scratch-pad #1 and to destination register #3; thus #1 and #3 will be "copies" of one another and of the original count in #2, but #2 is cleared in the process of decrementing it to zero. Unconditional jumps J (z) are done by tests of register #0, which always contains the number 0: [#2] →#3; [#2] →#1; 0 →#2 In the second part the program "moves" (returns, restores) the contents of scratch-pad #1 back to #2, clearing the scratch-pad #1 in the process: [#1] →#2; 0 →#1 Program. The program, highlighted in yellow, is shown written left-to-right in the upper right. A "run" of the program is shown below. Time runs down the page. The instructions are in yellow, the registers in blue. The program is flipped 90 degrees, with the instruction numbers (addresses) along the top, the instruction mnemonics under the addresses, and the instruction parameters under the mnemonics (one per cell): The partial recursive functions: building "convenience instructions" using recursion. The example above demonstrates how the first basic instructions { INC, DEC, JZ } can create three more instructions—unconditional jump J, CLR, CPY. In a sense CPY used both CLR and J plus the base set. If register #3 had had contents initially, the "sum" of contents of #2 and #3 would have ended up in #3. So to be fully accurate the CPY program should have preceded its moves with CLR (1) and CLR (3). However, we do see that ADD would have been easily possible. And in fact the following is a summary of how the primitive recursive functions such as ADD, MULtiply and EXPonent can come about (see Boolos–Burgess–Jeffrey (2002) p. 45-51). { J, DEC, INC, JZ, H } { CLR, J, DEC, INC, JZ, H } { CPY, CLR, J, DEC, INC, JZ, H } "The above is the instruction set of Shepherdson–Sturgis (1963)." { ADD, CPY, CLR, J, DEC, INC, JZ, H } { MUL, ADD, CPY, CLR, J, DEC, INC, JZ, H } { EXP, MUL, ADD, CPY, CLR, J, DEC, INC, JZ, H } In general, we can build "any" partial- or total- primitive recursive function that we wish, by using the same methods. Indeed, Minsky (1967), Shepherdson–Sturgis (1963) and Boolos–Burgess–Jeffrey (2002) give demonstrations of how to form the five primitive recursive function "operators" (1-5 below) from the base set (1). But what about full Turing equivalence? We need to add the sixth operator—the μ operator—to obtain the full equivalence, capable of creating the total- and partial- recursive functions: The authors show that this is done easily within any of the available base sets (1, 2, or 3) (an example can be found at μ operator).
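The following is a minimal sketch, in Python, of a counter-machine interpreter for the base instructions { INC, DEC, JZ } together with the derived conveniences J and H described above, running a COPY(#2, #3) program built along the lines of the high-level description. The instruction encoding, the register numbering, and this particular COPY program are illustrative assumptions, not a reproduction of the original table.

def run(program, registers):
    """program: list of (opcode, *args); registers: dict register -> non-negative count."""
    pc = 1                                   # instructions are numbered from 1
    while True:
        op, *args = program[pc - 1]
        if op == 'INC':
            registers[args[0]] += 1; pc += 1
        elif op == 'DEC':                    # always guarded by a JZ here, so never goes negative
            registers[args[0]] -= 1; pc += 1
        elif op == 'JZ':                     # jump to args[1] if register args[0] holds 0
            pc = args[1] if registers[args[0]] == 0 else pc + 1
        elif op == 'J':                      # unconditional jump (can be built from JZ on the empty #0)
            pc = args[0]
        elif op == 'H':
            return registers

# COPY the count from #2 to #3, using #1 as scratch-pad:
# part 1 moves [#2] into #1 and #3; part 2 restores [#1] back into #2.
copy = [
    ('JZ', 2, 6),   # 1: when #2 is empty, go to part 2
    ('DEC', 2),     # 2
    ('INC', 3),     # 3
    ('INC', 1),     # 4
    ('J', 1),       # 5
    ('JZ', 1, 10),  # 6: when the scratch-pad is empty, halt
    ('DEC', 1),     # 7
    ('INC', 2),     # 8
    ('J', 6),       # 9
    ('H',),         # 10
]
print(run(copy, {0: 0, 1: 0, 2: 2, 3: 0}))   # {0: 0, 1: 0, 2: 2, 3: 2}

Running it leaves #2 at its original count and #3 equal to it, matching the stated final conditions.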
This means that any mu recursive function can be implemented as a counter machine, despite the finite instruction set and program size of the counter machine design. However, the required construction may be counter-intuitive, even for functions that are relatively easy to define in more complex register machines such as the random-access machine. This is because the μ operator can iterate an unbounded number of times, but any given counter machine cannot address an unbounded number of distinct registers due to the finite size of its instruction list. For instance, the above hierarchy of primitive recursive operators can be further extended past exponentiation into higher-ordered arrow operations in Knuth's up-arrow notation. For any fixed formula_0, the function formula_1 is primitive recursive, and can be implemented as a counter machine in a straightforward way. But the function formula_2 is not primitive recursive. One may be tempted to implement the up-arrow operator formula_3 using a construction similar to the successor, addition, multiplication, and exponentiation instructions above, by implementing a call stack so that the function can be applied recursively on smaller values of formula_4. This idea is similar to how one may implement the function in practice in many programming languages. However, the counter machine cannot use an unbounded number of registers in its computation, which would be necessary to implement a call stack that can grow arbitrarily large. The up-arrow operation can still be implemented as a counter machine since it is mu recursive; however, the function would be implemented by encoding an unbounded amount of information inside a finite number of registers, such as by using Gödel numbering. Problems with the counter machine model. "The problems are discussed in detail in the article Random-access machine. The problems fall into two major classes and a third "inconvenience" class:" (1) Unbounded "capacities" of registers versus bounded capacities of state-machine instructions: How will the machine create constants larger than the capacity of its finite state machine? (2) Unbounded "numbers" of registers versus bounded numbers of state-machine instructions: How will the machine access registers with address-numbers beyond the reach/capability of its finite state machine? (3) The fully reduced models are cumbersome: Shepherdson and Sturgis (1963) are unapologetic about their 6-instruction set. They have made their choice based on "ease of programming... rather than economy" (p. 219 footnote 1). Shepherdson and Sturgis' instructions ( [r] indicates "contents of register r"): Minsky (1967) expanded his 2-instruction set { INC (z), JZDEC (r, Iz) } to { CLR (r), INC (r), JZDEC (r, Iz), J (Iz) } before his proof that a "Universal Program Machine" can be built with only two registers (p. 255ff). Two-counter machines are Turing equivalent (with a caveat). For every Turing machine, there is a 2CM that simulates it, given that the 2CM's input and output are properly encoded. This is proved in Minsky's book ("Computation", 1967, p. 255-258), and an alternative proof is sketched below in three steps. First, a Turing machine can be simulated by a finite-state machine (FSM) equipped with two stacks. Then, two stacks can be simulated by four counters. Finally, four counters can be simulated by two counters. The two-counter machine uses the instruction set { INC ( r, z ), JZDEC ( r, ztrue, zfalse) }. Step 1: A Turing machine can be simulated by two stacks..
A Turing machine consists of an FSM and an infinite tape, initially filled with zeros, upon which the machine can write ones and zeros. At any time, the read/write head of the machine points to one cell on the tape. This tape can be conceptually cut in half at that point. Each half of the tape can be treated as a stack, where the top is the cell nearest the read/write head, and the bottom is some distance away from the head, with all zeros on the tape beyond the bottom. Accordingly, a Turing machine can be simulated by an FSM plus two stacks. Moving the head left or right is equivalent to popping a bit from one stack and pushing it onto the other. Writing is equivalent to changing the bit before pushing it. Step 2: A stack can be simulated by two counters.. A stack containing zeros and ones can be simulated by two counters when the bits on the stack are thought of as representing a binary number (the topmost bit on the stack being the least significant bit). Pushing a zero onto the stack is equivalent to doubling the number. Pushing a one is equivalent to doubling and adding 1. Popping is equivalent to dividing by 2, where the remainder is the bit that was popped. Two counters can simulate this stack, in which one of the counters holds a number whose binary representation represents the bits on the stack, and the other counter is used as a scratchpad. To double the number in the first counter, the FSM can initialize the second counter to zero, then repeatedly decrement the first counter once and increment the second counter twice. This continues until the first counter reaches zero. At that point, the second counter will hold the doubled number. Halving is performed by decrementing one counter twice and incrementing the other once, and repeating until the first counter reaches zero. The remainder can be determined by whether it reached zero after an even or an odd number of steps, where the parity of the number of steps is encoded in the state of the FSM. Step 3: Four counters can be simulated by two counters.. As before, one of the counters is used as scratchpad. The other holds an integer whose prime factorization is 2^"a" 3^"b" 5^"c" 7^"d". The exponents "a", "b", "c", and "d" can be thought of as four virtual counters that are being packed (via Gödel numbering) into a single real counter. If the real counter is set to zero then incremented once, that is equivalent to setting all the virtual counters to zero. If the real counter is doubled, that is equivalent to incrementing "a", and if it's halved, that's equivalent to decrementing "a". By a similar procedure, it can be multiplied or divided by 3, which is equivalent to incrementing or decrementing "b". Similarly, "c" and "d" can be incremented or decremented. To check if a virtual counter such as "c" is equal to zero, just divide the real counter by 5, see what the remainder is, then multiply by 5 and add back the remainder. That leaves the real counter unchanged. The remainder will have been nonzero if and only if "c" was zero. As a result, an FSM with two counters can simulate four counters, which are in turn simulating two stacks, which are simulating a Turing machine. Therefore, an FSM plus two counters is at least as powerful as a Turing machine. A Turing machine can easily simulate an FSM with two counters; therefore, the two machines have equivalent power. The caveat: *If* its counters are initialised to "N" and 0, then a 2CM cannot calculate 2^"N".
This result, together with a list of other functions of "N" that are not calculable by a two-counter machine — "when initialised with "N" in one counter and 0 in the other" — such as "N"^2, sqrt("N"), log2("N"), etc., appears in a paper by Schroeppel (1972). The result is not surprising, because the two-counter machine model was proved (by Minsky) to be universal only when the argument "N" is appropriately encoded (by Gödelization) to simulate a Turing machine whose initial tape contains "N" encoded in unary; moreover, the output of the two-counter machine will be similarly encoded. This phenomenon is typical of very small bases of computation whose universality is proved only by simulation (e.g., many Turing tarpits, the smallest-known universal Turing machines, etc.). The proof is preceded by some interesting theorems: With regard to the second theorem that "A 3CM can compute any partial recursive function" the author challenges the reader with a "Hard Problem: Multiply two numbers using only three counters" (p. 2). The main proof involves the notion that two-counter machines cannot compute arithmetic sequences with non-linear growth rates (p. 15) i.e. "the function 2^"X" grows more rapidly than any arithmetic progression." (p. 11). A practical example of calculation by counting. The Friden EC-130 calculator had no adder logic as such. Its logic was highly serial, doing arithmetic by counting. Internally, decimal digits were radix-1 — for instance, a six was represented by six consecutive pulses within the time slot allocated for that digit. Each time slot carried one digit, least significant first. Carries set a flip-flop which caused one count to be added to the digit in the next time slot. Adding was done by an up-counter, while subtracting was done by a down-counter, with a similar scheme for dealing with borrows. The time slot scheme defined six registers of 13 decimal digits, each with a sign bit. Multiplication and division were done basically by repeated addition and subtraction. The square root version, the EC-132, effectively subtracted consecutive odd integers, each decrement requiring two consecutive subtractions. After the first, the minuend was incremented by one before the second subtraction. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Jan van Leeuwen, ed. "Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity", The MIT PRESS/Elsevier, 1990. (volume A). QA 76.H279 1990. van Emde Boas' treatment of SMMs appears on pp. 32-35. This treatment clarifies Schönhage 1980 -- it closely follows but expands slightly the Schönhage treatment. Both references may be needed for effective understanding.
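To illustrate Step 2 of the two-counter construction above, here is a small Python model of a binary stack held in one counter with a second counter as scratchpad. The counters are only ever incremented, decremented and tested for zero, mirroring what the FSM can do; the helper names and the particular bit sequence are illustrative assumptions.

def push(counters, bit):
    """Push a bit: double the stack counter (and add the bit) using only inc/dec/test-zero."""
    n, scratch = counters
    while n != 0:            # repeatedly: decrement once, increment the scratchpad twice
        n -= 1
        scratch += 2
    scratch += bit           # doubling-and-adding-1 pushes a one; plain doubling pushes a zero
    return (scratch, 0)      # the roles of the two counters swap

def pop(counters):
    """Pop a bit: halve the stack counter; the parity (tracked by the FSM) is the popped bit."""
    n, scratch = counters
    bit = 0
    while n != 0:
        n -= 1
        if n == 0:           # an odd count remained, so the popped (least significant) bit is 1
            bit = 1
            break
        n -= 1
        scratch += 1         # one increment of the scratchpad per two decrements
    return (scratch, 0), bit

state = (0, 0)
for b in [1, 0, 1, 1]:       # push the bits 1, 0, 1, 1; the counter ends up holding 0b1011 = 11
    state = push(state, b)
print(state[0])              # 11
state, popped = pop(state)
print(popped, state[0])      # 1 5   (the top bit is popped, 0b101 = 5 remains)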
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "Q(x, y) = x \\uparrow^k y" }, { "math_id": 2, "text": "R(n, x, y) = x \\uparrow^n y" }, { "math_id": 3, "text": "R" }, { "math_id": 4, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=7583543
758386
Vector fields in cylindrical and spherical coordinates
Vector field representation in 3D curvilinear coordinate systems Note: This page uses common physics notation for spherical coordinates, in which formula_0 is the angle between the "z" axis and the radius vector connecting the origin to the point in question, while formula_1 is the angle between the projection of the radius vector onto the "x-y" plane and the "x" axis. Several other definitions are in use, and so care must be taken in comparing different sources. Cylindrical coordinate system. Vector fields. Vectors are defined in cylindrical coordinates by ("ρ", "φ", "z"), where ("ρ", "φ", "z") is given in Cartesian coordinates by: formula_2 or inversely by: formula_3 Any vector field can be written in terms of the unit vectors as: formula_4 The cylindrical unit vectors are related to the Cartesian unit vectors by: formula_5 Note: the matrix is an orthogonal matrix, that is, its inverse is simply its transpose. Time derivative of a vector field. To find out how the vector field A changes in time, the time derivatives should be calculated. For this purpose Newton's notation will be used for the time derivative (formula_6). In Cartesian coordinates this is simply: formula_7 However, in cylindrical coordinates this becomes: formula_8 The time derivatives of the unit vectors are needed. They are given by: formula_9 So the time derivative simplifies to: formula_10 Second time derivative of a vector field. The second time derivative is of interest in physics, as it is found in equations of motion for classical mechanical systems. The second time derivative of a vector field in cylindrical coordinates is given by: formula_11 To understand this expression, A is substituted for P, where P is the vector ("ρ", "φ", "z"). This means that formula_12. After substituting, the result is given: formula_13 In mechanics, the terms of this expression are called: Spherical coordinate system. Vector fields. Vectors are defined in spherical coordinates by ("r", "θ", "φ"), where ("r", "θ", "φ") is given in Cartesian coordinates by: formula_14 or inversely by: formula_15 Any vector field can be written in terms of the unit vectors as: formula_16 The spherical unit vectors are related to the Cartesian unit vectors by: formula_17 Note: the matrix is an orthogonal matrix, that is, its inverse is simply its transpose. The Cartesian unit vectors are thus related to the spherical unit vectors by: formula_18 Time derivative of a vector field. To find out how the vector field A changes in time, the time derivatives should be calculated. In Cartesian coordinates this is simply: formula_19 However, in spherical coordinates this becomes: formula_20 The time derivatives of the unit vectors are needed. They are given by: formula_21 Thus the time derivative becomes: formula_22
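A quick symbolic check of the cylindrical unit-vector time derivatives quoted above can be done by writing the unit vectors in Cartesian components and differentiating; this is an illustrative verification with SymPy, not part of the article's derivation, and the variable names are assumptions.

import sympy as sp

t = sp.symbols('t')
phi = sp.Function('phi')(t)

# Cartesian components of the cylindrical unit vectors
rho_hat = sp.Matrix([sp.cos(phi), sp.sin(phi), 0])
phi_hat = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])

# Expected identities: d(rho_hat)/dt = phi_dot * phi_hat  and  d(phi_hat)/dt = -phi_dot * rho_hat
print(sp.simplify(rho_hat.diff(t) - sp.diff(phi, t) * phi_hat))   # zero vector
print(sp.simplify(phi_hat.diff(t) + sp.diff(phi, t) * rho_hat))   # zero vector

The same componentwise approach extends to the spherical unit vectors, with both the polar and azimuthal angles taken as functions of time.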
[ { "math_id": 0, "text": "\\theta" }, { "math_id": 1, "text": "\\phi" }, { "math_id": 2, "text": "\\begin{bmatrix} \\rho \\\\ \\phi \\\\ z \\end{bmatrix} = \n\\begin{bmatrix}\n\\sqrt{x^2 + y^2} \\\\ \\operatorname{arctan}(y / x) \\\\ z\n\\end{bmatrix},\\ \\ \\ 0 \\le \\phi < 2\\pi,\n" }, { "math_id": 3, "text": "\\begin{bmatrix} x \\\\ y \\\\ z \\end{bmatrix} =\n\\begin{bmatrix} \\rho\\cos\\phi \\\\ \\rho\\sin\\phi \\\\ z \\end{bmatrix}." }, { "math_id": 4, "text": "\\mathbf A\n= A_x \\mathbf{\\hat x} + A_y \\mathbf{\\hat y} + A_z \\mathbf{\\hat z} \n= A_\\rho \\mathbf{\\hat \\rho} + A_\\phi \\boldsymbol{\\hat \\phi} + A_z \\mathbf{\\hat z}" }, { "math_id": 5, "text": "\\begin{bmatrix}\\mathbf{\\hat \\rho} \\\\ \\boldsymbol{\\hat\\phi} \\\\ \\mathbf{\\hat z}\\end{bmatrix}\n= \\begin{bmatrix}\n \\cos\\phi & \\sin\\phi & 0 \\\\\n -\\sin\\phi & \\cos\\phi & 0 \\\\\n 0 & 0 & 1\n\\end{bmatrix}\n\\begin{bmatrix} \\mathbf{\\hat x} \\\\ \\mathbf{\\hat y} \\\\ \\mathbf{\\hat z} \\end{bmatrix}" }, { "math_id": 6, "text": "\\dot{\\mathbf{A}}" }, { "math_id": 7, "text": "\\dot{\\mathbf{A}} = \\dot{A}_x \\hat{\\mathbf{x}} + \\dot{A}_y \\hat{\\mathbf{y}} + \\dot{A}_z \\hat{\\mathbf{z}}" }, { "math_id": 8, "text": "\\dot{\\mathbf{A}} = \\dot{A}_\\rho \\hat{\\boldsymbol{\\rho}} + A_\\rho \\dot{\\hat{\\boldsymbol{\\rho}}} \n + \\dot{A}_\\phi \\hat{\\boldsymbol{\\phi}} + A_\\phi \\dot{\\hat{\\boldsymbol{\\phi}}}\n + \\dot{A}_z \\hat{\\boldsymbol{z}} + A_z \\dot{\\hat{\\boldsymbol{z}}}" }, { "math_id": 9, "text": "\\begin{align}\n \\dot{\\hat{\\mathbf{\\rho}}} & = \\dot{\\phi} \\hat{\\boldsymbol{\\phi}} \\\\\n \\dot{\\hat{\\boldsymbol{\\phi}}} & = - \\dot\\phi \\hat{\\mathbf{\\rho}} \\\\\n \\dot{\\hat{\\mathbf{z}}} & = 0\n\\end{align}" }, { "math_id": 10, "text": "\\dot{\\mathbf{A}}\n= \\hat{\\boldsymbol{\\rho}} \\left(\\dot{A}_\\rho - A_\\phi \\dot{\\phi}\\right)\n + \\hat{\\boldsymbol{\\phi}} \\left(\\dot{A}_\\phi + A_\\rho \\dot{\\phi}\\right)\n + \\hat{\\mathbf{z}} \\dot{A}_z" }, { "math_id": 11, "text": "\\mathbf{\\ddot A}\n= \\mathbf{\\hat \\rho} \\left(\\ddot A_\\rho - A_\\phi \\ddot\\phi - 2 \\dot A_\\phi \\dot\\phi - A_\\rho \\dot\\phi^2\\right)\n + \\boldsymbol{\\hat\\phi} \\left(\\ddot A_\\phi + A_\\rho \\ddot\\phi + 2 \\dot A_\\rho \\dot\\phi - A_\\phi \\dot\\phi^2\\right)\n + \\mathbf{\\hat z} \\ddot A_z" }, { "math_id": 12, "text": "\\mathbf{A} = \\mathbf{P} = \\rho \\mathbf{\\hat \\rho} + z \\mathbf{\\hat z}" }, { "math_id": 13, "text": "\\ddot\\mathbf{P}\n= \\mathbf{\\hat \\rho} \\left(\\ddot \\rho - \\rho \\dot\\phi^2\\right)\n + \\boldsymbol{\\hat\\phi} \\left(\\rho \\ddot\\phi + 2 \\dot \\rho \\dot\\phi\\right)\n + \\mathbf{\\hat z} \\ddot z" }, { "math_id": 14, "text": "\\begin{bmatrix}r \\\\ \\theta \\\\ \\phi \\end{bmatrix} = \n\\begin{bmatrix}\n\\sqrt{x^2 + y^2 + z^2} \\\\ \\arccos(z / \\sqrt{x^2 + y^2 + z^2}) \\\\ \\arctan(y / x)\n\\end{bmatrix},\\ \\ \\ 0 \\le \\theta \\le \\pi,\\ \\ \\ 0 \\le \\phi < 2\\pi,\n" }, { "math_id": 15, "text": "\\begin{bmatrix} x \\\\ y \\\\ z \\end{bmatrix} =\n\\begin{bmatrix} r\\sin\\theta\\cos\\phi \\\\ r\\sin\\theta\\sin\\phi \\\\ r\\cos\\theta\\end{bmatrix}." 
}, { "math_id": 16, "text": "\\mathbf A\n = A_x\\mathbf{\\hat x} + A_y\\mathbf{\\hat y} + A_z\\mathbf{\\hat z} \n = A_r\\boldsymbol{\\hat r} + A_\\theta\\boldsymbol{\\hat \\theta} + A_\\phi\\boldsymbol{\\hat \\phi}" }, { "math_id": 17, "text": "\\begin{bmatrix}\\boldsymbol{\\hat{r}} \\\\ \\boldsymbol{\\hat\\theta} \\\\ \\boldsymbol{\\hat\\phi} \\end{bmatrix}\n = \\begin{bmatrix} \\sin\\theta\\cos\\phi & \\sin\\theta\\sin\\phi & \\cos\\theta \\\\\n \\cos\\theta\\cos\\phi & \\cos\\theta\\sin\\phi & -\\sin\\theta \\\\\n -\\sin\\phi & \\cos\\phi & 0 \\end{bmatrix}\n \\begin{bmatrix} \\mathbf{\\hat x} \\\\ \\mathbf{\\hat y} \\\\ \\mathbf{\\hat z} \\end{bmatrix}" }, { "math_id": 18, "text": "\\begin{bmatrix}\\mathbf{\\hat x} \\\\ \\mathbf{\\hat y} \\\\ \\mathbf{\\hat z} \\end{bmatrix}\n = \\begin{bmatrix} \\sin\\theta\\cos\\phi & \\cos\\theta\\cos\\phi & -\\sin\\phi \\\\\n \\sin\\theta\\sin\\phi & \\cos\\theta\\sin\\phi & \\cos\\phi \\\\\n \\cos\\theta & -\\sin\\theta & 0 \\end{bmatrix}\n \\begin{bmatrix} \\boldsymbol{\\hat{r}} \\\\ \\boldsymbol{\\hat\\theta} \\\\ \\boldsymbol{\\hat\\phi} \\end{bmatrix}" }, { "math_id": 19, "text": "\\mathbf{\\dot A} = \\dot A_x \\mathbf{\\hat x} + \\dot A_y \\mathbf{\\hat y} + \\dot A_z \\mathbf{\\hat z}" }, { "math_id": 20, "text": "\\mathbf{\\dot A} = \\dot A_r \\boldsymbol{\\hat r} + A_r \\boldsymbol{\\dot{\\hat r}}\n + \\dot A_\\theta \\boldsymbol{\\hat\\theta} + A_\\theta \\boldsymbol{\\dot{\\hat\\theta}}\n + \\dot A_\\phi \\boldsymbol{\\hat\\phi} + A_\\phi \\boldsymbol{\\dot{\\hat\\phi}}" }, { "math_id": 21, "text": "\\begin{align}\n \\boldsymbol{\\dot{\\hat r}} &= \\dot\\theta \\boldsymbol{\\hat\\theta} + \\dot\\phi\\sin\\theta \\boldsymbol{\\hat\\phi} \\\\\n \\boldsymbol{\\dot{\\hat\\theta}} &= - \\dot\\theta \\boldsymbol{\\hat r} + \\dot\\phi\\cos\\theta \\boldsymbol{\\hat\\phi} \\\\\n \\boldsymbol{\\dot{\\hat\\phi}} &= - \\dot\\phi\\sin\\theta \\boldsymbol{\\hat{r}} - \\dot\\phi\\cos\\theta \\boldsymbol{\\hat\\theta}\n\\end{align}" }, { "math_id": 22, "text": "\\mathbf{\\dot A}\n= \\boldsymbol{\\hat r} \\left(\\dot A_r - A_\\theta \\dot\\theta - A_\\phi \\dot\\phi \\sin\\theta \\right)\n + \\boldsymbol{\\hat\\theta} \\left(\\dot A_\\theta + A_r \\dot\\theta - A_\\phi \\dot\\phi \\cos\\theta\\right)\n + \\boldsymbol{\\hat\\phi} \\left(\\dot A_\\phi + A_r \\dot\\phi \\sin\\theta + A_\\theta \\dot\\phi \\cos\\theta\\right)" } ]
https://en.wikipedia.org/wiki?curid=758386
758413
James Jeans
English physicist, astronomer and mathematician (1877–1946) Sir James Hopwood Jeans (11 September 1877 – 16 September 1946) was an English physicist, astronomer and mathematician. Early life. Jeans was born in Ormskirk, Lancashire, the son of William Tulloch Jeans, a parliamentary correspondent and author. He was educated at Merchant Taylors' School, Wilson's Grammar School, Camberwell and Trinity College, Cambridge. As a gifted student, Jeans was counselled to take an aggressive approach to the Cambridge Mathematical Tripos competition: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Career. Jeans was elected Fellow of Trinity College in October 1901, and taught at Cambridge, but went to Princeton University in 1904 as a professor of applied mathematics. He returned to Cambridge in 1910. He made important contributions in many areas of physics, including quantum theory, the theory of radiation and stellar evolution. His analysis of rotating bodies led him to conclude that Pierre-Simon Laplace's theory that the solar system formed from a single cloud of gas was incorrect, proposing instead that the planets condensed from material drawn out of the sun by a hypothetical catastrophic near-collision with a passing star. This theory is not accepted today. Jeans, along with Arthur Eddington, is a founder of British cosmology. In 1928, Jeans was the first to conjecture a steady state cosmology based on a hypothesized continuous creation of matter in the universe. In his book "Astronomy and Cosmogony" (1928) he stated: "The type of conjecture which presents itself, somewhat insistently, is that the centers of the nebulae are of the nature 'singular points' at which matter is poured into our universe from some other, and entirely extraneous spatial dimension, so that, to a denizen of our universe, they appear as points at which matter is being continually created." This theory fell out of favour when the 1965 discovery of the cosmic microwave background was widely interpreted as the tell-tale signature of the Big Bang. His scientific reputation is grounded in the monographs "The Dynamical Theory of Gases" (1904), "Theoretical Mechanics" (1906), and "Mathematical Theory of Electricity and Magnetism" (1908). After retiring in 1929, he wrote a number of books for the lay public, including "The Stars in Their Courses" (1931), "The Universe Around Us", "Through Space and Time" (1934), "The New Background of Science" (1933), and "The Mysterious Universe." These books made Jeans fairly well known as an expositor of the revolutionary scientific discoveries of his day, especially in relativity and physical cosmology. In 1939, the Journal of the British Astronomical Association reported that Jeans was going to stand as a candidate for parliament for the Cambridge University constituency. The election, expected to take place in 1939 or 1940, did not take place until 1945, and without his involvement. He also wrote the book "Physics and Philosophy" (1943) where he explores the different views on reality from two different perspectives: science and philosophy. On his religious views, Jeans was an agnostic Freemason. Personal life. Jeans married twice, first to the American poet Charlotte Tiffany Mitchell in 1907, who died, and then to the Austrian organist and harpsichordist Suzanne Hock (better known as Susi Jeans) in 1935. Susi and Jeans had three children: George, Christopher, and Catherine. As a birthday present for his wife, he wrote the book "Science and Music". Death.
Jeans died in 1946 in the presence of his wife and Joy Adamson, who suggested that his widow create a death mask of Jeans. It is now held by the Royal Society. Major accomplishments. One of Jeans' major discoveries, named Jeans length, is a critical radius of an interstellar cloud in space. It depends on the temperature and density of the cloud, and on the mass of the particles composing the cloud. A cloud that is smaller than its Jeans length will not have sufficient gravity to overcome the repulsive gas pressure forces and condense to form a star, whereas a cloud that is larger than its Jeans length will collapse. formula_0 Jeans came up with another version of this equation, called Jeans mass or Jeans instability, that solves for the critical mass a cloud must attain before being able to collapse. Jeans also helped to discover the Rayleigh–Jeans law, which relates the energy density of black-body radiation to the temperature of the emission source. formula_1 Jeans is also credited with calculating the rate of atmospheric escape from a planet due to kinetic energy of the gas molecules, a process known as Jeans escape. Idealism. Jeans espoused a philosophy of science rooted in the metaphysical doctrine of idealism and opposed to materialism in his speaking engagements and books. His popular science publications first advanced these ideas in 1929's "The Universe Around Us" when he likened "discussing the creation of the universe in terms of time and space," to, "trying to discover the artist and the action of painting, by going to the edge of the canvas." But he turned to this idea as the primary subject of his best-selling 1930 book, "The Mysterious Universe", where he asserted that a picture of the universe as a "non-mechanical reality" was emerging from the science of the day. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The Universe begins to look more like a great thought than like a great machine. Mind no longer appears to be an accidental intruder into the realm of matter... we ought rather hail it as the creator and governor of the realm of matter. In a 1931 interview published in "The Observer", Jeans was asked if he believed that life was an accident or if it was, "part of some great scheme." He said that he favored, "the idealistic theory that consciousness is fundamental, and that the material universe is derivative from consciousness," going on to suggest that, "each individual consciousness ought to be compared to a brain-cell in a universal mind." In his 1934 address to the British Association for the Advancement of Science meeting in Aberdeen as the Association's president, Jeans spoke specifically to the work of Descartes and its relevance to the modern philosophy of science. He argued that, "There is no longer room for the kind of dualism which has haunted philosophy since the days of Descartes." When Daniel Helsing reviewed "The Mysterious Universe" for Physics Today in 2020, he summarized the philosophical conclusions of the book, "Jeans argues that we must give up science’s long-cherished materialistic and mechanical worldview, which posits that nature operates like a machine and consists solely of material particles interacting with each other." His evaluation of Jeans contrasted these philosophical views with modern science communicators such as Neil deGrasse Tyson and Sean Carroll who, he suggested, "would likely take issue with Jeans’s idealism." Bibliography.
The Astronomical Horizon (https://www.amazon.co.uk/dp/B000NIS57O) - The Philip Maurice Deneke Lecture 1944 - Published Oxford University Press 1945 References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\lambda_{\\rm J}=\\sqrt{\\frac{15k_{\\rm B}T}{4\\pi Gm\\rho}}" }, { "math_id": 1, "text": " f(\\lambda) = 8\\pi c \\frac{k_{\\rm B}T}{\\lambda^4}" } ]
https://en.wikipedia.org/wiki?curid=758413
75843108
Regular path query
In databases and specifically in graph databases, a regular path query or RPQ is a query asking for pairs of endpoints in the database that are connected by a path satisfying a certain regular expression. A similar feature exists in the SPARQL query language as "property paths". Definition. A graph database consists of a directed graph whose edges carry a label. A regular path query is just a regular expression over the set of labels. For instance, in a graph database where vertices represent users and there is an edge label "parent" for edges from a parent to a child, the regular path query formula_0 would select pairs of a node "x" and a descendant "y" of "x", with a path from "x" to "y" of "parent" edges having length 1 or more. Semantics. The answers to RPQs can consist of "endpoint pairs", i.e., pairs of nodes "x" and "y" that are connected by some path satisfying the regular expression; or it can consist of the "list of all paths" satisfying the regular expression. However, this set of paths is generally infinite. To ensure that the number of results is not infinite, the semantics of RPQs is sometimes defined to return only the simple paths, i.e., the paths that do not go twice via the same vertex; or the trails, i.e., the paths that do not go twice through the same edge. Complexity. The evaluation of regular path queries (RPQ), in the sense of returning all endpoint pairs, can be performed in polynomial time. To do this, for every endpoint pair, we can see the graph database as a finite automaton, also represent the regular path query as a finite automaton, and check if a suitable path exists by checking that the intersection of both languages is nonempty (i.e., solving the emptiness problem), for instance via the product automaton construction. Other problems. Several classical problems about queries have been studied for regular path queries, such as query containment and query rewriting. Extensions. Database theory research has investigated more expressive variants of RPQs:
[ { "math_id": 0, "text": "\\text{parent} \\text{parent}^*" }, { "math_id": 1, "text": "\\text{parent}^- \\text{parent}" } ]
https://en.wikipedia.org/wiki?curid=75843108
75843166
Yannakakis algorithm
The Yannakakis algorithm is an algorithm in database theory for computing the output of an (alpha-)acyclic conjunctive query. The algorithm is named after Mihalis Yannakakis. High-level description. The algorithm relies on a join tree of the query, which is guaranteed to exist and can be computed in linear time for any acyclic query. The join tree is a tree structure that contains the query atoms as nodes and has the connectedness (or running intersection) property which states that for every query variable, the tree nodes that contain that variable form a connected subgraph. The tree can be rooted arbitrarily. The algorithm materializes a relation for each query atom (this is necessary because the same input relation may be referenced by multiple query atoms) and performs two sweeps, one bottom-up in join tree order (from the leaves to the root), and one top-down (from the root to the leaves). In each node visited, it performs a semi-join between the corresponding relation and its parent or children (depending on the sweep phase). After these two sweeps, all spurious tuples that do not participate in producing any query answer are guaranteed to be removed from the relations. A final pass over the relations, performing joins and early projections, produces the query output. Complexity. Let formula_0 be the size of the database (i.e., the total number of tuples across all input relations), formula_1 the size of the query, and formula_2 the number of tuples in the query output. If the query does not project out any variables (referred to as a full conjunctive query or a join query or a conjunctive query with no existential quantifiers), then the complexity of the algorithm is formula_3. Assuming a fixed query formula_4 (a setting referred to as data complexity), this means that the algorithm's worst-case running time is asymptotically the same as reading the input and writing the output, which is a natural lower bound. If some variables are projected out in the query, then there is an additional formula_0 factor, making the complexity formula_5. Connections to other problems. The algorithm has been influential in database theory and its core ideas are found in algorithms for other tasks such as enumeration and aggregate computation. An important realization is that the algorithm implicitly operates on the Boolean semiring (the elimination of a tuple corresponds to a False value in the semiring), but its correctness is maintained if we use any other semiring. For example, using the natural numbers semiring, where the operations are addition and multiplication, the same algorithm computes the total number of query answers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
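A minimal sketch of the two semi-join sweeps described above, assuming a path-shaped acyclic query R(a,b), S(b,c), T(c,d) whose join tree is rooted at R; the relation names, the dict-based tuple representation, and the helper semijoin are illustrative assumptions rather than part of any standard library.

def semijoin(left, right, on):
    """Keep only the tuples of `left` that agree with some tuple of `right` on the attributes `on`."""
    keys = {tuple(t[a] for a in on) for t in right}
    return [t for t in left if tuple(t[a] for a in on) in keys]

# Join tree rooted at R:  R(a,b) -- S(b,c) -- T(c,d)
R = [{'a': 1, 'b': 2}, {'a': 7, 'b': 9}]
S = [{'b': 2, 'c': 3}, {'b': 4, 'c': 5}]
T = [{'c': 3, 'd': 6}]

# Bottom-up sweep (leaves to root): each node is semi-joined with its children.
S = semijoin(S, T, on=['c'])
R = semijoin(R, S, on=['b'])

# Top-down sweep (root to leaves): each node is semi-joined with its parent.
S = semijoin(S, R, on=['b'])
T = semijoin(T, S, on=['c'])

# Final pass: join along the tree; no spurious tuples remain, so every
# intermediate tuple participates in some answer.
out = [{**r, **s, **t}
       for r in R for s in S if r['b'] == s['b']
       for t in T if s['c'] == t['c']]
print(out)   # [{'a': 1, 'b': 2, 'c': 3, 'd': 6}]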
[ { "math_id": 0, "text": "|D|" }, { "math_id": 1, "text": "|Q|" }, { "math_id": 2, "text": "|OUT|" }, { "math_id": 3, "text": "O(|Q|(|D| + |OUT|)" }, { "math_id": 4, "text": "Q" }, { "math_id": 5, "text": "O(|Q||D||OUT|)" } ]
https://en.wikipedia.org/wiki?curid=75843166
75845489
Systems Group
Organisation of fine artists (1969 to 1976) The Systems Group was a group of British artists working in the constructivist tradition. The group was formed after an inaugural Helsinki exhibition in 1969 entitled "Systeemi•System". The exhibition coordinator Jeffrey Steele together with Malcolm Hughes, invited the participating artists to form a group in 1970. The Systems Group had no manifesto and no formal membership; it existed for the purpose of discussion and exhibition rather than direct collaboration. Some group members were influenced by Swiss Concrete artists, including Richard Paul Lohse; some by the Op art of the Groupe de Recherche d'Art Visuel. Others were influenced by the Constructionists: Victor Pasmore, Mary Martin, Kenneth Martin and Anthony Hill. "Above all, they shared a commitment to a non-figurative art that was not abstracted from the appearance of nature but constructed from within and built up of balanced relations of clear, geometric forms." The group disbanded in 1976 following political differences among its members. Despite this, individual members kept in touch and exhibited together for over four decades. Membership. The core members of the Systems Group were: The following artists exhibited with the group: Gillian Wise and John Ernest had previously exhibited with the Constructionist Group. Regarding group meetings, although Steele brought the group together and was a key member, Hughes subsequently took over the running of the group, which met regularly at his Putney studio. Beginnings. In November 1969, nine artists selected by Jeffrey Steele exhibited in an exhibition entitled "Systeemi•System: An exhibition of syntactic art from Britain" at the invitation of the Amos Anderson Art Museum in Helsinki. The exhibition was organised by Steele's Finnish wife Arja Nenonen (1936-2011) and the exhibiting artists were: Malcolm Hughes, Michael Kidner, Peter Lowe, David Saunders, Peter Sedgley, Jean Spencer, Jeffrey Steele, Michael Tyzack and Gillian Wise. Steele chose artists whose interests were associated with his own developing interest in the theory of syntax in art. Each artist selected a different choice of elements, using some kind of rational principle to construct their work. Syntactic Art. "Syntactic art" considers syntactic (structural) relationships between artwork elements more important than any semantic (referential) or pragmatic (expressive) relationships. In other words, in "syntactic art" the structure and form of the artwork takes precedence over its figurative representation or the viewer's interpretation. According to semiotician Charles Morris "language is a social system of signs mediating the response of members of the community to one another and to their environment." Additionally "to understand a language or to use it correctly is to follow the rules of usage (syntactical, semantical, and pragmatical) current in the given social community." Semiotics is the science of semiosis - a process involving the relationships between a sign, what it designates and how it is interpreted by an agent. "Semantics" is the relationship between a sign and what it designates; "pragmatics" is the relationship between a sign and how it is interpreted; and "syntactics" is the relationship between a sign and other signs. Anthony Hill appropriated Morris's "syntactic-semantic-pragmatic" framework into his own work, which in turn influenced some members of the "Systems Group". 
'By "syntactic", Hill meant ""the relations in the constituent structure, the internal plastic logic", or, put more simply, what happens within the paintings.' A clear example of "syntactic", or "constructionist", art is found in Peter Lowe's "Spiral of 8 integers"" where, starting from a single central square, a sequence of integers is added until the square root of the sum becomes a whole number, i.e. 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 = formula_0. Lowe represents the syntactic relationship "visually" as a spiral pattern of smaller squares, culminating in the larger 6 x 6 square. Although it's possible to interpret his work mathematically, Lowe emphasises that he discovered this particular relationship empirically. Political Milieu. The Cold War lasted from 1945 to 1991. In the short period of its existence the Systems Group accepted the label of Constructivist, but this term was identified with Russia and hence identified with "The Evil Empire". Quoting Peter Lowe: "In the art world, the CIA was covertly ensuring the supremacy of Abstract Expressionism and Minimalism over Russian Constructivism and Formalism as an element of US Cold War propaganda. Local "abstract expressionists" proliferated in the UK and "Abstract Expressionism" was promoted in art schools. Journalists and directors of our national institutions favoured US art and linked their careers to it. There was also a good deal of tabloid comment with "Syntactic" work being invariably labelled 'cold and clinical'. The term 'system' had acquired negative connotations and it was an act of defiance on our part to use it in relation to our group." Political Differences. Several members of the Systems Group held the view that all acts were political, therefore art was a vehicle for political ideology. At the time, Lowe could not agree, feeling his visual research was apolitical, having been influenced by the writings of Theo van Doesburg's in his essay "An Answer to the Question: Should the New Art Serve the Proletariat?". Things came to a head at a meeting in 1976, after which Lowe resigned from the group. The remaining members found no resolution to their political differences and disbanded shortly afterwards. Selected Group exhibitions. After the group disbanded. Following the decline of the "Systems Group", other groups of British constructivists emerged, such as Group Proceedings (1979-1983), Exhibiting Space (1983-1989), journal Constructivist Forum (1985-1991), and Countervail. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sqrt{36}" } ]
https://en.wikipedia.org/wiki?curid=75845489
75850137
Aboodh transform
An integral transform The Aboodh transform is a type of integral transform. Khalid Suliman Aboodh formulated it in 2013. It is defined on the set of functions formula_0 where formula_1, by formula_2 The Aboodh transform has been extended to double, triple, and quadruple versions and has been applied in areas such as fuzzy logic and fractional calculus. Patil compared it to the Laplace transform. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A = \\{ f(t) : \\ni M , a , b > 0 ,|f(t) | < M e^{-vt}\\} " }, { "math_id": 1, "text": "a \\leq v \\le b " }, { "math_id": 2, "text": " A [f(t)]= \\frac {1}{v} \\int_0^\\infty f(t) e^{- vt} \\, dt" } ]
https://en.wikipedia.org/wiki?curid=75850137
75855
LZ77 and LZ78
Lossless data compression algorithms LZ77 and LZ78 are the two lossless data compression algorithms published in papers by Abraham Lempel and Jacob Ziv in 1977 and 1978. They are also known as LZ1 and LZ2 respectively. These two algorithms form the basis for many variations including LZW, LZSS, LZMA and others. Besides their academic influence, these algorithms formed the basis of several ubiquitous compression schemes, including GIF and the DEFLATE algorithm used in PNG and ZIP. They are both theoretically dictionary coders. LZ77 maintains a sliding window during compression. This was later shown to be equivalent to the "explicit dictionary" constructed by LZ78—however, they are only equivalent when the entire data is intended to be decompressed. Since LZ77 encodes and decodes from a sliding window over previously seen characters, decompression must always start at the beginning of the input. Conceptually, LZ78 decompression could allow random access to the input if the entire dictionary were known in advance. However, in practice the dictionary is created during encoding and decoding by creating a new phrase whenever a token is output. The algorithms were named an IEEE Milestone in 2004. In 2021 Jacob Ziv was awarded the IEEE Medal of Honor for his involvement in their development. Theoretical efficiency. In the second of the two papers that introduced these algorithms they are analyzed as encoders defined by finite-state machines. A measure analogous to information entropy is developed for individual sequences (as opposed to probabilistic ensembles). This measure gives a bound on the data compression ratio that can be achieved. It is then shown that there exist finite lossless encoders for every sequence that achieve this bound as the length of the sequence grows to infinity. In this sense an algorithm based on this scheme produces asymptotically optimal encodings. This result can be proven more directly, as for example in notes by Peter Shor. Formally, (Theorem 13.5.2). &lt;templatestyles src="Math_theorem/styles.css" /&gt; LZ78 is universal and entropic — If formula_0 is a binary source that is stationary and ergodic, then formula_1 with probability 1. Here formula_2 is the entropy rate of the source. Similar theorems apply to other versions of the LZ algorithm. LZ77. LZ77 algorithms achieve compression by replacing repeated occurrences of data with references to a single copy of that data existing earlier in the uncompressed data stream. A match is encoded by a pair of numbers called a "length-distance pair", which is equivalent to the statement "each of the next "length" characters is equal to the characters exactly "distance" characters behind it in the uncompressed stream". (The "distance" is sometimes called the "offset" instead.) To spot matches, the encoder must keep track of some amount of the most recent data, such as the last 2 KB, 4 KB, or 32 KB. The structure in which this data is held is called a "sliding window", which is why LZ77 is sometimes called "sliding-window compression". The encoder needs to keep this data to look for matches, and the decoder needs to keep this data to interpret the matches the encoder refers to. The larger the sliding window is, the further back the encoder may search for matches to reference. It is not only acceptable but frequently useful to allow length-distance pairs to specify a length that actually exceeds the distance.
As a copy command, this is puzzling: "Go back "four" characters and copy "ten" characters from that position into the current position". How can ten characters be copied over when only four of them are actually in the buffer? Tackling one byte at a time, there is no problem serving this request, because as a byte is copied over, it may be fed again as input to the copy command. When the copy-from position makes it to the initial destination position, it is consequently fed data that was pasted from the "beginning" of the copy-from position. The operation is thus equivalent to the statement "copy the data you were given and repetitively paste it until it fits". As this type of pair repeats a single copy of data multiple times, it can be used to incorporate a flexible and easy form of run-length encoding. Another way to see things is as follows: While encoding, for the search pointer to continue finding matched pairs past the end of the search window, all characters from the first match at offset "D" and forward to the end of the search window must have matched input, and these are the (previously seen) characters that comprise a single run unit of length "L"_R, which must equal "D". Then as the search pointer proceeds past the search window and forward, as far as the run pattern repeats in the input, the search and input pointers will be in sync and match characters until the run pattern is interrupted. Then "L" characters have been matched in total, "L" > "D", and the code is ["D", "L", "c"]. Upon decoding ["D", "L", "c"], again, "D" = "L"_R. When the first "L"_R characters are read to the output, this corresponds to a single run unit appended to the output buffer. At this point, the read pointer could be thought of as only needing to return int("L"/"L"_R) + (1 if "L" mod "L"_R ≠ 0) times to the start of that single buffered run unit, read "L"_R characters (or maybe fewer on the last return), and repeat until a total of "L" characters are read. But mirroring the encoding process, since the pattern is repetitive, the read pointer need only trail in sync with the write pointer by a fixed distance equal to the run length "L"_R until "L" characters have been copied to output in total. Considering the above, especially if the compression of data runs is expected to predominate, the window search should begin at the end of the window and proceed backwards, since run patterns, if they exist, will be found first and allow the search to terminate, absolutely if the current maximal matching sequence length is met, or judiciously, if a sufficient length is met, and finally for the simple possibility that the data is more recent and may correlate better with the next input. Pseudocode. The following pseudocode is a reproduction of the LZ77 compression algorithm's sliding window.
while input is not empty do
    match := longest repeated occurrence of input that begins in window
    if match exists then
        d := distance to start of match
        l := length of match
        c := char following match in input
    else
        d := 0
        l := 0
        c := first char of input
    end if
    output (d, l, c)
    discard "l" + 1 chars from front of window
    s := pop "l" + 1 chars from front of input
    append s to back of window
repeat
Implementations.
Even though all LZ77 algorithms work by definition on the same basic principle, they can vary widely in how they encode their compressed data to vary the numerical ranges of a length–distance pair, alter the number of bits consumed for a length–distance pair, and distinguish their length–distance pairs from "literals" (raw data encoded as itself, rather than as part of a length–distance pair). A few examples: LZ78. The LZ78 algorithms compress sequential data by building a dictionary of token sequences from the input, and then replacing the second and subsequent occurrences of a sequence in the data stream with a reference to the dictionary entry. The observation is that the number of repeated sequences is a good measure of the non-random nature of a sequence. The algorithms represent the dictionary as an n-ary tree where n is the number of tokens used to form token sequences. Each dictionary entry is of the form {index, token}, where index is the index to a dictionary entry representing a previously seen sequence, and token is the next token from the input that makes this entry unique in the dictionary. Note how the algorithm is greedy, and so nothing is added to the table until a token is found that makes the sequence unique. The algorithm is to initialize last matching index = 0 and next available index = 1 and then, for each token of the input stream, the dictionary is searched for a match with {last matching index, token}. If a match is found, then last matching index is set to the index of the matching entry, nothing is output, and last matching index is left representing the input so far. Input is processed until a match is "not" found. Then a new dictionary entry, {last matching index, token}, is created, and the algorithm outputs last matching index, followed by token, then resets last matching index = 0 and increments next available index. As an example consider the sequence of tokens AABBA which would assemble the dictionary; &lt;templatestyles src="Pre/styles.css"/&gt; and the output sequence of the compressed data would be 0A1B0B. Note that the last A is not represented yet as the algorithm cannot know what comes next. In practice an EOF marker is added to the input – AABBA$ for example. Note also that in this case the output 0A1B0B1$ is longer than the original input but compression ratio improves considerably as the dictionary grows, and in binary the indexes need not be represented by any more than the minimum number of bits. Decompression consists of rebuilding the dictionary from the compressed sequence. From the sequence 0A1B0B1$ the first entry is always the terminator 0 {...}, and the first from the sequence would be 1 {0,A}. The A is added to the output.
The second pair from the input is 1B and results in entry number 2 in the dictionary, {1,B}. The token "B" is output, preceded by the sequence represented by dictionary entry 1. Entry 1 is an 'A' (followed by "entry 0" – nothing) so AB is added to the output. Next 0B is added to the dictionary as the next entry, 3 {0,B}, and B (preceded by nothing) is added to the output. Finally a dictionary entry for 1$ is created and A$ is output resulting in A AB B A$ or AABBA removing the spaces and EOF marker. LZW. LZW is an LZ78-based algorithm that uses a dictionary pre-initialized with all possible characters (symbols) or emulation of a pre-initialized dictionary. The main improvement of LZW is that when a match is not found, the current input stream character is assumed to be the first character of an existing string in the dictionary (since the dictionary is initialized with all possible characters), so only the "last matching index" is output (which may be the pre-initialized dictionary index corresponding to the previous (or the initial) input character). Refer to the LZW article for implementation details. BTLZ is an LZ78-based algorithm that was developed for use in real-time communications systems (originally modems) and standardized by CCITT/ITU as V.42bis. When the trie-structured dictionary is full, a simple re-use/recovery algorithm is used to ensure that the dictionary can keep adapting to changing data. A counter cycles through the dictionary. When a new entry is needed, the counter steps through the dictionary until a leaf node is found (a node with no dependents). This is deleted and the space re-used for the new entry. This is simpler to implement than LRU or LFU and achieves equivalent performance. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
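A minimal Python sketch of the LZ78 encoder and decoder described above, reproducing the AABBA$ example; the pair-based output representation is an illustrative assumption.

def lz78_encode(data):
    dictionary = {'': 0}            # entry 0 is the empty sequence / terminator
    output, w = [], ''
    for c in data:
        if w + c in dictionary:     # keep extending the current match
            w += c
        else:                       # emit (last matching index, token) and add a new entry
            output.append((dictionary[w], c))
            dictionary[w + c] = len(dictionary)
            w = ''
    return output                   # a trailing partial match (no EOF marker) would be dropped,
                                    # which is why the example appends the $ marker

def lz78_decode(pairs):
    entries = ['']                  # entry 0 is the empty sequence
    out = []
    for index, token in pairs:
        s = entries[index] + token
        out.append(s)
        entries.append(s)
    return ''.join(out)

pairs = lz78_encode('AABBA$')
print(pairs)                        # [(0, 'A'), (1, 'B'), (0, 'B'), (1, '$')]  i.e. 0A1B0B1$
print(lz78_decode(pairs))           # 'AABBA$'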
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\limsup_n \\frac 1n l_{LZ78}(X_{1:n}) \\leq h(X)" }, { "math_id": 2, "text": "h(X)" } ]
https://en.wikipedia.org/wiki?curid=75855
75858386
Sumudu transform
Integral transform introduced in 1990 The Sumudu transform is an integral transform introduced in 1990 by G K Watagala. It is defined on the set of functions formula_0 where formula_1, and the Sumudu transform itself is defined as formula_2 Relationship with other transforms. The Sumudu transform is a 1/"u"-scaled Laplace transform, formula_3 and it is related to the Elzaki transform through a factor of "u"^2, formula_4 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
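A quick symbolic check of the Laplace relation above, assuming the kernel exp(-t/u) in the definition (consistent with that relation); this SymPy snippet and the sample function f(t) = t are illustrative assumptions.

import sympy as sp

t, u, s = sp.symbols('t u s', positive=True)
f = t   # sample function

# Sumudu transform computed directly from the definition with kernel exp(-t/u)
S_direct = sp.integrate(f * sp.exp(-t / u), (t, 0, sp.oo)) / u

# Via the Laplace transform:  S[f](u) = (1/u) * L[f](1/u)
L = sp.laplace_transform(f, t, s, noconds=True)
S_via_laplace = L.subs(s, 1 / u) / u

print(sp.simplify(S_direct - S_via_laplace))   # 0
print(sp.simplify(S_direct))                   # u, i.e. the Sumudu transform of t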
[ { "math_id": 0, "text": "S = \\{f(t) : \\ni M, p ,q> 0 , |f(t)|= M \\exp(1/u) \\}" }, { "math_id": 1, "text": " p \\leq u \\leq q " }, { "math_id": 2, "text": "A[f(t)]= \\frac 1 u \\int_0^\\infty f(t) \\exp\\left(\\frac1u\\right) \\, dt. " }, { "math_id": 3, "text": "S[f(t)](u)=\\frac{1}{u} L[f(t)](\\frac{1}{u})" }, { "math_id": 4, "text": "S[f(t)](u)= u^2 E[f(t)](u)" } ]
https://en.wikipedia.org/wiki?curid=75858386
75859
Deflate
Data compression algorithm In computing, Deflate (stylized as DEFLATE, and also called Flate) is a lossless data compression file format that uses a combination of LZ77 and Huffman coding. It was designed by Phil Katz, for version 2 of his PKZIP archiving tool. Deflate was later specified in RFC 1951 (1996). Katz also designed the original algorithm used to construct Deflate streams. This algorithm was patented as U.S. patent 5051745, and assigned to PKWARE, Inc. As stated in the RFC document, an algorithm producing Deflate files was widely thought to be implementable in a manner not covered by patents. This led to its widespread use – for example, in gzip compressed files and PNG image files, in addition to the ZIP file format for which Katz originally designed it. The patent has since expired. Stream format. A Deflate stream consists of a series of blocks. Each block is preceded by a 3-bit header: The "stored" block option adds minimal overhead and is used for data that is incompressible. Most compressible data will end up being encoded using method codice_4, the "dynamic Huffman" encoding, which produces an optimized Huffman tree customized for each block of data individually. Instructions to generate the necessary Huffman tree immediately follow the block header. The static Huffman option is used for short messages, where the fixed saving gained by omitting the tree outweighs the percentage compression loss due to using a non-optimal (thus, not technically Huffman) code. Compression is achieved through two steps: Duplicate string elimination. Within compressed blocks, if a duplicate series of bytes is spotted (a repeated string), then a back-reference is inserted, linking to the previous location of that identical string instead. An encoded match to an earlier string consists of an 8-bit length (3–258 bytes) and a 15-bit distance (1–32,768 bytes) to the beginning of the duplicate. Relative back-references can be made across any number of blocks, as long as the distance appears within the last 32 KiB of uncompressed data decoded (termed the "sliding window"). If the distance is less than the length, the duplicate overlaps itself, indicating repetition. For example, a run of 10 identical bytes can be encoded as one byte, followed by a duplicate of length 9, beginning with the previous byte. Searching the preceding text for duplicate substrings is the most computationally expensive part of the DEFLATE algorithm, and the operation which compression level settings affect. Bit reduction. The second compression stage consists of replacing commonly used symbols with shorter representations and less commonly used symbols with longer representations. The method used is Huffman coding which creates an unprefixed tree of non-overlapping intervals, where the length of each sequence is inversely proportional to the logarithm of the probability of that symbol needing to be encoded. The more likely it is that a symbol has to be encoded, the shorter its bit-sequence will be. A tree is created, containing space for 288 symbols: A match length code will always be followed by a distance code. Based on the distance code read, further "extra" bits may be read in order to produce the final distance. The distance tree contains space for 32 symbols: Note that for the match distance symbols 2–29, the number of extra bits can be calculated as formula_0. 
The two codes (the 288-symbol length/literal tree and the 32-symbol distance tree) are themselves encoded as canonical Huffman codes by giving the bit length of the code for each symbol. The bit lengths are themselves run-length encoded to produce as compact a representation as possible. As an alternative to including the tree representation, the "static tree" option provides standard fixed Huffman trees. The compressed size using the static trees can be computed using the same statistics (the number of times each symbol appears) as are used to generate the dynamic trees, so it is easy for a compressor to choose whichever is smaller. Encoder/compressor. During the compression stage, it is the "encoder" that chooses the amount of time spent looking for matching strings. The zlib/gzip reference implementation allows the user to select from a sliding scale of likely resulting compression-level vs. speed of encoding. Options range from codice_1 (do not attempt compression, just store uncompressed) to codice_8 representing the maximum capability of the reference implementation in zlib/gzip. Other Deflate encoders have been produced, all of which will also produce a compatible bitstream capable of being decompressed by any existing Deflate decoder. Differing implementations will likely produce variations on the final encoded bit-stream produced. The focus with non-zlib versions of an encoder has normally been to produce a more efficiently compressed and smaller encoded stream. Deflate64/Enhanced Deflate. Deflate64, specified by PKWARE, is a proprietary variant of Deflate. It's fundamentally the same algorithm. What has changed is the increase in dictionary size from 32 KB to 64 KB, an extension of the distance codes to 16 bits so that they may address a range of 64 KB, and the length code, which is extended to 16 bits so that it may define lengths of three to 65,538 bytes. This leads to Deflate64 having a longer compression time, and potentially a slightly higher compression ratio, than Deflate. Several free and/or open source projects support Deflate64, such as 7-Zip, while others, such as zlib, do not, as a result of the proprietary nature of the procedure and the very modest performance increase over Deflate. Using Deflate in new software. Implementations of Deflate are freely available in many languages. Apps written in C typically use the zlib library (under the permissive zlib License). Apps in Borland Pascal (and compatible languages) can use paszlib. Apps in C++ can take advantage of the improved Deflate library in 7-Zip. Both Java and .NET Framework offer out-of-the-box support for Deflate in their libraries (respectively, codice_9 and System.IO.Compression). Apps in Ada can use Zip-Ada (pure) or ZLib-Ada. Encoder implementations. AdvanceCOMP uses the higher compression ratio versions of Deflate in 7-Zip, libdeflate, and Zopfli to enable recompression of gzip, PNG, MNG and ZIP files with the possibility of smaller file sizes than zlib is able to achieve at maximum settings. Decoder/decompressor. Inflate is the decoding process that takes a Deflate bitstream for decompression and correctly produces the original full-size data or file. Inflate-only implementations. The normal intent with an alternative Inflate implementation is highly optimized decoding speed, or extremely predictable RAM usage for micro-controller embedded systems. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
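As a concrete complement to the list of implementations above, the following sketch uses Python's standard-library zlib module (a binding to the zlib library mentioned earlier) to produce and then decode a raw Deflate stream. The negative wbits value is zlib's documented way of requesting a headerless (raw) Deflate stream with a 32 KiB window; the sample data and names are arbitrary illustrations.

```python
# Minimal sketch of producing and consuming a raw Deflate stream with Python's
# standard-library zlib bindings (one of the implementations mentioned above).
# A negative wbits value asks zlib for a raw Deflate stream with the given
# window size (here 2**15 = 32 KiB), i.e. without zlib or gzip headers.
import zlib

data = b"Blah blah blah blah blah!" * 100

compressor = zlib.compressobj(level=9, method=zlib.DEFLATED, wbits=-15)
deflate_stream = compressor.compress(data) + compressor.flush()

decompressor = zlib.decompressobj(wbits=-15)
assert decompressor.decompress(deflate_stream) + decompressor.flush() == data

print(len(data), "->", len(deflate_stream), "bytes")
```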
[ { "math_id": 0, "text": "\\left\\lfloor\\frac{n}{2}\\right\\rfloor-1" } ]
https://en.wikipedia.org/wiki?curid=75859
7586402
Adjusted ERA+
Baseball statistic Adjusted ERA+, often simply abbreviated to ERA+ or ERA plus, is a pitching statistic in baseball. It adjusts a pitcher's earned run average (ERA) according to the pitcher's ballpark (in case the ballpark favors batters or pitchers) and the ERA of the pitcher's league. Formula. ERA+ is calculated as: formula_0 Where ERA is the pitcher's ERA, lgERA is the average ERA of the league, and PF is the park factor of the pitcher in question. This formula is now standard, although Baseball-Reference.com briefly used a different formula which took values strictly between 0 and 200 instead of between 0 and infinity, but the current website shows values above 200 so it is clearly no longer in use: formula_1 The average ERA+ is set to be 100; a score above 100 indicates that the pitcher performed better than average, while below 100 indicates worse than average. For instance, imagine the average ERA in the league is 4.00: if pitcher A has an ERA of 4.00 but is pitching in a ballpark that favors hitters, his ERA+ will be over 100. Likewise, if pitcher B has an ERA of 4.00 but pitches in a ballpark favoring pitchers, then his ERA+ will be below 100. As a result, ERA+ can be used to compare pitchers across different run environments. In the above example, while ERA will lead one to believe that both pitchers pitched at the same level due to their ERAs being equivalent, ERA+ indicates that pitcher A performed better than pitcher B. ERA+ can thus be used to neutralize the effects of some well-known advantages and disadvantages on pitchers' ERA scores. Leaders. Pedro Martínez holds the modern record for highest ERA+ in a single season; he posted a 1.74 ERA in the 2000 season while pitching in the American League, which had an average ERA of 4.92, which gave Martínez an ERA+ of 291. While Bob Gibson has the lowest ERA in modern times (1.12 in the National League in 1968), the average ERA was 2.99 that year (the so-called Year of the Pitcher) and so Gibson's ERA+ is 258, eighth highest since 1900. 1968 was the last year that Major League Baseball employed the use of a pitcher's mound at , since . The career record for ERA+ (with a minimum of 1,000 innings pitched) is held by Mariano Rivera, a closer whose career ERA+ is 205. Upon retirement in 2013, with an ERA+ of 194 in his final season, Rivera's career record of 205 surpassed the record among retired players of 154, held by Martínez, bumping Jim Devlin, a pitcher in the 1870s, to third with 151. Among qualifying pitchers, Pedro Martínez has the most separate seasons with an ERA+ over 200, with five, and the most consecutive 200 ERA+ seasons (4), though the closer Rivera, with too few innings each year to qualify officially, has surpassed 200 ERA+ in 13 seasons of his 19 seasons, including 4 consecutive seasons twice and 5 consecutive seasons once and also surpassing 300 in 2004 and again in 2008. Roger Clemens topped a 200 ERA+ three times, and Greg Maddux had two such seasons. Players in bold are active as of the end of the 2020 season and have not announced their retirement. Single-season leaders include only pitchers eligible for the ERA title (a pitcher must throw a minimum of one inning per game scheduled for his team during the season to qualify for the ERA title). Only pitchers with 1,000 or more innings pitched are shown in the career leader list. &lt;templatestyles src="Col-begin/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
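A minimal sketch of the formula above, assuming nothing beyond what is stated in the text; the function and parameter names are illustrative. The park factor for the Pedro Martínez check is left at a neutral 1.0, which is why the result (about 283) falls short of the published 291, which additionally folds in the park adjustment not given in the text.

```python
# Sketch of the ERA+ formula quoted above; the function and parameter names
# are illustrative, not from any official statistics library.

def era_plus(era: float, lg_era: float, park_factor: float = 1.0) -> float:
    """ERA+ = 100 * (lgERA / ERA) * PF."""
    return 100.0 * (lg_era / era) * park_factor

# Pedro Martinez, 2000: ERA 1.74 against a league ERA of 4.92.  With a neutral
# park factor of 1.0 this gives ~283; the published 291 also reflects the park
# adjustment, which is not given in the text.
print(round(era_plus(1.74, 4.92)))  # 283
```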
[ { "math_id": 0, "text": "\\mathit{ERA+} = 100 \\cdot {{\\mathit{lgERA} \\over \\mathit{ERA}}} \\cdot \\mathit{PF}" }, { "math_id": 1, "text": "\\mathit{ERA+} = 100 \\cdot (2 - {{\\mathit{ERA} \\over \\mathit{lgERA}}} \\cdot {{1 \\over \\mathit{PF}}})" } ]
https://en.wikipedia.org/wiki?curid=7586402
758718
Noncototient
Positive integers with specific properties In number theory, a noncototient is a positive integer n that cannot be expressed as the difference between a positive integer m and the number of coprime integers below it. That is, "m" − "φ"("m") = "n", where φ stands for Euler's totient function, has no solution for m. The "cototient" of n is defined as "n" − "φ"("n"), so a noncototient is a number that is never a cototient. It is conjectured that all noncototients are even. This follows from a modified form of the slightly stronger version of the Goldbach conjecture: if the even number n can be represented as a sum of two distinct primes p and q, then formula_0 It is expected that every even number larger than 6 is a sum of two distinct primes, so probably no odd number larger than 5 is a noncototient. The remaining odd numbers are covered by the observations 1 = 2 – "φ"(2), 3 = 9 – "φ"(9), and 5 = 25 – "φ"(25). For even numbers, it can be shown formula_1 Thus, all even numbers n such that "n" + 2 can be written as ("p" + 1)("q" + 1) with p, q primes are cototients. The first few noncototients are 10, 26, 34, 50, 52, 58, 86, 100, 116, 122, 130, 134, 146, 154, 170, 172, 186, 202, 206, 218, 222, 232, 244, 260, 266, 268, 274, 290, 292, 298, 310, 326, 340, 344, 346, 362, 366, 372, 386, 394, 404, 412, 436, 466, 470, 474, 482, 490, ... (sequence in the OEIS) The cototient of n are 0, 1, 1, 2, 1, 4, 1, 4, 3, 6, 1, 8, 1, 8, 7, 8, 1, 12, 1, 12, 9, 12, 1, 16, 5, 14, 9, 16, 1, 22, 1, 16, 13, 18, 11, 24, 1, 20, 15, 24, 1, 30, 1, 24, 21, 24, 1, 32, 7, 30, 19, 28, 1, 36, 15, 32, 21, 30, 1, 44, 1, 32, 27, 32, 17, 46, 1, 36, 25, 46, 1, 48, ... (sequence in the OEIS) Least k such that the cototient of k is n are (start with "n" = 0, 0 if no such k exists) 1, 2, 4, 9, 6, 25, 10, 15, 12, 21, 0, 35, 18, 33, 26, 39, 24, 65, 34, 51, 38, 45, 30, 95, 36, 69, 0, 63, 52, 161, 42, 87, 48, 93, 0, 75, 54, 217, 74, 99, 76, 185, 82, 123, 60, 117, 66, 215, 72, 141, 0, ... (sequence in the OEIS) Greatest k such that the cototient of k is n are (start with "n" = 0, 0 if no such k exists) 1, ∞, 4, 9, 8, 25, 10, 49, 16, 27, 0, 121, 22, 169, 26, 55, 32, 289, 34, 361, 38, 85, 30, 529, 46, 133, 0, 187, 52, 841, 58, 961, 64, 253, 0, 323, 68, 1369, 74, 391, 76, 1681, 82, 1849, 86, 493, 70, 2209, 94, 589, 0, ... (sequence in the OEIS) Number of ks such that "k" − "φ"("k") is n are (start with "n" = 0) 1, ∞, 1, 1, 2, 1, 1, 2, 3, 2, 0, 2, 3, 2, 1, 2, 3, 3, 1, 3, 1, 3, 1, 4, 4, 3, 0, 4, 1, 4, 3, 3, 4, 3, 0, 5, 2, 2, 1, 4, 1, 5, 1, 4, 2, 4, 2, 6, 5, 5, 0, 3, 0, 6, 2, 4, 2, 5, 0, 7, 4, 3, 1, 8, 4, 6, 1, 3, 1, 5, 2, 7, 3, ... (sequence in the OEIS) Erdős (1913–1996) and Sierpinski (1882–1969) asked whether there exist infinitely many noncototients. This was finally answered in the affirmative by Browkin and Schinzel (1995), who showed every member of the infinite family formula_2 is an example (See Riesel number). Since then other infinite families, of roughly the same form, have been given by Flammenkamp and Luca (2000).
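The definition above lends itself to a brute-force check. The sketch below computes cototients m − φ(m) and lists the numbers that never occur; the search bound relies on the fact that a composite m satisfies m − φ(m) ≥ √m, so searching m up to the square of the limit is sufficient. Function names are illustrative.

```python
def phi(n: int) -> int:
    """Euler's totient via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def noncototients(limit: int) -> list:
    # If n = m - phi(m) with n >= 2, then m is composite, and a composite m
    # satisfies m - phi(m) >= sqrt(m); hence searching m <= limit**2 suffices.
    cototients = {m - phi(m) for m in range(1, limit * limit + 1)}
    return [n for n in range(1, limit + 1) if n not in cototients]

# First noncototients, matching the start of the sequence quoted above.
print(noncototients(60))  # [10, 26, 34, 50, 52, 58]
```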
[ { "math_id": 0, "text": "\\begin{align}\n pq - \\varphi(pq) &= pq - (p-1)(q-1) \\\\\n &= p + q - 1 \\\\ \n &= n - 1. \n\\end{align}" }, { "math_id": 1, "text": "\\begin{align}\n 2pq - \\varphi(2pq) &= 2pq - (p-1)(q-1) \\\\\n &= pq + p + q - 1 \\\\\n &= (p+1)(q+1) - 2\n\\end{align}" }, { "math_id": 2, "text": " 2^k \\cdot 509203" } ]
https://en.wikipedia.org/wiki?curid=758718
75873
Nernst equation
Physical law in electrochemistry In electrochemistry, the Nernst equation is a chemical thermodynamical relationship that permits the calculation of the reduction potential of a reaction (half-cell or full cell reaction) from the standard electrode potential, absolute temperature, the number of electrons involved in the redox reaction, and activities (often approximated by concentrations) of the chemical species undergoing reduction and oxidation respectively. It was named after Walther Nernst, a German physical chemist who formulated the equation. Expression. General form with chemical activities. When an oxidizer (Ox) accepts a number "z" of electrons ( "e"−) to be converted in its reduced form (Red), the half-reaction is expressed as: &lt;chem&gt;Ox + ze- -&gt; Red&lt;/chem&gt; The reaction quotient ("Qr"), also often called the ion activity product ("IAP"), is the ratio between the chemical activities ("a") of the reduced form (the reductant, aRed) and the oxidized form (the oxidant, aOx). The chemical activity of a dissolved species corresponds to its true thermodynamic concentration taking into account the electrical interactions between all ions present in solution at elevated concentrations. For a given dissolved species, its chemical activity (a) is the product of its activity coefficient (γ) by its molar (mol/L solution), or molal (mol/kg water), concentration (C): a = γ C. So, if the concentration ("C", also denoted here below with square brackets [ ]) of all the dissolved species of interest are sufficiently low and that their activity coefficients are close to unity, their chemical activities can be approximated by their concentrations as commonly done when simplifying, or idealizing, a reaction for didactic purposes: formula_0 At chemical equilibrium, the ratio "Qr" of the activity of the reaction product ("a"Red) by the reagent activity ("a"Ox) is equal to the equilibrium constant "K" of the half-reaction: formula_1 The standard thermodynamics also says that the actual Gibbs free energy Δ"G" is related to the free energy change under standard state Δ"G" by the relationship: formula_2 where "Q"r is the reaction quotient and R is the universal ideal gas constant. The cell potential E associated with the electrochemical reaction is defined as the decrease in Gibbs free energy per coulomb of charge transferred, which leads to the relationship formula_3 The constant F (the Faraday constant) is a unit conversion factor "F" = "N"A"q", where "N"A is the Avogadro constant and q is the fundamental electron charge. This immediately leads to the Nernst equation, which for an electrochemical half-cell is formula_4 For a complete electrochemical reaction (full cell), the equation can be written as formula_5 where: Thermal voltage. At room temperature (25 °C), the thermal voltage formula_6 is approximately 25.693 mV. The Nernst equation is frequently expressed in terms of base-10 logarithms ("i.e.", common logarithms) rather than natural logarithms, in which case it is written: formula_7 where "λ" = ln(10) ≈ 2.3026 and "λVT" ≈ 0.05916 Volt. Form with activity coefficients and concentrations. Similarly to equilibrium constants, activities are always measured with respect to the standard state (1 mol/L for solutes, 1 atm for gases, and T = 298.15 K, "i.e.", 25 °C or 77 °F). The chemical activity of a species i, "a"i, is related to the measured concentration "C"i via the relationship "a"i = "γ"i "C"i, where "γ"i is the activity coefficient of the species i. 
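A small numerical sketch of the half-cell form of the Nernst equation given above, using concentrations in place of activities as discussed in the text. The constants are standard values, and the example activity ratio is purely illustrative.

```python
# Sketch of the half-cell Nernst equation above; constants are standard values,
# and the activity ratio below is an arbitrary illustration.
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def half_cell_potential(e_standard: float, z: int, a_red: float, a_ox: float,
                        temperature: float = 298.15) -> float:
    """E_red = E0_red - (RT / zF) * ln(a_Red / a_Ox), in volts."""
    return e_standard - (R * temperature) / (z * F) * math.log(a_red / a_ox)

print(1000 * R * 298.15 / F)                          # thermal voltage ~25.693 mV
print(1000 * half_cell_potential(0.0, 1, 10.0, 1.0))  # ~ -59.16 mV shift for a
                                                      # ten-fold excess of Red
```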
Because activity coefficients tend to unity at low concentrations, or are unknown or difficult to determine at medium and high concentrations, activities in the Nernst equation are frequently replaced by simple concentrations, and formal standard reduction potentials formula_8 are then used. Taking into account the activity coefficients (formula_9) the Nernst equation becomes: formula_10 formula_11 formula_12 where the first term including the activity coefficients (formula_9) is denoted formula_13 and called the formal standard reduction potential, so that formula_14 can be directly expressed as a function of formula_13 and the concentrations in the simplest form of the Nernst equation: formula_15 Formal standard reduction potential. When one wishes to use simple concentrations in place of activities, but the activity coefficients are far from unity, can no longer be neglected, and are unknown or too difficult to determine, it can be convenient to introduce the notion of the "so-called" standard formal reduction potential (formula_13), which is related to the standard reduction potential as follows: formula_16 So that the Nernst equation for the half-cell reaction can be correctly formally written in terms of concentrations as: formula_15 and likewise for the full cell expression. According to Wenzel (2020), a formal reduction potential formula_13 is the reduction potential that applies to a half reaction under a set of specified conditions such as, e.g., pH, ionic strength, or the concentration of complexing agents. The formal reduction potential formula_13 is often a more convenient, but conditional, form of the standard reduction potential, taking into account activity coefficients and specific conditions characteristic of the reaction medium. Therefore, its value is a conditional value, "i.e.", it depends on the experimental conditions, and because the ionic strength affects the activity coefficients, formula_13 will vary from medium to medium. Several definitions of the formal reduction potential can be found in the literature, depending on the pursued objective and the experimental constraints imposed by the studied system. The general definition of formula_13 refers to its value determined when formula_17. A more particular case is when formula_13 is also determined at pH 7, as e.g. for redox reactions important in biochemistry or biological systems. Formal standard reduction potential at unity concentration ratio. The formal standard reduction potential formula_13 can be defined as the measured reduction potential formula_14 of the half-reaction at unity concentration ratio of the oxidized and reduced species ("i.e.", when formula_17) under given conditions. Indeed, formula_18 when formula_19, and formula_20 when formula_17, because formula_21 and the term formula_22 is included in formula_13. The formal reduction potential makes it possible to work more simply with molar (mol/L, M) or molal (mol/kg, m) concentrations in place of activities. Because molar and molal concentrations were once referred to as formal concentrations, this could explain the origin of the adjective "formal" in the expression "formal" potential. The formal potential is thus the reversible potential of an electrode at equilibrium immersed in a solution where reactants and products are at unit concentration. If any small incremental change of potential causes a change in the direction of the reaction, "i.e." from reduction to oxidation or "vice versa", the system is close to equilibrium, reversible and is at its formal potential. 
When the formal potential is measured under standard conditions ("i.e." the activity of each dissolved species is 1 mol/L, T = 298.15 K = 25 °C = 77 °F, Pgas = 1 bar) it becomes "de facto" a standard potential. According to Brown and Swift (1949): "A formal potential is defined as the potential of a half-cell, measured against the standard hydrogen electrode, when the total concentration of each oxidation state is one formal". In this case, as for the standard reduction potentials, the concentrations of dissolved species remain equal to one molar (M) or one molal (m), and so are said to be one formal (F). So, expressing the concentration C in molarity M (1 mol/L): formula_23 The term formal concentration (F) is now largely ignored in the current literature and can be commonly assimilated to molar concentration (M), or molality (m) in case of thermodynamic calculations. The formal potential is also found halfway between the two peaks in a cyclic voltammogram, at which point the concentrations of Ox (the oxidized species) and Red (the reduced species) at the electrode surface are equal. The activity coefficients formula_24 and formula_25 are included in the formal potential formula_13, and because they depend on experimental conditions such as temperature, ionic strength, and pH, formula_13 cannot be referred to as an immutable standard potential but needs to be systematically determined for each specific set of experimental conditions. Formal reduction potentials are used to simplify calculations for a given system under given conditions and to aid the interpretation of measurements. The experimental conditions in which they are determined and their relationship to the standard reduction potentials must be clearly described to avoid confusing them with standard reduction potentials. Formal standard reduction potential at pH 7. Formal standard reduction potentials (formula_13) are also commonly used in biochemistry and cell biology for referring to standard reduction potentials measured at pH 7, a value closer to the pH of most physiological and intracellular fluids than the standard state pH of 0. The advantage is that they define a more appropriate redox scale, corresponding better to real conditions than the standard state does. Formal standard reduction potentials (formula_13) make it easier to estimate whether a redox reaction that is supposed to occur in a metabolic process, or to fuel microbial activity, is feasible under given conditions. While standard reduction potentials always refer to the standard hydrogen electrode (SHE), with [H+] = 1 M corresponding to pH 0 and formula_26 fixed arbitrarily to zero by convention, this is no longer the case at a pH of 7. Then, the reduction potential formula_14 of a hydrogen electrode operating at pH 7 is -0.413 V with respect to the standard hydrogen electrode (SHE). Expression of the Nernst equation as a function of pH. The formula_27 and pH of a solution are related by the Nernst equation as commonly represented by a Pourbaix diagram (formula_27 – pH plot). formula_27 explicitly denotes formula_14 expressed versus the standard hydrogen electrode (SHE). For a half cell equation, conventionally written as a reduction reaction ("i.e.", electrons accepted by an oxidant on the left side): formula_28 The half-cell standard reduction potential formula_29 is given by formula_30 where formula_31 is the standard Gibbs free energy change, z is the number of electrons involved, and F is the Faraday constant. 
The Nernst equation relates pH and formula_27 as follows: formula_32 where curly brackets indicate activities, and exponents are shown in the conventional manner. This equation is the equation of a straight line for formula_14 as a function of pH with a slope of formula_33 volt (pH has no units). This equation predicts lower formula_14 at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for the reduction of H+ into H2. formula_14 is then often noted as formula_27 to indicate that it refers to the standard hydrogen electrode (SHE) whose formula_14 = 0 by convention under standard conditions (T = 298.15 K = 25 °C = 77 °F, Pgas = 1 atm (1.013 bar), concentrations = 1 M and thus pH = 0). Main factors affecting the formal standard reduction potentials. The main factor affecting the formal reduction potentials in biochemical or biological processes is most often the pH. To determine approximate values of formal reduction potentials, neglecting as a first approximation the changes in activity coefficients due to ionic strength, the Nernst equation has to be applied taking care to first express the relationship as a function of pH. The second factor to be considered is the values of the concentrations taken into account in the Nernst equation. To define a formal reduction potential for a biochemical reaction, the pH value, the concentration values and the hypotheses made on the activity coefficients must always be explicitly indicated. When using, or comparing, several formal reduction potentials they must also be internally consistent. Problems may occur when mixing different sources of data using different conventions or approximations ("i.e.", with different underlying hypotheses). When working at the frontier between inorganic and biological processes (e.g., when comparing abiotic and biotic processes in geochemistry when microbial activity could also be at work in the system), care must be taken not to inadvertently mix standard reduction potentials versus SHE (pH = 0) with formal reduction potentials (pH = 7). Definitions must be clearly expressed and carefully controlled, especially if the sources of data are different and arise from different fields (e.g., picking and mixing data from classical electrochemistry and microbiology textbooks without paying attention to the different conventions on which they are based). Examples with a Pourbaix diagram. To illustrate the dependency of the reduction potential on pH, one can simply consider the two oxido-reduction equilibria determining the water stability domain in a Pourbaix diagram (Eh–pH plot). When water is subjected to electrolysis by applying a sufficient difference of electrical potential between two electrodes immersed in water, hydrogen is produced at the cathode (reduction of water protons) while oxygen is formed at the anode (oxidation of water oxygen atoms). The same may occur if a reductant stronger than hydrogen (e.g., metallic Na) or an oxidant stronger than oxygen (e.g., F2) comes into contact with water and reacts with it. 
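Before turning to the Pourbaix-diagram example below, the linear pH dependence described above can be sketched numerically. The helper below keeps only the standard-potential and pH terms (i.e. it assumes unit activities for the other species), and the two calls preview the water-stability lines worked out in the next paragraphs; the names and this simplification are assumptions of the sketch.

```python
# Sketch of the linear pH dependence described above, keeping only the
# E0 and pH terms (unit activities); slope per pH unit is -0.05916 * h / z.
def eh_versus_ph(e_standard: float, h_protons: int, z_electrons: int,
                 ph: float) -> float:
    return e_standard - 0.05916 * (h_protons / z_electrons) * ph

# The two water-stability lines discussed next: H+/H2 (E0 = 0 V, 2 H+/2 e-)
# and O2/H2O (E0 = +1.229 V, 4 H+/4 e-), both with slope -59.16 mV per pH unit.
print(eh_versus_ph(0.0, 2, 2, 7.0))     # ~ -0.414 V
print(eh_versus_ph(1.229, 4, 4, 7.0))   # ~ +0.815 V
```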
In the Eh–pH plot here beside (the simplest possible version of a Pourbaix diagram), the water stability domain (grey surface) is delimited in terms of redox potential by two inclined red dashed lines: 2 H+ + 2 "e"− ⇌ H2 (cathode: reduction) and O2 + 4 H+ + 4 "e"− ⇌ 2 H2O (anode: oxidation). When solving the Nernst equation for each corresponding reduction reaction (the water oxidation reaction producing oxygen needs to be reversed), both equations have a similar form because the number of protons and the number of electrons involved within a reaction are the same and their ratio is one (2 H+/2 "e"− for H2 and 4 H+/4 "e"− for O2, respectively), so it simplifies when solving the Nernst equation expressed as a function of pH. The result can be numerically expressed as follows: formula_34 Note that the slopes of the two water stability domain upper and lower lines are the same (-59.16 mV/pH unit), so they are parallel on a Pourbaix diagram. As the slopes are negative, at high pH, both hydrogen and oxygen evolution require a much lower reduction potential than at low pH. For the reduction of H+ into H2 the above-mentioned relationship becomes: formula_35 because by convention formula_29 = 0 V for the standard hydrogen electrode (SHE: [H+] = 1 M, i.e. pH = 0). So, at pH = 7, formula_14 = -0.414 V for the reduction of protons. For the reduction of O2 into 2 H2O the above-mentioned relationship becomes: formula_36 because formula_29 = +1.229 V with respect to the standard hydrogen electrode (SHE: [H+] = 1 M, i.e. pH = 0). So, at pH = 7, formula_14 = +0.815 V for the reduction of oxygen. The offset of -414 mV in formula_14 is the same for both reduction reactions because they share the same linear relationship as a function of pH and the slopes of their lines are the same. This can be directly verified on a Pourbaix diagram. For other reduction reactions, the value of the formal reduction potential at a pH of 7, commonly referred to for biochemical reactions, also depends on the slope of the corresponding line in a Pourbaix diagram, "i.e." on the ratio "h⁄z" of the number of H+ to the number of "e"− involved in the reduction reaction, and thus on the stoichiometry of the half-reaction. The determination of the formal reduction potential at pH = 7 for a given biochemical half-reaction thus requires calculating it with the corresponding Nernst equation as a function of pH. One cannot simply apply an offset of -414 mV to the Eh value (SHE) when the ratio "h⁄z" differs from 1. Applications in biology. Besides important redox reactions in biochemistry and microbiology, the Nernst equation is also used in physiology for calculating the electric potential of a cell membrane with respect to one type of ion. It can be linked to the acid dissociation constant. Nernst potential. The Nernst equation has a physiological application when used to calculate the potential of an ion of charge "z" across a membrane. This potential is determined using the concentration of the ion both inside and outside the cell: formula_37 When the membrane is in thermodynamic equilibrium (i.e., no net flux of ions), and if the cell is permeable to only one ion, then the membrane potential must be equal to the Nernst potential for that ion. Goldman equation. 
When the membrane is permeable to more than one ion, as is inevitably the case, the resting potential can be determined from the Goldman equation, which is a solution of G-H-K influx equation under the constraints that total current density driven by electrochemical force is zero: formula_38 where The potential across the cell membrane that exactly opposes net diffusion of a particular ion through the membrane is called the Nernst potential for that ion. As seen above, the magnitude of the Nernst potential is determined by the ratio of the concentrations of that specific ion on the two sides of the membrane. The greater this ratio the greater the tendency for the ion to diffuse in one direction, and therefore the greater the Nernst potential required to prevent the diffusion. A similar expression exists that includes r (the absolute value of the transport ratio). This takes transporters with unequal exchanges into account. See: sodium-potassium pump where the transport ratio would be 2/3, so r equals 1.5 in the formula below. The reason why we insert a factor r = 1.5 here is that current density "by electrochemical force" Je.c.(Na+) + Je.c.(K+) is no longer zero, but rather Je.c.(Na+) + 1.5Je.c.(K+) = 0 (as for both ions flux by electrochemical force is compensated by that by the pump, i.e. Je.c. = −Jpump), altering the constraints for applying GHK equation. The other variables are the same as above. The following example includes two ions: potassium (K+) and sodium (Na+). Chloride is assumed to be in equilibrium. formula_39 When chloride (Cl−) is taken into account, formula_40 Derivation. Using Boltzmann factor. For simplicity, we will consider a solution of redox-active molecules that undergo a one-electron reversible reaction Ox + e− ⇌ Red and that have a standard potential of zero, and in which the activities are well represented by the concentrations (i.e. unit activity coefficient). The chemical potential "μ"c of this solution is the difference between the energy barriers for taking electrons from and for giving electrons to the working electrode that is setting the solution's electrochemical potential. The ratio of oxidized to reduced molecules, , is equivalent to the probability of being oxidized (giving electrons) over the probability of being reduced (taking electrons), which we can write in terms of the Boltzmann factor for these processes: formula_41 Taking the natural logarithm of both sides gives formula_42 If "μ"c ≠ 0 at  = 1, we need to add in this additional constant: formula_43 Dividing the equation by e to convert from chemical potentials to electrode potentials, and remembering that =, we obtain the Nernst equation for the one-electron process : formula_44 Using thermodynamics (chemical potential). Quantities here are given per molecule, not per mole, and so Boltzmann constant "k" and the electron charge "e" are used instead of the gas constant "R" and Faraday's constant "F". To convert to the molar quantities given in most chemistry textbooks, it is simply necessary to multiply by the Avogadro constant: "R" = "kN"A and "F" = "eN"A. The entropy of a molecule is defined as formula_45 where Ω is the number of states available to the molecule. The number of states must vary linearly with the volume "V" of the system (here an idealized system is considered for better understanding, so that activities are posited very close to the true concentrations. 
Fundamental statistical proof of the mentioned linearity goes beyond the scope of this section, but to see that this is true it is simpler to consider the usual isothermal process for an ideal gas where the change of entropy Δ"S" = "nR" ln("V"2/"V"1) takes place. It follows from the definition of entropy and from the condition of constant temperature and quantity of gas n that the change in the number of states must be proportional to the relative change in volume "V"2/"V"1. In this sense there is no difference in statistical properties of ideal gas atoms compared with the dissolved species of a solution with activity coefficients equaling one: particles freely "hang around" filling the provided volume), which is inversely proportional to the concentration c, so we can also write the entropy as formula_46 The change in entropy from some state 1 to another state 2 is therefore formula_47 so that the entropy of state 2 is formula_48 If state 1 is at standard conditions, in which "c"1 is unity (e.g., 1 atm or 1 M), it will merely cancel the units of "c"2. We can, therefore, write the entropy of an arbitrary molecule A as formula_49 where formula_50 is the entropy at standard conditions and [A] denotes the concentration of A. The change in entropy for a reaction aA + bB → yY + zZ is then given by formula_51 We define the ratio in the last term as the reaction quotient: formula_52 where the numerator is a product of reaction product activities, "aj", each raised to the power of a stoichiometric coefficient, "νj", and the denominator is a similar product of reactant activities. All activities refer to a time "t". Under certain circumstances (see chemical equilibrium) each activity term such as "a"A may be replaced by a concentration term, [A]. In an electrochemical cell, the cell potential "E" is the chemical potential available from redox reactions ("E" = "μ"c/"e"). "E" is related to the Gibbs free energy change Δ"G" only by a constant: Δ"G" = −"zFE", where "z" is the number of electrons transferred and "F" is the Faraday constant. There is a negative sign because a spontaneous reaction has a negative Gibbs free energy Δ"G" and a positive potential "E". The Gibbs free energy is related to the entropy by "G" = "H" − "TS", where "H" is the enthalpy and "T" is the temperature of the system. Using these relations, we can now write the change in Gibbs free energy, formula_53 and the cell potential, formula_54 This is the more general form of the Nernst equation. For the redox reaction Ox + "z" "e"− → Red, formula_55 and we have: formula_56 The cell potential at standard temperature and pressure (STP) formula_57 is often replaced by the formal potential formula_58, which includes the activity coefficients of the dissolved species under given experimental conditions (T, P, ionic strength, pH, and complexing agents) and is the potential that is actually measured in an electrochemical cell. Relation to the chemical equilibrium. The standard Gibbs free energy formula_31 is related to the equilibrium constant K as follows: formula_59 At the same time, formula_31 is also equal to the product of the total charge (zF) transferred during the reaction and the cell potential (formula_60): formula_61 The sign is negative, because the considered system performs work and thus releases energy. So, formula_62 And therefore: formula_63 Starting from the Nernst equation, one can also demonstrate the same relationship in the reverse way. 
At chemical equilibrium, or thermodynamic equilibrium, the electrochemical potential ("E") = 0 and therefore the reaction quotient ("Qr") attains the special value known as the equilibrium constant ("K"eq): "Qr" = "K"eq Therefore, formula_64 Or at standard state, formula_65 We have thus related the standard electrode potential and the equilibrium constant of a redox reaction. Limitations. In dilute solutions, the Nernst equation can be expressed directly in the terms of concentrations (since activity coefficients are close to unity). But at higher concentrations, the true activities of the ions must be used. This complicates the use of the Nernst equation, since estimation of non-ideal activities of ions generally requires experimental measurements. The Nernst equation also only applies when there is no net current flow through the electrode. The activity of ions at the electrode surface changes when there is current flow, and there are additional overpotential and resistive loss terms which contribute to the measured potential. At very low concentrations of the potential-determining ions, the potential predicted by Nernst equation approaches toward ±∞. This is physically meaningless because, under such conditions, the exchange current density becomes very low, and there may be no thermodynamic equilibrium necessary for Nernst equation to hold. The electrode is called unpoised in such case. Other effects tend to take control of the electrochemical behavior of the system, like the involvement of the solvated electron in electricity transfer and electrode equilibria, as analyzed by Alexander Frumkin and B. Damaskin, Sergio Trasatti, etc. Time dependence of the potential. The expression of time dependence has been established by Karaoglanoff. Significance in other scientific fields. The Nernst equation has been involved in the scientific controversy about cold fusion. Fleischmann and Pons, claiming that cold fusion could exist, calculated that a palladium cathode immersed in a heavy water electrolysis cell could achieve up to 1027 atmospheres of pressure inside the crystal lattice of the metal of the cathode, enough pressure to cause spontaneous nuclear fusion. In reality, only 10,000–20,000 atmospheres were achieved. The American physicist John R. Huizenga claimed their original calculation was affected by a misinterpretation of the Nernst equation. He cited a paper about Pd–Zr alloys. The Nernst equation allows the calculation of the extent of reaction between two redox systems and can be used, for example, to assess whether a particular reaction will go to completion or not. At chemical equilibrium, the electromotive forces (emf) of the two half cells are equal. This allows the equilibrium constant "K" of the reaction to be calculated and hence the extent of the reaction. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
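Returning to the relation between the standard cell potential and the equilibrium constant derived in the equilibrium section above, a short numerical sketch; the two-electron, 0.20 V cell is purely an illustrative choice.

```python
# Sketch of the relation derived above: log10(K) = z * E0 / 0.05916 at 298.15 K.
def log10_equilibrium_constant(z: int, e_standard_cell: float) -> float:
    return z * e_standard_cell / 0.05916

# A two-electron cell reaction with E0_cell = +0.20 V (illustrative value):
# K ~ 10**6.8, i.e. the reaction goes essentially to completion.
print(log10_equilibrium_constant(2, 0.20))   # ~ 6.76
```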
[ { "math_id": 0, "text": "Q_r = \\frac{a_\\text{Red}}{a_\\text{Ox}} = \\frac{[\\operatorname{Red}]}{[\\operatorname{Ox}]}" }, { "math_id": 1, "text": "K = \\frac{a_\\text{Red}}{a_\\text{Ox}}" }, { "math_id": 2, "text": "\\Delta G = \\Delta G^{\\ominus} + RT\\ln Q_r" }, { "math_id": 3, "text": "\\Delta G = -zFE." }, { "math_id": 4, "text": "E_\\text{red} = E^\\ominus_\\text{red} - \\frac{RT}{zF} \\ln Q_r=E^\\ominus_\\text{red} - \\frac{RT}{zF} \\ln\\frac{a_\\text{Red}}{a_\\text{Ox}}." }, { "math_id": 5, "text": "E_\\text{cell} = E^\\ominus_\\text{cell} - \\frac{RT}{zF} \\ln Q_r" }, { "math_id": 6, "text": "V_T=\\frac{RT}{F}" }, { "math_id": 7, "text": "E = E^\\ominus - \\frac{V_T}{z} \\ln\\frac{a_\\text{Red}}{a_\\text{Ox}} = E^\\ominus - \\frac{\\lambda V_T}{z} \\log_{10}\\frac{a_\\text{Red}}{a_\\text{Ox}}." }, { "math_id": 8, "text": "E^{\\ominus'}_\\text{red}" }, { "math_id": 9, "text": "\\gamma" }, { "math_id": 10, "text": "E_\\text{red} = E^\\ominus_\\text{red} - \\frac{RT}{zF} \\ln\\left(\\frac{\\gamma_\\text{Red}}{\\gamma_\\text{Ox}}\\frac{C_\\text{Red}}{C_\\text{Ox}}\\right)" }, { "math_id": 11, "text": "E_\\text{red} = E^\\ominus_\\text{red} - \\frac{RT}{zF} \\left(\\ln\\frac{\\gamma_\\text{Red}}{\\gamma_\\text{Ox}} + \\ln\\frac{C_\\text{Red}}{C_\\text{Ox}}\\right)" }, { "math_id": 12, "text": "E_\\text{red} = \\underbrace{\\left(E^\\ominus_\\text{red} - \\frac{RT}{zF} \\ln\\frac{\\gamma_\\text{Red}}{\\gamma_\\text{Ox}}\\right)}_{E^{\\ominus '}_\\text{red}} - \\frac{RT}{zF} \\ln\\frac{C_\\text{Red}}{C_\\text{Ox}}" }, { "math_id": 13, "text": "E^{\\ominus '}_\\text{red}" }, { "math_id": 14, "text": "E_\\text{red}" }, { "math_id": 15, "text": "E_\\text{red}=E^{\\ominus '}_\\text{red} - \\frac{RT}{zF} \\ln\\frac{C_\\text{Red}}{C_\\text{Ox}}" }, { "math_id": 16, "text": "E^{\\ominus '}_\\text{red}=E^{\\ominus}_\\text{red}-\\frac{RT}{zF}\\ln\\frac{\\gamma_\\text{Red}}{\\gamma_\\text{Ox}}" }, { "math_id": 17, "text": "\\frac{C_\\text{red}} {C_\\text{ox}} = 1" }, { "math_id": 18, "text": "E_\\text{red} = E^{\\ominus}_\\text{red}" }, { "math_id": 19, "text": "\\frac{a_\\text{red}} {a_\\text{ox}} = 1" }, { "math_id": 20, "text": "E_\\text{red} = E^{\\ominus'}_\\text{red}" }, { "math_id": 21, "text": "\\ln{1} = 0" }, { "math_id": 22, "text": "\\frac{\\gamma_\\text{red}} {\\gamma_\\text{ox}}" }, { "math_id": 23, "text": "\\frac{C_\\text{red}} {C_\\text{ox}} = \\frac{1 \\, \\mathrm{M}_\\text{red}} {1 \\, \\mathrm{M}_\\text{ox}} = 1" }, { "math_id": 24, "text": "\\gamma_{red}" }, { "math_id": 25, "text": "\\gamma_{ox}" }, { "math_id": 26, "text": "E^{\\ominus}_\\text{red H+}" }, { "math_id": 27, "text": "E_h" }, { "math_id": 28, "text": "a \\, A + b \\, B + h \\, \\ce{H+} + z \\, e^{-} \\quad \\ce{<=>} \\quad c \\, C + d \\, D" }, { "math_id": 29, "text": "E^{\\ominus}_\\text{red}" }, { "math_id": 30, "text": "E^{\\ominus}_\\text{red} (\\text{volt}) = -\\frac{\\Delta G^\\ominus}{zF}" }, { "math_id": 31, "text": "\\Delta G^\\ominus" }, { "math_id": 32, "text": "E_h = E_\\text{red} = E^{\\ominus}_\\text{red} - \\frac{0.05916}{z} \\log\\left(\\frac{\\{C\\}^c\\{D\\}^d}{\\{A\\}^a\\{B\\}^b}\\right) - \\frac{0.05916\\,h}{z} \\text{pH}" }, { "math_id": 33, "text": "-0.05916\\,\\left(\\frac{h}{z}\\right)" }, { "math_id": 34, "text": "E_\\text{red} = E^{\\ominus}_\\text{red} - 0.05916 \\ pH" }, { "math_id": 35, "text": "E_\\text{red} = - 0.05916 \\ pH" }, { "math_id": 36, "text": "E_\\text{red} = 1.229 - 0.05916 \\ pH" }, { "math_id": 37, "text": "E = \\frac{R T}{z F} \\ln\\frac{[\\text{ion outside 
cell}]}{[\\text{ion inside cell}]} = 2.3026\\frac{R T}{z F} \\log_{10}\\frac{[\\text{ion outside cell}]}{[\\text{ion inside cell}]}." }, { "math_id": 38, "text": "E_\\mathrm{m} = \\frac{RT}{F} \\ln{ \\left( \\frac{ \\displaystyle\\sum_i^N P_{\\mathrm{M}^+_i}\\left[\\mathrm{M}^+_i\\right]_\\mathrm{out} + \\displaystyle\\sum_j^M P_{\\mathrm{A}^-_j}\\left[\\mathrm{A}^-_j\\right]_\\mathrm{in}}{ \\displaystyle\\sum_i^N P_{\\mathrm{M}^+_i}\\left[\\mathrm{M}^+_i\\right]_\\mathrm{in} + \\displaystyle\\sum_j^M P_{\\mathrm{A}^-_j}\\left[\\mathrm{A}^-_j\\right]_\\mathrm{out}} \\right) }," }, { "math_id": 39, "text": "E_{m} = \\frac{RT}{F} \\ln{ \\left( \\frac{ rP_{\\mathrm{K}^+}\\left[\\mathrm{K}^+\\right]_\\mathrm{out} + P_{\\mathrm{Na}^+}\\left[\\mathrm{Na}^+\\right]_\\mathrm{out}}{ rP_{\\mathrm{K}^+}\\left[\\mathrm{K}^+\\right]_\\mathrm{in} + P_{\\mathrm{Na}^+}\\left[\\mathrm{Na}^+\\right]_\\mathrm{in}} \\right) }." }, { "math_id": 40, "text": "E_{m} = \\frac{RT}{F} \\ln{ \\left( \\frac{r P_{\\mathrm{K}^+}\\left[\\mathrm{K}^+\\right]_\\mathrm{out} + P_{\\mathrm{Na}^+}\\left[\\mathrm{Na}^+\\right]_\\mathrm{out} + P_{\\mathrm{Cl}^-}\\left[\\mathrm{Cl}^-\\right]_\\mathrm{in}}{r P_{\\mathrm{K}^+}\\left[\\mathrm{K}^+\\right]_\\mathrm{in} + P_{\\mathrm{Na}^+}\\left[\\mathrm{Na}^+\\right]_\\mathrm{in} + P_{\\mathrm{Cl}^-}\\left[\\mathrm{Cl}^-\\right]_\\mathrm{out}} \\right) }." }, { "math_id": 41, "text": "\\begin{align}\n\\frac{[\\mathrm{Red}]}{[\\mathrm{Ox}]}\n&= \\frac{\\exp \\left(-[\\text{barrier for gaining an electron}]/kT\\right)}{\\exp \\left(-[\\text{barrier for losing an electron}]/kT\\right)}\\\\[6px]\n&= \\exp \\left(\\frac{\\mu_\\mathrm{c}}{kT} \\right).\n\\end{align}" }, { "math_id": 42, "text": "\\mu_\\mathrm{c} = kT \\ln \\frac{[\\mathrm{Red}]}{[\\mathrm{Ox}]}." }, { "math_id": 43, "text": "\\mu_\\mathrm{c} = \\mu_\\mathrm{c}^\\ominus + kT \\ln \\frac{[\\mathrm{Red}]}{[\\mathrm{Ox}]}." }, { "math_id": 44, "text": "\\begin{align}\nE &= E^\\ominus - \\frac{kT}{e} \\ln \\frac{[\\mathrm{Red}]}{[\\mathrm{Ox}]} \\\\\n &= E^\\ominus - \\frac{RT}{F} \\ln \\frac{[\\mathrm{Red}]}{[\\mathrm{Ox}]}.\n\\end{align}" }, { "math_id": 45, "text": "S \\ \\stackrel{\\mathrm{def}}{=}\\ k \\ln \\Omega," }, { "math_id": 46, "text": "S = k\\ln \\ (\\mathrm{constant}\\times V) = -k\\ln \\ (\\mathrm{constant}\\times c)." }, { "math_id": 47, "text": "\\Delta S = S_2 - S_1 = - k \\ln \\frac{c_2}{c_1}," }, { "math_id": 48, "text": "S_2 = S_1 - k \\ln \\frac{c_2}{c_1}." }, { "math_id": 49, "text": "S(\\mathrm{A}) = S^\\ominus(\\mathrm{A}) - k \\ln [\\mathrm{A}]," }, { "math_id": 50, "text": "S^\\ominus" }, { "math_id": 51, "text": "\n\\Delta S_\\mathrm{rxn} = \\big(yS(\\mathrm{Y}) + zS(\\mathrm{Z})\\big) - \\big(aS(\\mathrm{A}) + bS(\\mathrm{B})\\big) \n= \\Delta S^\\ominus_\\mathrm{rxn} - k \\ln \\frac{[\\mathrm{Y}]^y [\\mathrm{Z}]^z}{[\\mathrm{A}]^a [\\mathrm{B}]^b}.\n" }, { "math_id": 52, "text": "Q_r = \\frac{\\displaystyle\\prod_j a_j^{\\nu_j}}{\\displaystyle\\prod_i a_i^{\\nu_i}} \\approx \\frac{[\\mathrm{Z}]^z [\\mathrm{Y}]^y}{[\\mathrm{A}]^a [\\mathrm{B}]^b}," }, { "math_id": 53, "text": "\\Delta G = \\Delta H - T \\Delta S = \\Delta G^\\ominus + kT \\ln Q_r," }, { "math_id": 54, "text": "E = E^\\ominus - \\frac{kT}{ze} \\ln Q_r." 
}, { "math_id": 55, "text": "Q_r = \\frac{[\\mathrm{Red}]}{[\\mathrm{Ox}]}," }, { "math_id": 56, "text": "\\begin{align}\nE &= E^\\ominus - \\frac{kT}{ze} \\ln \\frac{[\\mathrm{Red}]}{[\\mathrm{Ox}]} \\\\\n&= E^\\ominus - \\frac{RT}{zF} \\ln \\frac{[\\mathrm{Red}]}{[\\mathrm{Ox}]} \\\\\n&= E^\\ominus - \\frac{RT}{zF} \\ln Q_r.\n\\end{align}" }, { "math_id": 57, "text": "E^\\ominus" }, { "math_id": 58, "text": "E^{\\ominus'}" }, { "math_id": 59, "text": "\\Delta G^\\ominus = -RT \\ln{K}" }, { "math_id": 60, "text": "E^\\ominus_{cell}" }, { "math_id": 61, "text": "\\Delta G^\\ominus = -zF E^\\ominus_{cell}" }, { "math_id": 62, "text": "-zFE^\\ominus_{cell} = -RT \\ln{K}" }, { "math_id": 63, "text": "E^\\ominus_{cell} = \\frac{RT} {zF} \\ln{K}" }, { "math_id": 64, "text": "\\begin{align}\n0 &= E^\\ominus - \\frac{RT}{z F} \\ln K \\\\\n\\frac{RT}{z F} \\ln K & = E^\\ominus \\\\\n\\ln K &= \\frac{z F E^\\ominus}{RT}\n\\end{align}" }, { "math_id": 65, "text": "\\log_{10} K = \\frac{zE^\\ominus}{\\lambda V_T} = \\frac{zE^\\ominus}{0.05916\\text{ V}} \\quad\\text{at }T = 298.15~\\text{K}" } ]
https://en.wikipedia.org/wiki?curid=75873
75874009
Signpost sequence
Generalized rounding rule In mathematics and apportionment theory, a signpost sequence is a sequence of real numbers, called signposts, used in defining generalized rounding rules. A signpost sequence defines a set of "signposts" that mark the boundaries between neighboring whole numbers: a real number less than the signpost is rounded down, while numbers greater than the signpost are rounded up. Signposts allow for a more general concept of rounding than the usual one. For example, the signposts of the rounding rule "always round down" (truncation) are given by the signpost sequence formula_0 Formal definition. Mathematically, a signpost sequence is a "localized" sequence, meaning the formula_1th signpost lies in the formula_1th interval with integer endpoints: formula_2 for all formula_3. This allows us to define a general rounding function using the floor function: formula_4 where exact equality can be handled with any tie-breaking rule, most often by rounding to the nearest even number. Applications. In the context of apportionment theory, signpost sequences are used in defining highest averages methods, a set of algorithms designed to achieve equal representation between different groups. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
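A small sketch of the generalized rounding rule defined above: the rounding function compares a value with the signpost of its integer interval. The two signpost functions shown (truncation and ordinary rounding to the nearest integer) are illustrative choices, and ties are resolved upward here rather than by round-to-even.

```python
# Sketch of the generalized rounding rule above; the signpost functions are
# illustrative choices and ties (x equal to a signpost) are rounded up here.
import math

def signpost_round(x: float, signpost) -> int:
    n = math.floor(x)
    return n if x < signpost(n) else n + 1

def truncation(n: int) -> float:
    return n + 1        # signposts 1, 2, 3, ... : "always round down"

def nearest(n: int) -> float:
    return n + 0.5      # ordinary rounding to the nearest integer

print(signpost_round(2.9, truncation))  # 2
print(signpost_round(2.9, nearest))     # 3
```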
[ { "math_id": 0, "text": "s_0 = 1, s_1 = 2, s_2 = 3 \\dots" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "s_n \\in (n, n+1] " }, { "math_id": 3, "text": "n " }, { "math_id": 4, "text": "\\operatorname{round}(x) = \\begin{cases}\n \\lfloor x \\rfloor & x < s(\\lfloor x \\rfloor) \\\\\n \\lfloor x \\rfloor + 1 & x > s(\\lfloor x \\rfloor)\n\\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=75874009
7588072
Dispersion point
In topology, a dispersion point or explosion point is a point in a topological space the removal of which leaves the space highly disconnected. More specifically, if "X" is a connected topological space containing the point "p" and at least two other points, "p" is a dispersion point for "X" if and only if formula_0 is totally disconnected (every subspace is disconnected, or, equivalently, every connected component is a single point). If "X" is connected and formula_0 is totally separated (for each two points "x" and "y" there exists a clopen set containing "x" and not containing "y") then "p" is an explosion point. A space can have at most one dispersion point or explosion point. Every totally separated space is totally disconnected, so every explosion point is a dispersion point. The Knaster–Kuratowski fan has a dispersion point; any space with the particular point topology has an explosion point. If "p" is an explosion point for a space "X", then the totally separated space formula_0 is said to be "pulverized".
[ { "math_id": 0, "text": "X\\setminus \\{p\\}" } ]
https://en.wikipedia.org/wiki?curid=7588072
75881082
Control dependency
Control dependency is a situation in which a program instruction executes if the previous instruction evaluates in a way that allows its execution. An instruction B has a "control dependency" on a preceding instruction A if the outcome of A determines whether B should be executed or not. In the following example, the instruction formula_0 has a control dependency on instruction formula_1. However, formula_2 does not depend on formula_1 because formula_2 is always executed irrespective of the outcome of formula_1. S1. if (a == b) S2. a = a + b S3. b = a + b Intuitively, there is control dependence between two statements A and B if B could possibly be executed after A, and the outcome of the execution of A determines whether B will be executed or not. A typical example is that there are control dependences between the condition part of an if statement and the statements in its true/false bodies. A formal definition of control dependence can be presented as follows: A statement formula_0 is said to be control dependent on another statement formula_1 iff there exists a path formula_3 from formula_1 to formula_0 such that every statement formula_4 ≠ formula_1 within formula_3 will be followed by formula_0 in each possible path to the end of the program, and formula_1 will not necessarily be followed by formula_0, i.e. there is an execution path from formula_1 to the end of the program that does not go through formula_0. Expressed with the help of (post-)dominance, the two conditions are equivalent to formula_0 post-dominating all formula_4, and formula_0 not post-dominating formula_1. Construction of control dependences. Control dependences are essentially the dominance frontier in the reverse graph of the control-flow graph (CFG). Thus, one way of constructing them would be to construct the post-dominance frontier of the CFG, and then reverse it to obtain a control dependence graph. The following is a pseudo-code for constructing the post-dominance frontier:
for each X in a bottom-up traversal of the post-dominator tree do:
    PostDominanceFrontier(X) ← ∅
    for each Y ∈ Predecessors(X) do:
        if immediatePostDominator(Y) ≠ X:
            then PostDominanceFrontier(X) ← PostDominanceFrontier(X) ∪ {Y}
    done
    for each Z ∈ Children(X) do:
        for each Y ∈ PostDominanceFrontier(Z) do:
            if immediatePostDominator(Y) ≠ X:
                then PostDominanceFrontier(X) ← PostDominanceFrontier(X) ∪ {Y}
        done
    done
done
Here, Children(X) is the set of nodes in the CFG that are immediately post-dominated by X, and Predecessors(X) are the set of nodes in the CFG that directly precede X in the CFG. Note that node X shall be processed only after all its Children have been processed. Once the post-dominance frontier map is computed, reversing it will result in a map from the nodes in the CFG to the nodes that have a control dependence on them. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
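The construction described above can be made concrete with a small, self-contained sketch. The code below computes post-dominator sets with a plain iterative data-flow pass, derives immediate post-dominators, applies the bottom-up frontier pass that mirrors the pseudo-code, and finally reverses the frontier map into control dependences. The graph, node names and helper functions are illustrative, and a single exit node reachable from every node is assumed.

```python
# Illustrative sketch of the construction above (not a reference implementation).
# The CFG is a dict mapping each node to the set of its successors, with a
# single exit node that every node can reach.

def post_dominators(succ, exit_node):
    """Map each node to the set of nodes that post-dominate it (including itself)."""
    nodes = set(succ)
    pdom = {n: ({n} if n == exit_node else set(nodes)) for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            new = {n} | set.intersection(*(pdom[s] for s in succ[n]))
            if new != pdom[n]:
                pdom[n], changed = new, True
    return pdom

def immediate_post_dominator(pdom, n):
    """The strict post-dominator of n that all other strict post-dominators post-dominate."""
    strict = pdom[n] - {n}
    for d in strict:
        if strict <= pdom[d]:
            return d
    return None  # only for the exit node

def post_dominance_frontier(succ, exit_node):
    pdom = post_dominators(succ, exit_node)
    ipdom = {n: immediate_post_dominator(pdom, n) for n in succ}
    children = {n: [m for m in succ if ipdom[m] == n] for n in succ}
    preds = {n: [m for m in succ if n in succ[m]] for n in succ}
    frontier = {n: set() for n in succ}

    def visit(x):  # bottom-up: process children in the post-dominator tree first
        for z in children[x]:
            visit(z)
        for y in preds[x]:
            if ipdom[y] != x:
                frontier[x].add(y)
        for z in children[x]:
            for y in frontier[z]:
                if ipdom[y] != x:
                    frontier[x].add(y)

    visit(exit_node)
    return frontier

def control_dependence_map(succ, exit_node):
    """Reverse the frontier: map each node to the nodes that are control dependent on it."""
    dependents = {n: set() for n in succ}
    for x, ys in post_dominance_frontier(succ, exit_node).items():
        for y in ys:
            dependents[y].add(x)  # x is control dependent on y
    return dependents

# The S1/S2/S3 example from the text: S2 runs only if the test in S1 holds.
cfg = {"S1": {"S2", "S3"}, "S2": {"S3"}, "S3": {"exit"}, "exit": set()}
print(control_dependence_map(cfg, "exit"))  # S1 -> {S2}; S2, S3, exit -> empty
```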
[ { "math_id": 0, "text": "S_2" }, { "math_id": 1, "text": "S_1" }, { "math_id": 2, "text": "S_3" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "S_i" } ]
https://en.wikipedia.org/wiki?curid=75881082
75889002
Rank-index method
Class of apportionment methods In apportionment theory, rank-index methods are a set of apportionment methods that generalize the divisor method. These have also been called Huntington methods, since they generalize an idea by Edward Vermilye Huntington. Input and output. Like all apportionment methods, the inputs of any rank-index method are the number of items to allocate, denoted by formula_0; the number of agents to which items should be allocated, denoted by formula_1; and a vector of entitlements formula_2 with formula_3, where formula_4 represents the entitlement of agent formula_5. Its output is a vector of integers formula_6 with formula_7, called an apportionment of formula_0, where formula_8 is the number of items allocated to agent "i". Iterative procedure. Every rank-index method is parametrized by a "rank-index function" formula_9, which is increasing in the entitlement "formula_10" and decreasing in the current allocation formula_11. The apportionment is computed iteratively as follows: initially, set formula_8 to 0 for every agent; then, at each of formula_0 iterations, allocate one item to an agent for whom formula_12 is currently maximal (breaking ties arbitrarily). Divisor methods are a special case of rank-index methods: a divisor method with divisor function formula_13 is equivalent to a rank-index method with rank-index function formula_14. Min-max formulation. Every rank-index method can be defined using a min-max inequality: a is an allocation for the rank-index method with function "r", if and only if: formula_15. Properties. Every rank-index method is "house-monotone". This means that, when formula_0 increases, the allocation of each agent weakly increases. This immediately follows from the iterative procedure. Every rank-index method is "uniform". This means that, if we take some subset of the agents formula_16, and apply the same method to their combined allocation, then the result is exactly the vector formula_17. In other words: every part of a fair allocation is fair too. This immediately follows from the min-max inequality. Moreover: Quota-capped divisor methods. A "quota-capped divisor method" is an apportionment method where we begin by assigning every state its lower quota of seats. Then, we add seats one-by-one to the state with the highest votes-per-seat average, so long as adding an additional seat does not result in the state exceeding its upper quota. Every quota-capped divisor method satisfies house monotonicity. Moreover, quota-capped divisor methods satisfy the quota rule. However, quota-capped divisor methods violate the participation criterion (also called population monotonicity)—it is possible for a party to "lose" a seat as a result of winning "more" votes. This occurs when: Moreover, quota-capped versions of other algorithms frequently violate the true quota in the presence of error (e.g. census miscounts). Jefferson's method frequently violates the true quota, even after being quota-capped, while Webster's method and Huntington-Hill perform well even without quota-caps. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
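A short sketch of the iterative procedure described above, using the rank-index function r(t, a) = t/(a + 1), i.e. the divisor method with d(a) = a + 1 (the Jefferson/D'Hondt rule); the entitlements, house size and function names are illustrative choices, not taken from the article.

```python
# Sketch of the iterative rank-index procedure described above, with the
# rank-index function r(t, a) = t / (a + 1) (the divisor method d(a) = a + 1).

def rank_index_apportionment(entitlements, house_size, rank_index):
    allocation = [0] * len(entitlements)
    for _ in range(house_size):
        # give the next item to an agent with the highest rank-index value
        i = max(range(len(entitlements)),
                key=lambda j: rank_index(entitlements[j], allocation[j]))
        allocation[i] += 1
    return allocation

def dhondt(t, a):
    return t / (a + 1)

# Three agents with entitlements 0.50, 0.30, 0.20 sharing 7 items.
print(rank_index_apportionment([0.50, 0.30, 0.20], 7, dhondt))  # [4, 2, 1]
```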
[ { "math_id": 0, "text": "h" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "(t_1,\\ldots,t_n)" }, { "math_id": 3, "text": "\\sum_{i=1}^n t_i = 1" }, { "math_id": 4, "text": "t_i" }, { "math_id": 5, "text": "i" }, { "math_id": 6, "text": "a_1,\\ldots,a_n" }, { "math_id": 7, "text": "\\sum_{i=1}^n a_i = h" }, { "math_id": 8, "text": "a_i" }, { "math_id": 9, "text": "r(t,a)" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "a" }, { "math_id": 12, "text": "r(t_i,a_i)" }, { "math_id": 13, "text": "d(a)" }, { "math_id": 14, "text": "r(t,a) = t/d(a)" }, { "math_id": 15, "text": "\\min_{i: a_i > 0} r(t_i, a_i-1) \\geq \\max_{i} r(t_i, a_i)" }, { "math_id": 16, "text": "1,\\ldots,k" }, { "math_id": 17, "text": "(a_1,\\ldots,a_k)" } ]
https://en.wikipedia.org/wiki?curid=75889002
758911
Richard S. Hamilton
American mathematician (born 1943) Richard Streit Hamilton (born 10 January 1943) is an American mathematician who serves as the Davies Professor of Mathematics at Columbia University. He is known for contributions to geometric analysis and partial differential equations. Hamilton is best known for foundational contributions to the theory of the Ricci flow and the development of a corresponding program of techniques and ideas for resolving the Poincaré conjecture and geometrization conjecture in the field of geometric topology. Grigori Perelman built upon Hamilton's results to prove the conjectures, and was awarded a Millennium Prize for his work. However, Perelman declined the award, regarding Hamilton's contribution as being equal to his own. Biography. Hamilton received his B.A. in 1963 from Yale University and Ph.D. in 1966 from Princeton University. Robert Gunning supervised his thesis. He has taught at University of California, Irvine, University of California, San Diego, Cornell University, and Columbia University. Hamilton's mathematical contributions are primarily in the field of differential geometry and more specifically geometric analysis. He is best known for having discovered the Ricci flow and starting a research program that ultimately led to the proof, by Grigori Perelman, of William Thurston's geometrization conjecture and the Poincaré conjecture. For his work on the Ricci flow, Hamilton was awarded the Oswald Veblen Prize in Geometry in 1996 and the Clay Research Award in 2003. He was elected to the National Academy of Sciences in 1999 and the American Academy of Arts and Sciences in 2003. He also received the AMS Leroy P. Steele Prize for Seminal Contribution to Research in 2009, for his 1982 article "Three-manifolds with positive Ricci curvature", in which he introduced and analyzed the Ricci flow.[H82b] In March 2010, the Clay Mathematics Institute, having listed the Poincaré conjecture among their Millennium Prize Problems, awarded Perelman with one million USD for his 2003 proof of the conjecture. In July 2010, Perelman turned down the award and prize money, saying that he believed his contribution in proving the Poincaré conjecture was no greater than that of Hamilton, who had developed the program for the solution. In June 2011, it was announced that the million-dollar Shaw Prize would be split equally between Hamilton and Demetrios Christodoulou "for their highly innovative works on nonlinear partial differential equations in Lorentzian and Riemannian geometry and their applications to general relativity and topology." In 2022, Hamilton joined University of Hawaiʻi at Mānoa as an adjunct professor. Mathematical work. As of 2022, Hamilton has been the author of forty-six research articles, around forty of which are in the field of geometric flows. Harnack inequalities for heat equations. In 1986, Peter Li and Shing-Tung Yau discovered a new method for applying the maximum principle to control the solutions of the heat equation. Among other results, they showed that if one has a positive solution u of the heat equation on a closed Riemannian manifold of nonnegative Ricci curvature, then one has formula_0 for any tangent vector v. Such inequalities, known as "differential Harnack inequalities" or "Li–Yau inequalities," are useful since they can be integrated along paths to compare the values of u at any two spacetime points. They also directly give pointwise information about u, by taking v to be zero. 
In 1993, Hamilton showed that the computations of Li and Yau could be extended to show that their differential Harnack inequality was a consequence of a stronger matrix inequality.[H93a] His result required the closed Riemannian manifold to have nonnegative sectional curvature and parallel Ricci tensor (such as the flat torus or the Fubini–Study metric on complex projective space), in the absence of which he obtained a slightly weaker result. Such matrix inequalities are sometimes known as "Li–Yau–Hamilton inequalities". Hamilton also discovered that the Li–Yau methodology could be adapted to the Ricci flow. In the case of two-dimensional manifolds, he found that the computation of Li and Yau can be directly adapted to the scalar curvature along the Ricci flow.[H88] In general dimensions, he showed that the Riemann curvature tensor satisfies a complicated inequality, formally analogous to his matrix extension of the Li–Yau inequality, in the case that the curvature operator is nonnegative.[H93b] As an immediate algebraic consequence, the scalar curvature satisfies an inequality which is almost identical to that of Li and Yau. This fact is used extensively in Hamilton and Perelman's further study of Ricci flow. Hamilton later adapted his Li–Yau estimate for the Ricci flow to the setting of the mean curvature flow, which is slightly simpler since the geometry is governed by the second fundamental form, which has a simpler structure than the Riemann curvature tensor.[H95c] Hamilton's theorem, which requires strict convexity, is naturally applicable to certain singularities of mean curvature flow due to the convexity estimates of Gerhard Huisken and Carlo Sinestrari. Nash–Moser theorem. In 1956, John Nash resolved the problem of smoothly isometrically embedding Riemannian manifolds in Euclidean space. The core of his proof was a novel "small perturbation" result, showing that if a Riemannian metric could be isometrically embedded in a certain way, then any nearby Riemannian metric could be isometrically embedded as well. Such a result is highly reminiscent of an implicit function theorem, and many authors have attempted to put the logic of the proof into the setting of a general theorem. Such theorems are now known as Nash–Moser theorems. In 1982, Hamilton published his formulation of Nash's reasoning, casting the theorem into the setting of "tame Fréchet spaces"; Nash's fundamental use of restricting the Fourier transform to regularize functions was abstracted by Hamilton to the setting of exponentially decreasing sequences in Banach spaces.[H82a] His formulation has been widely quoted and used ever since. He used it himself to prove a general existence and uniqueness theorem for geometric evolution equations; the standard implicit function theorem does not often apply in such settings due to the degeneracies introduced by invariance under the action of the diffeomorphism group.[H82b] In particular, the well-posedness of the Ricci flow follows from Hamilton's general result. Although Dennis DeTurck gave a simpler proof in the particular case of the Ricci flow, Hamilton's result has been used for some other geometric flows for which DeTurck's method is inaccessible. Harmonic map heat flow. In 1964, James Eells and Joseph Sampson initiated the study of harmonic map heat flow, using a convergence theorem for the flow to show that any smooth map from a closed manifold to a closed manifold of nonpositive curvature can be deformed to a harmonic map.
In 1975, Hamilton considered the corresponding boundary value problem for this flow, proving a result analogous to Eells and Sampson's for the Dirichlet condition and Neumann condition.[H75] The analytic nature of the problem is more delicate in this setting, since Eells and Sampson's key application of the maximum principle to the parabolic Bochner formula cannot be trivially carried out, due to the fact that the size of the gradient at the boundary is not automatically controlled by the boundary conditions. By taking limits of Hamilton's solutions of the boundary value problem for increasingly large boundaries, Richard Schoen and Shing-Tung Yau observed that a finite-energy map from a complete Riemannian manifold to a closed Riemannian manifold of nonpositive curvature could be deformed into a harmonic map of finite energy. By proving extensions of Eells and Sampson's vanishing theorem in various geometric settings, they were able to draw striking geometric conclusions, such as that if ("M", "g") is a complete Riemannian manifold of nonnegative Ricci curvature, then for any precompact open set D with smooth and simply-connected boundary, there cannot exist a nontrivial homomorphism from the fundamental group of D into any group which is the fundamental group of a closed Riemannian manifold of nonpositive curvature. Mean curvature flow. In 1986, Hamilton and Michael Gage applied Hamilton's Nash–Moser theorem and well-posedness result for parabolic equations to prove the well-posedness for mean curvature flow; they considered the general case of a one-parameter family of immersions of a closed manifold into a smooth Riemannian manifold.[GH86] Then, they specialized to the case of immersions of the circle "S"1 into the two-dimensional Euclidean space ℝ2, which is the simplest context for curve shortening flow. Using the maximum principle as applied to the distance between two points on a curve, they proved that if the initial immersion is an embedding, then all future immersions in the mean curvature flow are embeddings as well. Furthermore, convexity of the curves is preserved into the future. Gage and Hamilton's main result is that, given any smooth embedding "S"1 → ℝ2 which is convex, the corresponding mean curvature flow exists for a finite amount of time, and as the time approaches its maximal value, the curves asymptotically become increasingly small and circular.[GH86] They made use of previous results of Gage, as well as a few special results for curves, such as Bonnesen's inequality. In 1987, Matthew Grayson proved a complementary result, showing that for any smooth embedding "S"1 → ℝ2, the corresponding mean curvature flow eventually becomes convex. In combination with Gage and Hamilton's result, one has essentially a complete description of the asymptotic behavior of the mean curvature flow of embedded circles in ℝ2. This result is sometimes known as the Gage–Hamilton–Grayson theorem. It is somewhat surprising that there is such a systematic and geometrically defined means of deforming an arbitrary loop in ℝ2 into a round circle. The modern understanding of the results of Gage–Hamilton and of Grayson usually treats both settings at once, without the need for showing that arbitrary curves become convex and separately studying the behavior of convex curves. Their results can also be extended to settings other than the mean curvature flow. Ricci flow.
Hamilton extended the maximum principle for parabolic partial differential equations to the setting of symmetric 2-tensors which satisfy a parabolic partial differential equation.[H82b] He also put this into the general setting of a parameter-dependent section of a vector bundle over a closed manifold which satisfies a heat equation, giving both strong and weak formulations.[H86] Partly due to these foundational technical developments, Hamilton was able to give an essentially complete understanding of how Ricci flow behaves on three-dimensional closed Riemannian manifolds of positive Ricci curvature[H82b] and nonnegative Ricci curvature[H86], four-dimensional closed Riemannian manifolds of positive or nonnegative curvature operator[H86], and two-dimensional closed Riemannian manifolds of nonpositive Euler characteristic or of positive curvature[H88]. In each case, after appropriate normalizations, the Ricci flow deforms the given Riemannian metric to one of constant curvature. This has strikingly simple immediate corollaries, such as the fact that any closed smooth 3-manifold which admits a Riemannian metric of positive curvature also admits a Riemannian metric of constant positive sectional curvature. Such results are notable in highly restricting the topology of such manifolds; the space forms of positive curvature are largely understood. There are other corollaries, such as the fact that the topological space of Riemannian metrics of positive Ricci curvature on a closed smooth 3-manifold is path-connected. These "convergence theorems" of Hamilton have been extended by later authors, in the 2000s, to give a proof of the differentiable sphere theorem, which had been a major conjecture in Riemannian geometry since the 1960s. In 1995, Hamilton extended Jeff Cheeger's compactness theory for Riemannian manifolds to give a compactness theorem for sequences of Ricci flows.[H95a] Given a Ricci flow on a closed manifold with a finite-time singularity, Hamilton developed methods of rescaling around the singularity to produce a sequence of Ricci flows; the compactness theory ensures the existence of a limiting Ricci flow, which models the small-scale geometry of a Ricci flow around a singular point.[H95b] Hamilton used his maximum principles to prove that, for any Ricci flow on a closed three-dimensional manifold, the smallest value of the sectional curvature is small compared to its largest value. This is known as the Hamilton–Ivey estimate; it is extremely significant as a curvature inequality which holds with no conditional assumptions beyond three-dimensionality. An important consequence is that, in three dimensions, a limiting Ricci flow as produced by the compactness theory automatically has nonnegative curvature.[H95b] As such, Hamilton's Harnack inequality is applicable to the limiting Ricci flow. These methods were extended by Grigori Perelman, who due to his "noncollapsing theorem" was able to apply Hamilton's compactness theory in a number of extended contexts. In 1997, Hamilton was able to combine the methods he had developed to define "Ricci flow with surgery" for four-dimensional Riemannian manifolds of positive isotropic curvature.[H97] For Ricci flows with initial data in this class, he was able to classify the possibilities for the small-scale geometry around points with large curvature, and hence to systematically modify the geometry so as to continue the Ricci flow. 
As a consequence, he obtained a result which classifies the smooth four-dimensional manifolds which support Riemannian metrics of positive isotropic curvature. Shing-Tung Yau has described this article as the "most important event" in geometric analysis in the period after 1993, marking it as the point at which it became clear that it could be possible to prove Thurston's geometrization conjecture by Ricci flow methods. The essential outstanding issue was to carry out an analogous classification, for the small-scale geometry around high-curvature points on Ricci flows on three-dimensional manifolds, without any curvature restriction; the Hamilton–Ivey curvature estimate is the analogue to the condition of positive isotropic curvature. This was resolved by Grigori Perelman in his renowned "canonical neighborhoods theorem." Building off of this result, Perelman modified the form of Hamilton's surgery procedure to define a "Ricci flow with surgery" given an arbitrary smooth Riemannian metric on a closed three-dimensional manifold. This led to the resolution of the geometrization conjecture in 2003. Other work. In one of his earliest works, Hamilton proved the Earle–Hamilton fixed point theorem in collaboration with Clifford Earle.[EH70] In unpublished lecture notes from the 1980s, Hamilton introduced the Yamabe flow and proved its long-time existence. In collaboration with Shiing-Shen Chern, Hamilton studied certain variational problems for Riemannian metrics in contact geometry. He also made contributions to the prescribed Ricci curvature problem. Major publications. &lt;templatestyles src="Refbegin/styles.css" /&gt; The collection contains twelve of Hamilton's articles on Ricci flow, in addition to ten related articles by other authors. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. Media related to at Wikimedia Commons
[ { "math_id": 0, "text": "\\frac{\\partial u}{\\partial t}+\\frac{u}{2t}+2du(v)+u|v|_g^2\\geq 0" } ]
https://en.wikipedia.org/wiki?curid=758911
75893889
Plasmalysis
Chemical process in a plasma Plasmalysis is an electrochemical process that requires a voltage source. On the one hand, it describes the plasma-chemical dissociation of organic and inorganic compounds (e.g. C-H and N-H compounds) in interaction with a thermal/non-thermal plasma between two electrodes. On the other hand, it describes the synthesis, i.e. the combination of two or more elements to form a new molecule (e.g. methane synthesis/methanation). Plasmalysis is an artificial word formed from plasma and lysis (Greek λύσις, "dissolution"). Thermal/non-thermal plasma. Thermal plasmas can be technically generated, for example, by inductive coupling of high-frequency fields in the MHz range (ICP: Inductively coupled plasma) or by direct current coupling (arc discharges). A thermal plasma is characterized by the fact that electrons, ions and neutral particles are in thermodynamic equilibrium. For atmospheric-pressure plasmas, the temperatures in thermal plasmas are usually above 6000 K. This corresponds to average kinetic energies of less than 1 eV. Nonthermal plasmas are found in low-pressure arc discharges, such as fluorescent lamps, in dielectric barrier discharges (DBD), such as ozone tubes, in microwave plasmas (plasma torches, e.g. PLexc or MagJet) or in GHz plasma jets. A non-thermal plasma shows a significant difference between the electron and gas temperature. For example, the electron temperature can be several 10,000 K, which corresponds to average kinetic energies of more than 1 eV, while a gas temperature close to room temperature is measured. Despite their low temperature, such plasmas can trigger chemical reactions and excitation states via electron collisions. Pulsed corona and dielectric barrier discharges belong to the family of nonthermal plasmas. Here the electrons are much hotter (several eV) than the ions/neutral gas particles (room temperature). Technical aspects. To generate a nonthermal plasma at atmospheric pressure, a working gas (molecular or inert gas, e.g. air, nitrogen, argon, helium) is passed through an electric field. Electrons originating from ionization processes can be accelerated in this field to trigger impact ionization processes. If more free electrons are produced during this process than are lost, a discharge can build up. The degree of ionization in technically used plasmas is usually very low, typically a few per mille or less. The electrical conductivity generated by these free charge carriers is used to couple in electrical power. When colliding with other gas atoms or molecules, the free electrons can transfer their energy to them and thus generate highly reactive species that act on the material to be treated (gaseous, liquid, solid). The electron energy is sufficient to split covalent bonds in organic molecules. The energy required to split single bonds is in the range of about 1.5 - 6.2 eV, for double bonds in the range of about 4.4 - 7.4 eV, and for triple bonds in the range of 8.5 - 11.2 eV. For gases that can also be used as process gases, dissociation energies are e.g. 5.7 eV (O2) and 9.8 eV (N2). Applications of atmospheric pressure plasmas. Atmospheric-pressure plasmas have been used for a variety of industrial applications, including volatile organic compound (VOC) removal, exhaust gas emission treatment and polymer surface and food treatment. For decades, non-thermal plasmas have also been used to generate ozone for water purification.
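The temperature–energy correspondences quoted above (around 6000 K corresponding to somewhat less than 1 eV, and several tens of thousands of kelvin to more than 1 eV) follow from the relation ⟨E⟩ = (3/2)kBT for the mean translational kinetic energy of a particle. The following short Python sketch is an added illustration, not part of the original text; it simply makes the conversion explicit:

```python
# Quick check of the temperature <-> mean kinetic energy correspondence used
# in this section, via <E> = (3/2) * k_B * T (translational motion only).
k_B = 8.617333e-5  # Boltzmann constant in eV/K

def mean_kinetic_energy_eV(temperature_kelvin):
    return 1.5 * k_B * temperature_kelvin

for T in (300, 6000, 20000):  # room temperature, thermal plasma, hot electrons
    print(f"T = {T:>6} K  ->  <E> ~ {mean_kinetic_energy_eV(T):.2f} eV")
# prints roughly 0.04 eV, 0.78 eV and 2.6 eV, consistent with the text above
```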
Atmospheric pressure plasmas can be characterized primarily by a large number of electrical discharges in which the majority of the electrical energy is used to generate energetic electrons. These energetic electrons produce chemically excited species - free radicals and ions - and additional electrons by dissociation, excitation and ionization of background gas molecules by electron impact. These excited species in turn oxidize, reduce or decompose the molecules of the media, such as wastewater or biomethane, that are brought into contact with them. Part of the electrical energy is converted into chemical energy. Plasmalysis can thus be used to store energy, for example in the plasmalysis of ammonium from wastewater or liquid fermentation residue, which produces hydrogen and nitrogen. The hydrogen thus produced can serve as an energy carrier for a hydrogen economy. Dissociation mechanisms of gases and liquids. In the following section XH stands for any hydrogen compound, e.g. CH- and NH-compounds. The density of radicals scales with the electron density and with higher gas and electron temperatures (thermal dissociation and electron impact). This process generates negative ions as well as neutral particles. The colliding electron is captured upon collisional excitation. The energy difference between the ground state and the excited state dissociates the molecule. The electron-induced dissociation of water depends on the electron temperature, which significantly influences the ratio of the OH density (n_OH) to the electron density (n_e). The maximum OH density is reached in the early afterglow when the electron temperature (T_e) is low. Dissociation efficiency of different hydrogen sources. Water electrolysis. Since the focus is always on the most energy-efficient dissociation of chemical compounds, the benchmark is the energy input of the electrolysis of distilled water (45 kWh/kgH2) as in the following reaction equation: formula_3 Methane-plasmalysis. A particularly efficient way of generating hydrogen (10 kWh/kgH2) is methane plasmalysis. In this process, methane (e.g. from natural gas) is decomposed in the plasma under oxygen exclusion, forming hydrogen and elemental carbon, as in the following reaction equation: formula_4 Methane plasmalysis offers, among other things, the possibility of decentralized decarbonization of natural gas or, if biogas is used, the realization of a CO2 sink. In contrast to the CCS process commonly used to date, no gas has to be compressed and stored; instead, the elemental carbon produced can be bound in product form. This technology can also be used to prevent the flaring of so-called "flare gases" by using them as a feedstock for the production of hydrogen and carbon. Wastewater-plasmalysis. The plasmalysis of wastewater and liquid manure enables hydrogen to be recovered from pollutants contained in the wastewater (ammonium (NH4) or hydrocarbon compounds (COD)). The plasma-catalytic decomposition of ammonia takes place as shown in the following reaction equation: formula_5 The treated wastewater is purified in the process. The energy requirement for the production of green hydrogen is approx. 12 kWh/kgH2. This technology can also be used as an ammonia cracking technology for splitting the hydrogen carrier ammonia. Dissociation of hydrogen sulfide. Hydrogen sulfide - a component of crude oil and natural gas and a by-product in anaerobic digestion of biomass - is also suitable for plasma-catalytic decomposition to produce hydrogen and elemental sulfur due to its weak binding energy, as in the following reaction equation: formula_6
The energy requirement for the production of hydrogen from H2S is approx. 5 kWh/kgH2. Reactor geometry. It is apparent that both the reactor geometry and the method by which the plasma is generated strongly influence the performance of the system.
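As a rough cross-check of the specific energy figures quoted in this section, the reaction enthalpies given above can be converted into a thermodynamic minimum energy per kilogram of hydrogen. The Python sketch below is an added illustration and rests on assumptions: it reads the quoted enthalpy magnitudes as 286 kJ per mol of H2O, 75 kJ per mol of CH4, 92 kJ per 2 mol of NH3 and 20.5 kJ per mol of H2S (consistent with standard formation enthalpies), and it ignores all plasma and conversion losses, which is why the practical figures cited in the text are higher:

```python
# Thermodynamic minimum specific energy per kg of hydrogen implied by the
# reaction enthalpies quoted above, compared with the practical kWh/kgH2
# figures cited in the text. Values and their interpretation are assumptions.
M_H2 = 2.016  # g/mol, molar mass of H2

# route: (|reaction enthalpy| in kJ, moles of H2 released, quoted kWh/kgH2)
routes = {
    "water electrolysis (2 H2O -> O2 + 2 H2)":    (2 * 286.0, 2, 45.0),
    "methane plasmalysis (2 CH4 -> C2 + 4 H2)":   (2 * 75.0, 4, 10.0),
    "ammonia decomposition (2 NH3 -> N2 + 3 H2)": (92.0, 3, 12.0),
    "hydrogen sulfide (H2S -> H2 + S)":           (20.5, 1, 5.0),
}

for name, (dH_kJ, n_H2, quoted) in routes.items():
    kJ_per_g = dH_kJ / (n_H2 * M_H2)
    kWh_per_kg = kJ_per_g * 1000 / 3600  # 1 kWh = 3600 kJ
    print(f"{name}: minimum ~{kWh_per_kg:.1f} kWh/kgH2, quoted ~{quoted} kWh/kgH2")
```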
[ { "math_id": 0, "text": "e + XH(s,l,g) \\rightarrow H(g) + X(s,l,g) " }, { "math_id": 1, "text": "A^+ + XH(g) \\rightarrow A^+ + H + X(s,l,g)\n" }, { "math_id": 2, "text": "e + XH^* \\rightarrow X^-(s,l,g) + H(g)" }, { "math_id": 3, "text": " \\mathrm{ 2\\ H_2O(f) \\rightleftharpoons O_2(g) + 2\\ H_2(g)} \\qquad \\Delta H_{R\\ 298}^{1bar(a)} = -286\\ \\mathrm{kJ/mol}" }, { "math_id": 4, "text": " \\mathrm{ 2\\ CH_4(g) \\rightleftharpoons C_2(f) + 4\\ H_2(g)} \\qquad \\Delta H_{R\\ 298}^{1bar(a)} = -75\\ \\mathrm{kJ/mol}" }, { "math_id": 5, "text": " \\mathrm{ 2\\ NH_3(f) \\rightleftharpoons N_2(g) + 3\\ H_2(g)} \\qquad \\Delta H_{R\\ 298}^{1bar(a)} = -92\\ \\mathrm{kJ/mol}" }, { "math_id": 6, "text": " \\mathrm{ H_2S(g) \\rightleftharpoons H_2(g) + S(s)} \\qquad \\Delta H_{R\\ 298}^{1bar(a)} = -20,5\\ \\mathrm{kJ/mol}" } ]
https://en.wikipedia.org/wiki?curid=75893889
7589621
Material selection
Step in the process of designing physical objects Material selection is a step in the process of designing any physical object. In the context of product design, the main goal of material selection is to minimize cost while meeting product performance goals. Systematic selection of the best material for a given application begins with properties and costs of candidate materials. Material selection often benefits from the use of a material index or performance index relevant to the desired material properties. For example, a thermal blanket must have poor thermal conductivity in order to minimize heat transfer for a given temperature difference. It is essential that a designer should have a thorough knowledge of the properties of the materials and their behavior under working conditions. Some of the important characteristics of materials are: strength, durability, flexibility, weight, resistance to heat and corrosion, ability to be cast, welded or hardened, machinability, electrical conductivity, etc. In contemporary design, sustainability is a key consideration in material selection. Growing environmental consciousness prompts professionals to prioritize factors such as ecological impact, recyclability, and life cycle analysis in their decision-making process. Systematic selection for applications requiring multiple criteria is more complex. For example, when the material should be both stiff and light, for a rod a combination of high Young's modulus and low density indicates the best material, whereas for a plate the "cube root" of stiffness divided by density formula_0 is the best indicator, since a plate's bending stiffness scales with its thickness cubed. Similarly, again considering both stiffness and lightness, for a rod that will be pulled in tension the specific modulus, or modulus divided by density formula_1, should be considered, whereas for a beam that will be subject to bending, the material index formula_2 is the best indicator. Reality often presents limitations, and the utilitarian factor must be taken into consideration. The cost of the ideal material, depending on shape, size and composition, may be prohibitive, and the demand, the commonality of frequently utilized and known items, its characteristics and even the region of the market dictate its availability. Ashby plots. An Ashby plot, named for Michael Ashby of Cambridge University, is a scatter plot which displays two or more properties of many materials or classes of materials. These plots are useful for comparing the ratio between different properties. For the example of the stiff/light part discussed above, the plot would have Young's modulus on one axis and density on the other axis, with one data point on the graph for each candidate material. On such a plot, it is easy to find not only the material with the highest stiffness, or that with the lowest density, but that with the best ratio formula_1. Using a log scale on both axes facilitates selection of the material with the best plate stiffness formula_0. The first plot on the right shows density and Young's modulus on a linear scale. The second plot shows the same material attributes on a log-log scale. Materials families (polymers, foams, metals, etc.) are identified by colors. Cost issues. Cost of materials plays a very significant role in their selection. The most straightforward way to weight cost against properties is to develop a monetary metric for properties of parts.
For example, life cycle assessment can show that the net present value of reducing the weight of a car by 1 kg averages around $5, so material substitution which reduces the weight of a car can cost up to $5 per kilogram of weight reduction more than the original material. However, the geography- and time-dependence of energy, maintenance and other operating costs, and variation in discount rates and usage patterns (distance driven per year in this example) between individuals, means that there is no single correct number for this. For commercial aircraft, this number is closer to $450/kg, and for spacecraft, launch costs around $20,000/kg dominate selection decisions. Thus as energy prices have increased and technology has improved, automobiles have substituted increasing amounts of lightweight magnesium and aluminium alloys for steel, aircraft are substituting carbon fiber reinforced plastic and titanium alloys for aluminium, and satellites have long been made out of exotic composite materials. Of course, cost per kg is not the only important factor in material selection. An important concept is 'cost per unit of function'. For example, if the key design objective was the stiffness of a plate of the material, as described in the introductory paragraph above, then the designer would need a material with the optimal combination of density, Young's modulus, and price. Optimizing complex combinations of technical and price properties is a hard process to achieve manually, so rational material selection software is an important tool. General method for using an Ashby chart. Utilizing an "Ashby chart" is a common method for choosing the appropriate material. First, three different sets of variables are identified: Next, an equation for the performance index is derived. This equation numerically quantifies how desirable the material will be for a specific situation. By convention, a higher performance index denotes a better material. Lastly, the performance index is plotted on the Ashby chart. Visual inspection reveals the most desirable material. Example of using an Ashby chart. In this example, the material will be subject to both tension and bending. Therefore, the optimal material will perform well under both circumstances. Performance index during tension. In the first situation the beam experiences two forces: the weight of gravity formula_3 and tension formula_4. The material variables are density formula_5 and strength formula_6. Assume that the length formula_7 and tension formula_4 are fixed, making them design variables. Lastly the cross sectional area formula_8 is a free variable. The objective in this situation is to minimize the weight formula_3 by choosing a material with the best combination of material variables formula_9. Figure 1 illustrates this loading. The stress in the beam is measured as formula_10 whereas weight is described by formula_11. Deriving a performance index requires that all free variables are removed, leaving only design variables and material variables. In this case that means that formula_8 must be removed. The axial stress equation can be rearranged to give formula_12. Substituting this into the weight equation gives formula_13. Next, the material variables and design variables are grouped separately, giving formula_14. Since both formula_7 and formula_4 are fixed, and since the goal is to minimize formula_3, then the ratio formula_15 should be minimized. By convention, however, the performance index is always a quantity which should be maximized. 
Therefore, the resulting equation is formula_16 Performance index during bending. Next, suppose that the material is also subjected to bending forces. The max tensile stress equation of bending is formula_17, where formula_18 is the bending moment, formula_19 is the distance from the neutral axis, and formula_20 is the moment of inertia. This is shown in Figure 2. Using the weight equation above and solving for the free variables, the solution arrived at is formula_21, where formula_7 is the length and formula_22 is the height of the beam. Assuming that formula_22, formula_7, and formula_18 are fixed design variables, the performance index for bending becomes formula_23. Selecting the best material overall. At this point, two performance indices have been derived: formula_24 for tension and formula_25 for bending. The first step is to create a log-log plot and add all known materials in the appropriate locations. However, the performance index equations must be modified before being plotted on the log-log graph. For the tension performance equation formula_26, the first step is to take the log of both sides. The resulting equation can be rearranged to give formula_27. Note that this follows the format of formula_28, making it linear on a log-log graph. The y-intercept is the log of formula_29. Thus, the fixed value of formula_29 for tension in Figure 3 is 0.1. The bending performance equation formula_23 can be treated similarly. Using the power property of logarithms it can be derived that formula_30. The value of formula_29 for bending is ≈ 0.0316 in Figure 3. Finally, both lines are plotted on the Ashby chart. First, the best bending materials can be found by examining which regions are higher on the graph than the formula_25 bending line. In this case, some of the foams (blue) and technical ceramics (pink) are higher than the line. Therefore, those would be the best bending materials. In contrast, materials which are far below the line (like metals in the bottom-right of the gray region) would be the worst materials. Lastly, the formula_24 tension line can be used to "break the tie" between foams and technical ceramics. Since technical ceramics are the only materials located higher than the tension line, the best-performing tension materials are technical ceramics. Therefore, the overall best material is a technical ceramic in the top-left of the pink region, such as boron carbide. Numerically understanding the chart. The performance index can then be plotted on the Ashby chart by converting the equation to a log scale. This is done by taking the log of both sides and plotting it as a line with formula_31 as the y-axis intercept. This means that the higher the intercept, the higher the performance of the material. By moving the line up the Ashby chart, the performance index gets higher. Each material the line passes through has the performance index listed on the y-axis. So, the highest-performing materials are found by moving the line toward the top of the chart while it still touches a region of material. As seen in Figure 3, the two lines intersect near the top of the graph at technical ceramics and composites. This will give a performance index of 120 for tensile loading and 15 for bending. When the cost of the engineering ceramics is taken into consideration, especially since the intersection is around boron carbide, this would not be the optimal choice.
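The graphical selection just described can also be carried out numerically: compute each candidate's performance indices and sort. The following Python sketch is an added illustration; the strength and density values are rough placeholders chosen for the example, not data read off the Ashby chart of Figure 3, so the resulting ranking need not match the figure:

```python
# Rank a handful of candidate materials by the two performance indices
# derived above: P_tension = sigma/rho and P_bending = sqrt(sigma)/rho.
# Property values are rough, order-of-magnitude placeholders only.
candidates = {
    # name: (strength sigma in MPa, density rho in Mg/m^3)
    "boron carbide (technical ceramic)": (400.0, 2.5),
    "CFRP (engineering composite)":      (800.0, 1.6),
    "structural steel":                  (350.0, 7.8),
    "rigid polymer foam":                (1.0,   0.1),
}

def tension_index(sigma, rho):
    return sigma / rho

def bending_index(sigma, rho):
    return sigma ** 0.5 / rho

# Sort by the bending index, as in the beam example above.
for name, (sigma, rho) in sorted(candidates.items(),
                                 key=lambda kv: bending_index(*kv[1]),
                                 reverse=True):
    print(f"{name}: tension index {tension_index(sigma, rho):.1f}, "
          f"bending index {bending_index(sigma, rho):.2f}")
```

A cost-adjusted ranking can be obtained in the same way by dividing each index by a price per kilogram, which is the kind of trade-off discussed in the cost section above.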
A better case, with a lower performance index but more cost-effective solutions, is around the engineering composites near CFRP. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sqrt[3]{E}/\\rho" }, { "math_id": 1, "text": "E/\\rho" }, { "math_id": 2, "text": "\\sqrt[2\n]{E}/\\rho" }, { "math_id": 3, "text": "w" }, { "math_id": 4, "text": "P" }, { "math_id": 5, "text": "\\rho" }, { "math_id": 6, "text": "\\sigma" }, { "math_id": 7, "text": "L" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "\\rho, \\sigma" }, { "math_id": 10, "text": "P/A" }, { "math_id": 11, "text": "w = \\rho AL" }, { "math_id": 12, "text": "A = P/\\sigma" }, { "math_id": 13, "text": "w=\\rho (P/\\sigma) L = \\rho L P/\\sigma " }, { "math_id": 14, "text": "w = (\\rho/\\sigma)LP" }, { "math_id": 15, "text": "\\rho/\\sigma" }, { "math_id": 16, "text": "\\text{Performance index} = P_{cr} = \\sigma/\\rho" }, { "math_id": 17, "text": "\\sigma = (-My)/I" }, { "math_id": 18, "text": "M" }, { "math_id": 19, "text": "y" }, { "math_id": 20, "text": "I" }, { "math_id": 21, "text": "w=\\sqrt{6MbL^2}(\\rho/\\sqrt{\\sigma})" }, { "math_id": 22, "text": "b" }, { "math_id": 23, "text": "P_{CR}=\\sqrt{\\sigma}/\\rho" }, { "math_id": 24, "text": "\\sigma/\\rho" }, { "math_id": 25, "text": "\\sqrt{\\sigma}/\\rho" }, { "math_id": 26, "text": "P_{CR}=\\sigma/\\rho" }, { "math_id": 27, "text": "\\log(\\sigma) = \\log(\\rho) + \\log(P_{CR})" }, { "math_id": 28, "text": "y = x + b" }, { "math_id": 29, "text": "P_{CR}" }, { "math_id": 30, "text": "\\log(\\sigma) = 2 \\times (\\log(\\rho) + \\log(P_{CR}))" }, { "math_id": 31, "text": "P_{cr}" } ]
https://en.wikipedia.org/wiki?curid=7589621
75912306
Lévy-Leblond equation
A linearized quantum mechanical equation In quantum mechanics, the Lévy-Leblond equation describes the dynamics of a spin-1/2 particle. It is a linearized version of the Schrödinger equation and of the Pauli equation. It was derived by French physicist Jean-Marc Lévy-Leblond in 1967. The Lévy-Leblond equation was obtained by a heuristic derivation similar to that of the Dirac equation, but contrary to the latter, the Lévy-Leblond equation is not relativistic. As both equations recover the electron gyromagnetic ratio, it is suggested that spin is not necessarily a relativistic phenomenon. Equation. For a nonrelativistic spin-1/2 particle of mass "m," a representation of the time-independent Lévy-Leblond equation reads: formula_0 where "c" is the speed of light, "E" is the nonrelativistic particle energy, formula_1 is the momentum operator, and formula_2 is the vector of Pauli matrices, which is proportional to the spin operator formula_3. Here formula_4 are two-component functions (spinors) describing the wave function of the particle. By minimal coupling, the equation can be modified to account for the presence of an electromagnetic field, formula_5 where "q" is the electric charge of the particle, "V" is the electric potential, and A is the magnetic vector potential. This equation is linear in its spatial derivatives. Relation to spin. In 1928, Paul Dirac linearized the relativistic dispersion relation and obtained the Dirac equation, described by a bispinor. This equation can be decoupled into two spinors in the non-relativistic limit, leading to a prediction of the electron magnetic moment with gyromagnetic ratio formula_6. The success of Dirac's theory has led some textbooks to erroneously claim that spin is necessarily a relativistic phenomenon. Jean-Marc Lévy-Leblond applied the same technique to the non-relativistic energy relation, showing that the same prediction of formula_6 can be obtained. In fact, to derive the Pauli equation from the Dirac equation, one has to pass through the Lévy-Leblond equation. Spin is then a result of quantum mechanics and of the linearization of the equations, but not necessarily a relativistic effect. The Lévy-Leblond equation is Galilean invariant. This equation demonstrates that one does not need the full Poincaré group to explain spin 1/2. In the classical limit where formula_7, quantum mechanics under the Galilean transformation group is enough. Similarly, one can construct analogous linear equations for any arbitrary spin. Under the same idea one can construct equations for Galilean electromagnetism. Relation to other equations. Schrödinger's and Pauli's equation. Taking the second line of the Lévy-Leblond equation and inserting it back into the first line, one obtains, through the algebra of the Pauli matrices, that formula_8, which is the Schrödinger equation for a two-valued spinor. Note that solving for formula_9 also returns another Schrödinger equation. Pauli's expression for a spin-1/2 particle in an electromagnetic field can be recovered by minimal coupling: formula_10. While the Lévy-Leblond equation is linear in its derivatives, Pauli's and Schrödinger's equations are quadratic in the spatial derivatives. Dirac equation. The Dirac equation can be written as: formula_11 where formula_12 is the total relativistic energy. In the non-relativistic limit, where formula_13 and formula_14, one recovers the Lévy-Leblond equations. Heuristic derivation.
Similar to the historical derivation of the Dirac equation by Paul Dirac, one can try to linearize the non-relativistic dispersion relation formula_15. We want two operators Θ and Θ' linear in formula_16 (spatial derivatives) and "E", like formula_17 for some formula_18, such that their product recovers the classical dispersion relation, that is formula_19, where the factor 2"mc"2 is arbitrary and is just there for normalization. By carrying out the product, one finds that there is no solution if formula_20 are one-dimensional constants. The lowest dimension where there is a solution is 4. Then formula_21 are matrices that must satisfy the following relations: formula_22 These relations can be rearranged to involve the gamma matrices from Clifford algebra. formula_23 is the identity matrix of dimension "N". One possible representation is formula_24, such that formula_25, with formula_26, returns the Lévy-Leblond equation. Other representations can be chosen, leading to equivalent equations with different signs or phases.
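The algebraic relations above are straightforward to check numerically. The following NumPy sketch is an added illustration, not part of the original derivation; it builds the block matrices of the representation formula_24 quoted above and verifies the listed relations:

```python
# Numerical check that the 4x4 block representation quoted above satisfies
# the algebraic relations required for the linearization.
import numpy as np

I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_z
]

A = np.block([[Z2, Z2], [I2, Z2]])
C = np.block([[Z2, I2], [Z2, Z2]])
Ap, Cp = A, C                       # A' = A and C' = C in this representation
B = [np.block([[s, Z2], [Z2, s]]) for s in sigma]
Bp = [-b for b in B]                # B'_i = -B_i

assert np.allclose(Ap @ A, 0) and np.allclose(Cp @ C, 0)
assert np.allclose(Ap @ C + Cp @ A, np.eye(4))
for i in range(3):
    assert np.allclose(Ap @ B[i] + Bp[i] @ A, 0)
    assert np.allclose(Cp @ B[i] + Bp[i] @ C, 0)
    for j in range(3):
        target = -2 * np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(Bp[i] @ B[j] + Bp[j] @ B[i], target)
print("all relations satisfied")
```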
[ { "math_id": 0, "text": "\\left\\{\\begin{matrix}\nE\\psi+(\\boldsymbol \\sigma \\cdot \\mathbf p c)\\chi=0 \\\\\n(\\boldsymbol \\sigma \\cdot \\mathbf pc )\\psi + 2mc^2\\chi=0\\end{matrix}\n\\right." }, { "math_id": 1, "text": "\\mathbf p = -i\\hbar \\nabla " }, { "math_id": 2, "text": "\\boldsymbol \\sigma = (\\sigma_x,\\sigma_y,\\sigma_z) " }, { "math_id": 3, "text": "\\mathbf S=\\tfrac12\\hbar \\boldsymbol \\sigma " }, { "math_id": 4, "text": "\\psi,\\chi " }, { "math_id": 5, "text": "\\left\\{\\begin{matrix}\n(E-q V)\\psi+[\\boldsymbol \\sigma \\cdot (\\mathbf p-q\\mathbf A)c]\\chi=0 \\\\\n{[\\boldsymbol \\sigma \\cdot (\\mathbf p-q \\mathbf A ) c] } \\psi + 2mc^2\\chi = 0\n\\end{matrix}\\right. " }, { "math_id": 6, "text": "g=2 " }, { "math_id": 7, "text": "c \\to \\infty " }, { "math_id": 8, "text": "\\frac{1}{2m}(\\boldsymbol \\sigma \\cdot \\mathbf p)^2\\psi-E\\psi=\\left[\\frac{1}{2m} \\mathbf p^2-E\\right]\\psi=0 " }, { "math_id": 9, "text": "\\chi " }, { "math_id": 10, "text": "\\left\\{\\frac{1}{2m}[\\boldsymbol \\sigma \\cdot (\\mathbf p-q\\mathbf A)]^2+qV\\right\\}\\psi=E\\psi " }, { "math_id": 11, "text": "\\left\\{\\begin{matrix}\n(\\mathcal{E}-mc^2)\\psi+(\\boldsymbol \\sigma \\cdot \\mathbf p c)\\chi=0 \\\\\n(\\boldsymbol \\sigma \\cdot \\mathbf pc )\\psi + (\\mathcal{E}+ mc^2)\\chi=0\\end{matrix}\n\\right." }, { "math_id": 12, "text": "\\mathcal{E}" }, { "math_id": 13, "text": "E\\ll mc^2 " }, { "math_id": 14, "text": "\\mathcal{E}\\approx mc^2+E+\\cdots " }, { "math_id": 15, "text": "E=\\frac{\\mathbf p^2}{2m} " }, { "math_id": 16, "text": "\\mathbf p " }, { "math_id": 17, "text": "\\left\\{\\begin{matrix}\\Theta\\Psi= [AE+\\mathbf B\\cdot \\mathbf p c+2mc^2C]\\Psi=0 \\\\\n\\Theta'\\Psi= [A'E+\\mathbf B'\\cdot \\mathbf p c+2mc^2C' ]\\Psi=0 \\end{matrix}\n\\right." }, { "math_id": 18, "text": "A,A', \\mathbf B=(B_x,B_y,B_z),\\mathbf B'=(B_x',B_y',B_z'), C ,C' " }, { "math_id": 19, "text": "\\frac{1}{2mc^2}\\Theta'\\Theta =E-\\frac{\\mathbf p^2}{2m}" }, { "math_id": 20, "text": "A,A',B_i, B_i', C ,C' " }, { "math_id": 21, "text": "A,A', \\mathbf B, \\mathbf B', C ,C' " }, { "math_id": 22, "text": "\\left\\{\\begin{matrix}\nA'A=0\\\\\nC'C=0\\\\\nA'B_i+B_i'A=0\\\\\nC'B_i+B_i'C=0\\\\\nA'C+C'A=I_4\\\\\nB_i'B_j+B_j'B_i=-2\\delta_{ij}\n\\end{matrix}\\right." }, { "math_id": 23, "text": "I_N " }, { "math_id": 24, "text": "A=A'=\\begin{pmatrix}0 & 0 \\\\ I_2 & 0\\end{pmatrix}, B_i=-B_i'=\\begin{pmatrix}\\sigma_i & 0 \\\\ 0 & \\sigma_i\\end{pmatrix}, C=C'= \\begin{pmatrix}0 & I_2 \\\\ 0 & 0\\end{pmatrix}" }, { "math_id": 25, "text": "\\Theta\\Psi=0 " }, { "math_id": 26, "text": "\\Psi=(\\psi, \\chi) " } ]
https://en.wikipedia.org/wiki?curid=75912306
7592567
Introduction to entropy
Non-technical introduction to entropy In thermodynamics, entropy is a numerical quantity that shows that many physical processes can go in only one direction in time. For example, cream and coffee can be mixed together, but cannot be "unmixed"; a piece of wood can be burned, but cannot be "unburned". The word 'entropy' has entered popular usage to refer to a lack of order or predictability, or a gradual decline into disorder. A more physical interpretation of thermodynamic entropy refers to the spread of energy or matter, or to the extent and diversity of microscopic motion. If a movie that shows coffee being mixed or wood being burned is played in reverse, it would depict processes highly improbable in reality. Mixing coffee and burning wood are "irreversible". Irreversibility is described by a law of nature known as the second law of thermodynamics, which states that in an isolated system (a system not connected to any other system) which is undergoing change, entropy increases over time. Entropy does not increase indefinitely. A body of matter and radiation eventually will reach an unchanging state, with no detectable flows, and is then said to be in a state of thermodynamic equilibrium. Thermodynamic entropy has a definite value for such a body and is at its maximum value. When bodies of matter or radiation, initially in their own states of internal thermodynamic equilibrium, are brought together so as to intimately interact and reach a new joint equilibrium, then their total entropy increases. For example, a glass of warm water with an ice cube in it will have a lower entropy than that same system some time later when the ice has melted leaving a glass of cool water. Such processes are irreversible: A glass of cool water will not spontaneously turn into a glass of warm water with an ice cube in it. Some processes in nature are almost reversible. For example, the orbiting of the planets around the Sun may be thought of as practically reversible: A movie of the planets orbiting the Sun which is run in reverse would not appear to be impossible. While the second law, and thermodynamics in general, accurately predicts the intimate interactions of complex physical systems, scientists are not content with simply knowing how a system behaves; they also want to know "why" it behaves the way it does. The question of why entropy increases until equilibrium is reached was answered in 1877 by physicist Ludwig Boltzmann. The theory developed by Boltzmann and others is known as statistical mechanics. Statistical mechanics explains thermodynamics in terms of the statistical behavior of the atoms and molecules which make up the system. The theory not only explains thermodynamics, but also a host of other phenomena which are outside the scope of thermodynamics. Explanation. Thermodynamic entropy. The concept of thermodynamic entropy arises from the second law of thermodynamics. This law of entropy increase quantifies the reduction in the capacity of an isolated compound thermodynamic system to do thermodynamic work on its surroundings, or indicates whether a thermodynamic process may occur. For example, whenever there is a suitable pathway, heat spontaneously flows from a hotter body to a colder one. Thermodynamic entropy is measured as a change in entropy (formula_0) of a system containing a sub-system which undergoes heat transfer to its surroundings (inside the system of interest).
It is based on the macroscopic relationship between heat flow into the sub-system and the temperature at which it occurs summed over the boundary of that sub-system. Following the formalism of Clausius, the basic calculation can be mathematically stated as: formula_1 where formula_2 is the increase or decrease in entropy, formula_3 is the heat added to the system or subtracted from it, and formula_4 is temperature. The 'equals' sign and the symbol formula_5 imply that the heat transfer should be so small and slow that it scarcely changes the temperature formula_4. If the temperature is allowed to vary, the equation must be integrated over the temperature path. This calculation of entropy change does not allow the determination of absolute value, only differences. In this context, the Second Law of Thermodynamics may be stated that for heat transferred over any valid process for any system, whether isolated or not, formula_6 According to the first law of thermodynamics, which deals with the conservation of energy, the loss formula_3 of heat will result in a decrease in the internal energy of the thermodynamic system. Thermodynamic entropy provides a comparative measure of the amount of decrease in internal energy and the corresponding increase in internal energy of the surroundings at a given temperature. In many cases, a visualization of the second law is that energy of all types changes from being localized to becoming dispersed or spread out, if it is not hindered from doing so. When applicable, entropy increase is the quantitative measure of that kind of a spontaneous process: how much energy has been effectively lost or become unavailable, by dispersing itself, or spreading itself out, as assessed at a specific temperature. For this assessment, when the temperature is higher, the amount of energy dispersed is assessed as 'costing' proportionately less. This is because a hotter body is generally more able to do thermodynamic work, other factors, such as internal energy, being equal. This is why a steam engine has a hot firebox. The second law of thermodynamics deals only with changes of entropy (formula_0). The absolute entropy (S) of a system may be determined using the third law of thermodynamics, which specifies that the entropy of all perfectly crystalline substances is zero at the absolute zero of temperature. The entropy at another temperature is then equal to the increase in entropy on heating the system reversibly from absolute zero to the temperature of interest. Statistical mechanics and information entropy. Thermodynamic entropy bears a close relationship to the concept of information entropy ("H"). Information entropy is a measure of the "spread" of a probability density or probability mass function. Thermodynamics makes no assumptions about the atomistic nature of matter, but when matter is viewed in this way, as a collection of particles constantly moving and exchanging energy with each other, and which may be described in a probabilistic manner, information theory may be successfully applied to explain the results of thermodynamics. The resulting theory is known as statistical mechanics. An important concept in statistical mechanics is the idea of the microstate and the macrostate of a system. If we have a container of gas, for example, and we know the position and velocity of every molecule in that system, then we know the microstate of that system. 
If we only know the thermodynamic description of that system, the pressure, volume, temperature, and/or the entropy, then we know the macrostate of that system. Boltzmann realized that there are many different microstates that can yield the same macrostate, and, because the particles are colliding with each other and changing their velocities and positions, the microstate of the gas is always changing. But if the gas is in equilibrium, there seems to be no change in its macroscopic behavior: No changes in pressure, temperature, etc. Statistical mechanics relates the thermodynamic entropy of a macrostate to the number of microstates that could yield that macrostate. In statistical mechanics, the entropy of the system is given by Ludwig Boltzmann's equation: formula_7 where "S" is the thermodynamic entropy, "W" is the number of microstates that may yield the macrostate, and formula_8 is the Boltzmann constant. The natural logarithm of the number of microstates (formula_9) is known as the information entropy of the system. This can be illustrated by a simple example: If you flip two coins, you can have four different results. If "H" is heads and "T" is tails, we can have ("H","H"), ("H","T"), ("T","H"), and ("T","T"). We can call each of these a "microstate" for which we know exactly the results of the process. But what if we have less information? Suppose we only know the total number of heads. This can be either 0, 1, or 2. We can call these "macrostates". Only microstate ("T","T") will give macrostate zero, ("H","T") and ("T","H") will give macrostate 1, and only ("H","H") will give macrostate 2. So we can say that the information entropy of macrostates 0 and 2 is ln(1), which is zero, but the information entropy of macrostate 1 is ln(2), which is about 0.69. Of all the microstates, macrostate 1 accounts for half of them. It turns out that if you flip a large number of coins, the macrostates at or near half heads and half tails account for almost all of the microstates. In other words, for a million coins, you can be fairly sure that about half will be heads and half tails. The macrostates around a 50–50 ratio of heads to tails will be the "equilibrium" macrostate. A real physical system in equilibrium has a huge number of possible microstates and almost all of them correspond to the equilibrium macrostate, and that is the macrostate you will almost certainly see if you wait long enough. In the coin example, if you start out with a very unlikely macrostate (like all heads, for example with zero entropy) and begin flipping one coin at a time, the entropy of the macrostate will start increasing, just as thermodynamic entropy does, and after a while, the coins will most likely be at or near that 50–50 macrostate, which has the greatest information entropy – the equilibrium entropy. The macrostate of a system is what we know about the system, for example the temperature, pressure, and volume of a gas in a box. For each set of values of temperature, pressure, and volume there are many arrangements of molecules which result in those values. The number of arrangements of molecules which could result in the same values for temperature, pressure and volume is the number of microstates. The concept of information entropy has been developed to describe any of several phenomena, depending on the field and the context in which it is being used.
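The coin-flip picture can be made concrete with a short computation. The following Python sketch is an added illustration, not part of the original text; it lists the macrostates of two coins with their microstate counts W and information entropies ln W, and then checks that for many coins nearly all microstates lie near the 50–50 macrostate (1000 coins are used rather than a million only to keep the computation quick):

```python
# Macrostates of N fair coins: number of microstates W, probability, and
# information entropy ln(W) for each possible number of heads.
from math import comb, log

def macrostates(n_coins):
    total = 2 ** n_coins
    for heads in range(n_coins + 1):
        w = comb(n_coins, heads)           # microstates giving this macrostate
        yield heads, w, w / total, log(w)  # log(1) = 0 for all-heads/all-tails

for heads, w, prob, entropy in macrostates(2):
    print(f"2 coins, {heads} heads: W={w}, p={prob:.2f}, ln W={entropy:.2f}")

# For many coins, macrostates near half heads dominate:
near_half = sum(comb(1000, k) for k in range(450, 551)) / 2 ** 1000
print(f"1000 coins: P(450 to 550 heads) = {near_half:.4f}")  # close to 1
```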
When it is applied to the problem of a large number of interacting particles, along with some other constraints, like the conservation of energy, and the assumption that all microstates are equally likely, the resultant theory of statistical mechanics is extremely successful in explaining the laws of thermodynamics. Example of increasing entropy. Ice melting provides an example in which entropy increases in a small system, a thermodynamic system consisting of the surroundings (the warm room) and the entity of glass container, ice and water which has been allowed to reach thermodynamic equilibrium at the melting temperature of ice. In this system, some heat ("δQ") from the warmer surroundings at 298 K (25 °C; 77 °F) transfers to the cooler system of ice and water at its constant temperature ("T") of 273 K (0 °C; 32 °F), the melting temperature of ice. The entropy of the system, which is formula_12, increases by formula_13. The heat δ"Q" for this process is the energy required to change water from the solid state to the liquid state, and is called the enthalpy of fusion, i.e. Δ"H" for ice fusion. The entropy of the surrounding room decreases less than the entropy of the ice and water increases: the room temperature of 298 K is larger than 273 K and therefore the ratio (entropy change) formula_14 for the surroundings is smaller than the ratio (entropy change) formula_14 for the ice and water system. This is always true in spontaneous events in a thermodynamic system and it shows the predictive importance of entropy: the final net entropy after such an event is always greater than was the initial entropy. As the temperature of the cool water rises to that of the room and the room further cools imperceptibly, the sum of the formula_14 over the continuous range, "at many increments", in the initially cool to finally warm water can be found by calculus. The entire miniature 'universe', i.e. this thermodynamic system, has increased in entropy. Energy has spontaneously become more dispersed and spread out in that 'universe' than when the glass of ice and water was introduced and became a 'system' within it. Origins and uses. Originally, entropy was named to describe the "waste heat", or more accurately, energy loss, from heat engines and other mechanical devices which could never run with 100% efficiency in converting energy into work. Later, the term came to acquire several additional descriptions, as more was understood about the behavior of molecules on the microscopic level. In the late 19th century, the word "disorder" was used by Ludwig Boltzmann in developing statistical views of entropy using probability theory to describe the increased molecular movement on the microscopic level. That was before quantum behavior came to be better understood by Werner Heisenberg and those who followed. Descriptions of thermodynamic (heat) entropy on the microscopic level are found in statistical thermodynamics and statistical mechanics. For most of the 20th century, textbooks tended to describe entropy as "disorder", following Boltzmann's early conceptualisation of the "motional" (i.e. kinetic) energy of molecules. More recently, there has been a trend in chemistry and physics textbooks to describe entropy as energy dispersal. Entropy can also involve the dispersal of particles, which are themselves energetic. Thus there are instances where both particles and energy disperse at different rates when substances are mixed together. The mathematics developed in statistical thermodynamics was found to be applicable in other disciplines.
In particular, information sciences developed the concept of information entropy, which lacks the Boltzmann constant inherent in thermodynamic entropy. Classical calculation of entropy. When the word 'entropy' was first defined and used in 1865, the very existence of atoms was still controversial, though it had long been speculated that temperature was due to the motion of microscopic constituents and that "heat" was the transferring of that motion from one place to another. Entropy change, formula_0, was described in macroscopic terms that could be directly measured, such as volume, temperature, or pressure. However, today the classical equation of entropy, formula_10, can be explained, part by part, in modern terms describing how molecules are responsible for what is happening. The interpretation properly refers to dispersal in abstract microstate spaces, but it may be loosely visualised in some simple examples of spatial spread of matter or energy. If a partition is removed from between two different gases, the molecules of each gas spontaneously disperse as widely as possible into their respectively newly accessible volumes; this may be thought of as mixing. If a partition that blocks heat transfer between two bodies of different temperatures is removed so that heat can pass between the bodies, then energy spontaneously disperses or spreads as heat from the hotter to the colder. Beyond such loose visualizations, in a general thermodynamic process, considered microscopically, spontaneous dispersal occurs in abstract microscopic phase space. According to Newton's and other laws of motion, phase space provides a systematic scheme for the description of the diversity of microscopic motion that occurs in bodies of matter and radiation. The second law of thermodynamics may be regarded as quantitatively accounting for the intimate interactions, dispersal, or mingling of such microscopic motions. In other words, entropy may be regarded as measuring the extent of diversity of motions of microscopic constituents of bodies of matter and radiation in their own states of internal thermodynamic equilibrium. If, instead of using the natural logarithm to define information entropy, we use the base-2 logarithm, then the information entropy is roughly equal to the average number of (carefully chosen) yes/no questions that would have to be asked to get complete information about the system under study. In the introductory example of two flipped coins, for the macrostate which contains one head and one tail, one would only need one question to determine its exact state (e.g. "is the first one heads?"), and instead of expressing the entropy as ln(2) one could say, equivalently, that it is log2(2), which equals the number of questions we would need to ask: one. When measuring entropy using the natural logarithm (ln), the unit of information entropy is called a "nat", but when it is measured using the base-2 logarithm, the unit of information entropy is called a "bit". This is just a difference in units, much like the difference between inches and centimeters. (1 nat = log2 "e" ≈ 1.44 bits). Thermodynamic entropy is equal to Boltzmann's constant times the information entropy expressed in nats. The information entropy expressed in bits is equal to the number of yes–no questions that need to be answered in order to determine the microstate from the macrostate. The concepts of "disorder" and "spreading" can be analyzed with this information entropy concept in mind.
For example, if we take a new deck of cards out of the box, it is arranged in "perfect order" (spades, hearts, diamonds, clubs, each suit beginning with the ace and ending with the king), we may say that we then have an "ordered" deck with an information entropy of zero. If we thoroughly shuffle the deck, the information entropy will be about 225.6 bits: We will need to ask about 225.6 questions, on average, to determine the exact order of the shuffled deck. We can also say that the shuffled deck has become completely "disordered" or that the ordered cards have been "spread" throughout the deck. But information entropy does not say that the deck needs to be ordered in any particular way. If we take our shuffled deck and write down the names of the cards, in order, then the information entropy becomes zero. If we again shuffle the deck, the information entropy would again be about 225.6 bits, even if by some miracle it reshuffled to the same order as when it came out of the box, because even if it did, we would not know that. So the concept of "disorder" is useful if, by order, we mean maximal knowledge and by disorder we mean maximal lack of knowledge. The "spreading" concept is useful because it gives a feeling to what happens to the cards when they are shuffled. The probability of a card being in a particular place in an ordered deck is either 0 or 1, in a shuffled deck it is 1/52. The probability has "spread out" over the entire deck. Analogously, in a physical system, entropy is generally associated with a "spreading out" of mass or energy. The connection between thermodynamic entropy and information entropy is given by Boltzmann's equation, which says that "S = kB" ln "W". If we take the base-2 logarithm of "W", it will yield the average number of questions we must ask about the microstate of the physical system in order to determine its macrostate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Delta S" }, { "math_id": 1, "text": "{\\rm \\delta}S = \\frac{{\\rm \\delta}q}{T}." }, { "math_id": 2, "text": "\\delta S" }, { "math_id": 3, "text": "\\delta q" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "\\delta" }, { "math_id": 6, "text": "{{\\rm \\delta}S} \\ge {\\frac{{\\rm \\delta}q}{T}}." }, { "math_id": 7, "text": "S=k_\\text{B}\\,\\ln W" }, { "math_id": 8, "text": "k_\\text{B}" }, { "math_id": 9, "text": "\\ln W" }, { "math_id": 10, "text": "\\Delta S = \\frac{q_\\mathrm{rev}}{T}" }, { "math_id": 11, "text": "\\Delta S = S_\\mathrm{final} - S _\\mathrm{initial}" }, { "math_id": 12, "text": " \\Delta S = S_\\mathrm{final} - S _\\mathrm{initial} = \\frac{q_\\mathrm{rev}}{T}" }, { "math_id": 13, "text": "\\frac{q_\\mathrm{rev}}{T} = \\frac{6008\\,\\mathrm J}{273\\,\\mathrm K}" }, { "math_id": 14, "text": "\\frac{q_\\mathrm{rev}}{T}" }, { "math_id": 15, "text": "C_p" }, { "math_id": 16, "text": "\\frac{dT}{T}" }, { "math_id": 17, "text": "T_\\mathrm{initial}" }, { "math_id": 18, "text": "T_\\mathrm{final}" }, { "math_id": 19, "text": "\\Delta S = C_p \\ln\\frac{T_\\mathrm{final}}{T_\\mathrm{initial}}" } ]
https://en.wikipedia.org/wiki?curid=7592567
759264
Amount of substance
Extensive physical property In chemistry, the amount of substance (symbol "n") in a given sample of matter is defined as a ratio ("n" = "N"/"N"A) between the number of elementary entities ("N") and the Avogadro constant ("N"A). The entities are usually molecules, atoms, ions, or ion pairs of a specified kind. The particular substance sampled may be specified using a subscript, e.g., the amount of sodium chloride (NaCl) would be denoted as "n"NaCl. The unit of amount of substance in the International System of Units is the mole (symbol: mol), a base unit. Since 2019, the value of the Avogadro constant "N"A is defined to be exactly . Sometimes, the amount of substance is referred to as the chemical amount or, informally, as the "number of moles" in a given sample of matter. Usage. Historically, the mole was defined as the amount of substance in 12 grams of the carbon-12 isotope. As a consequence, the mass of one mole of a chemical compound, in grams, is numerically equal (for all practical purposes) to the mass of one molecule or formula unit of the compound, in daltons, and the molar mass of an isotope in grams per mole is approximately equal to the mass number (historically exact for carbon-12 with a molar mass of 12 g/mol). For example, a molecule of water has a mass of about 18.015 daltons on average, whereas a mole of water (which contains water molecules) has a total mass of about 18.015 grams. In chemistry, because of the law of multiple proportions, it is often much more convenient to work with amounts of substances (that is, number of moles or of molecules) than with masses (grams) or volumes (liters). For example, the chemical fact "1 molecule of oxygen (O2) will react with 2 molecules of hydrogen (H2) to make 2 molecules of water ()" can also be stated as "1 mole of will react with 2 moles of to form 2 moles of water". The same chemical fact, expressed in terms of masses, would be "32 g (1 mole) of oxygen will react with approximately 4.0304 g (2 moles of H2) hydrogen to make approximately 36.0304 g (2 moles) of water" (and the numbers would depend on the isotopic composition of the reagents). In terms of volume, the numbers would depend on the pressure and temperature of the reagents and products. For the same reasons, the concentrations of reagents and products in solution are often specified in moles per liter, rather than grams per liter. The amount of substance is also a convenient concept in thermodynamics. For example, the pressure of a certain quantity of a noble gas in a recipient of a given volume, at a given temperature, is directly related to the number of molecules in the gas (through the ideal gas law), not to its mass. This technical sense of the term "amount of substance" should not be confused with the general sense of "amount" in the English language. The latter may refer to other measurements such as mass or volume, rather than the number of particles. There are proposals to replace "amount of substance" with more easily-distinguishable terms, such as enplethy and stoichiometric amount. The IUPAC recommends that "amount of substance" should be used instead of "number of moles", just as the quantity mass should not be called "number of kilograms". Nature of the particles. To avoid ambiguity, the nature of the particles should be specified in any measurement of the amount of substance: thus, a sample of 1 mol "of molecules" of oxygen (O2) has a mass of about 32 grams, whereas a sample of 1 mol "of atoms" of oxygen (O) has a mass of about 16 grams. 
Derived quantities. Molar quantities (per mole). The quotient of some extensive physical quantity of a homogeneous sample by its amount of substance is an intensive property of the substance, usually named by the prefix "molar" or the suffix "per mole". For example, the quotient of the mass of a sample by its amount of substance is its molar mass, for which the SI unit kilogram per mole or gram per mole may be used. This is about 18.015 g/mol for water, and 55.845 g/mol for iron. Similarly for volume, one gets the molar volume, which is about 18.069 millilitres per mole for liquid water and 7.092 mL/mol for iron at room temperature. From the heat capacity, one gets the molar heat capacity, which is about 75.385 J/(K⋅mol) for water and about 25.10 J/(K⋅mol) for iron. Molar mass. The molar mass (formula_0) of a substance is the ratio of the mass (formula_1) of a sample of that substance to its amount of substance (formula_2): formula_3. The amount of substance is given as the number of moles in the sample. For most practical purposes, the numerical value of the molar mass in grams per mole is the same as that of the mean mass of one molecule or formula unit of the substance in daltons, as the mole was historically defined such that the molar mass constant was exactly 1 g/mol. Thus, given the molecular mass or formula mass in daltons, the same number in grams gives an amount very close to one mole of the substance. For example, the average molecular mass of water is about 18.015 Da and the molar mass of water is about 18.015 g/mol. This allows for accurate determination of the amount in moles of a substance by measuring its mass and dividing by the molar mass of the compound: formula_4. For example, 100 g of water is about 5.551 mol of water. Other methods of determining the amount of substance include the use of the molar volume or the measurement of electric charge. The molar mass of a substance depends not only on its molecular formula, but also on the distribution of isotopes of each chemical element present in it. For example, the molar mass of calcium-40 is , whereas the molar mass of calcium-42 is , and of calcium with the normal isotopic mix is . Amount (molar) concentration (moles per liter). Another important derived quantity is the molar concentration (formula_5) (also called "amount of substance concentration", "amount concentration", or "substance concentration", especially in clinical chemistry), defined as the amount in moles (formula_2) of a specific substance (solute in a solution or component of a mixture), divided by the volume (formula_6) of the solution or mixture: formula_7. The standard SI unit of this quantity is mol/m3, although more practical units are commonly used, such as mole per liter (mol/L, equivalent to mol/dm3). For example, the amount concentration of sodium chloride in ocean water is typically about 0.599 mol/L. The denominator is the volume of the solution, not of the solvent. Thus, for example, one liter of standard vodka contains about 0.40 L of ethanol (315 g, 6.85 mol) and 0.60 L of water. The amount concentration of ethanol is therefore (6.85 mol of ethanol)/(1 L of vodka) = 6.85 mol/L, not (6.85 mol of ethanol)/(0.60 L of water), which would be 11.4 mol/L. In chemistry, it is customary to read the unit "mol/L" as molar, and denote it by the symbol "M" (both following the numeric value). Thus, for example, each liter of a "0.5 molar" or "0.5 M" solution of urea (CH4N2O) in water contains 0.5 moles of that molecule. 
By extension, the amount concentration is also commonly called the molarity of the substance of interest in the solution. However, as of May 2007, these terms and symbols are not condoned by IUPAC. This quantity should not be confused with the mass concentration, which is the mass of the substance of interest divided by the volume of the solution (about 35 g/L for sodium chloride in ocean water). Amount (molar) fraction (moles per mole). Confusingly, the amount (molar) concentration should also be distinguished from the molar fraction (also called mole fraction or amount fraction) of a substance in a mixture (such as a solution), which is the number of moles of the compound in one sample of the mixture, divided by the total number of moles of all components. For example, if 20 g of NaCl is dissolved in 100 g of water, the amounts of the two substances in the solution will be (20 g)/(58.443 g/mol) = 0.34221 mol and (100 g)/(18.015 g/mol) = 5.5509 mol, respectively; and the molar fraction of NaCl will be 0.34221/(0.34221 + 5.5509) = 0.05807. In a mixture of gases, the partial pressure of each component is proportional to its molar fraction. History. The alchemists, and especially the early metallurgists, probably had some notion of amount of substance, but there are no surviving records of any generalization of the idea beyond a set of recipes. In 1758, Mikhail Lomonosov questioned the idea that mass was the only measure of the quantity of matter, but he did so only in relation to his theories on gravitation. The development of the concept of amount of substance was coincidental with, and vital to, the birth of modern chemistry. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "M=m/n" }, { "math_id": 4, "text": "n=m/M" }, { "math_id": 5, "text": "c" }, { "math_id": 6, "text": "V" }, { "math_id": 7, "text": "c=n/V" } ]
https://en.wikipedia.org/wiki?curid=759264
759298
Linear density
Linear density is the measure of a quantity of any characteristic value per unit of length. Linear mass density (titer in textile engineering, the amount of mass per unit length) and "linear charge density" (the amount of electric charge per unit length) are two common examples used in science and engineering. The term linear density or linear mass density is most often used when describing the characteristics of one-dimensional objects, although linear density can also be used to describe the density of a three-dimensional quantity along one particular dimension. Just as density is most often used to mean mass density, the term linear density likewise often refers to linear mass density. However, this is only one example of a linear density, as any quantity can be measured in terms of its value along one dimension. Linear mass density. Consider a long, thin rod of mass formula_0 and length formula_1. To calculate the average linear mass density, formula_2, of this one dimensional object, we can simply divide the total mass, formula_0, by the total length, formula_1: formula_3 If we describe the rod as having a varying mass (one that varies as a function of position along the length of the rod, formula_4), we can write: formula_5 Each infinitesimal unit of mass, formula_6, is equal to the product of its linear mass density, formula_7, and the infinitesimal unit of length, formula_8: formula_9 The linear mass density can then be understood as the derivative of the mass function with respect to the one dimension of the rod (the position along its length, formula_4) formula_10 The SI unit of linear mass density is the kilogram per meter (kg/m). Linear density of fibers and yarns can be measured by many methods. The simplest one is to measure a length of material and weigh it. However, this requires a large sample and masks the variability of linear density along the thread, and is difficult to apply if the fibers are crimped or otherwise cannot lay flat relaxed. If the density of the material is known, the fibers are measured individually and have a simple shape, a more accurate method is direct imaging of the fiber with a scanning electron microscope to measure the diameter and calculation of the linear density. Finally, linear density is directly measured with a vibroscope. The sample is tensioned between two hard points, mechanical vibration is induced and the fundamental frequency is measured. Linear charge density. Consider a long, thin wire of charge formula_11 and length formula_1. To calculate the average linear charge density, formula_12, of this one dimensional object, we can simply divide the total charge, formula_11, by the total length, formula_1: formula_13 If we describe the wire as having a varying charge (one that varies as a function of position along the length of the wire, formula_4), we can write: formula_14 Each infinitesimal unit of charge, formula_15, is equal to the product of its linear charge density, formula_16, and the infinitesimal unit of length, formula_8: formula_17 The linear charge density can then be understood as the derivative of the charge function with respect to the one dimension of the wire (the position along its length, formula_4) formula_18 Notice that these steps were exactly the same ones we took before to find formula_19. The SI unit of linear charge density is the coulomb per meter (C/m). Other applications. In drawing or printing, the term linear density also refers to how densely or heavily a line is drawn. 
The most famous abstraction of linear density is the probability density function of a single random variable. Units. Common units include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "\\bar\\lambda_m" }, { "math_id": 3, "text": "\\bar\\lambda_m = \\frac{M}{L}" }, { "math_id": 4, "text": "l" }, { "math_id": 5, "text": "m = m(l)" }, { "math_id": 6, "text": "dm" }, { "math_id": 7, "text": "\\lambda_m" }, { "math_id": 8, "text": "dl" }, { "math_id": 9, "text": "dm = \\lambda_m dl" }, { "math_id": 10, "text": "\\lambda_m = \\frac{dm}{dl}" }, { "math_id": 11, "text": "Q" }, { "math_id": 12, "text": "\\bar\\lambda_q" }, { "math_id": 13, "text": "\\bar\\lambda_q = \\frac{Q}{L}" }, { "math_id": 14, "text": "q = q(l)" }, { "math_id": 15, "text": "dq" }, { "math_id": 16, "text": "\\lambda_q" }, { "math_id": 17, "text": "dq = \\lambda_q dl" }, { "math_id": 18, "text": "\\lambda_q = \\frac{dq}{dl}" }, { "math_id": 19, "text": "\\lambda_m = \\frac{dm}{dl}" } ]
https://en.wikipedia.org/wiki?curid=759298
7593055
Optical sine theorem
In optics, the optical sine theorem states that the products of the index, height, and sine of the slope angle of a ray in object space and its corresponding ray in image space are equal. That is: formula_0
[ { "math_id": 0, "text": "n_{0}y_{0}a_{0}=n_{i}y_{i}a_{i}" } ]
https://en.wikipedia.org/wiki?curid=7593055
75934343
Glen–Nye flow law
Technical explanation of a rheology model describing the flow of glacial ice In theoretical glaciology and continuum mechanics, the Glen–Nye flow law, also referred to as Glen's flow law, is an empirically derived constitutive relation widely used as a model for the rheology of glacial ice. The Glen–Nye flow law treats ice as a purely viscous, incompressible, isotropic, non-Newtonian fluid, with a viscosity determined by a power law relation between strain rate and stress: formula_0 The effective strain rate formula_1 (units of s−1) and effective stress formula_2 (units of Pa) are related to the second principle invariants of their respective tensors. The parameters formula_3 and formula_4 are scalar constants which have been estimated through a combination of theory and measurements. The exponent formula_4 is dimensionless, and the rate factor formula_3 takes on the units Pa−formula_4 s−1. The Glen–Nye flow law simplifies the viscous stress tensor to a single scalar value formula_5, the dynamic viscosity, which is determined by tensor invariants of the deviatoric stress tensor formula_6 and the strain rate tensor formula_7. Under the application of sustained force ice will flow as a fluid, and changes to the force applied will result in non-linear changes to the resulting flow. This fluid behavior of ice, which the Glen–Nye flow law is intended to represent, is accommodated within the solid ice by creep, and is a dominant mode of glacial ice flow. Viscosity definition. The constitutive relation is developed as a generalized Newtonian fluid, where the deviatoric stress and strain tensors are related by a viscosity scalar: Glen–Nye constitutive law of ice formula_8 where formula_5 is the viscosity (units of Pa s), formula_6 is the deviatoric stress tensor, and formula_7 is the strain rate tensor. In some derivations, formula_9 (units of Pa−1 s−1) is substituted. This construction makes several assumptions: While incompressibility is an accurate assumption for glacial ice, glacial ice can be anisotropic and in general the strain rate may respond perpendicularly to the principal stress. "With these assumptions, the stress and strain rate tensors here are symmetric and have a trace of zero, properties that allow their invariants and squares to be simplified from the general definitions." The deviatoric stress tensor is related to an effective stress by its second principal invariant: formula_11 where Einstein notation implies summation over repeated indices. The same is defined for an effective strain rate: formula_12 From this form, we can recognize that: formula_13 and formula_14 The viscosity is scalar and cannot be negative (a fluid cannot gain energy as it flows), so formula_5 can be expressed in terms of the invariant effective stress and effective strain rate. formula_15 Here, the Glen–Nye flow law allows us to substitute for either formula_2 or formula_16, and formula_5 can be defined in terms of either the effective strain rate or effective stress alone: Glen–Nye viscosity of ice formula_17 where formula_18 (units of Pa sformula_19) is sometimes substituted. Parameter values. The Glen–Nye rheology model defines two parameters, formula_3 and formula_4. The rate factor formula_3 has been found empirically to vary with temperature and is often modeled with an Arrhenius relation describing the temperature dependence of creep: formula_20 where formula_21 is the activation energy, formula_22 is the universal gas constant, and formula_23 is the absolute temperature. 
The prefactor formula_24 may be dependent on crystal structure, impurities, damage, or other qualities of the ice. Estimates of formula_3 vary by orders of magnitude and can be derived as a single value from an estimated value for formula_24, or by comparing measurements of multiple real world glaciers and experiments, or treated as a scalar field inferred from observations by a numerical inversion of the momentum equation for ice flow at a specific location. Viscous ice flow is an example of shear thinning, which corresponds to formula_25. Review of research using a variety of methods and field sites have found the range of plausible values to be around formula_26 with the most commonly used assumption to be a constant formula_27. However, the value of formula_4 is also stress dependent, and can reflect different microstructural mechanisms facilitating creep at different stress regimes. Methods to improve estimations of these viscous parameters are an ongoing field of research. Limitations. The use of the word "law" in referring to the Glen-Nye model of ice rheology may obscure the complexity of factors which determine the range of viscous ice flow parameter values even within a single glacier, as well as the significant assumptions and simplifications made by the model itself. In particular, treatment of the ice as a fluid with bulk properties does not represent and may struggle to capture the cascade of mechanisms which allow the ice to deform at the grain scale in solid state. Glacial ice crystals grow on scales of millimeters up to 10 cm, and constant readjustment between grain structure and internal stress results in high variations in strain across the same length-scale as the crystals themselves. Additionally, individual ice crystals are not isotropic, and typically are not randomly oriented within the material fabric which undergoes dynamic recrystallization. Grain size and fabric orientation are known to influence the creep of glacial ice, but are dynamic properties which also evolve with the stress regime and are not simple to capture in a model. The Glen-Nye flow law also does not render the full range of ice response to stress, including elastic deformation, fracture mechanics (i.e. crevasses), and transient phases of creep. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\dot{\\epsilon}_{e} = A\\tau^{n}_e" }, { "math_id": 1, "text": "\\dot{\\epsilon}_{e}" }, { "math_id": 2, "text": "\\tau_e" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "\\boldsymbol\\tau" }, { "math_id": 7, "text": "\\boldsymbol \\dot{\\epsilon}" }, { "math_id": 8, "text": "\\boldsymbol\\tau=2\\mu\\dot{\\boldsymbol{\\epsilon}}" }, { "math_id": 9, "text": "\\lambda=(2\\mu)^{-1}" }, { "math_id": 10, "text": "\\tau_{ij} \\propto \\dot{\\epsilon}_{ij}" }, { "math_id": 11, "text": "\\tau_e^2 = II_{\\boldsymbol{\\tau}} = \\frac{1}{2}\\tau_{ij} \\tau_{ij}" }, { "math_id": 12, "text": "\\dot{\\epsilon}_e^2 = II_{\\boldsymbol{\\dot{\\epsilon}}} = \\frac{1}{2}\\dot{\\epsilon}_{ij} \\dot{\\epsilon}_{ij}" }, { "math_id": 13, "text": " \\boldsymbol\\tau^2 = \\tau_{ij} \\tau_{ij} =2\\tau_e^2" }, { "math_id": 14, "text": " \\boldsymbol{\\dot{\\epsilon}}^2 = \\dot{\\epsilon}_{ij} \\dot{\\epsilon}_{ij} =2\\dot{\\epsilon}_e^2 " }, { "math_id": 15, "text": "\n\\mu= \\frac{\\boldsymbol{\\tau}}{2\\dot{\\boldsymbol{\\epsilon}}}=\\frac{1}{2} \\tau_e \\dot{\\epsilon}_e^{-1} \n" }, { "math_id": 16, "text": "\\dot{\\epsilon}_e" }, { "math_id": 17, "text": "\\mu=\\frac{A^{-1/n}}{2 \\dot{\\epsilon}_e^{(n-1)/n}} = \\frac{\\tau_e^{(1-n)}}{2A}\n" }, { "math_id": 18, "text": "B=A^{-1/n}" }, { "math_id": 19, "text": "^{1/n}" }, { "math_id": 20, "text": "A=A_0 e^{(-Q/RT)}" }, { "math_id": 21, "text": "Q" }, { "math_id": 22, "text": "R" }, { "math_id": 23, "text": "T" }, { "math_id": 24, "text": "A_0" }, { "math_id": 25, "text": "n>1" }, { "math_id": 26, "text": "2<n<4" }, { "math_id": 27, "text": "n=3" } ]
https://en.wikipedia.org/wiki?curid=75934343
759360
Area density
Mass per unit area The area density (also known as areal density, surface density, superficial density, areic density, mass thickness, column density, or density thickness) of a two-dimensional object is calculated as the mass per unit area. The SI derived unit is the kilogram per square metre (kg·m−2). A related "area number density" can be defined by replacing mass in by number of particles or other countable quantity. In the paper and fabric industries, it is called grammage and is expressed in grams per square meter (g/m2); for paper in particular, it may be expressed as pounds per ream of standard sizes ("basis ream"). Formulation. Area density can be calculated as: formula_1 or formula_2 where "ρA" is the average area density, "m" is the total mass of the object, "A" is the total area of the object, "ρ" is the average density, and "l" is the average thickness of the object. Column density. A special type of area density is called "column density" (also "columnar mass density" or simply "column density"), denoted "ρ"A or "σ". It is the mass of substance per unit area integrated along a path; It is obtained integrating volumetric density formula_3 over a column: formula_4 In general the integration path can be slant or oblique incidence (as in, for example, line of sight propagation in atmospheric physics). A common special case is a vertical path, from the bottom to the top of the medium: formula_5 where formula_6 denotes the vertical coordinate (e.g., height or depth). Columnar density formula_0 is closely related to the vertically averaged volumetric density formula_7 as formula_8 where formula_9; formula_7, formula_0, and formula_10 have units of, for example, grams per cubic metre, grams per square metre, and metres, respectively. Usage. Atmospheric physics. It is a quantity commonly retrieved by remote sensing instruments, for instance the Total Ozone Mapping Spectrometer (TOMS) which retrieves ozone columns around the globe. Columns are also returned by the differential optical absorption spectroscopy (DOAS) method and are a common retrieval product from nadir-looking microwave radiometers. A closely related concept is that of ice or liquid water path, which specifies the volume per unit area or depth instead of mass per unit area, thus the two are related: formula_11 Another closely related concept is optical depth. Astronomy. In astronomy, the column density is generally used to indicate the number of atoms or molecules per square cm (cm2) along the line of sight in a particular direction, as derived from observations of e.g. the 21-cm hydrogen line or from observations of a certain molecular species. Also the interstellar extinction can be related to the column density of H or H2. The concept of area density can be useful when analysing accretion disks. In the case of a disk seen face-on, area density for a given area of the disk is defined as column density: that is, either as the mass of substance per unit area integrated along the vertical path that goes through the disk (line-of-sight), from the bottom to the top of the medium: formula_12 where formula_6 denotes the vertical coordinate (e.g., height or depth), or as the number or count of a substance—rather than the mass—per unit area integrated along a path (column number density): formula_13 Data storage media. Areal density is used to quantify and compare different types media used in data storage devices such as hard disk drives, optical disc drives and tape drives. 
The current unit of measure is typically gigabits per square inch. Paper. The area density is often used to describe the thickness of paper; e.g., 80 g/m2 is very common. Fabric. Fabric "weight" is often specified as mass per unit area, grams per square meter (gsm) or ounces per square yard. It is also sometimes specified in ounces per yard in a standard width for the particular cloth. One gram per square meter equals 0.0295 ounces per square yard; one ounce per square yard equals 33.9 grams per square meter. Other. It is also an important quantity for the absorption of radiation. When studying bodies falling through air, area density is important because resistance depends on area, and gravitational force is dependent on mass. Bone density is often expressed in grams per square centimeter (g·cm−2) as measured by x-ray absorptiometry, as a proxy for the actual density. The body mass index is expressed in units of kilograms per square meter, though the area figure is nominal, being the square of the height. The total electron content in the ionosphere is a quantity of type columnar number density. Snow water equivalent is a quantity of type columnar mass density.
[ { "math_id": 0, "text": "\\rho_A" }, { "math_id": 1, "text": " \\rho_A = \\frac {m} {A} " }, { "math_id": 2, "text": " \\rho_A = \\rho \\cdot l," }, { "math_id": 3, "text": "\\rho" }, { "math_id": 4, "text": "\\sigma=\\int \\rho \\, \\mathrm{d}s." }, { "math_id": 5, "text": "\\sigma = \\int \\rho \\, \\mathrm{d}z," }, { "math_id": 6, "text": "z" }, { "math_id": 7, "text": "\\bar{\\rho}" }, { "math_id": 8, "text": "\\bar{\\rho} = \\frac{\\rho_A}{\\Delta z}," }, { "math_id": 9, "text": "\\Delta z = \\int 1 \\, \\mathrm{d}z" }, { "math_id": 10, "text": "\\Delta z" }, { "math_id": 11, "text": "P = \\frac{\\sigma}{\\rho_0}." }, { "math_id": 12, "text": "\\sigma = \\int \\rho \\, \\mathrm{d}z," }, { "math_id": 13, "text": "N = \\int n \\, \\mathrm{d}z." } ]
https://en.wikipedia.org/wiki?curid=759360
75936255
2024 FIDE Circuit
Sports season The 2024 FIDE Circuit is a system comprising the top chess tournaments in 2024, which serves as a qualification path for the Candidates Tournament 2026. Players receive points based on their performance and the strength of the tournament. A player's final Circuit score is the sum of their seven best results of the year. The winner of the Circuit qualifies for the Candidates Tournament 2026. Tournament eligibility. A FIDE-rated individual standard tournament is eligible for the Circuit if it meets the following criteria: The Circuit also includes the following tournaments: Points system. Event points. Circuit points obtained by a player from a tournament are calculated as follows: formula_0 where: Basic points. Basic points for a tournament are awarded depending on the tournament format: Points are awarded as follows: FIDE World Championship points. For World Chess Championship 2024, the winner will get points calculated as 1st place basic points multipled by the strength factor, but with its TAR value using winner's performance rating instead. Player's total and ranking. A player's point total for the ranking is the sum of their best 7 tournaments with the following criteria: Tournaments that could be included in player's results are as follows: Tournaments. Eligible tournaments as of 30 August 2024. Ranking. At the end of 2024, the best player in the Circuit will qualify for the Candidates Tournament 2026, provided that they have played 5 eligible tournaments, including at least 4 in standard time controls. The current leader is marked in green. "(M)" denotes the Masters section of tournaments while "(Ch)" – Challenger section. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P = B \\times k \\times w" }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "k = (TAR-2500) / 100" }, { "math_id": 5, "text": "w" } ]
https://en.wikipedia.org/wiki?curid=75936255
75938302
Strongly-polynomial time
In computer science, a "polynomial-time algorithm" is – generally speaking – an algorithm whose running time is upper-bounded by some polynomial function of the input size. The definition naturally depends on the computational model, which determines how the "running time" is measured, and how the "input size" is measured. Two prominent computational models are the Turing-machine model and the arithmetic model. A strongly-polynomial time algorithm is polynomial in both models, whereas a weakly-polynomial time algorithm is polynomial only in the Turing machine model. The difference between strongly- and weakly-polynomial time is when the inputs to the algorithms consist of integer or rational numbers. It is particularly common in optimization. Computational models. Two common computational models are the Turing-machine model and the arithmetic model:32 Some algorithms run in polynomial time in one model but not in the other one. For example: However, if an algorithm runs in polynomial time in the arithmetic model, and in addition, the binary length of all inputs, outputs, and intermediate values is polynomial in the number of input values, then it is always polynomial-time in the Turing model. Such an algorithm is said to run in strongly polynomial time. Definition. Strongly polynomial time is defined in the arithmetic model of computation. In this model of computation the basic arithmetic operations (addition, subtraction, multiplication, division, and comparison) take a unit time step to perform, regardless of the sizes of the operands. The algorithm runs in strongly polynomial time if: Any algorithm with these two properties can be converted to a polynomial time algorithm by replacing the arithmetic operations by suitable algorithms for performing the arithmetic operations on a Turing machine. The second condition is strictly necessary: given the integer formula_1 (which takes up space proportional to "n" in the Turing machine model), it is possible to compute formula_0 with "n" multiplications using repeated squaring. However, the space used to represent formula_0 is proportional to formula_1, and thus exponential rather than polynomial in the space used to represent the input. Hence, it is not possible to carry out this computation in polynomial time on a Turing machine, but it is possible to compute it by polynomially many arithmetic operations. However, for the first condition, there are algorithms that run in a number of Turing machine steps bounded by a polynomial in the length of binary-encoded input, but do not take a number of arithmetic operations bounded by a polynomial in the number of input numbers. The Euclidean algorithm for computing the greatest common divisor of two integers is one example. Given two integers formula_2 and formula_3, the algorithm performs formula_4 arithmetic operations on numbers with at most formula_4 bits. At the same time, the number of arithmetic operations cannot be bounded by the number of integers in the input (which is constant in this case, there are always only two integers in the input). Due to the latter observation, the algorithm does not run in strongly polynomial time. Its real running time depends on the "lengths" of formula_2 and formula_3 in bits and not only on the number of integers in the input. An algorithm that runs in polynomial time but that is not strongly polynomial is said to run in weakly polynomial time. 
A well-known example of a problem for which a weakly polynomial-time algorithm is known, but is not known to admit a strongly polynomial-time algorithm, is linear programming. Weakly polynomial time should not be confused with pseudo-polynomial time, which depends on the "magnitudes" of values in the problem instead of the lengths and is not truly polynomial time. Subtleties. In order to specify the arithmetic model, there are several ways to define the division operation. The outcome of dividing an integer "a" by another integer "b" could be one of:33 In all versions, strongly-polynomial-time implies polynomial-time in the Turing model. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "2^{2^n}" }, { "math_id": 1, "text": "2^n" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "b" }, { "math_id": 4, "text": "O(\\log a + \\log b)" } ]
https://en.wikipedia.org/wiki?curid=75938302
75938487
Algorithmic problems on convex sets
Many problems in mathematical programming can be formulated as problems on convex sets or convex bodies. Six kinds of problems are particularly important:Sec.2 optimization, violation, validity, separation, membership and emptiness. Each of these problems has a strong (exact) variant, and a weak (approximate) variant. In all problem descriptions, "K" denotes a compact and convex set in R"n". Strong variants. The strong variants of the problems are:47 Closely related to the problems on convex sets is the following problem on a convex function "f": R"n" → R: Trivial implications. From the definitions, it is clear that algorithms for some of the problems can be used to solve other problems in oracle-polynomial time: Examples. The solvability of a problem crucially depends on the nature of "K" and the way "K" it is represented. For example: Weak variants. Each of the above problems has a weak variant, in which the answer is given only approximately. To define the approximation, we define the following operations on convex sets:6 Using these notions, the weak variants are:50 Trivial implications. Analogously to the strong variants, algorithms for some of the problems can be used to solve other problems in oracle-polynomial time: Stronger weak variants. Some of these weak variants can be slightly strengthened.Rem.2.1.5(a) For example, WVAL with inputs "c", "t"' = "t"+"ε"/2 and "ε"' = "ε"/2 does one of the following: Implications of weak variants. Besides these trivial implications, there are highly non-trivial implications, whose proof relies on the ellipsoid method. Some of these implications require additional information about the convex body "K". In particular, besides the number of dimensions "n", the following information may be needed:53 The following can be done in oracle-polynomial time:Sec.4 The following implications use the polar set of "K" - defined as formula_0. Note that "K"**="K". Necessity of additional information. Some of the above implications provably do not work without the additional information.Sec.4.5 Geometric problems on convex bodies. Using the above basic problems, one can solve several geometric problems related to convex bodies. In particular, one can find an approximate John ellipsoid in oracle-polynomial time:Sec.4.6 These results imply that it is possible to approximate any norm by an ellipsoidal norm. Specifically, suppose a norm "N" is given by a weak norm oracle: for every vector "x" in Q"n" and every rational "ε"&gt;0, it returns a rational number "r" such that |N("x")-"r"|&lt;"ε". Suppose we also know a constant "c"1 that gives a lower bound on the ratio of N("x") to the Euclidean norm, formula_7Then we can compute in oracle-polynomial time a linear transformation "T" of R"n" such that, for all "x" in R"n", formula_8. It is also possible to approximate the diameter and the width of "K": Some problems not yet solved (as of 1993) are whether it is possible to compute in polytime the volume, the center of gravity or the surface area of a convex body given by a separation oracle. Problems on combinations of convex sets. Some binary operations on convex sets preserve the algorithmic properties of the various problems. In particular, given two convex sets "K" and "L":Sec.4.7 From weak to strong oracles. In some cases, an oracle for a weak problem can be used to solve the corresponding strong problem. General convex sets. 
An algorithm for WMEM, given circumscribed radius "R" and inscribe radius "r" and interior point "a"0, can solve the following slightly stronger membership problem (still weaker than SMEM): given a vector "y" in Q"n", and a rational "ε"&gt;0, either assert that "y" in "S"("K,ε"), or assert that "y" not in "K." The proof is elementary and uses a single call to the WMEM oracle.108 Polyhedra. Suppose now that "K" is a polyhedron. Then, many oracles to weak problems can be used to solve the corresponding strong problems in oracle-polynomial time. The reductions require an upper bound on the representation complexity (facet complexity or vertex complexity) of the polyhedron:Sec. 6.3 The proofs use results on simultaneous diophantine approximation. Necessity of additional information. How essential is the additional information for the above reductions?173 Implications of strong variants. Using the previous results, it is possible to prove implications between strong variants. The following can be done in oracle-polynomial time for a well-described polyhedron - a polyhedron for which an upper bound on the representation complexity is known:Sec.6.4 So SSEP, SVIOL and SOPT are all polynomial-time equivalent. This equivalence, in particular, implies Khachian's proof that linear programming can be solved in polynomial time,Thm.6.4.12 since when a polyhedron is given by explicit linear inequalities, a SSEP oracle is trivial to implement. Moreover, a basic optimal dual solution can also be found in polytime.Thm.6.5.14 Note that the above theorems do not require an assumption of full-dimensionality or a lower bound on the volume. Other reductions cannot be made without additional information: Extension to non-well-described convex sets. Jain extends one of the above theorems to convex sets that are not polyhedra and not well-described. He only requires a guarantee that the convex set contains at least "one point" (not necessarily a vertex) with a bounded representation length. He proves that, under this assumption, SNEMPT can be solved (a point in the convex set can be found) in polytime.Thm.12 Moreover, the representation length of the found point is at most P("n") times the given bound, where P is some polynomial function.Thm.13 Geometric problems on polyhedra. Using the above basic problems, one can solve several geometric problems related to nonempty polytopes and polyhedra with a bound on the representation complexity, in oracle-polynomial time, given an oracle to SSEP, SVIOL or SOPT:Sec.6.5 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "K^* := \\{ y\\in \\mathbb{R}^n : y^T x \\leq 1 \\text{ for all } x\\in K \\}" }, { "math_id": 1, "text": "E\\left(\\frac{1}{n(n+1)^2}A, a\\right) \\subseteq K \\subseteq E(A, a)" }, { "math_id": 2, "text": "E\\left(\\frac{1}{n^2}A, a\\right) \\subseteq K \\subseteq E(A, a)" }, { "math_id": 3, "text": "E\\left(\\frac{1}{n(n+1)}A, a\\right) \\subseteq K \\subseteq E(A, a)" }, { "math_id": 4, "text": "E\\left(\\frac{1}{n}A, a\\right) \\subseteq K \\subseteq E(A, a)" }, { "math_id": 5, "text": "E\\left(\\frac{1}{(n+1)^2}A, a\\right) \\subseteq K \\subseteq E(A, a)" }, { "math_id": 6, "text": "E\\left(\\frac{1}{n+1}A, 0\\right) \\subseteq K \\subseteq E(A, 0)" }, { "math_id": 7, "text": "c_1 \\| x \\| \\leq N(x)" }, { "math_id": 8, "text": "\\| T x \\| \\leq N(x) \\leq \\sqrt{n(n+1)} \\| T x \\|" }, { "math_id": 9, "text": "R^* \\leq \\frac{1}{2} \\sqrt{n} \\|x-y\\|" }, { "math_id": 10, "text": "r^*= \\frac{d_2 - d_1}{2(n+1)\\sqrt{n}\\|c\\|}" } ]
https://en.wikipedia.org/wiki?curid=75938487
7594615
5-orthoplex
In five-dimensional geometry, a 5-orthoplex, or 5-cross polytope, is a five-dimensional polytope with 10 vertices, 40 edges, 80 triangle faces, 80 tetrahedron cells, 32 5-cell 4-faces. It has two constructed forms, the first being regular with Schläfli symbol {33,4}, and the second with alternately labeled (checkerboarded) facets, with Schläfli symbol {3,3,31,1} or Coxeter symbol 211. It is a part of an infinite family of polytopes, called cross-polytopes or "orthoplexes". The dual polytope is the 5-hypercube or 5-cube. As a configuration. This configuration matrix represents the 5-orthoplex. The rows and columns correspond to vertices, edges, faces, cells and 4-faces. The diagonal numbers say how many of each element occur in the whole 5-orthoplex. The nondiagonal numbers say how many of the column's element occur in or at the row's element. formula_0 Cartesian coordinates. Cartesian coordinates for the vertices of a 5-orthoplex, centered at the origin are (±1,0,0,0,0), (0,±1,0,0,0), (0,0,±1,0,0), (0,0,0,±1,0), (0,0,0,0,±1) Construction. There are three Coxeter groups associated with the 5-orthoplex, one regular, dual of the penteract with the C5 or [4,3,3,3] Coxeter group, and a lower symmetry with two copies of "5-cell" facets, alternating, with the D5 or [32,1,1] Coxeter group, and the final one as a dual 5-orthotope, called a 5-fusil which can have a variety of subsymmetries. Related polytopes and honeycombs. This polytope is one of 31 uniform 5-polytopes generated from the B5 Coxeter plane, including the regular 5-cube and 5-orthoplex.
[ { "math_id": 0, "text": "\\begin{bmatrix}\\begin{matrix}\n10 & 8 & 24 & 32 & 16 \\\\ \n2 & 40 & 6 & 12 & 8 \\\\ \n3 & 3 & 80 & 4 & 4 \\\\ \n4 & 6 & 4 & 80 & 2 \\\\ \n5 & 10 & 10 & 5 & 32 \n\\end{matrix}\\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=7594615
75949002
Zometool
Construction set toy Zometool is a construction set toy that had been created by a collaboration of Steve Baer (the creator of Zome architecture), artist Clark Richert, Paul Hildebrandt (the present CEO of Zometool), and co-inventor Marc Pelletier. It is manufactured by Zometool, Inc. According to the company, Zometool was primarily designed for kids. Zometool has also been used in other fields including mathematics and physics. For example, aperiodic tilings such as Penrose tilings can be modeled using Zometool. The learning tool was designed by inventor-designer Steve Baer, his wife Holly and others. The Zometool plastic construction set toy is produced by a privately owned company of the same name, based outside of Longmont, Colorado, and which evolved out of Baer's company ZomeWorks. Its elements consist of small connector nodes and struts of various colors. The overall shape of a connector node is that of a non-uniform small rhombicosidodecahedron with each face replaced by a small hole. The ends of the struts are designed to fit in the holes of the connector nodes, allowing for syntheses of a variety of structures. The idea of shape-coding the three types of struts was developed by Marc Pelletier and Paul Hildebrandt. To create the "balls," or nodes, Pelletier and Hildebrandt invented a system of 62 hydraulic pins that came together to form a mold. The first connector node emerged from their mold on April 1, 1992. In the years since 1992, Zometool has extended its product line, though the basic design of the connector node has not changed so all parts to date are compatible with each-other. From 1992 until 2000, Zometool produced kits with connector nodes and blue, yellow, and red struts. In 2000, Zometool introduced green struts, prompted by French architect Fabien Vienne, which can be used to construct the regular tetrahedron and octahedron. In 2003, Zometool changed the style of the struts slightly. The struts "with clicks" have a different surface texture and they also have longer nibs which allow for a more robust connection between connector node and strut. Characteristics. The color of a Zometool strut is associated with its cross section and also with the shape of the hole of the connector node in which it fits. Each blue strut has a rectangular cross section, each yellow strut has a triangular cross section, and each red strut has a pentagonal cross section. The cross section of a green strut is a rhombus of √2 aspect ratio, but as the connector nodes do not include holes at the required positions, the green struts instead fit into any of the 12 pentagonal holes with 5 possible orientations per hole, 60 possible orientations in all; using them is not as straightforward as the other struts. At their midpoints, each of the yellow and red struts has a twist where the cross-sectional shape reverses. This design feature forces the connector nodes on the ends of the strut to have the same orientation. Similarly, the cross section of the blue strut is a non-square rectangle, again ensuring that the two nodes on the ends have the same orientation. Instead of a twist, the green struts have two bends which allow them to fit into the pentagonal holes of the connector node which are at a slight offset from the strut's axis.[""] Among other places, the word zome comes from the term zone. The zome system allows no more than 61 zones. The cross-sectional shapes correspond to colors, and in turn these correspond to zone colors. 
Hence the zome system has 15 blue zones, 10 yellow zones, 6 red zones, and 30 green zones. Two shapes are associated with blue. The blue struts with a rectangular cross section are designed to lie in the same zones as the blue struts, but they are half the length of a blue strut; hence these struts are often called "half-blue" (and were originally made in a light blue color). The blue-green struts with a rhombic cross section lie in the same zones as the green struts, but they are designed so that the ratio of a rhombic blue-green strut to a blue strut is 1:1 (as opposed to the green strut's √2:1). Due to this length ratio, the blue-green struts that have a rhombic cross section do not mathematically belong to the zome system.[""] Mathematics of Zometools. The strut lengths follow a mathematical pattern: For any color, there exists lengths such that they increase by a constant factor of approximately 1.618, a number that is yield of what is called the “golden ratio" which is represented by Greek letter phi (formula_0 or formula_1). The golden ratio is a ratio such that the sum of two quantities is equal to the ratio of the same quantities, based on the largest value of the two numbers. Thus, &lt;templatestyles src="Block indent/styles.css"/&gt;formula_2An application of the golden ratio for the zome system is that for each color, there exists a length such that a long strut length equals the length of a medium strut connected to a short strut. In other words, the length of the long strut equals the sum of the medium strut length and the a short strut length. Definition. A zome is defined in terms of the vector space formula_3, equipped with the standard inner product, also known as 3-dimensional Euclidean space. Let formula_0 denote the golden ratio and let formula_4 denote the symmetry group of the configuration of vectors formula_5, formula_6, and formula_7. The group formula_4, an example of a Coxeter group, is known as the icosahedral group because it is the symmetry group of a regular icosahedron having these vectors as its vertices. The subgroup of formula_4 consisting of the elements with determinant 1 (i.e. the rotations) is isomorphic to formula_8. Define the "standard blue vectors" as the formula_8-orbit of the vector formula_9. Define the "standard yellow vectors" as the formula_8-orbit of the vector formula_10. Define the "standard red vectors" as the formula_8-orbit of the vector formula_11. A "strut" of the zome system is any vector which can be obtained from the standard vectors described above by scaling by any power formula_12, where formula_13 is an integer. A "node" of the zome system is any element of the subgroup of formula_3 generated by the struts. Finally, the "zome system" is the set of all pairs formula_14, where formula_15 is a set of nodes and formula_16 is a set of pairs formula_17 such that formula_18 and formula_19 are in formula_15 and the difference formula_20 is a strut. There are then 30, 20, and 12 standard vectors having the colors blue, yellow, and red, respectively. Correspondingly, the stabilizer subgroup of a blue, yellow, or red strut is isomorphic to the cyclic group of order 2, 3, or 5, respectively. Hence, one may also describe the blue, yellow, and red struts as "rectangular", "triangular", and "pentagonal", respectively. The zome system may be extended by adjoining green vectors. 
The "standard green vectors" comprise the formula_8-orbit of the vector formula_21 and a "green strut" as any vector which can be obtained by scaling a standard green vector by any integral power formula_12. As above, one may check that there are formula_22=60 standard green vectors. One may then enhance the zome system by including these green struts. Doing this does not affect the set of nodes. The abstract zome system defined above is significant because of the following fact: Every connected zome model has a faithful image in the zome system. The converse of this fact is only partially true, but this is due only to the laws of physics. For example, the radius of a zometool node is positive (as opposed to a node being a single point mathematically), so one cannot make a zometool model where two nodes are separated by an arbitrarily small, prescribed distance. Similarly, only a finite number of lengths of struts will ever be manufactured, and a green strut cannot be placed directly adjacent to a red strut or another green strut with which it shares the same hole (even though they are mathematically distinct). As a modeling system. The zome system is especially useful for modeling 1-dimensional skeletons of highly symmetric objects in 3- and 4-dimensional Euclidean space. The most prominent among these are the five Platonic Solids, and the 4-dimensional polytopes related to the 120-cell and the 600-cell. However, many other mathematical objects may be modeled using the zome system, including: The uses of zome are not restricted to pure mathematics. Other uses include the study of engineering problems, especially steel-truss structures, the study of some molecular, nanotube, and viral structures, and to make soap film surfaces. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. Software
[ { "math_id": 0, "text": "\\varphi" }, { "math_id": 1, "text": "\\phi" }, { "math_id": 2, "text": " \\frac{a+b}{a} = \\frac{a}{b} = \\varphi" }, { "math_id": 3, "text": "\\R^3" }, { "math_id": 4, "text": "H_3" }, { "math_id": 5, "text": "(0,\\pm\\varphi,\\pm 1)" }, { "math_id": 6, "text": "(\\pm\\varphi,\\pm 1,0)" }, { "math_id": 7, "text": "(\\pm 1,0,\\pm\\varphi)" }, { "math_id": 8, "text": "A_5" }, { "math_id": 9, "text": "(2,0,0)" }, { "math_id": 10, "text": "(1,1,1)" }, { "math_id": 11, "text": "(0,\\varphi,1)" }, { "math_id": 12, "text": "\\varphi^n" }, { "math_id": 13, "text": "n" }, { "math_id": 14, "text": "(N,S)" }, { "math_id": 15, "text": "N" }, { "math_id": 16, "text": "S" }, { "math_id": 17, "text": "(v,w)" }, { "math_id": 18, "text": "v" }, { "math_id": 19, "text": "w" }, { "math_id": 20, "text": "v-w" }, { "math_id": 21, "text": "(2,2,0)" }, { "math_id": 22, "text": "|A_5|" } ]
https://en.wikipedia.org/wiki?curid=75949002
75955131
N-dimensional polyhedron
An "n"-dimensional polyhedron is a geometric object that generalizes the 3-dimensional polyhedron to an "n"-dimensional space. It is defined as a set of points in real affine (or Euclidean) space of any dimension "n", that has flat sides. It may alternatively be defined as the intersection of finitely many half-spaces. Unlike a 3-dimensional polyhedron, it may be bounded or unbounded. In this terminology, a bounded polyhedron is called a polytope. Analytically, a convex polyhedron is expressed as the solution set for a system of linear inequalities, "ai"T"x" ≤ "bi", where "ai" are vectors in R"n" and "bi" are scalars. This definition of polyhedra is particularly important as it provides a geometric perspective for problems in linear programming.9 Examples. Many traditional polyhedral forms are n-dimensional polyhedra. Other examples include: Faces and vertices. A subset "F" of a polyhedron "P" is called a face of "P" if there is a halfspace "H" (defined by some inequality "a1"T"x" ≤ "b1") such that "H" contains "P" and "F" is the intersection of "H" and "P".9 Suppose "P" is a polyhedron defined by "Ax" ≤ "b", where "A" has full column rank. Then, "v" is a vertex of "P" if and only if "v" is a basic feasible solution of the linear system "Ax" ≤ "b".10 Standard representation. The representation of a polyhedron by a set of linear inequalities is not unique. It is common to define a standard representation for each polyhedron "P":9 Representation by cones and convex hulls. If "P" is a polytope (a bounded polyhedron), then it can be represented by its set of vertices "V", as "P" is simply the convex hull of "V": "P" = conv("V"). If "P" is a general (possibly unbounded) polyhedron, then it can be represented as: P = conv(V) + cone(E), where "V" is (as before) the set of vertices of "P", and "E" is another finite set, and cone denotes the conic hull. The set cone(E) is also called the recession cone of "P".10 Carathéodory's theorem states that, if "P" is a "d"-dimensional polytope, then every point in "P" can be written as a convex combination of at most "d"+1 affinely-independent vertices of "P". The theorem can be generalized: if "P" is any "d"-dimensional polyhedron, then every point in "P" can be written as a convex combination of points "v"1...,"vs", "v"1+"e"1...,"v"1+"et" with "s"+"t" ≤ "d"+1, such that "v"1...,"vs" are elements of minimal nonempty faces of "P" and "e"1...,"et" are elements of the minimal nonzero faces of the recession cone of "P".10 Complexity of representation. When solving algorithmic problems on polyhedra, it is important to know whether a certain polyhedron can be represented by an encoding with a small number of bits. There are several measures to the representation complexity of a polyhedron "P":163 These two kinds of complexity are closely related:Lem.6.2.4 A polyhedron "P" is called well-described if we know "n" (the number of dimensions) and "f" (an upper bound on the facet complexity). This is equivalent to requiring that we know "n" and "v" (an upper bound on the vertex complexity). In some cases, we know an upper bound on the facet complexity of a polyhedron "P", but we do not know the specific inequalities that attain this upper bound. However, it can be proved that the encoding length in any standard representation of "P" is at most 35 "n"2 "f".Lem.6.2.3 The complexity of representation of "P" implies upper and lower bounds on the volume of "P":165 Small representation complexity is useful for "rounding" approximate solutions to exact solutions. 
Specifically:166 References.
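The vertex criterion above admits a direct computational check: a feasible point of {"x" : "Ax" ≤ "b"} is a vertex exactly when the inequalities that are tight at the point have rank "n". The following Python sketch is not part of the article; the function name, tolerance and example polyhedron (the unit square) are illustrative assumptions.

```python
# A minimal sketch (not from the article): test whether a point v is a vertex
# (basic feasible solution) of the polyhedron {x : A x <= b}.  A feasible point
# is a vertex exactly when the constraints tight at it have rank n.
import numpy as np

def is_vertex(A, b, v, tol=1e-9):
    A, b, v = np.asarray(A, float), np.asarray(b, float), np.asarray(v, float)
    slack = b - A @ v
    if np.any(slack < -tol):            # v must satisfy every inequality
        return False
    tight = A[np.abs(slack) <= tol]     # rows of A that hold with equality at v
    return tight.size > 0 and np.linalg.matrix_rank(tight) == v.size

# Example polyhedron (an assumption for illustration): the unit square in R^2,
# written as A x <= b with the four inequalities 0 <= x <= 1, 0 <= y <= 1.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
b = np.array([1, 0, 1, 0])

print(is_vertex(A, b, [0, 0]))    # True: a corner, two independent tight rows
print(is_vertex(A, b, [0.5, 0]))  # False: edge point, only one tight row
```

Here the rank test plays the role of the basic-feasible-solution condition: at a corner of the square two independent inequalities are tight, whereas a point in the interior of an edge has only one tight inequality.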
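Membership in the representation "P" = conv("V") + cone("E") can likewise be decided by a small linear-programming feasibility problem: one looks for nonnegative convex weights over "V" (summing to 1) and nonnegative multipliers over "E" that reproduce the given point. The sketch below is an illustration using SciPy's linprog and is not a construction taken from the article; the example polyhedron and function name are assumptions made for the demonstration.

```python
# Sketch: decide whether a point p lies in conv(V) + cone(E) by an LP
# feasibility problem.  V holds candidate vertices as rows, E holds ray
# generators as rows; both the data and the function name are illustrative.
import numpy as np
from scipy.optimize import linprog

def in_conv_plus_cone(p, V, E):
    p, V, E = np.asarray(p, float), np.atleast_2d(V), np.atleast_2d(E)
    k, m = V.shape[0], E.shape[0]
    # Unknowns: lam (k values, lam >= 0, sum lam = 1) and mu (m values, mu >= 0),
    # constrained so that V^T lam + E^T mu = p.
    A_eq = np.vstack([
        np.hstack([V.T, E.T]),
        np.hstack([np.ones(k), np.zeros(m)]).reshape(1, -1),
    ])
    b_eq = np.concatenate([p, [1.0]])
    res = linprog(c=np.zeros(k + m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (k + m))
    return res.success

# Unbounded example: the nonnegative quadrant shifted to start at (1, 1),
# i.e. conv({(1, 1)}) + cone({(1, 0), (0, 1)}).
V = np.array([[1.0, 1.0]])
E = np.array([[1.0, 0.0], [0.0, 1.0]])
print(in_conv_plus_cone([3.0, 2.0], V, E))  # True: reachable from (1, 1) along the rays
print(in_conv_plus_cone([0.0, 0.0], V, E))  # False: outside the shifted quadrant
```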
[ { "math_id": 0, "text": "\\| a_i \\|_{\\infty} = 1" }, { "math_id": 1, "text": "2^{-7 n^3 f}" }, { "math_id": 2, "text": "2^{5 n^2 f}" }, { "math_id": 3, "text": "\\| qv-w \\| < 2^{-3f}, 0<q<2^{4 n f}" }, { "math_id": 4, "text": "\\| qa-c \\| + |qb-d| < 2^{-3v}, 0<q<2^{4 n v}" } ]
https://en.wikipedia.org/wiki?curid=75955131
75957569
Frequency (marketing)
Number of times an audience is exposed to advertisement In marketing and advertising, frequency refers to the number of times a target audience is exposed to a particular message or advertisement within a given time frame. This concept is a fundamental element of marketing communication strategies, aiming to enhance brand recall, create awareness, and influence consumer behavior through repeated exposure. From an audience perspective, Philip H. Dougherty says frequency can be interpreted as "how often consumers must see it before they can readily recall it and how many times it must be seen before attitudes are altered." For a business, increased frequency is generally desirable. Some studies have shown that audiences respond more favorably to repeated exposures to advertisements (i.e., increased frequency). Moreover, to maximize return on ad spend, some research suggests that repeated exposures should be spread out over time (for example, once a week) rather than concentrated in a short period (for example, several times in a single day), so as not to overwhelm the target audience. Construction. Television. In television media, frequency is calculated by dividing the number of impressions by the total audience population that was reached. formula_0 where formula_1 is the frequency, formula_2 is the number of impressions, and formula_3 is the total audience population that was reached. Radio. In radio, frequency is calculated by dividing gross impressions (the total number of times a commercial is heard, obtained by multiplying the average quarter-hour audience by the number of spots aired) by the total audience population that was reached. formula_4 where formula_5 is the gross impressions, formula_6 is the average quarter-hour audience, and formula_7 is the number of spots aired. (Both calculations are illustrated in the sketch following this entry.) Frequency capping. Frequency capping is a term in advertising that means restricting (capping) the number of times a specific visitor to a website is shown a particular advertisement (frequency). This restriction is applied to all websites that serve ads from the same advertising network. Frequency capping is a feature within ad serving that makes it possible to limit the maximum number of impressions/views of a specific ad that a visitor can see within a period of time. For example, "three views/visitor/24-hours" ("three views per visitor per 24 hours") means that after viewing this ad three times, a visitor will not see it again for 24 hours. This feature uses cookies to remember the impression count; cookie-less, privacy-preserving implementations are also available. Frequency capping is often cited as a way to avoid banner burnout, the point where visitors are being overexposed and response drops. This may be true for direct-response campaigns whose effectiveness is measured in click-throughs, but it might run counter to campaigns whose goal is brand awareness, as measured by non-click activity. Effective frequency. The effective frequency is the number of times a person must be exposed to an advertising message before a response is made and before exposure is considered wasteful. The subject of effective frequency is quite controversial: many people have their own definition of what the phrase means, and numerous studies offer their own theories or models of what the correct number of exposures is. There are several definitions of effective frequency: Theorists. Hermann Ebbinghaus. In 1879–80, Hermann Ebbinghaus conducted research on higher mental processes; he replicated the entire procedure in 1883–4. Ebbinghaus' methods achieved a remarkable set of results. He was the first to describe the shape of the learning curve. He reported that the time required to memorize an average nonsense syllable increases sharply as the number of syllables increases. 
He discovered that distributing learning trials over time is more effective in memorizing nonsense syllables than massing practice into a single session; and he noted that continuing to practice material after the learning criterion has been reached enhances retention. Using one of his methods, called savings, as an index, he showed that the most commonly accepted law of association, viz., association by contiguity (the idea that items next to one another are associated), had to be modified to include remote associations (associations between items that are not next to one another in a list). He was the first to describe primacy and recency effects (the fact that early and late items in a list are more likely to be recalled than middle items), and to report that even a small amount of initial practice, far below that required for retention, can lead to savings at relearning. He even addressed the question of memorization of meaningful material and estimated that learning such material takes only about one tenth of the effort required to learn comparable nonsense material. This learning curve research has been used to help researchers study advertising message retention. Thomas Smith. Thomas Smith wrote a guide called "Successful Advertising" in 1885. The saying he used is still being used today. The first time people look at any given ad, they don't even see it. The second time, they don't notice it. The third time, they are aware that it is there. The fourth time, they have a fleeting sense that they've seen it somewhere before. The fifth time, they actually read the ad. The sixth time, they thumb their nose at it. The seventh time, they start to get a little irritated with it. The eighth time, they start to think, "Here's that confounded ad again." The ninth time, they start to wonder if they're missing out on something. The tenth time, they ask their friends and neighbors if they've tried it. The eleventh time, they wonder how the company is paying for all these ads. The twelfth time, they start to think that it must be a good product. The thirteenth time, they start to feel the product has value. The fourteenth time, they start to remember wanting a product exactly like this for a long time. The fifteenth time, they start to yearn for it because they can't afford to buy it. The sixteenth time, they accept the fact that they will buy it sometime in the future. The seventeenth time, they make a note to buy the product. The eighteenth time, they curse their poverty for not allowing them to buy this terrific product. The nineteenth time, they count their money very carefully. The twentieth time prospects see the ad, they buy what it is offering. Herbert E. Krugman. Herbert E. Krugman wrote "Why Three Exposures may be enough" while he was employed at General Electric. His theory has been adopted and is widely used in the advertising arena. The following statement encapsulates his theory: "Let me try to explain the special qualities of one, two and three exposures. I stop at three because as you shall see there is no such thing as a fourth exposure psychologically; rather fours, fives, etc., are repeats of the third exposure effect. Exposure No. 1 is...a "What is it?" type of... response. Anything new or novel no matter how uninteresting on second exposure has to elicit some response the first time...if only to discard the object as of no further interest... 
The second exposure...response...is "What of it?"...whether or not [the message] has personal relevance... By the third exposure the viewer knows he's been through his "What it is?" and "What of it's?," and the third, then, becomes the true reminder ... The importance of this view ... is that it positions advertising as powerful only when the viewer...is interested in the [product message]... Secondly, it positions the viewer as...reacting to the commercial—very quickly...when the proper time comes round. There is a myth in the advertising world that viewers will forget your message if you don't repeat your advertising often enough. It is this myth that supports many large advertising expenditures... I would rather say the public comes closer to forgetting nothing they have seen on TV. They just "put it out of their minds" until and unless it has some use ... and [then] the response to the commercial continues. According to Krugman, there are only three levels of exposure in psychological, not media, terms: curiosity, recognition and decision. References.
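As a concrete illustration of the two frequency formulas in the Construction section and of a basic frequency-cap check, the following Python sketch may help. It is not drawn from the article: the function names, the in-memory store and the sample numbers are assumptions, and a real ad server would track impressions per visitor with cookies or other identifiers as described above.

```python
# Illustrative sketch (not from the article): the TV and radio frequency
# formulas, plus a toy per-visitor frequency-cap check kept in memory.
from collections import defaultdict
import time

def tv_frequency(impressions, audience_reached):
    """F = I / U: average number of exposures per person reached."""
    return impressions / audience_reached

def radio_frequency(avg_quarter_hour_audience, spots, audience_reached):
    """F = GI / U, with gross impressions GI = AQH * S."""
    gross_impressions = avg_quarter_hour_audience * spots
    return gross_impressions / audience_reached

class FrequencyCap:
    """Allow at most `cap` views of an ad per visitor within `window_seconds`."""
    def __init__(self, cap=3, window_seconds=24 * 3600):
        self.cap = cap
        self.window = window_seconds
        self.views = defaultdict(list)   # visitor id -> timestamps of past views

    def should_serve(self, visitor_id, now=None):
        now = time.time() if now is None else now
        recent = [t for t in self.views[visitor_id] if now - t < self.window]
        self.views[visitor_id] = recent
        if len(recent) >= self.cap:
            return False                  # cap reached: do not show the ad again yet
        recent.append(now)                # record this view against the visitor
        return True

print(tv_frequency(impressions=500_000, audience_reached=200_000))   # 2.5
print(radio_frequency(avg_quarter_hour_audience=10_000, spots=20,
                      audience_reached=50_000))                      # 4.0
cap = FrequencyCap(cap=3)
print([cap.should_serve("visitor-1") for _ in range(4)])  # [True, True, True, False]
```

The cap logic simply keeps timestamps of recent views per visitor and refuses to serve once the count within the window reaches the cap, which is the behavior described by "three views/visitor/24-hours".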
[ { "math_id": 0, "text": "F=I/U" }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "I" }, { "math_id": 3, "text": "U" }, { "math_id": 4, "text": "F = \\frac{GI}{U} = \\frac{AQH * S}{U}" }, { "math_id": 5, "text": "GI" }, { "math_id": 6, "text": "AQH" }, { "math_id": 7, "text": "S" } ]
https://en.wikipedia.org/wiki?curid=75957569