74636983
Neural operators
Machine learning framework Neural operators are a class of deep learning architectures designed to learn maps between infinite-dimensional function spaces. Neural operators represent an extension of traditional artificial neural networks, marking a departure from the typical focus on learning mappings between finite-dimensional Euclidean spaces or finite sets. Neural operators directly learn operators between function spaces; they can receive input functions, and the output function can be evaluated at any discretization. The primary application of neural operators is in learning surrogate maps for the solution operators of partial differential equations (PDEs), which are critical tools in modeling the natural environment. Standard PDE solvers can be time-consuming and computationally intensive, especially for complex systems. Neural operators have demonstrated improved performance in solving PDEs compared to existing machine learning methodologies while being significantly faster than numerical solvers. Neural operators have also been applied to various scientific and engineering disciplines such as turbulent flow modeling, computational mechanics, graph-structured data, and the geosciences. In particular, they have been applied to learning stress-strain fields in materials, classifying complex data like spatial transcriptomics, predicting multiphase flow in porous media, and carbon dioxide migration simulations. Finally, the operator learning paradigm allows learning maps between function spaces; it is distinct from parallel ideas of learning maps from finite-dimensional spaces to function spaces, and it subsumes those settings when limited to a fixed input resolution. Operator learning. Understanding and mapping relationships between function spaces has many applications in engineering and the sciences. In particular, one can cast the problem of solving partial differential equations as identifying a map between function spaces, such as from an initial condition to a time-evolved state. In other PDEs this map takes an input coefficient function and outputs a solution function. Operator learning is a machine learning paradigm to learn solution operators mapping the input function to the output function. Using traditional machine learning methods, addressing this problem would involve discretizing the infinite-dimensional input and output function spaces into finite-dimensional grids and applying standard learning models, such as neural networks. This approach reduces operator learning to finite-dimensional function learning and has some limitations, such as the inability to generalize to discretizations beyond the grid used in training. The primary properties of neural operators that differentiate them from traditional neural networks are discretization invariance and discretization convergence. Unlike conventional neural networks, which are tied to the discretization of the training data, neural operators can adapt to various discretizations without re-training. This property improves the robustness and applicability of neural operators in different scenarios, providing consistent performance across different resolutions and grids. Definition and formulation. Architecturally, neural operators are similar to feed-forward neural networks in the sense that they are composed of alternating linear maps and non-linearities. 
Since neural operators act on and output functions, neural operators have been instead formulated as a sequence of alternating linear integral operators on function spaces and point-wise non-linearities. Using an analogous architecture to finite-dimensional neural networks, similar universal approximation theorems have been proven for neural operators. In particular, it has been shown that neural operators can approximate any continuous operator on a compact set. Neural operators seek to approximate some operator formula_0 between function spaces formula_1 and formula_2 by building a parametric map formula_3. Such parametric maps formula_4 can generally be defined in the form formula_5 where formula_6 are the lifting (lifting the codomain of the input function to a higher dimensional space) and projection (projecting the codomain of the intermediate function to the output codimension) operators, respectively. These operators act pointwise on functions and are typically parametrized as multilayer perceptrons. formula_7 is a pointwise nonlinearity, such as a rectified linear unit (ReLU), or a Gaussian error linear unit (GeLU). Each layer formula_8 has a respective local operator formula_9 (usually parameterized by a pointwise neural network), a kernel integral operator formula_10, and a bias function formula_11. Given some intermediate functional representation formula_12 with domain formula_13 in the formula_14-th hidden layer, a kernel integral operator formula_15 is defined as formula_16 where the kernel formula_17 is a learnable implicit neural network, parametrized by formula_18. In practice, one is often given the input function to the neural operator at a specific resolution. For instance, consider the setting where one is given the evaluation of formula_12 at formula_19 points formula_20. Borrowing from Nyström integral approximation methods such as Riemann sum integration and Gaussian quadrature, the above integral operation can be computed as follows: formula_21 where formula_22 is the sub-area volume or quadrature weight associated to the point formula_23. Thus, a simplified layer can be computed as formula_24 The above approximation, along with parametrizing formula_17 as an implicit neural network, results in the graph neural operator (GNO). There have been various parameterizations of neural operators for different applications. These typically differ in their parameterization of formula_25. The most popular instantiation is the Fourier neural operator (FNO). FNO takes formula_26 and by applying the convolution theorem, arrives at the following parameterization of the kernel integral operator: formula_27 where formula_28 represents the Fourier transform and formula_29 represents the Fourier transform of some periodic function formula_17. That is, FNO parameterizes the kernel integration directly in Fourier space, using a prescribed number of Fourier modes. When the grid at which the input function is presented is uniform, the Fourier transform can be approximated using the discrete Fourier transform (DFT) with frequencies below some specified threshold. The discrete Fourier transform can be computed using a fast Fourier transform (FFT) implementation. Training. Training neural operators is similar to the training process for a traditional neural network. Neural operators are typically trained in some Lp norm or Sobolev norm. 
In particular, for a dataset formula_30 of size formula_31, neural operators minimize (a discretization of) formula_32, where formula_33 is a norm on the output function space formula_2. Neural operators can be trained directly using backpropagation and gradient descent-based methods. Another training paradigm is associated with physics-informed machine learning. In particular, physics-informed neural networks (PINNs) use complete physics laws to fit neural networks to solutions of PDEs. Extensions of this paradigm to operator learning are broadly called physics-informed neural operators (PINO), where loss functions can include full physics equations or partial physical laws. As opposed to standard PINNs, the PINO paradigm incorporates a data loss (as defined above) in addition to the physics loss formula_34. The physics loss formula_34 quantifies how much the predicted solution formula_35 violates the governing PDE for the input formula_36. References. <templatestyles src="Reflist/styles.css" />
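The kernel integration in a Fourier neural operator layer can be sketched in a few lines of NumPy. The snippet below is a minimal illustration of a single Fourier layer on a uniform 1D grid; the function name, array shapes, ReLU nonlinearity, and random toy inputs are assumptions made for the example. A full model would add the lifting and projection maps formula_6, stack several layers, and train by minimizing the data loss above (published FNO implementations typically use a relative L2 loss).

```python
# Minimal sketch of one Fourier neural operator (FNO) layer, not a reference
# implementation: keep the lowest Fourier modes, multiply them by a learnable
# complex tensor R (playing the role of R_phi), transform back, then add a
# pointwise linear map, a bias, and a pointwise nonlinearity.
import numpy as np

def fourier_layer(v, R, W, b, n_modes):
    # v: (n_grid, d) samples of v_t on a uniform grid; R: (n_modes, d, d) complex;
    # W: (d, d) pointwise linear operator W_t; b: (d,) bias.
    n, d = v.shape
    v_hat = np.fft.rfft(v, axis=0)                                    # FFT along the grid, per channel
    out_hat = np.zeros_like(v_hat)
    out_hat[:n_modes] = np.einsum("kij,kj->ki", R, v_hat[:n_modes])   # R_phi * (F v_t) on kept modes
    k_v = np.fft.irfft(out_hat, n=n, axis=0)                          # inverse FFT back to the grid
    return np.maximum(v @ W.T + k_v + b, 0.0)                         # pointwise ReLU nonlinearity

# toy usage: 128 grid points, 4 channels, 16 retained Fourier modes
rng = np.random.default_rng(0)
n, d, modes = 128, 4, 16
v = rng.standard_normal((n, d))
R = rng.standard_normal((modes, d, d)) + 1j * rng.standard_normal((modes, d, d))
W, b = rng.standard_normal((d, d)), rng.standard_normal(d)
print(fourier_layer(v, R, W, b, modes).shape)                         # (128, 4)
```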
[ { "math_id": 0, "text": "\\mathcal{G} : \\mathcal{A} \\to \\mathcal{U}" }, { "math_id": 1, "text": "\\mathcal{A}" }, { "math_id": 2, "text": "\\mathcal{U}" }, { "math_id": 3, "text": "\\mathcal{G}_\\phi : \\mathcal{A} \\to \\mathcal{U}" }, { "math_id": 4, "text": "\\mathcal{G}_\\phi" }, { "math_id": 5, "text": "\\mathcal{G}_\\phi := \\mathcal{Q} \\circ \\sigma(W_T + \\mathcal{K}_T + b_T) \\circ \\cdots \\circ \\sigma(W_1 + \\mathcal{K}_1 + b_1) \\circ \\mathcal{P}," }, { "math_id": 6, "text": "\\mathcal{P}, \\mathcal{Q}" }, { "math_id": 7, "text": "\\sigma" }, { "math_id": 8, "text": "t=1, \\dots, T" }, { "math_id": 9, "text": "W_t" }, { "math_id": 10, "text": "\\mathcal{K}_t" }, { "math_id": 11, "text": "b_t" }, { "math_id": 12, "text": "v_t" }, { "math_id": 13, "text": "D" }, { "math_id": 14, "text": "t" }, { "math_id": 15, "text": "\\mathcal{K}_\\phi" }, { "math_id": 16, "text": "(\\mathcal{K}_\\phi v_t)(x) := \\int_D \\kappa_\\phi(x, y, v_t(x), v_t(y))v_t(y)dy, " }, { "math_id": 17, "text": "\\kappa_\\phi" }, { "math_id": 18, "text": "\\phi" }, { "math_id": 19, "text": "n" }, { "math_id": 20, "text": "\\{y_j\\}_j^n" }, { "math_id": 21, "text": "\\int_D \\kappa_\\phi(x, y, v_t(x), v_t(y))v_t(y)dy\\approx \\sum_j^n \\kappa_\\phi(x, y_j, v_t(x), v_t(y_j))v_t(y_j)\\Delta_{y_j}, " }, { "math_id": 22, "text": "\\Delta_{y_j}" }, { "math_id": 23, "text": "y_j" }, { "math_id": 24, "text": "v_{t+1}(x) \\approx \\sigma\\left(\\sum_j^n \\kappa_\\phi(x, y_j, v_t(x), v_t(y_j))v_t(y_j)\\Delta_{y_j} + W_t(v_t(y_j)) + b_t(x)\\right)." }, { "math_id": 25, "text": "\\kappa" }, { "math_id": 26, "text": "\\kappa_\\phi(x, y, v_t(x), v_t(y)) := \\kappa_\\phi(x-y)" }, { "math_id": 27, "text": "(\\mathcal{K}_\\phi v_t)(x) = \\mathcal{F}^{-1} (R_\\phi \\cdot (\\mathcal{F}v_t))(x), " }, { "math_id": 28, "text": "\\mathcal{F}" }, { "math_id": 29, "text": "R_\\phi" }, { "math_id": 30, "text": "\\{(a_i, u_i)\\}_{i=1}^N" }, { "math_id": 31, "text": "N" }, { "math_id": 32, "text": "\\mathcal{L}_\\mathcal{U}(\\{(a_i, u_i)\\}_{i=1}^N) := \\sum_{i=1}^N \\|u_i - \\mathcal{G}_\\theta (a_i) \\|_\\mathcal{U}^2" }, { "math_id": 33, "text": "\\|\\cdot \\|_\\mathcal{U}" }, { "math_id": 34, "text": "\\mathcal{L}_{PDE}(a, \\mathcal{G}_\\theta (a))" }, { "math_id": 35, "text": "\\mathcal{G}_\\theta (a)" }, { "math_id": 36, "text": "a" } ]
https://en.wikipedia.org/wiki?curid=74636983
74639708
Drug permeability
Measure of drug take up into the body In medicinal chemistry, drug permeability is an empirical parameter that indicates how quickly a chemical entity or an active pharmaceutical ingredient crosses a biological membrane or another biological barrier to become bioavailable in the body. Drug permeability and drug aqueous solubility are the two parameters that define the fate of the active ingredient after oral administration and ultimately define its bioavailability. When drug permeability is empirically measured "in vitro", it is generally called apparent permeability (Papp), as its absolute value varies according to the method selected for its measurement. Papp is measured "in vitro" using cell-based barriers such as the Caco-2 model or artificial biomimetic barriers, such as the Parallel Artificial Membrane Permeability Assay (PAMPA) or the PermeaPad. All these methods are built on a donor compartment (from 0.2 up to several mL, depending on the method used) where the drug solution is placed, a biomimetic barrier, and an acceptor compartment, where the drug concentration is quantified over time. By maintaining sink conditions, a steady state is reached after a lag time (τ). Data Analysis. The drug flux (j) is the slope of the linear regression of the accumulated mass (Q) over time (t), normalized by the permeation area (A), i.e., the surface area of the barrier available for permeation. Equation 1: formula_0 The drug apparent permeability (Papp) is calculated by normalizing the drug flux (j) over the initial concentration of the API in the donor compartment (c0) as: Equation 2: formula_1 Dimensionally, Papp represents a velocity, and it is normally expressed in cm/s. The higher the permeability, the higher the expected bioavailability of the drug after oral administration. References. <templatestyles src="Reflist/styles.css" />
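As a concrete illustration of Equations 1 and 2, the short script below computes Papp from hypothetical cumulative-permeation data; all numbers (sampling times, amounts, area, donor concentration) are invented for the example and do not come from any particular assay.

```python
# Illustrative Papp calculation from cumulative permeated amount Q(t).
import numpy as np

t = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])   # h, sampling times after the lag time
Q = np.array([0.8, 1.9, 3.1, 4.2, 6.4, 8.5])   # ug, cumulative amount in the acceptor
A = 1.12                                        # cm^2, permeation area of the barrier
c0 = 100.0                                      # ug/mL, initial donor concentration

dQdt, _ = np.polyfit(t, Q, 1)                   # slope of Q vs t at steady state
flux = dQdt / A                                 # Eq. 1: j = dQ/dt * 1/A  [ug/(cm^2*h)]
Papp = flux / c0                                # Eq. 2: Papp = j / c0    [cm/h]
Papp_cm_per_s = Papp / 3600.0                   # convert to the customary cm/s
print(f"Papp ~ {Papp_cm_per_s:.2e} cm/s")
```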
[ { "math_id": 0, "text": "j=dQ/dt*1/A" }, { "math_id": 1, "text": "P_{app}=j/c_0 " } ]
https://en.wikipedia.org/wiki?curid=74639708
746424
David Hestenes
American physicist and science educator David Orlin Hestenes (born May 21, 1933) is a theoretical physicist and science educator. He is best known as chief architect of geometric algebra as a unified language for mathematics and physics, and as founder of Modelling Instruction, a research-based program to reform K–12 Science, Technology, Engineering, and Mathematics (STEM) education. For more than 30 years, he was employed in the Department of Physics and Astronomy of Arizona State University (ASU), where he retired with the rank of research professor and is now emeritus. Life and career. Education and doctorate degree. David Orlin Hestenes (eldest son of mathematician Magnus Hestenes) was born 1933 in Chicago, Illinois. Beginning college as a pre-medical major at UCLA from 1950 to 1952, he graduated from Pacific Lutheran University in 1954 with degrees in philosophy and speech. After serving in the U.S. Army from 1954 to 1956, he entered UCLA as an unclassified graduate student, completed a physics M.A. in 1958 and won a University Fellowship. His mentor at UCLA was the physicist Robert Finkelstein, who was working on unified field theories at that time. A serendipitous encounter with lecture notes by mathematician Marcel Riesz inspired Hestenes to study a geometric interpretation of Dirac matrices. He obtained his Ph.D. from UCLA with a thesis entitled "Geometric Calculus and Elementary Particles". Shortly thereafter he recognized that the Dirac algebras and Pauli matrices could be unified in matrix-free form by a device later called a spacetime split. Then he revised his thesis and published it in 1966 as a book, "Space–Time Algebra", now referred to as spacetime algebra (STA). This was the first major step in developing a unified, coordinate-free geometric algebra and calculus for all of physics. Postdoctorate research and career. From 1964 to 1966, Hestenes was an NSF Postdoctoral Fellow at Princeton with John Archibald Wheeler. In 1966 he joined the physics department at Arizona State University, rising to full professor in 1976 and retiring in 2000 to "Emeritus Professor of Physics". In 1980 and 1981 as a "NASA Faculty Fellow" and in 1983 as a "NASA Consultant" he worked at Jet Propulsion Laboratory on orbital mechanics and attitude control, where he applied geometric algebra in development of new mathematical techniques published in a textbook/monograph "New Foundations for Classical Mechanics". In 1983 he joined with entrepreneur Robert Hecht-Nielsen and psychologist Peter Richard Killeen in conducting the first ever conference devoted exclusively to neural network modeling of the brain. In 1987, he became the first visiting scholar in the Department of Cognitive and Neural Systems (Boston University) and worked on neuroscience research for a period. Hestenes has been a principal investigator for NSF grants seeking to teach physics through modeling and to measure student understanding of physics models at both the high school and university levels. Work. Hestenes has worked in mathematical and theoretical physics, geometric algebra, neural networks, and cognitive research in science education. He is the prime mover behind the contemporary resurgence of interest in geometric algebras and in other offshoots of Clifford algebras as ways of formalizing theoretical physics. Geometric algebra and calculus. Spacetime algebra provided the starting point for two main lines of research: on its implications for quantum mechanics specifically and for mathematical physics generally. 
The first line began with the fact that reformulation of the Dirac equation in terms of spacetime algebra reveals hidden geometric structure. Among other things, it reveals that the complex factor formula_0 in the equation is a geometric quantity (a bivector) identified with electron spin, where formula_1 specifies the spin direction and formula_2 is the spin magnitude. The implications of this insight have been studied in a long series of papers with the most significant conclusion linking it to Schrödinger's zitterbewegung and proposing a zitterbewegung interpretation of quantum mechanics. Research in this direction is still active. The second line of research was dedicated to extending geometric algebra to a self-contained geometric calculus for use in theoretical physics. Its culmination is the book "Clifford Algebra to Geometric Calculus" which follows an approach to differential geometry that uses the shape tensor (second fundamental form). Innovations in the book include the concepts of vector manifold, differential outermorphism, vector derivative that enables coordinate-free calculus on manifolds, and an extension of the Cauchy integral theorem to higher dimensions. Hestenes emphasizes the important role of the mathematician Hermann Grassmann for the development of geometric algebra, with William Kingdon Clifford building on Grassmann's work. Hestenes is adamant about calling this mathematical approach “geometric algebra” and its extension “geometric calculus,” rather than referring to it as “Clifford algebra”. He emphasizes the universality of this approach, the foundations of which were laid by both Grassmann and Clifford. He points out that contributions were made by many individuals, and Clifford himself used the term “geometric algebra” which reflects the fact that this approach can be understood as a mathematical formulation of geometry, whereas, so Hestenes asserts, the term “Clifford algebra” is often regarded as simply “just one more algebra among many other algebras”, which withdraws attention from its role as a unified language for mathematics and physics. Hestenes' work has been applied to Lagrangian field theory, formulation of a gauge theory of gravity alternative to general relativity by Lasenby, Doran and Gull, which they call gauge theory gravity (GTG), and it has been applied to spin representations of Lie groups. Most recently, it led Hestenes to formulate conformal geometric algebra, a new approach to computational geometry. This has found a rapidly increasing number of applications in engineering and computer science. He has contributed to the main conferences in this field, the International Conference on Clifford Algebras (ICCA) and the Applications of Geometric Algebra in Computer Science and Engineering (AGACSE) series. Modeling theory and instruction. Since 1980, Hestenes has been developing a "Modeling Theory" of science and cognition, especially to guide the design of science instruction. The theory distinguishes sharply between conceptual models that constitute the content core of science and the mental models that are essential to understand them. "Modeling Instruction" is designed to engage students in all aspects of modeling, broadly conceived as constructing, testing, analyzing and applying scientific models. To assess the effectiveness of "Modeling Instruction", Hestenes and his students developed the "Force Concept Inventory", a concept inventory tool for evaluating student understanding of introductory physics. 
After a decade of education research to develop and validate the approach, Hestenes was awarded grants from the National Science Foundation for another decade to spread the "Modeling Instruction Program" nationwide. As of 2011, more than 4000 teachers had participated in summer workshops on modeling, including nearly 10% of the United States' high school physics teachers. It is estimated that "Modeling" teachers reach more than 100,000 students each year. One outcome of the program is that the teachers created their own non-profit organization, the "American Modeling Teachers Association" (AMTA), to continue and expand the mission after government funding terminated. The AMTA has expanded to a nationwide community of teachers dedicated to addressing the nation's Science, Technology, Engineering, and Mathematics (STEM) education crisis. Another outcome of the Modeling Program was creation of a graduate program at Arizona State University for sustained professional development of STEM teachers. This provides a validated model for similar programs at universities across the country. Science Invents, LLC propulsion project controversy. On August 30, 2023, Hestenes was named in a United States District Court case in Utah filed by several venture capitalists claiming he endorsed and participated in a Ponzi scheme related to a discredited anti-gravity propulsion technology that was being marketed by Science Invents, LLC in Salt Lake City, Utah, a company owned by Joe Firmage, the former founder of USWeb. He was alleged to have taken over $100,000 in kickbacks from Firmage and other principals involved in the scheme and for recruiting investors into this scheme. The suit alleges Firmage and others falsely claimed the propulsion technology had been endorsed by the Department of Defense and was funded by them, and also claimed Hestenes had endorsed the validity of the science underlying the technology, a claim which Hestenes has adamantly denied. In total, the Ponzi scheme allegedly defrauded investors of $25,000,000 over a 10 year period. A default judgment was entered by the court on December 26, 2023 against the defendants. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "i \\hbar " }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "\\hbar /2" } ]
https://en.wikipedia.org/wiki?curid=746424
7464700
Theorem of corresponding states
According to van der Waals, the theorem of corresponding states (or principle/law of corresponding states) indicates that all fluids, when compared at the same reduced temperature and reduced pressure, have approximately the same compressibility factor and all deviate from ideal gas behavior to about the same degree. Material constants that vary for each type of material are eliminated, in a recast reduced form of a constitutive equation. The reduced variables are defined in terms of critical variables. The principle originated with the work of Johannes Diderik van der Waals in about 1873 when he used the critical temperature and critical pressure to derive a universal property of all fluids that follow the van der Waals equation of state. It predicts a value of formula_0 that is found to be an overestimate when compared to real gases. Edward A. Guggenheim used the phrase "Principle of Corresponding States" in an oft-cited paper to describe the phenomenon where different systems have very similar behaviors when near a critical point. There are many examples of non-ideal gas models which satisfy this theorem, such as the van der Waals model, the Dieterici model, and so on, that can be found on the page on real gases. Compressibility factor at the critical point. The compressibility factor at the critical point, which is defined as formula_1, where the subscript formula_2 indicates physical quantities measured at the critical point, is predicted to be a constant independent of substance by many equations of state. The table below for a selection of gases uses the following conventions: formula_3 is the critical temperature, formula_4 is the critical pressure, formula_5 is the critical specific volume, formula_6 is the universal gas constant, and formula_7 is the molar mass. References. <templatestyles src="Reflist/styles.css" /> External links.
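As a quick numerical illustration of the critical compressibility factor formula_1, the snippet below evaluates it for carbon dioxide using approximate textbook critical constants; the critical molar volume used here corresponds to the product of the critical specific volume and the molar mass in the article's notation, and the values are illustrative only.

```python
# Critical compressibility factor of CO2 from approximate critical constants.
R = 8.314          # J/(mol K), universal gas constant
Tc = 304.13        # K,  critical temperature of CO2
Pc = 7.3773e6      # Pa, critical pressure of CO2
vc = 94.0e-6       # m^3/mol, critical molar volume of CO2

Zc = Pc * vc / (R * Tc)
print(f"Z_c(CO2) ~ {Zc:.3f}")   # about 0.27, below the van der Waals prediction 3/8 = 0.375
```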
[ { "math_id": 0, "text": "3/8 = 0.375" }, { "math_id": 1, "text": "Z_c=\\frac{P_c v_c \\mu}{R T_c}" }, { "math_id": 2, "text": "c" }, { "math_id": 3, "text": "T_c" }, { "math_id": 4, "text": "P_c" }, { "math_id": 5, "text": "v_c" }, { "math_id": 6, "text": "R" }, { "math_id": 7, "text": "\\mu" } ]
https://en.wikipedia.org/wiki?curid=7464700
746497
Slip (aerodynamics)
Aerobatic maneuver A slip is an aerodynamic state where an aircraft is moving "somewhat" sideways as well as forward relative to the oncoming airflow or relative wind. In other words, for a conventional aircraft, the nose will be pointing in the opposite direction to the bank of the wing(s). The aircraft is not in coordinated flight and therefore is flying inefficiently. Background. Flying in a slip is aerodynamically inefficient, since the lift-to-drag ratio is reduced. More drag is at play consuming energy but not producing lift. Inexperienced or inattentive pilots will often enter slips unintentionally during turns by failing to coordinate the aircraft with the rudder. Airplanes can readily enter into a slip climbing out from take-off on a windy day. If left unchecked, climb performance will suffer. This is especially dangerous if there are nearby obstructions under the climb path and the aircraft is underpowered or heavily loaded. A slip can also be a "piloting maneuver" where the pilot deliberately enters one type of slip or another. Slips are particularly useful in performing a short field landing over an obstacle (such as trees, or power lines), or to avoid an obstacle (such as a single tree on the extended centerline of the runway), and may be practiced as part of emergency landing procedures. These methods are also commonly employed when flying into farmstead or rough country airstrips where the landing strip is short. Pilots need to touch down with ample runway remaining to slow down and stop. There are common situations where a pilot may deliberately enter a slip by using opposite rudder and aileron inputs, most commonly in a landing approach at low power. Without flaps or spoilers it is difficult to increase the steepness of the glide without adding significant speed. This excess speed can cause the aircraft to fly in ground effect for an extended period, perhaps running out of runway. In a forward slip much more drag is created, allowing the pilot to dissipate altitude without increasing airspeed, increasing the angle of descent (glide slope). Forward slips are especially useful when operating pre-1950s training aircraft, aerobatic aircraft such as the Pitts Special or any aircraft with inoperative flaps or spoilers. Often, if an airplane in a slip is made to stall, it displays very little of the yawing tendency that causes a skidding stall to develop into a spin. A stalling airplane in a slip may do little more than tend to roll into a wings-level attitude. In fact, in some airplanes stall characteristics may even be improved. Forward-slip vs. sideslip. Aerodynamically these are identical once established, but they are entered for different reasons and will create different ground tracks and headings relative to those prior to entry. Forward-slip is used to steepen an approach (reduce height) without gaining much airspeed, benefiting from the increased drag. The sideslip moves the aircraft sideways (often, only in relation to the wind) where executing a turn would be inadvisable, drag is considered a byproduct. Most pilots like to enter sideslip just before flaring or touching down during a crosswind landing. Forward-slip. The forward slip changes the heading of the aircraft away from the down wing, while retaining the original "track" (flight path over the ground) of the aircraft. To execute a forward slip, the pilot banks into the wind and applies opposing rudder (e.g., right aileron + left rudder) in order to keep moving towards the target. 
If you were the target you would see the plane's nose off to one side, a wing off to the other side and tilted down toward you. The pilot must make sure that the plane's nose is low enough to keep airspeed up. However, airframe speed limits such as VA and VFE must be observed. A forward-slip is useful when a pilot has set up for a landing approach with excessive height or must descend steeply beyond a tree line to touchdown near the runway threshold. Assuming that the plane is properly lined up for the runway, the forward slip will allow the aircraft "track" to be maintained while steepening the descent without adding excessive airspeed. Since the heading is not aligned with the runway, forward-slip must be removed before touchdown to avoid excessive side loading on the landing gear, and if a cross wind is present an appropriate sideslip may be necessary at touchdown as described below. Sideslip. The "sideslip" also uses aileron and opposite rudder. In this case it is entered by lowering a wing and applying exactly enough opposite rudder so the airplane does not turn (maintaining the same "heading"), while maintaining safe airspeed with pitch or power. Compared to Forward-slip, less rudder is used: just enough to stop the change in the heading. In the sideslip condition, the airplane's longitudinal axis remains parallel to the original flightpath, but the airplane no longer flies along that track. The horizontal component of lift is directed toward the low wing, drawing the airplane sideways. This is the still-air, headwind or tailwind scenario. In case of crosswind, the wing is lowered into the wind, so that the airplane flies the original track. This is the sideslip approach technique used by many pilots in crosswind conditions (sideslip without slipping). The other method of maintaining the desired track is the crab technique: the wings are kept level, but the nose is pointed (part way) into the crosswind, and resulting drift keeps the airplane on track. A sideslip may be used exclusively to remain lined up with a runway centerline while on approach in a crosswind or be employed in the final moments of a crosswind landing. To commence sideslipping, the pilot rolls the airplane toward the wind to maintain runway centerline position while maintaining heading on the centerline with the rudder. Sideslip causes one main landing gear to touch down first, followed by the second main gear. This allows the wheels to be constantly aligned with the track, thus avoiding any side load at touchdown. The sideslip method for crosswind landings is not suitable for long-winged and low-sitting aircraft such as gliders, where instead a crab angle (heading into the wind) is maintained until a moment before touchdown. Aircraft manufacturer Airbus recommends sideslip approach only in low crosswind conditions. Sideslip angle. The sideslip angle, also called angle of sideslip (AOS, AoS, formula_0, Greek letter beta), is a term used in fluid dynamics and aerodynamics and aviation. It relates to the rotation of the aircraft centerline from the relative wind. In flight dynamics it is given the shorthand notation formula_0 (beta) and is usually assigned to be "positive" when the relative wind is coming from the right of the nose of the airplane. The sideslip angle formula_0 is essentially the directional angle of attack of the airplane. It is the primary parameter in directional stability considerations. 
In vehicle dynamics, side slip angle is defined as the angle between the velocity vector and the longitudinal axis of the vehicle at the center of gravity in an instantaneous frame. As the lateral acceleration increases during cornering, the side slip angle decreases. Thus, in very high-speed turns with a small turning radius, there is high lateral acceleration and formula_0 can become negative. Other uses of the slip. There are other, specialized circumstances where slips can be useful in aviation. For example, during aerial photography, a slip can lower one side of the aircraft to allow ground photos to be taken through a side window. Pilots will also use a slip to land in icing conditions if the front windshield has been entirely iced over: by landing slightly sideways, the pilot is able to see the runway through the aircraft's side window. Slips also play a role in aerobatics and aerial combat. How a slip affects flight. When an aircraft is put into a forward slip with no other changes to the throttle or elevator, the pilot will notice an increased rate of descent (or a reduced rate of ascent). This is mostly due to increased drag on the fuselage. The airflow over the fuselage is at a sideways angle, increasing the relative frontal area, which increases drag. References. <templatestyles src="Reflist/styles.css" />
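As a small illustration of the aeronautical sign convention described above, the snippet below computes the sideslip angle formula_0 from assumed body-axis velocity components, using the common definition beta = arcsin(v / |V|); the component values are invented for the example.

```python
# Sideslip angle from body-axis velocity components (illustrative values).
import math

u, v, w = 60.0, 4.0, 1.5                 # m/s: forward, rightward (lateral), downward
V = math.sqrt(u*u + v*v + w*w)           # airspeed magnitude
beta = math.degrees(math.asin(v / V))    # positive when the relative wind comes from the right of the nose
print(f"sideslip angle ~ {beta:.1f} deg")
```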
[ { "math_id": 0, "text": "\\beta" } ]
https://en.wikipedia.org/wiki?curid=746497
746550
Generalized game
Game generalized so that it can be played on a board or grid of any size In computational complexity theory, a generalized game is a game or puzzle that has been generalized so that it can be played on a board or grid of any size. For example, generalized chess is the game of chess played on an formula_0 board, with formula_1 pieces on each side. Generalized Sudoku includes Sudokus constructed on an formula_0 grid. Complexity theory studies the asymptotic difficulty of problems, so generalizations of games are needed, as games on a fixed size of board are finite problems. For many generalized games which last for a number of moves polynomial in the size of the board, the problem of determining if there is a win for the first player in a given position is PSPACE-complete. Generalized hex and reversi are PSPACE-complete. For many generalized games which may last for a number of moves exponential in the size of the board, the problem of determining if there is a win for the first player in a given position is EXPTIME-complete. Generalized chess, go (with Japanese ko rules), Quixo, and checkers are EXPTIME-complete. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "n\\times n" }, { "math_id": 1, "text": "2n" } ]
https://en.wikipedia.org/wiki?curid=746550
74663570
Kling–Gupta efficiency
Performance indicator for hydrologic models The Kling–Gupta efficiency (KGE) is a goodness-of-fit indicator widely used in the hydrologic sciences for comparing simulations to observations. It was created by hydrologic scientists Harald Kling and Hoshin Vijai Gupta. Its creators intended for it to improve upon widely used metrics such as the coefficient of determination and the Nash–Sutcliffe model efficiency coefficient. formula_0 where: formula_1 is the linear correlation coefficient between the simulated and observed values, formula_2 is a term measuring the relative variability of the simulations, and formula_3 is a bias term. The terms formula_2 and formula_3 are defined as follows: formula_4 where: formula_5 is the mean of the simulated values and formula_6 is the mean of the observed values; and formula_7 where: formula_9 is the standard deviation of the simulated values and the denominator is the standard deviation of the observed values, with formula_8 and formula_10 denoting the corresponding variances. A modified version, KGE', was proposed by Kling et al. in 2012. References. <templatestyles src="Reflist/styles.css" />
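A direct transcription of the KGE formula into code might look like the following sketch; the function name and toy arrays are illustrative, not part of any standard library.

```python
# Minimal KGE implementation following the formula above.
import numpy as np

def kge(simulated, observed):
    s, o = np.asarray(simulated, float), np.asarray(observed, float)
    r = np.corrcoef(s, o)[0, 1]      # linear correlation between simulations and observations
    alpha = s.std() / o.std()        # variability ratio sigma_s / sigma_o
    beta = s.mean() / o.mean()       # bias ratio mu_s / mu_o
    return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sim = np.array([1.1, 1.9, 3.2, 3.8, 5.3])
print(round(kge(sim, obs), 3))       # 1.0 would indicate a perfect fit
```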
[ { "math_id": 0, "text": "\n\\text{KGE} =1-\\sqrt{(r-1)^2+(\\alpha-1)^2+(\\beta-1)^2}\n" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\\beta" }, { "math_id": 4, "text": "\n\\beta=\\frac{\\mu_s}{\\mu_o}\n" }, { "math_id": 5, "text": "\\mu_s" }, { "math_id": 6, "text": "\\mu_o" }, { "math_id": 7, "text": "\n\\alpha = \\frac{\\sigma_s}{\\sigma_o}\n" }, { "math_id": 8, "text": "\\sigma_s^2" }, { "math_id": 9, "text": "\\sigma_s" }, { "math_id": 10, "text": "\\sigma_o^2" } ]
https://en.wikipedia.org/wiki?curid=74663570
746673
Bicuspid aortic valve
Medical condition Bicuspid aortic valve (BAV) is a form of heart disease in which two of the leaflets of the aortic valve fuse during development in the womb resulting in a two-leaflet (bicuspid) valve instead of the normal three-leaflet (tricuspid) valve. BAV is the most common cause of heart disease present at birth and affects approximately 1.3% of adults. Normally, the mitral valve is the only bicuspid valve and this is situated between the heart's left atrium and left ventricle. Heart valves play a crucial role in ensuring the unidirectional flow of blood from the atrium to the ventricles, or from the ventricle to the aorta or pulmonary trunk. BAV is normally inherited. Signs and symptoms. In many cases, a bicuspid aortic valve will cause no problems. People with BAV may become tired more easily than those with normal valvular function and have difficulty maintaining stamina for cardio-intensive activities due to poor heart performance caused by stress on the aortic wall. Complications. Calcification. BAV may become calcified later in life, which may lead to varying degrees of severity of aortic stenosis that will manifest as murmurs. If the leaflets do not close correctly, aortic regurgitation can occur. If these become severe enough, they may require heart surgery. The heart is put under more stress in order to either pump more blood through a stenotic valve or attempt to circulate regurgitation blood through a leaking valve. Ultimately there is a risk of rupture in the aortic valve due to bicuspid aortopathy which is a result of progressive aortic dilation from the stress of having only two valve leaflets where three are normal. Aortic lesions. One of the most notable associations with BAV is the tendency for these patients to present with ascending aortic aneurysmal lesions. The extracellular matrix of the aorta in patients with BAV shows marked deviations from that of the normal tricuspid aortic valve, specifically reduced Fibrillin-1. It is currently believed that an increase in the ratio of MMP2 (Matrix Metalloproteinases 2) to TIMP1 (tissue inhibitors of metalloproteinase) may be responsible for the abnormal degradation of the valve matrix and therefore lead to aortic dissection and aneurysm. However, other studies have also shown MMP9 involvement with no differences in TIMP expression. The size of the proximal aorta should be evaluated carefully during the workup. The initial diameter of the aorta should be noted and annual evaluation with CT scan, or MRI to avoid ionizing radiation, should be recommended to the patient; the examination should be conducted more frequently if a change in aortic diameter is seen. From this monitoring, the type of surgery that should be offered to the patient can be determined based on the change in size of the aorta. Aortic narrowing. A bicuspid aortic valve may cause the heart's aortic valve to narrow (aortic stenosis). This narrowing prevents the valve from opening fully, which reduces or blocks blood flow from the heart to the body. In some cases, the aortic valve does not close tightly, causing blood to leak backward into the left ventricle. Coarctation of the aorta (a congenital narrowing in the region of the ductus arteriosus) has also been associated with BAV. Pathophysiology. Fusion of aortic valve leaflets occurs most commonly (≈80%) between the right coronary and left coronary leaflets (RL), which are the anterior leaflets of the aortic valve. 
Fusion also occurs between the right coronary and noncoronary leaflets (RN, ≈17%), and least commonly between the noncoronary and left coronary leaflets (≈2%). In comparison to other fusion patterns, RN leaflet fusion has a stronger association with future complications such as aortic valve regurgitation and stenosis. However, all fusion patterns associate with a specific area or areas of dilated enlargement in either the root of the ascending aorta, the ascending aorta, or the transverse aortic arch. Hemodynamics. Identifying hemodynamic patterns in the aorta after left ventricle systole aids in predicting consequential complications of bicuspid aortic valve. The patient-specific risk of developing complications such as aortic aneurysms is dependent on the particular aortic leaflet fusion pattern, with each pattern varying in 4D MRI measurements of wall shear stress (WSS), blood flow velocity, asymmetrical flow displacement and flow angle of the aorta. BAV outflow is helical and occurs at high velocities (>1 m/s) throughout the ascending aorta. This is potentially more damaging to the aorta in comparison to the streamline flow and short-lived burst of high velocity at the beginning of the aorta, as seen within a healthy tricuspid valve. This eccentric outflow from the BAV results in blood hitting and reflecting off the aortic wall in a non-streamline fashion. The specific zones where blood hits is dependent on the varying BAV leaflet fusion patterns and consequently correlates with increases in WSS. WSS measurements in RL fusion indicate an increase in pressure applied predominantly to the right-anterior side of the vessel wall, while RN fusion increases WSS on the right-posterior wall. The resulting rise in WSS is supported by the asymmetrical displacement of blood flow produced by an increased angle of outflow from the BAV. Displacement is measured as the distance in millimeters from the center of the aorta to the center of the high velocity outflow. Blood does not flow centrally through the aorta in BAV, but along the right-anterior and right-posterior vessel wall for RL and RN leaflet fusion respectively. Aortic disease. Identification of hemodynamics for RL, RN, and left coronary and noncoronary leaflet fusion patterns enables detection of specific aortic regions susceptible to dysfunction and the eventual development of disease. Specifically, RL and RN fusion patterns are more likely to develop into these aortic disease states. The blood flow information associated with RL fusion causes dilation of the mid-ascending aorta, while RN fusion is associated with dilation in the root, distal ascending aorta and transverse arch. BAV helical and high velocity outflow patterns are consistent with aortic dilation hemodynamics seen in those with tricuspid aortic valves. However, it is the increase and variance in WSS and flow displacement in BAV that demonstrate the importance of aortic leaflet morphology. Flow displacement measurements taken from 4D MRI may be best for detecting irregularities in hemodynamics. Displacement measurements were highly sensitive and distinguishable between different valve morphologies. Hemodynamic measurements from 4D MRI in patients with BAV are advantageous in determining the timing and location of repair surgery to the aorta in aortopathy states. Most patients with bicuspid aortic valve whose valve becomes dysfunctional will need careful follow-up and potentially valve replacement at some point in life. Regular EchoCG and MRI may be performed. 
If the valve is normally functioning or minimally dysfunctional, average lifespan is similar to that of those without the anomaly. Diagnosis. A bicuspid aortic valve can be associated with a heart murmur located at the right second intercostal space. Often there will be differences in blood pressures between upper and lower extremities. The diagnosis can be assisted with echocardiography or magnetic resonance imaging (MRI). Four-dimensional magnetic resonance imaging (4D MRI) is a technique that defines blood flow characteristics and patterns throughout the vessels, across valves, and in compartments of the heart. Four-dimensional imaging enables accurate visualizations of blood flow patterns in a three-dimensional (3D) spatial volume, as well as in a fourth temporal dimension. Current 4D MRI systems produce high-resolution images of blood flow in just a single scan session. Classification. Bicuspid aortic valves may assume three different types of configuration: Treatment. Complications stemming from structural heart issues are most often treated through surgical intervention, which could include aortic valve replacement or balloon valvuloplasty. Prognosis. BAV leads to significant complications in over one-third of affected individuals. Notable complications of BAV include narrowing of the aortic valve opening, backward blood flow at the aortic valve, dilation of the ascending aorta, and infection of the heart valve. If aortic regurgitation and dilation of the ascending aorta are noted, the patient should undergo yearly surveillance with transthoracic echocardiograms if the aortic root measures 4.5 centimeters or greater in diameter. Epidemiology. Bicuspid aortic valves are the most common cardiac valvular anomaly, occurring in 1–2% of the general population. The condition is twice as common in males as in females. Bicuspid aortic valve is a heritable condition, with a demonstrated association with mutations in the NOTCH1 gene. Its heritability (formula_0) is as high as 89%. Both familial clustering and isolated valve defects have been documented. Recent studies suggest that BAV is an autosomal dominant condition with incomplete penetrance. Other congenital heart defects are associated with bicuspid aortic valve at various frequencies, including coarctation of the aorta. Bicuspid aortic valve abnormality is also the most commonly observed cardiac defect in Turner syndrome. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "h^2" } ]
https://en.wikipedia.org/wiki?curid=746673
7466855
Proportional fairness
Proportional fairness may refer to: Topics referred to by the same term <templatestyles src="Dmbox/styles.css" /> This disambiguation page lists articles associated with the title Proportional fairness.
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "1/n" } ]
https://en.wikipedia.org/wiki?curid=7466855
7466947
Multi-label classification
Classification problem where multiple labels may be assigned to each instance In machine learning, multi-label classification or multi-output classification is a variant of the classification problem where multiple nonexclusive labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of several (greater than or equal to two) classes. In the multi-label problem the labels are nonexclusive and there is no constraint on how many of the classes the instance can be assigned to. Formally, multi-label classification is the problem of finding a model that maps inputs x to binary vectors y; that is, it assigns a value of 0 or 1 for each element (label) in y. Problem transformation methods. Several problem transformation methods exist for multi-label classification, and can be roughly broken down into: Transformation into binary classification problems. The baseline approach, called the "binary relevance" method, amounts to independently training one binary classifier for each label. Given an unseen sample, the combined model then predicts all labels for this sample for which the respective classifiers predict a positive result. Although this method of dividing the task into multiple binary tasks may superficially resemble the one-vs.-all (OvA) and one-vs.-rest (OvR) methods for multiclass classification, it is essentially different from both, because a single classifier under binary relevance deals with a single label, without any regard to other labels whatsoever. A classifier chain is an alternative method for transforming a multi-label classification problem into several binary classification problems. It differs from binary relevance in that labels are predicted sequentially, and the outputs of all previous classifiers (i.e. positive or negative for a particular label) are input as features to subsequent classifiers. Classifier chains have been applied, for instance, in HIV drug resistance prediction. Bayesian networks have also been applied to optimally order the classifiers in classifier chains. When the problem is transformed into multiple binary classifications, the likelihood function reads formula_0 where index formula_1 runs over the samples, index formula_2 runs over the labels, formula_3 indicates the binary outcomes 0 or 1, formula_4 indicates the Kronecker delta, and formula_5 indicates the multi-hot encoded labels of sample formula_1. Transformation into multi-class classification problem. The label powerset (LP) transformation creates one class for every label combination present in the training set. For example, if possible labels for an example were A, B, and C, the label powerset representation of this problem is a multi-class classification problem with the classes [0 0 0], [1 0 0], [0 1 0], [0 0 1], [1 1 0], [1 0 1], [0 1 1], and [1 1 1] where for example [1 0 1] denotes an example where labels A and C are present and label B is absent. Ensemble methods. A set of multi-class classifiers can be used to create a multi-label ensemble classifier. For a given example, each classifier outputs a single class (corresponding to a single label in the multi-label problem). 
These predictions are then combined by an ensemble method, usually a voting scheme where every class that receives a requisite percentage of votes from individual classifiers (often referred to as the discrimination threshold) is predicted as a present label in the multi-label output. However, more complex ensemble methods exist, such as committee machines. Another variation is the random k-labelsets (RAKEL) algorithm, which uses multiple LP classifiers, each trained on a random subset of the actual labels; label prediction is then carried out by a voting scheme. A set of multi-label classifiers can be used in a similar way to create a multi-label ensemble classifier. In this case, each classifier votes once for each label it predicts rather than for a single label. Adapted algorithms. Some classification algorithms/models have been adapted to the multi-label task, without requiring problem transformations. Examples of algorithms adapted for multi-label data include multi-label k-nearest neighbors (ML-kNN), multi-label decision trees, kernel methods for vector output, and multi-label neural networks such as BP-MLL. Learning paradigms. Based on learning paradigms, the existing multi-label classification techniques can be classified into batch learning and online machine learning. Batch learning algorithms require all the data samples to be available beforehand. They train the model using the entire training data and then predict the test sample using the learned relationship. The online learning algorithms, on the other hand, incrementally build their models in sequential iterations. In iteration t, an online algorithm receives a sample xt and predicts its label(s) ŷt using the current model; the algorithm then receives yt, the true label(s) of xt, and updates its model based on the sample-label pair (xt, yt). Multi-label stream classification. Data streams are possibly infinite sequences of data that continuously and rapidly grow over time. Multi-label stream classification (MLSC) is the version of the multi-label classification task that takes place in data streams. It is sometimes also called online multi-label classification. The difficulties of multi-label classification (exponential number of possible label sets, capturing dependencies between labels) are combined with the difficulties of data streams (time and memory constraints, addressing an infinite stream with finite means, concept drift). Many MLSC methods resort to ensemble methods in order to increase their predictive performance and deal with concept drift, and ensembles are the most widely used class of methods in the MLSC literature. Statistics and evaluation metrics. Considering formula_6 to be a set of labels for the formula_7 data sample (do not confuse it with a one-hot vector; it is simply a collection of all of the labels that belong to this sample), the extent to which a dataset is multi-label can be captured in two statistics: label cardinality, the average number of labels per sample, formula_8, where formula_9 is the number of samples; and label density, the average number of labels per sample divided by the total number of labels, formula_10, where formula_11 is the set of all labels appearing in the dataset. Evaluation metrics for multi-label classification performance are inherently different from those used in multi-class (or binary) classification, due to the inherent differences of the classification problem. If T denotes the true set of labels for a given sample, and P the predicted set of labels, then the following metrics can be defined on that sample: the Hamming loss, the fraction of wrongly predicted labels, computed over a dataset as formula_12, where formula_13 is the target, formula_14 is the prediction, and formula_15 denotes the exclusive or; the Jaccard index (intersection over union), formula_16, where formula_17 is the predicted set of labels and formula_18 is the true set of labels; and precision formula_20, recall formula_21, and their harmonic mean, the formula_19 score. Cross-validation in multi-label settings is complicated by the fact that the ordinary (binary/multiclass) way of stratified sampling will not work; alternative ways of approximate stratified sampling have been suggested. Implementations and datasets. Java implementations of multi-label algorithms are available in the Mulan and Meka software packages, both based on Weka. 
The scikit-learn Python package implements some multi-label algorithms and metrics. The scikit-multilearn Python package specifically caters to multi-label classification; it provides multi-label implementations of several well-known techniques, including SVM and kNN, and is built on top of the scikit-learn ecosystem. The binary relevance method, classifier chains, and other multi-label algorithms with many different base learners are implemented in the R package mlr. A list of commonly used multi-label datasets is available at the Mulan website. References. <templatestyles src="Reflist/styles.css" />
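As an illustration of the binary relevance and classifier chain transformations using the scikit-learn package mentioned above, the sketch below trains both on a synthetic multi-label dataset and reports the Hamming loss and sample-averaged Jaccard score; the dataset and the logistic-regression base learner are arbitrary choices made for the example.

```python
# Binary relevance vs. classifier chain on a synthetic multi-label problem.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain
from sklearn.metrics import hamming_loss, jaccard_score

X, Y = make_multilabel_classification(n_samples=500, n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# binary relevance: one independent binary classifier per label
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
# classifier chain: each classifier also receives the previous labels as features
cc = ClassifierChain(LogisticRegression(max_iter=1000), random_state=0).fit(X_tr, Y_tr)

for name, model in [("binary relevance", br), ("classifier chain", cc)]:
    P = model.predict(X_te)
    print(name,
          "Hamming loss:", round(hamming_loss(Y_te, P), 3),
          "Jaccard:", round(jaccard_score(Y_te, P, average="samples"), 3))
```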
[ { "math_id": 0, "text": " L=\\prod_{i=1}^n (\\prod_k (\\prod_{j_k}(p_{k,j_k}(x_i)^{\\delta_{y_{i,k},j_k}})))\n" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "j_k" }, { "math_id": 4, "text": "\\delta_{a,b}" }, { "math_id": 5, "text": " y_{i,k}\\in {0,1}" }, { "math_id": 6, "text": "Y_i" }, { "math_id": 7, "text": "i^{th}" }, { "math_id": 8, "text": "\\frac{1}{N} \\sum_{i=1}^N |Y_i|" }, { "math_id": 9, "text": "N" }, { "math_id": 10, "text": "\\frac{1}{N} \\sum _{i=1}^N \\frac{|Y_i|}{|L|}" }, { "math_id": 11, "text": "L = \\bigcup_{i=1}^N Y_i" }, { "math_id": 12, "text": "\\frac{1}{|N|\\cdot |L|} \\sum_{i=1}^{|N|} \\sum_{j=1}^{|L|} \\operatorname{xor}(y_{i,j}, z_{i,j})" }, { "math_id": 13, "text": "y_{i,j}" }, { "math_id": 14, "text": "z_{i,j}" }, { "math_id": 15, "text": "\\operatorname{xor}(\\cdot)" }, { "math_id": 16, "text": "\\frac{|T \\cap P|}{|T \\cup P|}" }, { "math_id": 17, "text": "P" }, { "math_id": 18, "text": "T" }, { "math_id": 19, "text": "F_1" }, { "math_id": 20, "text": "\\frac{|T \\cap P|}{|P|}" }, { "math_id": 21, "text": "\\frac{|T \\cap P|}{|T|}" } ]
https://en.wikipedia.org/wiki?curid=7466947
7466971
Parabolic partial differential equation
Class of second-order linear partial differential equations A parabolic partial differential equation is a type of partial differential equation (PDE). Parabolic PDEs are used to describe a wide variety of time-dependent phenomena in, i.a., engineering science and financial mathematics. Examples include the heat equation, time-dependent Schrödinger equation and Black–Scholes equation. Definition. To define the simplest kind of parabolic PDE, consider a real-valued function formula_0 of two independent real variables, formula_1 and formula_2. A second-order, linear, constant-coefficient PDE for formula_3 takes the form formula_4 where the subscripts denote the first- and second-order partial derivatives with respect to formula_1 and formula_2. The PDE is classified as "parabolic" if the coefficients of the principal part (i.e. the terms containing the second derivatives of formula_3) satisfy the condition formula_5 Usually formula_1 represents one-dimensional position and formula_2 represents time, and the PDE is solved subject to prescribed initial and boundary conditions. Equations with formula_6 are termed elliptic while those with formula_7 are hyperbolic. The name "parabolic" is used because the assumption on the coefficients is the same as the condition for the analytic geometry equation formula_8 to define a planar parabola. The basic example of a parabolic PDE is the one-dimensional heat equation formula_9 where formula_10 is the temperature at position formula_1 along a thin rod at time formula_11 and formula_12 is a positive constant called the thermal diffusivity. The heat equation says, roughly, that temperature at a given time and point rises or falls at a rate proportional to the difference between the temperature at that point and the average temperature near that point. The quantity formula_13 measures how far off the temperature is from satisfying the mean value property of harmonic functions. The concept of a parabolic PDE can be generalized in several ways. For instance, the flow of heat through a material body is governed by the three-dimensional heat equation formula_14 where formula_15 denotes the Laplace operator acting on formula_3. This equation is the prototype of a "multi-dimensional parabolic" PDE. Noting that formula_16 is an elliptic operator suggests a broader definition of a parabolic PDE: formula_17 where formula_18 is a second-order elliptic operator (implying that formula_18 must be positive; a case where formula_19 is considered below). A system of partial differential equations for a vector formula_3 can also be parabolic. For example, such a system is hidden in an equation of the form formula_20 if the matrix-valued function formula_21 has a kernel of dimension 1. Solution. Under broad assumptions, an initial/boundary-value problem for a linear parabolic PDE has a solution for all time. The solution formula_10, as a function of formula_1 for a fixed time formula_22, is generally smoother than the initial data formula_23. For a nonlinear parabolic PDE, a solution of an initial/boundary-value problem might explode in a singularity within a finite amount of time. It can be difficult to determine whether a solution exists for all time, or to understand the singularities that do arise. Such interesting questions arise in the solution of the Poincaré conjecture via Ricci flow. Backward parabolic equation. One occasionally encounters a so-called "backward parabolic PDE", which takes the form formula_24 (note the absence of a minus sign). 
An initial-value problem for the backward heat equation, formula_25 is equivalent to a final-value problem for the ordinary heat equation, formula_26 Similarly to a final-value problem for a parabolic PDE, an initial-value problem for a backward parabolic PDE is usually not well-posed (solutions often grow unbounded in finite time, or even fail to exist). Nonetheless, these problems are important for the study of the reflection of singularities of solutions to various other PDEs.
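To make the one-dimensional heat equation discussed above concrete, the following is a minimal explicit finite-difference sketch in Python; the rod length, diffusivity, grid, time step and initial profile are illustrative choices rather than anything taken from the article, and the scheme is the standard forward-Euler-in-time, central-difference-in-space discretization, not a production solver:

import numpy as np

# Explicit scheme for u_t = alpha * u_xx on [0, L] with u = 0 at both ends.
alpha = 1.0                     # thermal diffusivity (illustrative value)
L, T = 1.0, 0.1                 # rod length and final time
nx = 51                         # number of spatial grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha        # respects the stability bound dt <= dx^2 / (2 alpha)
x = np.linspace(0.0, L, nx)

u = np.sin(np.pi * x)           # initial temperature profile u(x, 0)
steps = int(T / dt)
for _ in range(steps):
    # interior update: u_i += alpha*dt/dx^2 * (u_{i+1} - 2 u_i + u_{i-1})
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0          # Dirichlet boundary conditions

# For this initial condition the exact solution is exp(-alpha*pi^2*t) * sin(pi*x),
# so the numerical result can be checked against it.
t_final = steps * dt
print(np.max(np.abs(u - np.exp(-alpha * np.pi**2 * t_final) * np.sin(np.pi * x))))

The comparison at the end uses the exact decaying solution for this particular initial profile, which makes the rate-of-decay interpretation of the heat equation given above easy to check numerically.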
[ { "math_id": 0, "text": "u(x, y)" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "u" }, { "math_id": 4, "text": "Au_{xx} + 2Bu_{xy} + Cu_{yy} + Du_x + Eu_y + F = 0," }, { "math_id": 5, "text": "B^2 - AC = 0." }, { "math_id": 6, "text": "B^2 - AC < 0" }, { "math_id": 7, "text": "B^2 - AC > 0" }, { "math_id": 8, "text": "A x^2 + 2B xy + C y^2 + D x + E y + F = 0" }, { "math_id": 9, "text": "u_t = \\alpha\\,u_{xx}," }, { "math_id": 10, "text": "u(x,t)" }, { "math_id": 11, "text": "t" }, { "math_id": 12, "text": "\\alpha" }, { "math_id": 13, "text": "u_{xx}" }, { "math_id": 14, "text": "u_t = \\alpha\\,\\Delta u," }, { "math_id": 15, "text": "\\Delta u := \\frac{\\partial^2u}{\\partial x^2}+\\frac{\\partial^2u}{\\partial y^2}+\\frac{\\partial^2u}{\\partial z^2}," }, { "math_id": 16, "text": "-\\Delta" }, { "math_id": 17, "text": "u_t = -Lu," }, { "math_id": 18, "text": "L" }, { "math_id": 19, "text": "u_t = +Lu" }, { "math_id": 20, "text": "\\nabla \\cdot (a(x) \\nabla u(x)) + b(x)^\\text{T} \\nabla u(x) + cu(x) = f(x)" }, { "math_id": 21, "text": "a(x)" }, { "math_id": 22, "text": "t > 0" }, { "math_id": 23, "text": "u(x,0) = u_0(x)" }, { "math_id": 24, "text": "u_t = Lu" }, { "math_id": 25, "text": "\\begin{cases} u_{t} = -\\Delta u & \\textrm{on} \\ \\ \\Omega \\times (0,T), \\\\ u=0 & \\textrm{on} \\ \\ \\partial\\Omega \\times (0,T), \\\\ u = f & \\textrm{on} \\ \\ \\Omega \\times \\left \\{ 0 \\right \\}. \\end{cases}" }, { "math_id": 26, "text": "\\begin{cases} u_{t} = \\Delta u & \\textrm{on} \\ \\ \\Omega \\times (0,T), \\\\ u=0\n& \\textrm{on} \\ \\ \\partial\\Omega \\times (0,T), \\\\ u = f & \\textrm{on} \\ \\ \\Omega \\times \\left \\{ T \\right \\}. \\end{cases} " } ]
https://en.wikipedia.org/wiki?curid=7466971
74671225
FHZ
FHZ can refer to several topics referred to by the same term. This page lists articles associated with the title FHZ.
[ { "math_id": 0, "text": "F_{HZ}" } ]
https://en.wikipedia.org/wiki?curid=74671225
74672276
Pass-through (economics)
In economics, cost pass-through (also known as price transmission or simply "pass-through") is a process (or result) of a business changing the pricing of its output (products or services) to reflect a change in costs of its own input (materials, labor, etc.). The effect of pass-through is quantified as the pass-through rate, the ratio of the change in prices to the change in costs. Depending on the circumstances, a business might decide to absorb part of the cost changes (resulting in a ratio below 1.0) or amplify them (a ratio above 1.0). Cost pass-through is extensively used when analyzing the state of competition or evaluating mergers. In the studies of inflation, a pass-through from prices to wages (i.e., in the opposite direction) is also considered. Simple examples. When an increase in costs (the "cost shock") happens in a perfectly competitive market, a bigger share of the change will be borne by the party that is less sensitive to the price. For example, in the perfectly inelastic demand case (consumers have to have the good whatever the price is), a cost shock will be passed to consumers in its entirety ("full pass-through", a pass-through rate equal to 1.0). In the case of a perfectly elastic demand (consumers ready to abandon the market if faced with any price increase), producers will be forced to fully absorb the shock (pass-through rate 0.0). In the intermediate case of consumers being somewhat price-sensitive, the demand for goods will be reduced; the ultimate pass-through effect will depend on the slope of the supply curve. If it slopes upwards (the more units are produced, the costlier each one is, e.g., due to capacity constraints), the per-unit costs will go down, providing the producer with some room to partially absorb the cost shock. If the supply curve slopes downward (the case of economies of scale), reduced production will make each unit costlier to produce, so the pass-through rate can become higher than 1.0 (so-called "over-shifting"). Terminology. In addition to the "absolute pass-through" that uses incremental values (i.e., a $2 cost shock causing a $1 increase in price yields a 50% pass-through rate), some researchers use "pass-through elasticity", where the ratio is calculated based on percentage changes of price and cost (for example, with an elasticity of 0.5, a 2% increase in cost yields a 1% increase in price). The relationship between these values is based on the ratio of the price to marginal cost: formula_0 The number of businesses affected by the change in costs can vary from one (in this case, the term "firm-specific pass-through" is used) to all the companies in an industry ("industry-wide pass-through"); consideration of intermediate scenarios between these two extremes is also meaningful, and the pass-through rate depends significantly on the scenario. In an oligopolistic market, cost shocks experienced by one producer affect the competitors through a change in the equilibrium price (the so-called "cross pass-through effect"). In many cases, in the short term, prices increase more when costs rise and decrease proportionally less when costs fall; this situation is characterized as an "asymmetric pass-through". This asymmetry eventually dissipates, although there is no set time interval for this downward adjustment of the price. While studying a market with vertically separated companies (for example, a producer and a reseller), the terms "upstream pass-through" and "downstream pass-through" are used to denote, respectively, the producer and reseller pass-through rates.
Theoretical considerations. The cost pass-through in a perfectly competitive market is higher with less elastic demand and more elastic supply. A convex demand curve corresponds to higher pass-through, while concave demand is characterized by a lower pass-through value. The pass-through for concave demand is always below 1.0 if the marginal cost is constant. In the case of a perfectly competitive market, formula_1. Practical estimates. There are a few ways to estimate (or predict) the pass-through rate: Attempts to accurately estimate the cost pass-through are hampered by multiple practical issues: Empirical data on pass-through values show large variability, both between particular industry cases and between different companies affected by similar price shocks: absolute industry-wide pass-through rates were observed to be anywhere between 20% and well above 100%, with pass-through elasticities occupying the full range of 0.0 to 1.0 (elasticities close to 1.0 in practice correspond to absolute rates above 100% due to the mark-up inherent in a successful business operation).
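As a small numerical illustration of the two relations quoted above (the conversion between pass-through elasticity and absolute pass-through, and the perfectly competitive pass-through rate in terms of demand and supply elasticities), here is a Python sketch; the function names and all numerical values are illustrative, not taken from any study:

# Pass-through rate in a perfectly competitive market, and the conversion
# between pass-through elasticity and absolute pass-through.  Both formulas
# are the ones quoted in the article; the numbers are made up.

def competitive_passthrough(demand_elasticity, supply_elasticity):
    # passthrough = 1 / (1 + demand_elasticity / supply_elasticity)
    return 1.0 / (1.0 + demand_elasticity / supply_elasticity)

def absolute_passthrough(passthrough_elasticity, price, marginal_cost):
    # absolute pass-through = elasticity * price / marginal_cost
    return passthrough_elasticity * price / marginal_cost

# Demand much less elastic than supply: most of a cost shock reaches consumers.
print(competitive_passthrough(demand_elasticity=0.5, supply_elasticity=2.0))   # 0.8

# A pass-through elasticity of 0.5 with a 50% mark-up over marginal cost
# corresponds to an absolute pass-through of 0.75.
print(absolute_passthrough(0.5, price=15.0, marginal_cost=10.0))               # 0.75

The second call also illustrates the remark above that elasticities close to 1.0 translate into absolute rates above 100% whenever the price carries a mark-up over marginal cost.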
[ { "math_id": 0, "text": "{absolutePassthrough} = {passthroughElasticity} \\times \\frac {price} {marginalCost}" }, { "math_id": 1, "text": "{passthrough} = \\frac 1 {1 + \\frac {demandElasticity} {supplyElasticity}}" } ]
https://en.wikipedia.org/wiki?curid=74672276
746751
OpenMath
OpenMath is the name of a markup language for specifying the meaning of mathematical formulae. Among other things, it can be used to complement MathML, a standard which mainly focuses on the presentation of formulae, with information about their semantic meaning. OpenMath can be encoded in XML or in a binary format. Coverage. OpenMath consists of the definition of "OpenMath Objects", which is an abstract datatype for describing the logical structure of a mathematical formula and the definition of "OpenMath Content Dictionaries", or collections of names for mathematical concepts. The names available from the latter type of collections are specifically intended for use in extending MathML, and conversely, a basic set of such "Content Dictionaries" has been designed to be compatible with the small set of mathematical concepts defined in Content MathML, the non-presentational subset of MathML. History. OpenMath has been developed in a long series of workshops and (mostly European) research projects that began in 1993 and continues through today. The OpenMath 1.0 Standard was released in February 2000, and revised as OpenMath 1.1 in October 2002. Two years later, the OpenMath 2.0 Standard was released in June 2004. OpenMath 1 fixed the basic language architecture, while OpenMath 2 brought better XML integration and structure sharing, and liberalized the notion of OpenMath Content Dictionaries. OpenMath Society. The OpenMath Effort is governed by the OpenMath Society, based in Helsinki, Finland. The Society brings together tool builders, software suppliers, publishers and authors. Membership is by invitation of the Society's Executive Committee, which welcomes self-nominations of individuals who have worked on OpenMath-related issues in research or application. As of 2007, Michael Kohlhase is president of the OpenMath Society. He succeeded Arjeh M. Cohen, who was the first president. Example. The well-known quadratic formula: formula_0 would be marked up like this in OpenMath (the representation is an expression tree made up from functional elements like OMA for function application or OMV for variables):

<OMOBJ xmlns="http://www.openmath.org/OpenMath">
  <OMA cdbase="http://www.openmath.org/cd">
    <OMS cd="relation1" name="eq"/>
    <OMV name="x"/>
    <OMA>
      <OMS cd="arith1" name="divide"/>
      <OMA>
        <OMS cdbase="http://www.example.com/mathops" cd="multiops" name="plusminus"/>
        <OMA>
          <OMS cd="arith1" name="unary_minus"/>
          <OMV name="b"/>
        </OMA>
        <OMA>
          <OMS cd="arith1" name="root"/>
          <OMA>
            <OMS cd="arith1" name="minus"/>
            <OMA>
              <OMS cd="arith1" name="power"/>
              <OMV name="b"/>
              <OMI>2</OMI>
            </OMA>
            <OMA>
              <OMS cd="arith1" name="times"/>
              <OMI>4</OMI>
              <OMV name="a"/>
              <OMV name="c"/>
            </OMA>
          </OMA>
        </OMA>
      </OMA>
      <OMA>
        <OMS cd="arith1" name="times"/>
        <OMI>2</OMI>
        <OMV name="a"/>
      </OMA>
    </OMA>
  </OMA>
</OMOBJ>

In the expression tree above, symbols, i.e. elements like OMS, stand for mathematical functions that are applied to sibling expressions in an OMA, which are interpreted as arguments. The OMS element is a generic extension element that means whatever is specified in the content dictionary referred to in the cd attribute (this document can be found at the URI specified in the innermost cdbase attribute dominating the respective OMS element).
In the example above, all symbols come from the content dictionary for arithmetic (arith1, see below), except for the plusminus, which comes from a non-standard place, hence the cdbase attribute here. OpenMath Content Dictionaries. Content Dictionaries are structured XML documents that define mathematical symbols that can be referred to by OMS elements in OpenMath Objects. The OpenMath 2 standard does not prescribe a canonical encoding for content dictionaries, but only requires an infrastructure sufficient for unique referencing in OMS elements. OpenMath provides a very basic XML encoding that meets these requirements, and a set of specific content dictionaries for some areas of mathematics, in particular covering the K-14 fragment covered by content MathML. For more richly structured content dictionaries (and generally for arbitrary mathematical documents) the OMDoc format extends OpenMath by a “statement level” (including structures like definitions, theorems, proofs and examples, as well as means for interrelating them) and a “theory level”, where a theory is a collection of several contextually related statements. OMDoc's theories are designed to be compatible with OpenMath content dictionaries, but they can also be set into inheritance and import relations. Criticism. OpenMath has been criticised for being inadequate for general mathematics, for not exposing enough formal precision to capture the intricacies of numerics, for lacking a proof of concept, and for being an inferior technology to already established approaches of encoding mathematical semantics, amongst other presumed shortcomings.
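To show how such an expression tree can be processed programmatically, here is a small Python sketch that parses an OpenMath object with the standard library and prints it in prefix notation. It is only a toy reader handling the elements used in the example (OMOBJ, OMA, OMS, OMV, OMI), and the reduced object in om_xml below is an illustrative fragment rather than the full quadratic-formula object from the article:

import xml.etree.ElementTree as ET

NS = "{http://www.openmath.org/OpenMath}"

# A stripped-down OpenMath object (the equation x = -b / (2a) is enough to
# show the idea; the full quadratic-formula object parses the same way).
om_xml = """
<OMOBJ xmlns="http://www.openmath.org/OpenMath">
  <OMA>
    <OMS cd="relation1" name="eq"/>
    <OMV name="x"/>
    <OMA>
      <OMS cd="arith1" name="divide"/>
      <OMA>
        <OMS cd="arith1" name="unary_minus"/>
        <OMV name="b"/>
      </OMA>
      <OMA>
        <OMS cd="arith1" name="times"/>
        <OMI>2</OMI>
        <OMV name="a"/>
      </OMA>
    </OMA>
  </OMA>
</OMOBJ>
"""

def render(node):
    """Turn an OpenMath expression tree into a readable prefix string."""
    tag = node.tag.replace(NS, "")
    if tag == "OMA":                       # application: first child is the head symbol
        head, *args = list(node)
        return render(head) + "(" + ", ".join(render(a) for a in args) + ")"
    if tag == "OMS":                       # symbol from a content dictionary
        return node.get("cd") + "." + node.get("name")
    if tag == "OMV":                       # variable
        return node.get("name")
    if tag == "OMI":                       # integer literal
        return node.text.strip()
    return tag

root = ET.fromstring(om_xml)
print(render(root[0]))   # relation1.eq(x, arith1.divide(arith1.unary_minus(b), arith1.times(2, a)))

The point of the sketch is that, because OpenMath separates structure (OMA applications) from meaning (OMS symbols resolved via content dictionaries), even a few lines of generic tree walking suffice to recover the prefix form of the expression.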
[ { "math_id": 0, "text": "x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}" } ]
https://en.wikipedia.org/wiki?curid=746751
74683745
Resurgent function
The term resurgent function (from Latin "resurgere", to get up again) comes from French mathematician Jean Écalle's "theory of resurgent functions and alien calculus". The theory evolved from the summability of divergent series (see Borel summation) and treats analytic functions with isolated singularities. He introduced the term in the late 1970s. "Resurgent functions" have applications in asymptotic analysis, in the theory of differential equations, in perturbation theory and in quantum field theory. For analytic functions with isolated singularities, the alien calculus, a special algebra for their derivatives, can be derived. Definition. A formula_0-resurgent function is an element of formula_1, i.e. an element of the form formula_2 from formula_3, where formula_4 and formula_5 is a "formula_0-continuable germ". A power series formula_6 whose formal Borel transform is a formula_0-resurgent function is called a formula_0-resurgent series. Basic concepts and notation. Convergence in formula_7: The formal power series formula_8 is "convergent in formula_7" if the associated formal power series formula_9 has a positive radius of convergence. formula_10 denotes the space of "formal power series convergent in formula_7". Formal Borel transform: The "formal Borel transform" (named after Émile Borel) is the operator formula_11 defined by formula_12. Convolution in formula_13: For formula_14, the convolution is given by formula_15. By adjunction we can add a unit to the convolution in formula_16 and introduce the vector space formula_17, where the element formula_18 is denoted by formula_19. Using the convention formula_20, we can write the space as formula_21, define formula_22, and set formula_23. formula_0-continuable germ: Let formula_0 be a non-empty discrete subset of formula_24 and define formula_25. Let formula_26 be the radius of convergence of formula_5. Then formula_5 is a "formula_0-continuable germ" if there exists an formula_27 such that formula_28 and formula_29, and formula_5 admits analytic continuation along every path in formula_30 starting at a point in formula_31. formula_32 denotes the space of formula_0-continuable germs in formula_33.
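Because the formal Borel transform defined above acts coefficient-wise, sending a_n z^{-n-1} to a_n ζ^n/n!, it is easy to illustrate on truncated series. The following Python sketch works with plain coefficient lists; the choice of the Euler series Σ (−1)^n n! z^{-n-1} as input is an illustrative example, its Borel transform being the geometric series for 1/(1+ζ):

from math import factorial

def formal_borel(coeffs):
    """Formal Borel transform acting on coefficient lists.

    The input [a0, a1, a2, ...] represents the truncated formal series
    a0/z + a1/z^2 + a2/z^3 + ..., and the output list represents
    a0 + a1*zeta/1! + a2*zeta^2/2! + ... .
    """
    return [a / factorial(n) for n, a in enumerate(coeffs)]

# Truncation of the Euler series sum_n (-1)^n n! z^{-n-1}, a divergent series
# whose formal Borel transform is the convergent geometric series sum_n (-zeta)^n.
N = 8
euler = [(-1) ** n * factorial(n) for n in range(N)]
print(formal_borel(euler))            # [1.0, -1.0, 1.0, -1.0, ...]

# The transformed coefficients are those of 1/(1 + zeta), which has a single
# singularity at zeta = -1; the germ is therefore Omega-continuable for
# Omega = {-1}, matching the definition given in the article.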
[ { "math_id": 0, "text": "\\Omega" }, { "math_id": 1, "text": "\\mathbb{C}\\delta\\oplus \\hat{\\mathcal{R}}_{\\Omega}" }, { "math_id": 2, "text": "c\\delta + \\hat{\\phi}" }, { "math_id": 3, "text": "\\mathbb{C}\\delta \\oplus \\mathbb{C}\\{\\zeta\\}" }, { "math_id": 4, "text": "c\\in \\mathbb{C}" }, { "math_id": 5, "text": "\\hat{\\phi}" }, { "math_id": 6, "text": "\\widetilde{\\phi}\\in \\mathbb{C}[[z^{-1}]]" }, { "math_id": 7, "text": "\\infty" }, { "math_id": 8, "text": "\\phi(z) \\in \\mathbb{C}[[z^{-1}]]" }, { "math_id": 9, "text": "\\psi(t) = \\phi(1/z) \\in \\mathbb{C}[[t]]" }, { "math_id": 10, "text": "\\mathbb{C}\\{z^{-1}\\}" }, { "math_id": 11, "text": "\\mathcal{B}:z^{-1}\\mathbb{C}[[z^{-1} ]]\\to \\mathbb{C}[[\\zeta]]" }, { "math_id": 12, "text": "\\mathcal{B}:\\phi=\\sum\\limits_{n=0}^\\infty a_n z^{-n-1}\\mapsto \\hat{\\phi }=\\sum\\limits_{n= 0}^\\infty a_n \\frac{\\zeta^n}{n!}" }, { "math_id": 13, "text": "\\mathbb{C}\\{\\zeta\\}" }, { "math_id": 14, "text": "\\hat{\\phi},\\hat{\\psi}\\in \\mathbb{C}[[\\zeta]]" }, { "math_id": 15, "text": "\\hat{\\phi}*\\hat{\\psi}:=\\mathcal{B}[\\phi\\psi]" }, { "math_id": 16, "text": "\\mathbb{C}[[\\zeta]]" }, { "math_id": 17, "text": "\\mathbb{C}\\times \\mathbb{C}[[z]]" }, { "math_id": 18, "text": "(1,0)" }, { "math_id": 19, "text": "\\delta" }, { "math_id": 20, "text": "\\{0\\}\\times \\mathbb{C}[[\\zeta]]:=\\mathbb{C}[[\\zeta]]" }, { "math_id": 21, "text": "\\mathbb{C}\\delta\\oplus \\mathbb{C}[[z]]" }, { "math_id": 22, "text": "(a\\delta + \\hat{\\phi })*(b\\delta + \\hat{\\psi }) := ab\\delta + a\\hat{\\psi } + b \\hat{\\phi } +\\hat{\\phi}*\\hat{\\psi}" }, { "math_id": 23, "text": "\\mathcal{B}1 := \\delta" }, { "math_id": 24, "text": "\\mathbb{C}" }, { "math_id": 25, "text": "\\mathbb{D}_R=\\{\\zeta\\in \\mathbb{C}\\mid |\\zeta-0| < R\\}\\setminus\\{0\\}" }, { "math_id": 26, "text": "r" }, { "math_id": 27, "text": "R" }, { "math_id": 28, "text": "r \\geq R>0" }, { "math_id": 29, "text": "\\mathbb{D}_R\\cap \\Omega=\\emptyset" }, { "math_id": 30, "text": "\\mathbb{C}\\setminus \\Omega" }, { "math_id": 31, "text": "\\mathbb{D}_R" }, { "math_id": 32, "text": "\\hat{\\mathcal{R}}_{\\Omega}" }, { "math_id": 33, "text": "\\mathbb{C}\\{\\zeta \\}" } ]
https://en.wikipedia.org/wiki?curid=74683745
7468671
Penalty method
Type of algorithm for constrained optimization Penalty methods are a certain class of algorithms for solving constrained optimization problems. A penalty method replaces a constrained optimization problem by a series of unconstrained problems whose solutions ideally converge to the solution of the original constrained problem. The unconstrained problems are formed by adding a term, called a penalty function, to the objective function that consists of a "penalty parameter" multiplied by a measure of violation of the constraints. The measure of violation is nonzero when the constraints are violated and is zero in the region where constraints are not violated. Description. Let us say we are solving the following constrained problem: formula_0 subject to formula_1 This problem can be solved as a series of unconstrained minimization problems formula_2 where formula_3 In the above equations, formula_4 is the "exterior penalty function" while formula_5 is the "penalty coefficient". When the penalty coefficient is 0, "fp" = "f". In each iteration of the method, we increase the penalty coefficient formula_5 (e.g. by a factor of 10), solve the unconstrained problem and use the solution as the initial guess for the next iteration. Solutions of the successive unconstrained problems will asymptotically converge to the solution of the original constrained problem. Common penalty functions in constrained optimization are the quadratic penalty function and the deadzone-linear penalty function. Convergence. We first consider the set of global optimizers of the original problem, X*. A first convergence theorem (Thm. 9.2.1) assumes that the objective "f" has bounded level sets and that the original problem is feasible. This theorem is helpful mostly when "fp" is convex, since in this case we can find the global optimizers of "fp". A second theorem (Thm. 9.2.2) considers local optimizers. Let x* be a non-degenerate local optimizer of the original problem ("nondegenerate" means that the gradients of the active constraints are linearly independent and the second-order sufficient optimality condition is satisfied). Then there exists a neighborhood V* of x* and some "p"0 > 0, such that for all "p" > "p"0, the penalized objective "fp" has exactly one critical point in V* (denoted by x*("p")), and x*("p") approaches x* as "p" → ∞. Also, the objective value "f"(x*("p")) is weakly increasing with "p". Practical applications. Image compression optimization algorithms can make use of penalty functions for selecting how best to compress zones of colour to single representative values. The penalty method is often used in computational mechanics, especially in the finite element method, to enforce conditions such as contact. The advantage of the penalty method is that, once we have a penalized objective with no constraints, we can use any unconstrained optimization method to solve it. The disadvantage is that, as the penalty coefficient "p" grows, the unconstrained problem becomes ill-conditioned: the coefficients become very large, which may cause numeric errors and slow convergence of the unconstrained minimization (Sub. 9.2). See also. Barrier methods constitute an alternative class of algorithms for constrained optimization. These methods also add a penalty-like term to the objective function, but in this case the iterates are forced to remain interior to the feasible domain, and the barrier is in place to bias the iterates to remain away from the boundary of the feasible region. In practice they are more efficient than penalty methods.
Augmented Lagrangian methods are alternative penalty methods, which make it possible to obtain high-accuracy solutions without pushing the penalty coefficient to infinity. This makes the unconstrained penalized problems easier to solve. Other nonlinear programming algorithms: References. Smith, Alice E.; Coit, David W. Penalty Functions. Handbook of Evolutionary Computation, Section C5.2. Oxford University Press and Institute of Physics Publishing, 1996. Coello, A.C.: Theoretical and Numerical Constraint-Handling Techniques Used with Evolutionary Algorithms: A Survey of the State of the Art. Comput. Methods Appl. Mech. Engrg. 191(11-12), 1245–1287. Courant, R. Variational Methods for the Solution of Problems of Equilibrium and Vibrations. Bull. Amer. Math. Soc., 49, 1–23, 1943. Yin, Wotao. Optimization Algorithms for Constrained Optimization. Department of Mathematics, UCLA, 2015.
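To make the iteration in the Description section concrete, here is a minimal Python sketch of the quadratic penalty method formula_3 applied to a small example problem; the problem data, the tenfold penalty schedule, and the use of scipy.optimize.minimize for the unconstrained subproblems are illustrative choices, not a definitive implementation:

import numpy as np
from scipy.optimize import minimize

# Quadratic penalty method for
#     min (x1 - 2)^2 + (x2 - 1)^2   subject to   x1 + x2 - 1 <= 0,
# whose constrained solution is (1, 0).  All problem data are illustrative.

def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def c(x):                        # constraint written in the form c(x) <= 0
    return x[0] + x[1] - 1.0

def penalized(x, p):
    # f_p(x) = f(x) + p * max(0, c(x))^2  -- the quadratic exterior penalty
    return f(x) + p * max(0.0, c(x)) ** 2

x = np.zeros(2)                  # initial guess
p = 1.0
for _ in range(8):
    # solve the unconstrained subproblem, warm-starting from the previous solution
    x = minimize(penalized, x, args=(p,), method="BFGS").x
    p *= 10.0                    # increase the penalty coefficient each outer iteration

print(x, "violation:", max(0.0, c(x)))   # approaches (1, 0) with a tiny residual violation

Note that the iterates approach the solution from outside the feasible region (the constraint is slightly violated until p is large), which is the exterior-penalty behaviour contrasted with barrier methods in the text above.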
[ { "math_id": 0, "text": " \\min_x f(\\mathbf x) " }, { "math_id": 1, "text": " c_i(\\mathbf x) \\le 0 ~\\forall i \\in I. " }, { "math_id": 2, "text": " \\min f_p (\\mathbf x) := f (\\mathbf x) + p ~ \\sum_{i\\in I} ~ g(c_i(\\mathbf x)) " }, { "math_id": 3, "text": " g(c_i(\\mathbf x))=\\max(0,c_i(\\mathbf x ))^2. " }, { "math_id": 4, "text": " g(c_i(\\mathbf x))" }, { "math_id": 5, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=7468671
7469547
SABR volatility model
Stochastic volatility model used in derivatives markets In mathematical finance, the SABR model is a stochastic volatility model, which attempts to capture the volatility smile in derivatives markets. The name stands for "stochastic alpha, beta, rho", referring to the parameters of the model. The SABR model is widely used by practitioners in the financial industry, especially in the interest rate derivative markets. It was developed by Patrick S. Hagan, Deep Kumar, Andrew Lesniewski, and Diana Woodward. Dynamics. The SABR model describes a single forward formula_0, such as a LIBOR forward rate, a forward swap rate, or a forward stock price. This is one of the standards used by market participants to quote volatilities. The volatility of the forward formula_0 is described by a parameter formula_1. SABR is a dynamic model in which both formula_0 and formula_1 are represented by stochastic state variables whose time evolution is given by the following system of stochastic differential equations: formula_2 formula_3 with the prescribed time zero (currently observed) values formula_4 and formula_5. Here, formula_6 and formula_7 are two correlated Wiener processes with correlation coefficient formula_8: formula_9 The constant parameters formula_10 satisfy the conditions formula_11. formula_12 is a volatility-like parameter for the volatility. formula_13 is the instantaneous correlation between the underlying and its volatility. The initial volatility formula_5 controls the height of the ATM implied volatility level. Both the correlation formula_13 and formula_14 control the slope of the implied skew. The volatility of volatility formula_12 controls its curvature. The above dynamics is a stochastic version of the CEV model with the "skewness" parameter formula_14: in fact, it reduces to the CEV model if formula_15. The parameter formula_12 is often referred to as the "volvol", and its meaning is that of the lognormal volatility of the volatility parameter formula_1. Asymptotic solution. We consider a European option (say, a call) on the forward formula_0 struck at formula_16, which expires formula_17 years from now. The value of this option is equal to the suitably discounted expected value of the payoff formula_18 under the probability distribution of the process formula_19. Except for the special cases of formula_20 and formula_21, no closed form expression for this probability distribution is known. The general case can be solved approximately by means of an asymptotic expansion in the parameter formula_22. Under typical market conditions, this parameter is small and the approximate solution is actually quite accurate. Also significantly, this solution has a rather simple functional form, is very easy to implement in computer code, and lends itself well to risk management of large portfolios of options in real time. It is convenient to express the solution in terms of the implied volatility formula_23 of the option. Namely, we force the SABR model price of the option into the form of the Black model valuation formula. Then the implied volatility, which is the value of the lognormal volatility parameter in Black's model that forces it to match the SABR price, is approximately given by: formula_24 where, for clarity, we have set formula_25. The value formula_26 denotes a conveniently chosen midpoint between formula_4 and formula_16 (such as the geometric average formula_27 or the arithmetic average formula_28).
We have also set formula_29, formula_30, and formula_31. The function formula_32 entering the formula above is given by formula_33 Alternatively, one can express the SABR price in terms of Bachelier's model. Then the implied normal volatility can be asymptotically computed by means of the following expression: formula_34 It is worth noting that the normal SABR implied volatility is generally somewhat more accurate than the lognormal implied volatility. The accuracy of the approximation can be further improved, and the degree of arbitrage reduced, if the equivalent volatility under the CEV model with the same formula_14 is used for pricing options. SABR for negative rates. A SABR model extension for negative interest rates that has gained popularity in recent years is the shifted SABR model, where the shifted forward rate is assumed to follow a SABR process formula_35 formula_36 for some positive shift formula_37. Since shifts are included in market quotes, and there is an intuitive soft boundary for how negative rates can become, shifted SABR has become market best practice to accommodate negative rates. The SABR model can also be modified to cover negative interest rates by: formula_38 formula_36 for formula_39 and a "free" boundary condition for formula_40. An exact solution for zero correlation, as well as an efficient approximation for the general case, is available. An obvious drawback of this approach is the a priori assumption of potentially highly negative interest rates via the free boundary. Arbitrage problem in the implied volatility formula. Although the asymptotic solution is very easy to implement, the density implied by the approximation is not always arbitrage-free, especially not for very low strikes (it becomes negative or the density does not integrate to one). One possibility to "fix" the formula is to use the stochastic collocation method and to project the corresponding implied, ill-posed, model onto a polynomial of an arbitrage-free variable, e.g. a normal one. This guarantees equality in probability at the collocation points while the generated density is arbitrage-free. Using the projection method, analytic European option prices are available and the implied volatilities stay very close to those initially obtained by the asymptotic formula. Another possibility is to rely on a fast and robust PDE solver for an equivalent expansion of the forward PDE that numerically preserves the zeroth and first moments, thus guaranteeing the absence of arbitrage. Extensions. The SABR model can be extended by assuming its parameters to be time-dependent. This, however, complicates the calibration procedure. An advanced calibration method of the time-dependent SABR model is based on so-called "effective parameters". Alternatively, Guerrero and Orlando show that a time-dependent local stochastic volatility (SLV) model can be reduced to a system of autonomous PDEs that can be solved using the heat kernel, by means of the Wei-Norman factorization method and Lie algebraic techniques. Explicit solutions obtained by these techniques are comparable to traditional Monte Carlo simulations while allowing for shorter numerical computation times. Simulation. As the stochastic volatility process follows a geometric Brownian motion, its exact simulation is straightforward. However, the simulation of the forward asset process is not a trivial task. Taylor-based simulation schemes are typically considered, like Euler–Maruyama or Milstein.
Recently, novel methods have been proposed for the "almost exact" Monte Carlo simulation of the SABR model, and extensive studies of the model have been carried out. For the normal SABR model (formula_20 with no boundary condition at formula_40), a closed-form simulation method is known.
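As an illustration of the Taylor-based schemes mentioned in the Simulation section, here is a minimal Euler–Maruyama sketch in Python for the SABR dynamics given above; all parameter values are illustrative, the volatility is stepped with its exact lognormal solution, and the crude clipping of the forward near zero merely stands in for the careful boundary treatment a real implementation would need:

import numpy as np

# Euler-Maruyama simulation of the SABR dynamics
#     dF = sigma * F**beta * dW,   dsigma = alpha * sigma * dZ,   d<W,Z> = rho dt
# Parameter values below are purely illustrative.
F0, sigma0 = 0.05, 0.20
alpha, beta, rho = 0.4, 0.5, -0.3
T, n_steps, n_paths = 1.0, 252, 100_000
dt = T / n_steps

rng = np.random.default_rng(0)
F = np.full(n_paths, F0)
sigma = np.full(n_paths, sigma0)

for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rng.standard_normal(n_paths)
    dW = np.sqrt(dt) * z1
    dZ = np.sqrt(dt) * (rho * z1 + np.sqrt(1.0 - rho ** 2) * z2)  # correlated increment
    F = F + sigma * np.maximum(F, 0.0) ** beta * dW   # Euler step; crude handling of F near 0
    sigma = sigma * np.exp(alpha * dZ - 0.5 * alpha ** 2 * dt)    # exact lognormal step for sigma

# Undiscounted Monte Carlo price of an at-the-money call on the forward
K = F0
print(np.mean(np.maximum(F - K, 0.0)))

The volatility update exploits the point made above that the volatility process is a geometric Brownian motion and can be simulated exactly, while the forward is advanced with the plain Euler scheme the article mentions.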
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "\\sigma" }, { "math_id": 2, "text": "dF_t=\\sigma_t \\left(F_t\\right)^\\beta\\, dW_t," }, { "math_id": 3, "text": "d\\sigma_t=\\alpha\\sigma^{}_t\\, dZ_t," }, { "math_id": 4, "text": "F_0" }, { "math_id": 5, "text": "\\sigma_0" }, { "math_id": 6, "text": "W_t" }, { "math_id": 7, "text": "Z_t" }, { "math_id": 8, "text": "-1<\\rho<1" }, { "math_id": 9, "text": "dW_t \\, dZ_t = \\rho \\, dt" }, { "math_id": 10, "text": "\\beta,\\;\\alpha" }, { "math_id": 11, "text": "0\\leq\\beta\\leq 1,\\;\\alpha\\geq 0" }, { "math_id": 12, "text": "\\alpha" }, { "math_id": 13, "text": "\\rho" }, { "math_id": 14, "text": "\\beta" }, { "math_id": 15, "text": "\\alpha=0" }, { "math_id": 16, "text": "K" }, { "math_id": 17, "text": "T" }, { "math_id": 18, "text": "\\max(F_T-K,\\;0)" }, { "math_id": 19, "text": "F_t" }, { "math_id": 20, "text": "\\beta=0" }, { "math_id": 21, "text": "\\beta=1" }, { "math_id": 22, "text": "\\varepsilon=T\\alpha^2" }, { "math_id": 23, "text": "\\sigma_{\\textrm{impl}}" }, { "math_id": 24, "text": "\n\\sigma_\\text{impl}=\\alpha\\;\n\\frac{\\log(F_0/K)}{D(\\zeta)}\\;\n\\left\\{1+\\left[\\frac{2\\gamma_2-\\gamma_1^2+1/\\left(F_{\\text{mid}}\\right)^2}{24} \\;\\left(\\frac{\\sigma_0 C(F_{\\text{mid}})} \\alpha \\right)^2+\\frac{\\rho\\gamma_1}{4} \\;\\frac{\\sigma_0 C(F_{\\text{mid}})}{\\alpha} + \\frac{2-3\\rho^2}{24} \\right] \\varepsilon\\right\\},\n" }, { "math_id": 25, "text": "C\\left(F\\right)=F^\\beta" }, { "math_id": 26, "text": "F_{\\text{mid}}" }, { "math_id": 27, "text": "\\sqrt{F_0 K}" }, { "math_id": 28, "text": "\\left(F_0+K\\right)/2" }, { "math_id": 29, "text": "\n\\zeta=\\frac \\alpha {\\sigma_0}\\;\\int_K^{F_0} \\frac{dx}{C(x)}\n=\\frac \\alpha {\\sigma_0(1-\\beta)}\\;\\left(F_0{}^{1-\\beta}-K^{1-\\beta}\\right),\n" }, { "math_id": 30, "text": "\n\\gamma_1=\\frac{C'(F_{\\text{mid}})}{C(F_{\\text{mid}})}\n=\\frac{\\beta}{F_{\\text{mid}}}\\;,\n" }, { "math_id": 31, "text": "\n\\gamma_2=\\frac{C''(F_{\\text{mid}})}{C(F_{\\text{mid}})}\n=-\\frac{\\beta(1-\\beta)}{\\left(F_{\\text{mid}}\\right)^2}\\;,\n" }, { "math_id": 32, "text": "D\\left(\\zeta\\right)" }, { "math_id": 33, "text": "\nD(\\zeta)=\\log\\left(\\frac{\\sqrt{1-2\\rho\\zeta+\\zeta^2}+\\zeta-\\rho}{1-\\rho}\\right).\n" }, { "math_id": 34, "text": "\n\\sigma_{\\text{impl}}^{\\text{n}}=\\alpha\\;\n\\frac{F_0-K}{D(\\zeta)}\\;\n\\left\\{1+\\left[\\frac{2\\gamma_2-\\gamma_1^2}{24}\\;\\left(\\frac{\\sigma_0 C(F_{\\text{mid}})}{\\alpha}\\right)^2+\\frac{\\rho\\gamma_1}{4}\\;\\frac{\\sigma_0 C(F_{\\text{mid}})}{\\alpha}+\\frac{2-3\\rho^2}{24} \\right] \\varepsilon\\right\\}.\n" }, { "math_id": 35, "text": "dF_t=\\sigma_t (F_t+s)^\\beta\\, dW_t," }, { "math_id": 36, "text": "d\\sigma_t=\\alpha\\sigma_t\\, dZ_t," }, { "math_id": 37, "text": "s" }, { "math_id": 38, "text": "dF_t=\\sigma_t |F_t|^\\beta\\, dW_t," }, { "math_id": 39, "text": "0\\leq\\beta\\leq 1/2" }, { "math_id": 40, "text": "F=0" } ]
https://en.wikipedia.org/wiki?curid=7469547
747122
Generalized linear model
Class of statistical models In statistics, a generalized linear model (GLM) is a flexible generalization of ordinary linear regression. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a "link function" and by allowing the magnitude of the variance of each measurement to be a function of its predicted value. Generalized linear models were formulated by John Nelder and Robert Wedderburn as a way of unifying various other statistical models, including linear regression, logistic regression and Poisson regression. They proposed an iteratively reweighted least squares method for maximum likelihood estimation (MLE) of the model parameters. MLE remains popular and is the default method on many statistical computing packages. Other approaches, including Bayesian regression and least squares fitting to variance stabilized responses, have been developed. Intuition. Ordinary linear regression predicts the expected value of a given unknown quantity (the "response variable", a random variable) as a linear combination of a set of observed values ("predictors"). This implies that a constant change in a predictor leads to a constant change in the response variable (i.e. a "linear-response model"). This is appropriate when the response variable can vary, to a good approximation, indefinitely in either direction, or more generally for any quantity that only varies by a relatively small amount compared to the variation in the predictive variables, e.g. human heights. However, these assumptions are inappropriate for some types of response variables. For example, in cases where the response variable is expected to be always positive and varying over a wide range, constant input changes lead to geometrically (i.e. exponentially) varying, rather than constantly varying, output changes. As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over different sized beaches. More specifically, the problem is that if you use the model to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, you would predict an impossible attendance value of −950. Logically, a more realistic model would instead predict a constant "rate" of increased beach attendance (e.g. an increase of 10 degrees leads to a doubling in beach attendance, and a drop of 10 degrees leads to a halving in attendance). Such a model is termed an "exponential-response model" (or "log-linear model", since the logarithm of the response is predicted to vary linearly). Similarly, a model that predicts a probability of making a yes/no choice (a Bernoulli variable) is even less suitable as a linear-response model, since probabilities are bounded on both ends (they must be between 0 and 1). Imagine, for example, a model that predicts the likelihood of a given person going to the beach as a function of temperature. A reasonable model might predict, for example, that a change in 10 degrees makes a person two times more or less likely to go to the beach. But what does "twice as likely" mean in terms of a probability? It cannot literally mean to double the probability value (e.g. 50% becomes 100%, 75% becomes 150%, etc.). Rather, it is the "odds" that are doubling: from 2:1 odds, to 4:1 odds, to 8:1 odds, etc. Such a model is a "log-odds or logistic model". 
Generalized linear models cover all these situations by allowing for response variables that have arbitrary distributions (rather than simply normal distributions), and for an arbitrary function of the response variable (the "link function") to vary linearly with the predictors (rather than assuming that the response itself must vary linearly). For example, the case above of predicted number of beach attendees would typically be modeled with a Poisson distribution and a log link, while the case of predicted probability of beach attendance would typically be modelled with a Bernoulli distribution (or binomial distribution, depending on exactly how the problem is phrased) and a log-odds (or "logit") link function. Overview. In a generalized linear model (GLM), each outcome Y of the dependent variables is assumed to be generated from a particular distribution in an exponential family, a large class of probability distributions that includes the normal, binomial, Poisson and gamma distributions, among others. The conditional mean μ of the distribution depends on the independent variables X through: formula_0 where E(Y | X) is the expected value of Y conditional on X; X"β is the "linear predictor", a linear combination of unknown parameters β"; "g" is the link function. In this framework, the variance is typically a function, V, of the mean: formula_1 It is convenient if V follows from an exponential family of distributions, but it may simply be that the variance is a function of the predicted value. The unknown parameters, β, are typically estimated with maximum likelihood, maximum quasi-likelihood, or Bayesian techniques. Model components. The GLM consists of three elements: 1. A particular distribution for modeling formula_2 from among those which are considered exponential families of probability distributions, 2. A linear predictor formula_3, and 3. A link function formula_4 such that formula_5. Probability distribution. An overdispersed exponential family of distributions is a generalization of an exponential family and the exponential dispersion model of distributions and includes those families of probability distributions, parameterized by formula_6 and formula_7, whose density functions "f" (or probability mass function, for the case of a discrete distribution) can be expressed in the form formula_8 The "dispersion parameter", formula_7, typically is known and is usually related to the variance of the distribution. The functions formula_9, formula_10, formula_11, formula_12, and formula_13 are known. Many common distributions are in this family, including the normal, exponential, gamma, Poisson, Bernoulli, and (for fixed number of trials) binomial, multinomial, and negative binomial. For scalar formula_14 and formula_6 (denoted formula_15 and formula_16 in this case), this reduces to formula_17 formula_6 is related to the mean of the distribution. If formula_10 is the identity function, then the distribution is said to be in canonical form (or "natural form"). Note that any distribution can be converted to canonical form by rewriting formula_6 as formula_18 and then applying the transformation formula_19. It is always possible to convert formula_12 in terms of the new parametrization, even if formula_20 is not a one-to-one function; see comments in the page on exponential families. 
If, in addition, formula_11 is the identity and formula_7 is known, then formula_6 is called the "canonical parameter" (or "natural parameter") and is related to the mean through formula_21 For scalar formula_14 and formula_6, this reduces to formula_22 Under this scenario, the variance of the distribution can be shown to be formula_23 For scalar formula_14 and formula_6, this reduces to formula_24 Linear predictor. The linear predictor is the quantity which incorporates the information about the independent variables into the model. The symbol "η" (Greek "eta") denotes a linear predictor. It is related to the expected value of the data through the link function. "η" is expressed as linear combinations (thus, "linear") of unknown parameters β. The coefficients of the linear combination are represented as the matrix of independent variables X. "η" can thus be expressed as formula_25 Link function. The link function provides the relationship between the linear predictor and the mean of the distribution function. There are many commonly used link functions, and their choice is informed by several considerations. There is always a well-defined "canonical" link function which is derived from the exponential of the response's density function. However, in some cases it makes sense to try to match the domain of the link function to the range of the distribution function's mean, or use a non-canonical link function for algorithmic purposes, for example Bayesian probit regression. When using a distribution function with a canonical parameter formula_26 the canonical link function is the function that expresses formula_16 in terms of formula_27 i.e. formula_28 For the most common distributions, the mean formula_29 is one of the parameters in the standard form of the distribution's density function, and then formula_30 is the function as defined above that maps the density function into its canonical form. When using the canonical link function, formula_31 which allows formula_32 to be a sufficient statistic for formula_33. Following is a table of several exponential-family distributions in common use and the data they are typically used for, along with the canonical link functions and their inverses (sometimes referred to as the mean function, as done here). In the cases of the exponential and gamma distributions, the domain of the canonical link function is not the same as the permitted range of the mean. In particular, the linear predictor may be positive, which would give an impossible negative mean. When maximizing the likelihood, precautions must be taken to avoid this. An alternative is to use a noncanonical link function. In the case of the Bernoulli, binomial, categorical and multinomial distributions, the support of the distributions is not the same type of data as the parameter being predicted. In all of these cases, the predicted parameter is one or more probabilities, i.e. real numbers in the range formula_34. The resulting model is known as "logistic regression" (or "multinomial logistic regression" in the case that "K"-way rather than binary values are being predicted). For the Bernoulli and binomial distributions, the parameter is a single probability, indicating the likelihood of occurrence of a single event. The Bernoulli still satisfies the basic condition of the generalized linear model in that, even though a single outcome will always be either 0 or 1, the "expected value" will nonetheless be a real-valued probability, i.e. the probability of occurrence of a "yes" (or 1) outcome. 
Similarly, in a binomial distribution, the expected value is "Np", i.e. the expected proportion of "yes" outcomes will be the probability to be predicted. For categorical and multinomial distributions, the parameter to be predicted is a "K"-vector of probabilities, with the further restriction that all probabilities must add up to 1. Each probability indicates the likelihood of occurrence of one of the "K" possible values. For the multinomial distribution, and for the vector form of the categorical distribution, the expected values of the elements of the vector can be related to the predicted probabilities similarly to the binomial and Bernoulli distributions. Fitting. Maximum likelihood. The maximum likelihood estimates can be found using an iteratively reweighted least squares algorithm or a Newton's method with updates of the form: formula_35 where formula_36 is the observed information matrix (the negative of the Hessian matrix) and formula_37 is the score function; or a Fisher's scoring method: formula_38 where formula_39 is the Fisher information matrix. Note that if the canonical link function is used, then they are the same. Bayesian methods. In general, the posterior distribution cannot be found in closed form and so must be approximated, usually using Laplace approximations or some type of Markov chain Monte Carlo method such as Gibbs sampling. Examples. General linear models. A possible point of confusion has to do with the distinction between generalized linear models and general linear models, two broad statistical models. Co-originator John Nelder has expressed regret over this terminology. The general linear model may be viewed as a special case of the generalized linear model with identity link and responses normally distributed. As most exact results of interest are obtained only for the general linear model, the general linear model has undergone a somewhat longer historical development. Results for the generalized linear model with non-identity link are asymptotic (tending to work well with large samples). Linear regression. A simple, very important example of a generalized linear model (also an example of a general linear model) is linear regression. In linear regression, the use of the least-squares estimator is justified by the Gauss–Markov theorem, which does not assume that the distribution is normal. From the perspective of generalized linear models, however, it is useful to suppose that the distribution function is the normal distribution with constant variance and the link function is the identity, which is the canonical link if the variance is known. Under these assumptions, the least-squares estimator is obtained as the maximum-likelihood parameter estimate. For the normal distribution, the generalized linear model has a closed form expression for the maximum-likelihood estimates, which is convenient. Most other GLMs lack closed form estimates. Binary data. When the response data, "Y", are binary (taking on only values 0 and 1), the distribution function is generally chosen to be the Bernoulli distribution and the interpretation of "μ"i is then the probability, "p", of "Y"i taking on the value one. There are several popular link functions for binomial functions. Logit link function. The most typical link function is the canonical logit link: formula_40 GLMs with this setup are logistic regression models (or "logit models"). Probit link function as popular choice of inverse cumulative distribution function. 
Alternatively, the inverse of any continuous cumulative distribution function (CDF) can be used for the link since the CDF's range is formula_34, the range of the binomial mean. The normal CDF formula_41 is a popular choice and yields the probit model. Its link is formula_42 The reason for the use of the probit model is that a constant scaling of the input variable to a normal CDF (which can be absorbed through equivalent scaling of all of the parameters) yields a function that is practically identical to the logit function, but probit models are more tractable in some situations than logit models. (In a Bayesian setting in which normally distributed prior distributions are placed on the parameters, the relationship between the normal priors and the normal CDF link function means that a probit model can be computed using Gibbs sampling, while a logit model generally cannot.) Complementary log-log (cloglog). The complementary log-log function may also be used: formula_43 This link function is asymmetric and will often produce different results from the logit and probit link functions. The cloglog model corresponds to applications where we observe either zero events (e.g., defects) or one or more, where the number of events is assumed to follow the Poisson distribution. The Poisson assumption means that formula_44 where "μ" is a positive number denoting the expected number of events. If "p" represents the proportion of observations with at least one event, its complement formula_45 and then formula_46 A linear model requires the response variable to take values over the entire real line. Since "μ" must be positive, we can enforce that by taking the logarithm, and letting log("μ") be a linear model. This produces the "cloglog" transformation formula_47 Identity link. The identity link "g(p) = p" is also sometimes used for binomial data to yield a linear probability model. However, the identity link can predict nonsense "probabilities" less than zero or greater than one. This can be avoided by using a transformation like cloglog, probit or logit (or any inverse cumulative distribution function). A primary merit of the identity link is that it can be estimated using linear math, and other standard link functions are approximately linear matching the identity link near "p" = 0.5. Variance function. The variance function for "quasibinomial" data is: formula_48 where the dispersion parameter "τ" is exactly 1 for the binomial distribution. Indeed, the standard binomial likelihood omits "τ". When it is present, the model is called "quasibinomial", and the modified likelihood is called a quasi-likelihood, since it is not generally the likelihood corresponding to any real family of probability distributions. If "τ" exceeds 1, the model is said to exhibit overdispersion. Multinomial regression. The binomial case may be easily extended to allow for a multinomial distribution as the response (also, a Generalized Linear Model for counts, with a constrained total). There are two ways in which this is usually done: Ordered response. If the response variable is ordinal, then one may fit a model function of the form: formula_49 for "m" > 2. Different links "g" lead to ordinal regression models like proportional odds models or ordered probit models. Unordered response.
If the response variable is a nominal measurement, or the data do not satisfy the assumptions of an ordered model, one may fit a model of the following form: formula_50 for "m" > 2. Different links "g" lead to multinomial logit or multinomial probit models. These are more general than the ordered response models, and more parameters are estimated. Count data. Another example of a generalized linear model is Poisson regression, which models count data using the Poisson distribution. The link is typically the logarithm, the canonical link. The variance function is proportional to the mean formula_51 where the dispersion parameter "τ" is typically fixed at exactly one. When it is not, the resulting quasi-likelihood model is often described as Poisson with overdispersion or "quasi-Poisson". Extensions. Correlated or clustered data. The standard GLM assumes that the observations are uncorrelated. Extensions have been developed to allow for correlation between observations, as occurs for example in longitudinal studies and clustered designs: Generalized additive models. Generalized additive models (GAMs) are another extension to GLMs in which the linear predictor "η" is not restricted to be linear in the covariates X but is the sum of smoothing functions applied to the "xi"s: formula_52 The smoothing functions "fi" are estimated from the data. In general this requires a large number of data points and is computationally intensive.
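To make the Fisher-scoring update from the Fitting section concrete, here is a small numpy sketch that fits a Bernoulli GLM with the canonical logit link (logistic regression), for which Newton's method and Fisher scoring coincide; the simulated data, sample size and convergence tolerance are illustrative choices:

import numpy as np

# Fisher scoring / IRLS for a Bernoulli GLM with the canonical logit link.
# With the canonical link, the update  beta <- beta + I^{-1} u  uses
#     u = X^T (y - mu)                      (score)
#     I = X^T W X,  W = diag(mu*(1-mu))     (Fisher information)

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
beta_true = np.array([-0.5, 1.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))       # inverse link g^{-1}(eta)
    score = X.T @ (y - mu)
    W = mu * (1.0 - mu)
    info = X.T @ (X * W[:, None])              # X^T W X
    step = np.linalg.solve(info, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:           # stop once the update is negligible
        break

print(beta)   # close to beta_true for a reasonably large sample

The same loop, with the weights and working response adjusted for the chosen family and link, is essentially the iteratively reweighted least squares procedure mentioned above; established statistical packages should of course be preferred over this sketch in practice.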
[ { "math_id": 0, "text": "\\operatorname{E}(\\mathbf{Y}\\mid\\mathbf{X}) = \\boldsymbol{\\mu} = g^{-1}(\\mathbf{X}\\boldsymbol{\\beta}), " }, { "math_id": 1, "text": " \\operatorname{Var}(\\mathbf{Y}\\mid\\mathbf{X}) = \\operatorname{V}(g^{-1}(\\mathbf{X}\\boldsymbol{\\beta})). " }, { "math_id": 2, "text": " Y " }, { "math_id": 3, "text": "\\eta = X \\beta" }, { "math_id": 4, "text": "g" }, { "math_id": 5, "text": "\\operatorname{E}(Y \\mid X) = \\mu = g^{-1}(\\eta)" }, { "math_id": 6, "text": "\\boldsymbol\\theta" }, { "math_id": 7, "text": "\\tau" }, { "math_id": 8, "text": " f_Y(\\mathbf{y} \\mid \\boldsymbol\\theta, \\tau) = h(\\mathbf{y},\\tau) \\exp \\left(\\frac{\\mathbf{b}(\\boldsymbol\\theta)^{\\rm T}\\mathbf{T}(\\mathbf{y}) - A(\\boldsymbol\\theta)} {d(\\tau)} \\right). \\,\\!" }, { "math_id": 9, "text": "h(\\mathbf{y},\\tau)" }, { "math_id": 10, "text": "\\mathbf{b}(\\boldsymbol\\theta)" }, { "math_id": 11, "text": "\\mathbf{T}(\\mathbf{y})" }, { "math_id": 12, "text": "A(\\boldsymbol\\theta)" }, { "math_id": 13, "text": "d(\\tau)" }, { "math_id": 14, "text": "\\mathbf{y}" }, { "math_id": 15, "text": "y" }, { "math_id": 16, "text": "\\theta" }, { "math_id": 17, "text": " f_Y(y \\mid \\theta, \\tau) = h(y,\\tau) \\exp \\left(\\frac{b(\\theta)T(y) - A(\\theta)}{d(\\tau)} \\right). \\,\\!" }, { "math_id": 18, "text": "\\boldsymbol\\theta'" }, { "math_id": 19, "text": "\\boldsymbol\\theta = \\mathbf{b}(\\boldsymbol\\theta')" }, { "math_id": 20, "text": "\\mathbf{b}(\\boldsymbol\\theta')" }, { "math_id": 21, "text": " \\boldsymbol\\mu = \\operatorname{E}(\\mathbf{y}) = \\nabla A(\\boldsymbol\\theta). \\,\\!" }, { "math_id": 22, "text": " \\mu = \\operatorname{E}(y) = A'(\\theta)." }, { "math_id": 23, "text": "\\operatorname{Var}(\\mathbf{y}) = \\nabla^2 A(\\boldsymbol\\theta) d(\\tau). \\,\\!" }, { "math_id": 24, "text": "\\operatorname{Var}(y) = A''(\\theta) d(\\tau). \\,\\!" }, { "math_id": 25, "text": " \\eta = \\mathbf{X}\\boldsymbol{\\beta}.\\," }, { "math_id": 26, "text": "\\theta," }, { "math_id": 27, "text": "\\mu," }, { "math_id": 28, "text": "\\theta = b(\\mu)." }, { "math_id": 29, "text": "\\mu" }, { "math_id": 30, "text": "b(\\mu)" }, { "math_id": 31, "text": "b(\\mu) = \\theta = \\mathbf{X}\\boldsymbol{\\beta}," }, { "math_id": 32, "text": "\\mathbf{X}^{\\rm T} \\mathbf{Y}" }, { "math_id": 33, "text": "\\boldsymbol{\\beta}" }, { "math_id": 34, "text": "[0,1]" }, { "math_id": 35, "text": " \\boldsymbol\\beta^{(t+1)} = \\boldsymbol\\beta^{(t)} + \\mathcal{J}^{-1}(\\boldsymbol\\beta^{(t)}) u(\\boldsymbol\\beta^{(t)}), " }, { "math_id": 36, "text": "\\mathcal{J}(\\boldsymbol\\beta^{(t)})" }, { "math_id": 37, "text": "u(\\boldsymbol\\beta^{(t)})" }, { "math_id": 38, "text": " \\boldsymbol\\beta^{(t+1)} = \\boldsymbol\\beta^{(t)} + \\mathcal{I}^{-1}(\\boldsymbol\\beta^{(t)}) u(\\boldsymbol\\beta^{(t)}), " }, { "math_id": 39, "text": "\\mathcal{I}(\\boldsymbol\\beta^{(t)})" }, { "math_id": 40, "text": "g(p) = \\operatorname{logit} p = \\ln \\left( { p \\over 1-p } \\right)." }, { "math_id": 41, "text": "\\Phi" }, { "math_id": 42, "text": "g(p) = \\Phi^{-1}(p).\\,\\!" }, { "math_id": 43, "text": "g(p) = \\log(-\\log(1-p))." }, { "math_id": 44, "text": "\\Pr(0) = \\exp(-\\mu)," }, { "math_id": 45, "text": " 1-p = \\Pr(0) = \\exp(-\\mu)," }, { "math_id": 46, "text": " -\\log(1-p) = \\mu." }, { "math_id": 47, "text": "\\log(-\\log(1-p)) = \\log(\\mu)." }, { "math_id": 48, "text": "\\operatorname{Var}(Y_i)= \\tau\\mu_i (1-\\mu_i)\\,\\!" 
}, { "math_id": 49, "text": " g(\\mu_m) = \\eta_m = \\beta_0 + X_1 \\beta_1 + \\cdots + X_p \\beta_p + \\gamma_2 + \\cdots + \\gamma_m = \\eta_1 + \\gamma_2 + \\cdots + \\gamma_m \\text{ where } \\mu_m = \\operatorname{P}(Y \\leq m). \\," }, { "math_id": 50, "text": " g(\\mu_m) = \\eta_m = \\beta_{m,0} + X_1 \\beta_{m,1} + \\cdots + X_p \\beta_{m,p} \\text{ where } \\mu_m = \\mathrm{P}(Y = m \\mid Y \\in \\{1,m\\} ). \\," }, { "math_id": 51, "text": "\\operatorname{var}(Y_i) = \\tau\\mu_i,\\, " }, { "math_id": 52, "text": "\\eta = \\beta_0 + f_1(x_1) + f_2(x_2) + \\cdots \\,\\!" } ]
https://en.wikipedia.org/wiki?curid=747122
74715593
Gaussian probability space
In probability theory, particularly in the Malliavin calculus, a Gaussian probability space is a probability space together with a Hilbert space of mean zero, real-valued Gaussian random variables. Important examples include the classical or abstract Wiener space with some suitable collection of Gaussian random variables. Definition. A Gaussian probability space formula_0 consists of a (complete) probability space formula_1, a closed subspace formula_2, called the Gaussian space, such that every formula_3 is a mean zero, real-valued Gaussian random variable, the σ-algebra formula_4 generated by the Gaussian space, and a σ-algebra formula_5 of events independent of the Gaussian space (the transverse σ-algebra) such that formula_6 Irreducibility. A Gaussian probability space is called "irreducible" if formula_7. Such spaces are denoted as formula_8. Non-irreducible spaces are used to work on subspaces or to extend a given probability space. Irreducible Gaussian probability spaces are classified by the dimension of the Gaussian space formula_9. Subspaces. A "subspace" formula_10 of a Gaussian probability space formula_0 consists of a closed subspace formula_11 of the Gaussian space together with a σ-algebra formula_12 of transverse events for formula_11. Example: Let formula_0 be a Gaussian probability space with a closed subspace formula_11. Let formula_17 be the orthogonal complement of formula_18 in formula_9. Since orthogonality implies independence between formula_17 and formula_18, we have that formula_19 is independent of formula_14. Define formula_13 via formula_20. Remark. For formula_21 we have formula_22. Fundamental algebra. Given a Gaussian probability space formula_0, one defines the algebra of cylindrical random variables formula_23 where formula_24 is a polynomial in formula_25, and calls formula_26 the "fundamental algebra". For any formula_27 it is true that formula_28. For an irreducible Gaussian probability space formula_8, the fundamental algebra formula_26 is a dense set in formula_29 for all formula_30. Numerical and Segal model. An irreducible Gaussian probability space formula_8 for which a basis has been chosen for formula_9 is called a "numerical model". Two numerical models are isomorphic if their Gaussian spaces have the same dimension. Given a separable Hilbert space formula_31, there always exists a canonical irreducible Gaussian probability space formula_32, called the "Segal model", with formula_31 as its Gaussian space.
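The example in the Subspaces section rests on the fact that, inside a Gaussian space, orthogonality in L² implies independence. The following numpy sketch checks this empirically for a two-dimensional Gaussian space spanned by two independent standard normals; the particular choice of orthogonal elements and of the test events is illustrative only:

import numpy as np

# Empirical check that orthogonal elements of a Gaussian space are independent.
# The Gaussian space H is spanned by two i.i.d. standard normals N1, N2;
# X spans a one-dimensional subspace H_1 and Y spans its orthogonal complement V.
rng = np.random.default_rng(2)
n = 1_000_000
N1, N2 = rng.standard_normal(n), rng.standard_normal(n)

X = (N1 + N2) / np.sqrt(2.0)     # element of H_1, unit variance
Y = (N1 - N2) / np.sqrt(2.0)     # element of V = orthogonal complement of H_1

print("E[XY]           ~", np.mean(X * Y))                       # near 0: orthogonality
# Independence goes beyond zero correlation; check it on nonlinear events
# generated by X and Y.
pX, pY = np.mean(X > 1.0), np.mean(Y > 1.0)
print("P(X>1, Y>1)     ~", np.mean((X > 1.0) & (Y > 1.0)))
print("P(X>1) * P(Y>1) ~", pX * pY)                              # should match the line above

The matching of the last two printed values reflects why the σ-algebra generated by the orthogonal complement can serve as a transverse σ-algebra in the construction described above.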
[ { "math_id": 0, "text": "(\\Omega,\\mathcal{F},P,\\mathcal{H},\\mathcal{F}^{\\perp}_{\\mathcal{H}})" }, { "math_id": 1, "text": "(\\Omega,\\mathcal{F},P)" }, { "math_id": 2, "text": "\\mathcal{H}\\subset L^2(\\Omega,\\mathcal{F},P)" }, { "math_id": 3, "text": "X\\in \\mathcal{H}" }, { "math_id": 4, "text": "\\mathcal{F}_{\\mathcal{H}}" }, { "math_id": 5, "text": "\\mathcal{F}^{\\perp}_{\\mathcal{H}}" }, { "math_id": 6, "text": "\\mathcal{F}=\\mathcal{F}_{\\mathcal{H}} \\otimes \\mathcal{F}^{\\perp}_{\\mathcal{H}}." }, { "math_id": 7, "text": "\\mathcal{F}=\\mathcal{F}_{\\mathcal{H}}" }, { "math_id": 8, "text": "(\\Omega,\\mathcal{F},P,\\mathcal{H})" }, { "math_id": 9, "text": "\\mathcal{H}" }, { "math_id": 10, "text": "(\\Omega,\\mathcal{F},P,\\mathcal{H}_1,\\mathcal{A}^{\\perp}_{\\mathcal{H}_1})" }, { "math_id": 11, "text": "\\mathcal{H}_1\\subset \\mathcal{H}" }, { "math_id": 12, "text": "\\mathcal{A}^{\\perp}_{\\mathcal{H}_1}\\subset \\mathcal{F}" }, { "math_id": 13, "text": "\\mathcal{A}^{\\perp}_{\\mathcal{H}_1}" }, { "math_id": 14, "text": "\\mathcal{A}_{\\mathcal{H}_1}" }, { "math_id": 15, "text": "\\mathcal{A}=\\mathcal{A}_{\\mathcal{H}_1}\\otimes \\mathcal{A}^{\\perp}_{\\mathcal{H}_1}" }, { "math_id": 16, "text": "\\mathcal{A}\\cap\\mathcal{F}^{\\perp}_{\\mathcal{H}}=\\mathcal{A}^{\\perp}_{\\mathcal{H}_1}" }, { "math_id": 17, "text": "V" }, { "math_id": 18, "text": "\\mathcal{H}_1" }, { "math_id": 19, "text": "\\mathcal{A}_V" }, { "math_id": 20, "text": "\\mathcal{A}^{\\perp}_{\\mathcal{H}_1}:=\\sigma(\\mathcal{A}_V,\\mathcal{F}^{\\perp}_{\\mathcal{H}})=\\mathcal{A}_V \\vee \\mathcal{F}^{\\perp}_{\\mathcal{H}}" }, { "math_id": 21, "text": "G=L^2(\\Omega,\\mathcal{F}^{\\perp}_{\\mathcal{H}},P)" }, { "math_id": 22, "text": "L^2(\\Omega,\\mathcal{F},P)=L^2((\\Omega,\\mathcal{F}_{\\mathcal{H}},P);G)" }, { "math_id": 23, "text": "\\mathbb{A}_{\\mathcal{H}}=\\{F=P(X_1,\\dots,X_n):X_i\\in \\mathcal{H}\\}" }, { "math_id": 24, "text": "P" }, { "math_id": 25, "text": "\\R[X_n,\\dots,X_n]" }, { "math_id": 26, "text": "\\mathbb{A}_{\\mathcal{H}}" }, { "math_id": 27, "text": "p<\\infty" }, { "math_id": 28, "text": "\\mathbb{A}_{\\mathcal{H}}\\subset L^p(\\Omega,\\mathcal{F},P)" }, { "math_id": 29, "text": "L^p(\\Omega,\\mathcal{F},P)" }, { "math_id": 30, "text": "p\\in[1,\\infty[" }, { "math_id": 31, "text": "\\mathcal{G}" }, { "math_id": 32, "text": "\\operatorname{Seg}(\\mathcal{G})" } ]
https://en.wikipedia.org/wiki?curid=74715593
7471587
P-rep
In statistical hypothesis testing, p-rep or prep has been proposed as a statistical alternative to the classic p-value. Whereas a p-value is the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true, p-rep purports to compute the probability of replicating an effect. The derivation of p-rep contained significant mathematical errors. For a while, the Association for Psychological Science recommended that articles submitted to Psychological Science and its other journals report p-rep rather than the classic p-value, but this is no longer the case. Calculation. Approximation from "p". The value of p-rep can be approximated from the p-value p as follows: formula_0 The above applies to one-tailed distributions. Criticism. The fact that p-rep has a one-to-one correspondence with the p-value makes it clear that this new measure brings no additional information beyond that conveyed by the significance of the result. Killeen acknowledges this lack of information, but suggests that p-rep better captures the way naive experimenters conceptualize p-values and statistical hypothesis testing. Among the criticisms of p-rep is the fact that while it attempts to estimate replicability, it ignores results from other studies which could accurately guide this estimate. A further criticism of the p-rep statistic involves the logic of experimentation. The scientific value of replicable data lies in adequately accounting for previously unmeasured factors (e.g., unmeasured participant variables, experimenter's bias, etc.). The idea that a single study can capture a logical likelihood of such unmeasured factors affecting the outcome, and thus the likelihood of replicability, is a logical fallacy.
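A minimal computational sketch of the approximation above (the chosen p-value is arbitrary and only for illustration):

```python
def p_rep(p):
    """Approximate p-rep from a one-tailed p-value, using the formula above."""
    return 1.0 / (1.0 + (p / (1.0 - p)) ** (2.0 / 3.0))

print(round(p_rep(0.05), 3))  # roughly 0.877 for p = 0.05
```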
[ { "math_id": 0, "text": "p_\\text{rep} = \\left[ 1 + \\left( \\frac{p}{1-p} \\right)^{\\frac{2}{3}} \\right]^{-1}." } ]
https://en.wikipedia.org/wiki?curid=7471587
74718671
Method of moving asymptotes
Optimization algorithm The Method of Moving Asymptotes (MMA) is an optimization algorithm developed by Krister Svanberg in the 1980s. It is primarily used for solving non-linear programming problems, particularly those related to structural design and topology optimization. History. MMA was introduced by Krister Svanberg in a 1987 paper titled "The method of moving asymptotes—a new method for structural optimization". The method was proposed as an alternative to traditional optimization methods, offering an approach that could handle large-scale problems, especially in the realm of structural design. Svanberg published a further paper in 1993 which added extensions to the method, including mini-max formulations and first- and second-order dual methods for solving the subproblems. Another version that is globally convergent was proposed by Zillober. Algorithm overview. The Method of Moving Asymptotes functions as an iterative scheme. The key idea behind MMA is to approximate the original non-linear objective function and constraints with simpler convex, separable approximations. Starting from an initial guess, each iteration consists of the following steps. Step I: given an iteration point formula_0, calculate formula_1 and the gradients formula_2 for formula_3. Step II: generate a subproblem formula_4 by replacing, in formula_5, the (usually implicit) functions formula_6 with approximating explicit functions formula_7, based on the calculations from Step I. Step III: solve formula_4 and let the optimal solution of this subproblem be the next iteration point formula_8. Then set formula_9 and return to Step I until convergence. The moving asymptotes serve as an adaptive mechanism. They shift and change with each iteration, progressively closing in on the optimal solution, which ensures that the approximations become increasingly accurate as the algorithm progresses. Applications. The Method of Moving Asymptotes has been widely applied in various fields, including structural design and topology optimization.
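MMA is implemented in several optimization packages. The sketch below shows one possible use of the LD_MMA algorithm from the NLopt library, assuming its Python bindings are installed; the toy objective, constraint, and starting point are illustrative and not from the article:

```python
import numpy as np
import nlopt

# Minimize x0^2 + x1^2 subject to x0 + x1 >= 1, written as 1 - x0 - x1 <= 0.
def objective(x, grad):
    if grad.size > 0:
        grad[:] = 2.0 * x            # gradient of the objective
    return float(x[0] ** 2 + x[1] ** 2)

def constraint(x, grad):
    if grad.size > 0:
        grad[:] = [-1.0, -1.0]       # gradient of the constraint
    return float(1.0 - x[0] - x[1])

opt = nlopt.opt(nlopt.LD_MMA, 2)     # MMA with two design variables
opt.set_min_objective(objective)
opt.add_inequality_constraint(constraint, 1e-8)
opt.set_xtol_rel(1e-8)

x_opt = opt.optimize(np.array([1.0, 1.0]))
print(x_opt, opt.last_optimum_value())  # expected: approximately [0.5, 0.5] and 0.5
```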
[ { "math_id": 0, "text": "x^{(k)}" }, { "math_id": 1, "text": "f_i(x^{(k)})" }, { "math_id": 2, "text": "\\nabla f_i(x^{(k)})" }, { "math_id": 3, "text": "i = 0,1, \\dots, m" }, { "math_id": 4, "text": "P^{(k)}" }, { "math_id": 5, "text": "P" }, { "math_id": 6, "text": "f_i" }, { "math_id": 7, "text": "f_i^{(k)}" }, { "math_id": 8, "text": "x^{(k+1)}" }, { "math_id": 9, "text": "k = k + 1" } ]
https://en.wikipedia.org/wiki?curid=74718671
7472170
Composition of relations
Mathematical operation In the mathematics of binary relations, the composition of relations is the forming of a new binary relation "R"; "S" from two given binary relations "R" and "S". In the calculus of relations, the composition of relations is called relative multiplication, and its result is called a relative product. Function composition is the special case of composition of relations where all relations involved are functions. The word uncle indicates a compound relation: for a person to be an uncle, he must be the brother of a parent. In algebraic logic it is said that the relation of Uncle (formula_0) is the composition of relations "is a brother of" (formula_1) and "is a parent of" (formula_2). formula_3 Beginning with Augustus De Morgan, the traditional form of reasoning by syllogism has been subsumed by relational logical expressions and their composition. Definition. If formula_4 and formula_5 are two binary relations, then their composition formula_6 is the relation formula_7 In other words, formula_8 is defined by the rule that says formula_9 if and only if there is an element formula_10 such that formula_11 (that is, formula_12 and formula_13). Notational variations. The semicolon as an infix notation for composition of relations dates back to Ernst Schroder's textbook of 1895. Gunther Schmidt has renewed the use of the semicolon, particularly in "Relational Mathematics" (2011). The use of semicolon coincides with the notation for function composition used (mostly by computer scientists) in category theory, as well as the notation for dynamic conjunction within linguistic dynamic semantics. A small circle formula_14 has been used for the infix notation of composition of relations by John M. Howie in his books considering semigroups of relations. However, the small circle is widely used to represent composition of functions formula_15 which "reverses" the text sequence from the operation sequence. The small circle was used in the introductory pages of "Graphs and Relations" until it was dropped in favor of juxtaposition (no infix notation). Juxtaposition formula_16 is commonly used in algebra to signify multiplication, so too, it can signify relative multiplication. Further with the circle notation, subscripts may be used. Some authors prefer to write formula_17 and formula_18 explicitly when necessary, depending whether the left or the right relation is the first one applied. A further variation encountered in computer science is the Z notation: formula_19 is used to denote the traditional (right) composition, while left composition is denoted by a fat semicolon. The unicode symbols are ⨾ and ⨟. Mathematical generalizations. Binary relations formula_20 are morphisms formula_21 in the category formula_22. In Rel the objects are sets, the morphisms are binary relations and the composition of morphisms is exactly composition of relations as defined above. The category Set of sets and functions is a subcategory of formula_22 where the maps formula_23 are functions formula_24. Given a regular category formula_25, its category of internal relations formula_26 has the same objects as formula_25, but now the morphisms formula_23 are given by subobjects formula_27 in formula_25. Formally, these are jointly monic spans between formula_28 and formula_29. Categories of internal relations are allegories. In particular formula_30. 
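Returning to the elementwise definition above, a small computational sketch with finite relations represented as Python sets of ordered pairs (the toy relations are illustrative):

```python
# R ; S = {(x, z) : there is a y with (x, y) in R and (y, z) in S}
def compose(R, S):
    return {(x, z) for (x, y) in R for (u, z) in S if y == u}

# "is a brother of" and "is a parent of" on a toy universe,
# so their composition is "is an uncle of".
brother_of = {("bob", "alice")}
parent_of = {("alice", "carol"), ("alice", "dan")}
print(compose(brother_of, parent_of))  # {('bob', 'carol'), ('bob', 'dan')}
```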
Given a field formula_31 (or more generally a principal ideal domain), the category of relations internal to matrices over formula_31, formula_32 has morphisms formula_33 linear subspaces formula_34. The category of linear relations over the finite field formula_35 is isomorphic to the phase-free qubit ZX-calculus modulo scalars. Composition in terms of matrices. Finite binary relations are represented by logical matrices. The entries of these matrices are either zero or one, depending on whether the relation represented is false or true for the row and column corresponding to compared objects. Working with such matrices involves the Boolean arithmetic with formula_43 and formula_44 An entry in the matrix product of two logical matrices will be 1, then, only if the row and column multiplied have a corresponding 1. Thus the logical matrix of a composition of relations can be found by computing the matrix product of the matrices representing the factors of the composition. "Matrices constitute a method for "computing" the conclusions traditionally drawn by means of hypothetical syllogisms and sorites." Heterogeneous relations. Consider a heterogeneous relation formula_45 that is, where formula_46 and formula_47 are distinct sets. Then using composition of relation formula_39 with its converse formula_48 there are homogeneous relations formula_49 (on formula_46) and formula_50 (on formula_47). If for all formula_51 there exists some formula_52 such that formula_53 (that is, formula_39 is a (left-)total relation), then for all formula_54 so that formula_49 is a reflexive relation or formula_55 where I is the identity relation formula_56 Similarly, if formula_39 is a surjective relation then formula_57 In this case formula_58 The opposite inclusion occurs for a difunctional relation. The composition formula_59 is used to distinguish relations of Ferrer's type, which satisfy formula_60 Example. Let formula_61 { France, Germany, Italy, Switzerland } and formula_62 { French, German, Italian } with the relation formula_39 given by formula_63 when formula_64 is a national language of formula_65 Since both formula_46 and formula_47 is finite, formula_39 can be represented by a logical matrix, assuming rows (top to bottom) and columns (left to right) are ordered alphabetically: formula_66 The converse relation formula_67 corresponds to the transposed matrix, and the relation composition formula_68 corresponds to the matrix product formula_50 when summation is implemented by logical disjunction. It turns out that the formula_69 matrix formula_50 contains a 1 at every position, while the reversed matrix product computes as: formula_70 This matrix is symmetric, and represents a homogeneous relation on formula_71 Correspondingly, formula_72 is the universal relation on formula_73 hence any two languages share a nation where they both are spoken (in fact: Switzerland). Vice versa, the question whether two given nations share a language can be answered using formula_74 Schröder rules. For a given set formula_75 the collection of all binary relations on formula_76 forms a Boolean lattice ordered by inclusion formula_77 Recall that complementation reverses inclusion: formula_78 In the calculus of relations it is common to represent the complement of a set by an overbar: formula_79 If formula_40 is a binary relation, let formula_80 represent the converse relation, also called the "transpose". 
Then the Schröder rules are formula_81 Verbally, one equivalence can be obtained from another: select the first or second factor and transpose it; then complement the other two relations and permute them. Though this transformation of an inclusion of a composition of relations was detailed by Ernst Schröder, in fact Augustus De Morgan first articulated the transformation as Theorem K in 1860. He wrote formula_82 With Schröder rules and complementation one can solve for an unknown relation formula_28 in relation inclusions such as formula_83 For instance, by Schröder rule formula_84 and complementation gives formula_85 which is called the left residual of formula_40 by formula_39. Quotients. Just as composition of relations is a type of multiplication resulting in a product, so some operations compare to division and produce quotients. Three quotients are exhibited here: left residual, right residual, and symmetric quotient. The left residual of two relations is defined presuming that they have the same domain (source), and the right residual presumes the same codomain (range, target). The symmetric quotient presumes two relations share a domain and a codomain. Definitions: Using Schröder's rules, formula_89 is equivalent to formula_90 Thus the left residual is the greatest relation satisfying formula_91 Similarly, the inclusion formula_92 is equivalent to formula_93 and the right residual is the greatest relation satisfying formula_94 One can practice the logic of residuals with Sudoku. Join: another form of composition. A fork operator formula_95 has been introduced to fuse two relations formula_96 and formula_97 into formula_98 The construction depends on projections formula_99 and formula_100 understood as relations, meaning that there are converse relations formula_101 and formula_102 Then the &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;fork of formula_103 and formula_104 is given by formula_105 Another form of composition of relations, which applies to general formula_106-place relations for formula_107 is the "join" operation of relational algebra. The usual composition of two binary relations as defined here can be obtained by taking their join, leading to a ternary relation, followed by a projection that removes the middle component. For example, in the query language SQL there is the operation Join (SQL). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x U z" }, { "math_id": 1, "text": "x B y" }, { "math_id": 2, "text": "y P z" }, { "math_id": 3, "text": "U = BP \\quad \\text{ is equivalent to: } \\quad xByPz \\text{ if and only if } xUz." }, { "math_id": 4, "text": "R \\subseteq X \\times Y" }, { "math_id": 5, "text": "S \\subseteq Y \\times Z" }, { "math_id": 6, "text": "R; S" }, { "math_id": 7, "text": "R; S = \\{(x,z) \\in X \\times Z : \\text{ there exists } y \\in Y \\text{ such that } (x,y) \\in R \\text{ and } (y,z) \\in S\\}." }, { "math_id": 8, "text": "R; S \\subseteq X \\times Z" }, { "math_id": 9, "text": "(x,z) \\in R; S" }, { "math_id": 10, "text": "y \\in Y" }, { "math_id": 11, "text": "x\\,R\\,y\\,S\\,z" }, { "math_id": 12, "text": "(x,y) \\in R" }, { "math_id": 13, "text": "(y,z) \\in S" }, { "math_id": 14, "text": "(R \\circ S)" }, { "math_id": 15, "text": "g(f(x)) = (g \\circ f)(x)" }, { "math_id": 16, "text": "(RS)" }, { "math_id": 17, "text": "\\circ_l" }, { "math_id": 18, "text": "\\circ_r" }, { "math_id": 19, "text": "\\circ" }, { "math_id": 20, "text": "R \\subseteq X\\times Y" }, { "math_id": 21, "text": "R : X\\to Y" }, { "math_id": 22, "text": "\\mathsf{Rel}" }, { "math_id": 23, "text": "X\\to Y" }, { "math_id": 24, "text": "f:X\\to Y" }, { "math_id": 25, "text": "\\mathbb{X}" }, { "math_id": 26, "text": "\\mathsf{Rel}(\\mathbb{X})" }, { "math_id": 27, "text": "R\\subseteq X\\times Y" }, { "math_id": 28, "text": "X" }, { "math_id": 29, "text": "Y" }, { "math_id": 30, "text": "\\mathsf{Rel}(\\mathsf{Set})\\cong \\mathsf{Rel}" }, { "math_id": 31, "text": "k" }, { "math_id": 32, "text": "\\mathsf{Rel}(\\mathsf{Mat}(k))" }, { "math_id": 33, "text": "n\\to m" }, { "math_id": 34, "text": "R \\subseteq k^n\\oplus k^m" }, { "math_id": 35, "text": "\\mathbb{F}_2" }, { "math_id": 36, "text": "R;(S;T) = (R;S);T." }, { "math_id": 37, "text": "R \\, ; S" }, { "math_id": 38, "text": "(R \\, ; S)^\\textsf{T} = S^{\\textsf{T}} \\, ; R^{\\textsf{T}}." }, { "math_id": 39, "text": "R" }, { "math_id": 40, "text": "S" }, { "math_id": 41, "text": "R." }, { "math_id": 42, "text": "S." }, { "math_id": 43, "text": "1 + 1 = 1" }, { "math_id": 44, "text": "1 \\times 1 = 1." }, { "math_id": 45, "text": "R \\subseteq A \\times B;" }, { "math_id": 46, "text": "A" }, { "math_id": 47, "text": "B" }, { "math_id": 48, "text": "R^\\textsf{T}," }, { "math_id": 49, "text": "R R^\\textsf{T}" }, { "math_id": 50, "text": "R^\\textsf{T} R" }, { "math_id": 51, "text": "x \\in A" }, { "math_id": 52, "text": "y \\in B," }, { "math_id": 53, "text": "x R y" }, { "math_id": 54, "text": "x, x R R^\\textsf{T} x" }, { "math_id": 55, "text": "I \\subseteq R R^\\textsf{T}" }, { "math_id": 56, "text": "\\{(x,x) : x \\in A\\}." }, { "math_id": 57, "text": "R^\\textsf{T} R \\supseteq I = \\{(x,x) : x \\in B\\}." }, { "math_id": 58, "text": "R \\subseteq R R^\\textsf{T} R." }, { "math_id": 59, "text": "\\bar{R}^\\textsf{T} R " }, { "math_id": 60, "text": "R \\bar{R}^\\textsf{T} R = R." }, { "math_id": 61, "text": "A = " }, { "math_id": 62, "text": "B = " }, { "math_id": 63, "text": "a R b" }, { "math_id": 64, "text": "b" }, { "math_id": 65, "text": "a." }, { "math_id": 66, "text": "\\begin{pmatrix}\n 1 & 0 & 0 \\\\\n 0 & 1 & 0 \\\\\n 0 & 0 & 1 \\\\\n 1 & 1 & 1\n\\end{pmatrix}." 
}, { "math_id": 67, "text": "R^\\textsf{T}" }, { "math_id": 68, "text": "R^\\textsf{T}; R" }, { "math_id": 69, "text": "3 \\times 3" }, { "math_id": 70, "text": "R R^\\textsf{T} = \\begin{pmatrix}\n 1 & 0 & 0 & 1 \\\\\n 0 & 1 & 0 & 1 \\\\\n 0 & 0 & 1 & 1 \\\\\n 1 & 1 & 1 & 1\n\\end{pmatrix}." }, { "math_id": 71, "text": "A." }, { "math_id": 72, "text": "R^\\textsf{T} \\, ; R" }, { "math_id": 73, "text": "B," }, { "math_id": 74, "text": "R \\, ; R^\\textsf{T}." }, { "math_id": 75, "text": "V," }, { "math_id": 76, "text": "V" }, { "math_id": 77, "text": "(\\subseteq)." }, { "math_id": 78, "text": "A \\subseteq B \\text{ implies } B^{\\complement} \\subseteq A^{\\complement}." }, { "math_id": 79, "text": "\\bar{A} = A^{\\complement}." }, { "math_id": 80, "text": "S^\\textsf{T}" }, { "math_id": 81, "text": "Q R \\subseteq S \\quad \\text{ is equivalent to } \\quad Q^\\textsf{T} \\bar{S} \\subseteq \\bar{R} \\quad \\text{ is equivalent to } \\quad \\bar{S} R^\\textsf{T} \\subseteq \\bar{Q}." }, { "math_id": 82, "text": "L M \\subseteq N \\text{ implies } \\bar{N} M^\\textsf{T} \\subseteq \\bar{L}." }, { "math_id": 83, "text": "R X \\subseteq S \\quad \\text{and} \\quad XR \\subseteq S." }, { "math_id": 84, "text": "R X \\subseteq S \\text{ implies } R^\\textsf{T} \\bar{S} \\subseteq \\bar{X}," }, { "math_id": 85, "text": "X \\subseteq \\overline{R^\\textsf{T} \\bar{S}}," }, { "math_id": 86, "text": "A\\backslash B \\mathrel{:=} \\overline{A^\\textsf{T} \\bar{B} }" }, { "math_id": 87, "text": "D/C \\mathrel{:=} \\overline{\\bar{D} C^\\textsf{T}}" }, { "math_id": 88, "text": "\\operatorname{syq} (E, F) \\mathrel{:=} \\overline{E^\\textsf{T} \\bar{F} } \\cap \\overline{\\bar{E}^\\textsf{T} F}" }, { "math_id": 89, "text": "A X \\subseteq B" }, { "math_id": 90, "text": "X \\subseteq A \\backslash B." }, { "math_id": 91, "text": "A X \\subseteq B." }, { "math_id": 92, "text": "Y C \\subseteq D" }, { "math_id": 93, "text": "Y \\subseteq D / C," }, { "math_id": 94, "text": "Y C \\subseteq D." }, { "math_id": 95, "text": "(<)" }, { "math_id": 96, "text": "c : H \\to A" }, { "math_id": 97, "text": "d : H \\to B" }, { "math_id": 98, "text": "c \\,(<)\\, d : H \\to A \\times B." }, { "math_id": 99, "text": "a : A \\times B \\to A" }, { "math_id": 100, "text": "b : A \\times B \\to B," }, { "math_id": 101, "text": "a^{\\textsf{T}}" }, { "math_id": 102, "text": "b^{\\textsf{T}}." }, { "math_id": 103, "text": "c" }, { "math_id": 104, "text": "d" }, { "math_id": 105, "text": "c\\,(<)\\,d ~\\mathrel{:=}~ c ;a^\\textsf{T} \\cap\\ d ;b^\\textsf{T}." }, { "math_id": 106, "text": "n" }, { "math_id": 107, "text": "n \\geq 2," } ]
https://en.wikipedia.org/wiki?curid=7472170
74724462
MUQ
MUQ can refer to: See also. This disambiguation page lists articles associated with the title MUQ.
[ { "math_id": 0, "text": "MUQ(p)" } ]
https://en.wikipedia.org/wiki?curid=74724462
747279
Snellen chart
Eye chart A Snellen chart is an eye chart that can be used to measure visual acuity. Snellen charts are named after the Dutch ophthalmologist Herman Snellen, who developed the chart in 1862 as a measurement tool for the acuity formula developed by his professor Franciscus Cornelius Donders. Many ophthalmologists and vision scientists now use an improved chart known as the LogMAR chart. History. Snellen developed charts using symbols based on a 5×5 unit grid. The experimental charts developed in 1861 used abstract symbols. Snellen's charts published in 1862 used alphanumeric capitals in the 5×5 grid. The original chart shows A, C, E, G, L, N, P, R, T, 5, V, Z, B, D, 4, F, H, K, O, S, 3, U, Y, A, C, E, G, L, 2. Description. The normal Snellen chart is printed with eleven lines of block letters. The first line consists of one very large letter, which may be one of several letters, for example E, H, or N. Subsequent rows have increasing numbers of letters that decrease in size. A person taking the test covers one eye from 6 metres or 20 feet away, and reads aloud the letters of each row, beginning at the top. The smallest row that can be read accurately indicates the visual acuity in that specific eye. The symbols on an acuity chart are formally known as "optotypes". In the case of the traditional Snellen chart, the optotypes have the appearance of block letters, and are intended to be seen and read as letters. They are not, however, letters from any ordinary typographer's font. They have a particular, simple geometry in which the thickness of the lines equals the thickness of the white spaces between lines and the thickness of the gap in the letter "C", and the height and width of the optotype (letter) are five times the thickness of the line. Only the nine letters C, D, E, F, L, O, P, T, Z are used in the common Snellen chart. The perception of five out of six letters (or similar ratio) is judged to be the Snellen fraction. Wall-mounted Snellen charts are inexpensive and are sometimes used for approximate assessment of vision, e.g. in a primary-care physician's office. Whenever acuity must be assessed carefully (as in an eye doctor's examination), or where there is a possibility that the examinee might attempt to deceive the examiner (as in a motor vehicle license office), equipment is used that can present the letters in a variety of randomized patterns. BS 4274-1:1968 (British Standards Institution) "Specification for test charts for determining distance visual acuity" was replaced by BS 4274-1:2003 "Test charts for clinical determination of distance visual acuity — Specification". It states that "the luminance of the presentation shall be uniform and not less than 120 cd/m2. Any variation across the test chart shall not exceed 20 %." According to BS 4274-1:2003, only the letters C, D, E, F, H, K, N, P, R, U, V, and Z should be used for the testing of vision, based upon equal legibility of the letters. Snellen fraction. Visual acuity is the ratio of the distance at which the test is made to the distance at which the smallest optotype identified subtends an angle of five arcminutes, with the critical distinguishing features of the optotype subtending an angle of one arcminute. "6/6"(m) or "20/20"(ft) vision. Snellen defined "standard vision" as the ability to recognize one of his optotypes when it subtended 5 minutes of arc. Thus the optotype can only be recognized if the person viewing it can discriminate a spatial pattern separated by a visual angle of one minute of arc. Outside the United States, the standard chart distance is 6 metres, and normal acuity is designated "6/6". Other acuities are expressed as ratios with a numerator of 6.
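A small numerical sketch of the 5-arcminute criterion, using the optotype-size formula given later in the article (the function name is illustrative):

```python
import math

def optotype_height(distance_m, angle_arcmin=5.0):
    """Height of an optotype subtending angle_arcmin at distance_m (in metres)."""
    theta = math.radians(angle_arcmin / 60.0)
    return 2.0 * distance_m * math.tan(theta / 2.0)

print(round(optotype_height(6.0) * 1000, 2))       # 6/6 letter at 6 m: about 8.73 mm
print(round(optotype_height(6.0, 50.0) * 1000, 1)) # 6/60 letter (10x larger): about 87.3 mm
```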
Some clinics do not have 6-metre eye lanes available, and either a half-size chart subtending the same angles at 3 metres, or a reversed chart projected and viewed by a mirror, is used to achieve the correct sized letters. In the most familiar acuity test, a Snellen chart is placed at a standard distance: 6 metres. At this distance, the symbols on the line representing "normal" acuity subtend an angle of five minutes of arc, and the thickness of the lines and of the spaces between the lines subtends one minute of arc. This line, designated 6/6 (or 20/20), is the smallest line that a person with normal acuity can read at a distance of 6 metres. This definition is arbitrary, since human eyes typically have higher acuity, as Tscherning writes, "We have found also that the best eyes have a visual acuity which approaches 2, and we can be almost certain that if, with a good illumination, the acuity is only equal to 1, the eye presents defects sufficiently pronounced to be easily established." Three lines above, the letters have twice the height of those letters on the 6/6 (or 20/20 in the US) line. If this is the smallest line a person can read, the person's acuity is "6/12" ("20/40"), meaning that this person needs to approach to a distance of 6 metres to read letters that a person with normal acuity could read at 12 metres. In an even more approximate manner, this person could be said to have "half" the normal acuity of 6/6. At exactly 6 metres' distance from the patient, the letters on the 6/6 line shall subtend 5 minutes of arc (such that the individual limbs of the letters subtend 1 minute of arc), which means that the chart should be sized such that these letters are 8.73 mm tall and the topmost (6/60) "E" should be 87.3 mm tall. Putting it another way, the eye should be at a distance 68.76 times the height of the top (6/60) letter. The formula is formula_0 where formula_1 is the optotype height or width (which are the same due to the optotype being on a square grid), formula_2 is the distance from eye to chart, and formula_3 is the angle subtended by the optotype, which is 5 arcminutes as specified by Snellen. Another calculation for United States clinics using 20-foot chart distances (slightly more than 6 m), using a 17 mm model eye for calculations and a letter which subtends 5 minutes of arc, gives a vertical height of the 20/20 letter of 8.75 mm. Acuity charts are used during many kinds of vision examinations, such as "refracting" the eye to determine the best eyeglass prescription. The largest letter on an eye chart often represents an acuity of 6/60 (20/200), the value that is considered "legally blind" in the US. Many individuals with high myopia cannot read the large E without glasses, but can read the 6/6 (20/20) line or 6/4.5 (20/15) line with glasses. By contrast, legally blind individuals have a visual acuity of 6/60 (20/200) or less when using the best corrective lens. Electronic chart. To ensure adequate illumination of the Snellen charts, various medical device manufacturers have developed Snellen chart products with backlight or projection. Digital chart. Since computer monitors typically have good lighting for reading and LCD/LED monitors have high DPI (between 96 and 480), they are suitable for displaying optotypes. Digital chart products commonly support randomizing the optotypes displayed, to prevent patients from memorizing lines they have previously read. In Google Play and the App Store (iOS), there are Snellen chart apps for smart phones and tablets. References.
[ { "math_id": 0, "text": "w=2d \\, {\\tan{{\\theta} \\over {2}}}" }, { "math_id": 1, "text": "w" }, { "math_id": 2, "text": "d" }, { "math_id": 3, "text": "\\theta" } ]
https://en.wikipedia.org/wiki?curid=747279
74732341
Arie Bialostocki
Israeli-American mathematician and mathematics professor Arie Bialostocki is an Israeli-American mathematician with expertise and contributions in discrete mathematics and finite groups. Education and career. Bialostocki received his BSc, MSc, and PhD (1984) degrees from Tel Aviv University in Israel. His dissertation was written under the supervision of Marcel Herzog. After a year of postdoctoral work at the University of Calgary, Canada, he took a faculty position at the University of Idaho, became a professor in 1992, and continued to work there until he retired at the end of 2011. At Idaho, Bialostocki maintained correspondence and collaborations with researchers from around the world who shared similar interests in mathematics. His Erdős number is 1. He has supervised seven PhD students and numerous undergraduate students, who enjoyed his colorful anecdotes and advice. He organized the Research Experience for Undergraduates (REU) program at the University of Idaho from 1999 to 2003, attracting many promising undergraduates who have themselves gone on to outstanding research careers. Mathematics research. Bialostocki has published more than 50 papers. Some of his contributions include: References.
[ { "math_id": 0, "text": "B" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "d_2(B)=d_2(G)" }, { "math_id": 3, "text": "d_2(X)" }, { "math_id": 4, "text": "2" }, { "math_id": 5, "text": "A=(a_1,a_2,\\ldots,a_n)" }, { "math_id": 6, "text": "{\\mathbb Z}_m" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "{\\lfloor{n/2}\\rfloor \\choose {m}}+{\\lceil{n/2}\\rceil \\choose{m}}" }, { "math_id": 9, "text": "m" }, { "math_id": 10, "text": "n=2m-1" }, { "math_id": 11, "text": "b(m, k; r)" }, { "math_id": 12, "text": "k" }, { "math_id": 13, "text": "n\\geqslant k+2" } ]
https://en.wikipedia.org/wiki?curid=74732341
747328
Proxy (climate)
Preserved physical characteristics allowing reconstruction of past climatic conditions In the study of past climates ("paleoclimatology"), climate proxies are preserved physical characteristics of the past that stand in for direct meteorological measurements and enable scientists to reconstruct the climatic conditions over a longer fraction of the Earth's history. Reliable global records of climate only began in the 1880s, and proxies provide the only means for scientists to determine climatic patterns before record-keeping began. A large number of climate proxies have been studied from a variety of geologic contexts. Examples of proxies include stable isotope measurements from ice cores, growth rates in tree rings, species composition of sub-fossil pollen in lake sediment or foraminifera in ocean sediments, temperature profiles of boreholes, and stable isotopes and mineralogy of corals and carbonate speleothems. In each case, the proxy indicator has been influenced by a particular seasonal climate parameter (e.g., summer temperature or monsoon intensity) at the time in which they were laid down or grew. Interpretation of climate proxies requires a range of ancillary studies, including calibration of the sensitivity of the proxy to climate and cross-verification among proxy indicators. Proxies can be combined to produce temperature reconstructions longer than the instrumental temperature record and can inform discussions of global warming and climate history. The geographic distribution of proxy records, just like the instrumental record, is not at all uniform, with more records in the northern hemisphere. Proxies. In science, it is sometimes necessary to study a variable which cannot be measured directly. This can be done by "proxy methods," in which a variable which correlates with the variable of interest is measured, and then used to infer the value of the variable of interest. Proxy methods are of particular use in the study of the past climate, beyond times when direct measurements of temperatures are available. Most proxy records have to be calibrated against independent temperature measurements, or against a more directly calibrated proxy, during their period of overlap to estimate the relationship between temperature and the proxy. The longer history of the proxy is then used to reconstruct temperature from earlier periods. Ice cores. Drilling. Ice cores are cylindrical samples from within ice sheets in the Greenland, Antarctic, and North American regions. First attempts of extraction occurred in 1956 as part of the International Geophysical Year. As original means of extraction, the U.S. Army's Cold Regions Research and Engineering Laboratory used an -long modified electrodrill in 1968 at Camp Century, Greenland, and Byrd Station, Antarctica. Their machinery could drill through of ice in 40–50 minutes. From 1300 to in depth, core samples were in diameter and 10 to long. Deeper samples of 15 to long were not uncommon. Every subsequent drilling team improves their method with each new effort. Proxy. The ratio between the 16O and 18O water molecule isotopologues in an ice core helps determine past temperatures and snow accumulations. The heavier isotope (18O) condenses more readily as temperatures decrease and falls more easily as precipitation, while the lighter isotope (16O) needs colder conditions to precipitate. The farther north one needs to go to find elevated levels of the 18O isotopologue, the warmer the period. 
In addition to oxygen isotopes, water contains hydrogen isotopes – 1H and 2H, usually referred to as H and D (for deuterium) – that are also used for temperature proxies. Normally, ice cores from Greenland are analyzed for δ18O and those from Antarctica for δ-deuterium. Those cores that analyze for both show a lack of agreement. (In the figure, δ18O is for the trapped air, not the ice. δD is for the ice.) Air bubbles in the ice, which contain trapped greenhouse gases such as carbon dioxide and methane, are also helpful in determining past climate changes. From 1989 to 1992, the European Greenland Ice Core Drilling Project drilled in central Greenland at coordinates 72° 35' N, 37° 38' W. The ices in that core were 3840 years old at a depth of 770 m, 40,000 years old at 2521 m, and 200,000 years old or more at 3029 m bedrock. Ice cores in Antarctica can reveal the climate records for the past 650,000 years. Location maps and a complete list of U.S. ice core drilling sites can be found on the website for the National Ice Core Laboratory. Tree rings. Dendroclimatology is the science of determining past climates from trees, primarily from properties of the annual tree rings. Tree rings are wider when conditions favor growth, narrower when times are difficult. Two primary factors are temperature and humidity / water availability. Other properties of the annual rings, such as maximum latewood density (MXD) have been shown to be better proxies than simple ring width. Using tree rings, scientists have estimated many local climates for hundreds to thousands of years previous. By combining multiple tree-ring studies (sometimes with other climate proxy records), scientists have estimated past regional and global climates (see Temperature record of the past 1000 years). Fossil leaves. Paleoclimatologists often use leaf teeth to reconstruct mean annual temperature in past climates, and they use leaf size as a proxy for mean annual precipitation. In the case of mean annual precipitation reconstructions, some researchers believe taphonomic processes cause smaller leaves to be overrepresented in the fossil record, which can bias reconstructions. However, recent research suggests that the leaf fossil record may not be significantly biased toward small leaves. New approaches retrieve data such as CO2 content of past atmospheres from fossil leaf stomata and isotope composition, measuring cellular CO2 concentrations. A 2014 study was able to use the carbon-13 isotope ratios to estimate the CO2 amounts of the past 400 million years, the findings hint at a higher climate sensitivity to CO2 concentrations. Boreholes. Borehole temperatures are used as temperature proxies. Since heat transfer through the ground is slow, temperature measurements at a series of different depths down the borehole, adjusted for the effect of rising heat from inside the Earth, can be "inverted" (a mathematical formula to solve matrix equations) to produce a non-unique series of surface temperature values. The solution is "non-unique" because there are multiple possible surface temperature reconstructions that can produce the same borehole temperature profile. In addition, due to physical limitations, the reconstructions are inevitably "smeared", and become more smeared further back in time. When reconstructing temperatures around 1500 AD, boreholes have a temporal resolution of a few centuries. 
At the start of the 20th century, their resolution is a few decades; hence they do not provide a useful check on the instrumental temperature record. However, they are broadly comparable. These confirmations have given paleoclimatologists the confidence that they can measure the temperature of 500 years ago. This is concluded by a depth scale of about to measure the temperatures from 100 years ago and to measure the temperatures from 1,000 years ago. Boreholes have a great advantage over many other proxies in that no calibration is required: they are actual temperatures. However, they record surface temperature not the near-surface temperature (1.5 meter) used for most "surface" weather observations. These can differ substantially under extreme conditions or when there is surface snow. In practice the effect on borehole temperature is believed to be generally small. A second source of error is contamination of the well by groundwater may affect the temperatures, since the water "carries" more modern temperatures with it. This effect is believed to be generally small, and more applicable at very humid sites. It does not apply in ice cores where the site remains frozen all year. More than 600 boreholes, on all continents, have been used as proxies for reconstructing surface temperatures. The highest concentration of boreholes exist in North America and Europe. Their depths of drilling typically range from 200 to greater than 1,000 meters into the crust of the Earth or ice sheet. A small number of boreholes have been drilled in the ice sheets; the purity of the ice there permits longer reconstructions. Central Greenland borehole temperatures show "a warming over the last 150 years of approximately 1°C ± 0.2°C preceded by a few centuries of cool conditions. Preceding this was a warm period centered around A.D. 1000, which was warmer than the late 20th century by approximately 1°C." A borehole in the Antarctica icecap shows that the "temperature at A.D. 1 [was] approximately 1°C warmer than the late 20th century". Borehole temperatures in Greenland were responsible for an important revision to the isotopic temperature reconstruction, revealing that the former assumption that "spatial slope equals temporal slope" was incorrect. Corals. Ocean coral skeletal rings, or bands, also share paleoclimatological information, similarly to tree rings. In 2002, a report was published on the findings of Drs. Lisa Greer and Peter Swart, associates of University of Miami at the time, in regard to stable oxygen isotopes in the calcium carbonate of coral. Cooler temperatures tend to cause coral to use heavier isotopes in its structure, while warmer temperatures result in more normal oxygen isotopes being built into the coral structure. Denser water salinity also tends to contain the heavier isotope. Greer's coral sample from the Atlantic Ocean was taken in 1994 and dated back to 1935. Greer recalls her conclusions, "When we look at the averaged annual data from 1935 to about 1994, we see it has the shape of a sine wave. It is periodic and has a significant pattern of oxygen isotope composition that has a peak at about every twelve to fifteen years." Surface water temperatures have coincided by also peaking every twelve and a half years. However, since recording this temperature has only been practiced for the last fifty years, correlation between recorded water temperature and coral structure can only be drawn so far back. Pollen grains. Pollen can be found in sediments. 
Plants produce pollen in large quantities and it is extremely resistant to decay. It is possible to identify a plant species from its pollen grain. The identified plant community of the area at the relative time from that sediment layer, will provide information about the climatic condition. The abundance of pollen of a given vegetation period or year depends partly on the weather conditions of the previous months, hence pollen density provides information on short-term climatic conditions. The study of prehistoric pollen is palynology. Dinoflagellate cysts. Dinoflagellates occur in most aquatic environments and during their life cycle, some species produce highly resistant organic-walled cysts for a dormancy period when environmental conditions are not appropriate for growth. Their living depth is relatively shallow (dependent upon light penetration), and closely coupled to diatoms on which they feed. Their distribution patterns in surface waters are closely related to physical characteristics of the water bodies, and nearshore assemblages can also be distinguished from oceanic assemblages. The distribution of dinocysts in sediments has been relatively well documented and has contributed to understanding the average sea-surface conditions that determine the distribution pattern and abundances of the taxa (). Several studies, including and have compiled box and gravity cores in the North Pacific analyzing them for palynological content to determine the distribution of dinocysts and their relationships with sea surface temperature, salinity, productivity and upwelling. Similarly, and use a box core at 576.5 m of water depth from 1992 in the central Santa Barbara Basin to determine oceanographic and climatic changes during the past 40 kyr in the area. Lake and ocean sediments. Similar to their study on other proxies, paleoclimatologists examine oxygen isotopes in the contents of ocean sediments. Likewise, they measure the layers of varve (deposited fine and coarse silt or clay) laminating lake sediments. Lake varves are primarily influenced by: Diatoms, foraminifera, radiolarians, ostracods, and coccolithophores are examples of biotic proxies for lake and ocean conditions that are commonly used to reconstruct past climates. The distribution of the species of these and other aquatic creatures preserved in the sediments are useful proxies. The optimal conditions for species preserved in the sediment act as clues. Researchers use these clues to reveal what the climate and environment was like when the creatures died. The oxygen isotope ratios in their shells can also be used as proxies for temperature. Water isotopes and temperature reconstruction. Ocean water is mostly H216O, with small amounts of HD16O and H218O, where D denotes deuterium, i.e. hydrogen with an extra neutron. In Vienna Standard Mean Ocean Water (VSMOW) the ratio of D to H is 155.76x10−6 and O-18 to O-16 is 2005.2x10−6. Isotope fractionation occurs during changes between condensed and vapour phases: the vapour pressure of heavier isotopes is lower, so vapour contains relatively more of the lighter isotopes and when the vapour condenses the precipitation preferentially contains heavier isotopes. The difference from VSMOW is expressed as δ18O = 1000‰ formula_0; and a similar formula for δD. δ values for precipitation are always negative. 
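A brief sketch of the δ18O calculation relative to VSMOW, using the ratio quoted above (the sample ratio in the example is invented for illustration):

```python
R_VSMOW = 2005.2e-6  # 18O/16O ratio of Vienna Standard Mean Ocean Water, as given above

def delta_18O(r_sample):
    """delta-18O of a sample in per mil, relative to VSMOW."""
    return 1000.0 * (r_sample / R_VSMOW - 1.0)

# An invented ratio, depleted in 18O as is typical of polar precipitation:
print(round(delta_18O(1945.0e-6), 1))  # about -30.0 per mil
```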
The major influence on δ is the difference between ocean temperatures where the moisture evaporated and the place where the final precipitation occurred; since ocean temperatures are relatively stable the δ value mostly reflects the temperature where precipitation occurs. Taking into account that the precipitation forms above the inversion layer, we are left with a linear relation: δ 18O = aT + b This is empirically calibrated from measurements of temperature and δ as a = 0.67 ‰/°C for Greenland and 0.76 ‰/°C for East Antarctica. The calibration was initially done on the basis of "spatial" variations in temperature and it was assumed that this corresponded to "temporal" variations. More recently, borehole thermometry has shown that for glacial-interglacial variations, a = 0.33 ‰/°C, implying that glacial-interglacial temperature changes were twice as large as previously believed. A study published in 2017 called the previous methodology to reconstruct paleo ocean temperatures 100 million years ago into question, suggesting it has been relatively stable during that time, much colder. Membrane lipids. A novel climate proxy obtained from peat (lignites, ancient peat) and soils, membrane lipids known as glycerol dialkyl glycerol tetraether (GDGT) is helping to study paleo environmental factors, which control relative distribution of differently branched GDGT isomers. The study authors note, "These branched membrane lipids are produced by an as yet unknown group of anaerobic soil bacteria." As of 2018[ [update]], there is a decade of research demonstrating that in mineral soils the degree of methylation of bacteria (brGDGTs), helps to calculate mean annual air temperatures. This proxy method was used to study the climate of the early Palaeogene, at the Cretaceous–Paleogene boundary, and researchers found that annual air temperatures, over land and at mid-latitude, averaged about 23–29 °C (± 4.7 °C), which is 5–10 °C higher than most previous findings. Pseudoproxies. The skill of algorithms used to combine proxy records into an overall hemispheric temperature reconstruction may be tested using a technique known as "pseudoproxies". In this method, output from a climate model is sampled at locations corresponding to the known proxy network, and the temperature record produced is compared to the (known) overall temperature of the model. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\times \\left( \\frac{([{}^{18}O]/[{}^{16}O])}{([{}^{18}O]/[{}^{16}O])_{\\mathrm{VSMOW}}} - 1\\right)" } ]
https://en.wikipedia.org/wiki?curid=747328
74738076
Carré du champ operator
Operator in analysis and probability theory The carré du champ operator (French for "square of a field" operator) is a bilinear, symmetric operator from analysis and probability theory. The carré du champ operator measures how far an infinitesimal generator is from being a derivation. The operator was introduced in 1969 by Hiroshi Kunita and independently discovered in 1976 by Jean-Pierre Roth in his doctoral thesis. The name "carré du champ" comes from electrostatics. Carré du champ operator for a Markov semigroup. Let formula_0 be a σ-finite measure space, formula_1 a Markov semigroup of non-negative operators on formula_2, formula_3 the infinitesimal generator of formula_1, and formula_4 the algebra of functions in formula_5, i.e. a vector space such that for all formula_6 also formula_7. Carré du champ operator. The carré du champ operator of a Markovian semigroup formula_1 is the operator formula_8 defined (following P. A. Meyer) as formula_9 for all formula_10. Properties. From the definition, it follows that formula_11 For formula_12 we have formula_13 and thus formula_14 and formula_15; therefore the carré du champ operator is positive. The domain of the generator is formula_16
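As an illustration (an added example under the assumption that the generator is the second derivative on the real line, i.e. the generator of the one-dimensional heat semigroup up to a constant), the carré du champ reduces to the product of first derivatives; a quick symbolic check with SymPy:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

A = lambda u: sp.diff(u, x, 2)  # assumed generator Au = u''
Gamma = sp.simplify(sp.Rational(1, 2) * (A(f * g) - f * A(g) - g * A(f)))
print(Gamma)  # Derivative(f(x), x)*Derivative(g(x), x), i.e. Gamma(f, g) = f' g'
```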
[ { "math_id": 0, "text": "(X,\\mathcal{E},\\mu)" }, { "math_id": 1, "text": "\\{P_t\\}_{t\\geq 0}" }, { "math_id": 2, "text": "L^2(X,\\mu)" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "\\mathcal{A}" }, { "math_id": 5, "text": "\\mathcal{D}(A)" }, { "math_id": 6, "text": "f,g\\in \\mathcal{A}" }, { "math_id": 7, "text": "fg\\in \\mathcal{A}" }, { "math_id": 8, "text": "\\Gamma:\\mathcal{A}\\times \\mathcal{A}\\to\\mathbb{R}" }, { "math_id": 9, "text": "\\Gamma(f,g)=\\frac{1}{2}\\left(A(fg)-fA(g)-gA(f)\\right)" }, { "math_id": 10, "text": "f,g \\in \\mathcal{A}" }, { "math_id": 11, "text": "\\Gamma(f,g)=\\lim\\limits_{t\\to 0}\\frac{1}{2t}\\left(P_t(fg)-P_tfP_tg\\right)." }, { "math_id": 12, "text": "f\\in\\mathcal{A}" }, { "math_id": 13, "text": "P_t(f^2)\\geq (P_tf)^2" }, { "math_id": 14, "text": "A(f^2)\\geq 2 fAf" }, { "math_id": 15, "text": "\\Gamma(f):=\\Gamma(f,f)\\geq 0,\\quad \\forall f\\in\\mathcal{A}" }, { "math_id": 16, "text": "\\mathcal{D}(A):=\\left\\{f \\in L^2(X,\\mu) ;\\;\\lim\\limits_{t\\downarrow 0}\\frac{P_t f-f}{t}\\text{ exists and is in } L^2(X,\\mu)\\right\\}." } ]
https://en.wikipedia.org/wiki?curid=74738076
7473871
Levinson's inequality
In mathematics, Levinson's inequality is the following inequality, due to Norman Levinson, involving positive numbers. Let formula_0 and let formula_1 be a given function having a third derivative on the range formula_2, and such that formula_3 for all formula_4. Suppose formula_5 and formula_6 for formula_7. Then formula_8 The Ky Fan inequality is the special case of Levinson's inequality, where formula_9
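A small numerical check of the Ky Fan special case stated above (the random sample points in (0, 1/2] and the seed are arbitrary):

```python
import math
import random

random.seed(0)
x = [random.uniform(0.01, 0.5) for _ in range(10)]  # points with 0 < x_i <= a = 1/2

def jensen_gap(values):
    """mean(log v) - log(mean v), the quantity appearing on both sides for f = log."""
    return sum(map(math.log, values)) / len(values) - math.log(sum(values) / len(values))

lhs = jensen_gap(x)                       # left-hand side with p_i = 1
rhs = jensen_gap([1.0 - xi for xi in x])  # right-hand side, since 2a - x_i = 1 - x_i
assert lhs <= rhs
print(lhs, rhs)
```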
[ { "math_id": 0, "text": "a>0" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "(0,2a)" }, { "math_id": 3, "text": "f'''(x)\\geq 0" }, { "math_id": 4, "text": "x\\in (0,2a)" }, { "math_id": 5, "text": "0<x_i\\leq a" }, { "math_id": 6, "text": "0<p_i" }, { "math_id": 7, "text": " i = 1, \\ldots, n" }, { "math_id": 8, "text": "\\frac{\\sum_{i=1}^np_i f(x_i)}{\\sum_{i=1}^np_i}-f\\left(\\frac{\\sum_{i=1}^np_ix_i}{\\sum_{i=1}^np_i}\\right)\\le\\frac{\\sum_{i=1}^np_if(2a-x_i)}{\\sum_{i=1}^np_i}-f\\left(\\frac{\\sum_{i=1}^np_i(2a-x_i)}{\\sum_{i=1}^np_i}\\right)." }, { "math_id": 9, "text": "p_i=1,\\ a=\\frac{1}{2}, \\text{ and } f(x) = \\log x. " } ]
https://en.wikipedia.org/wiki?curid=7473871
74741552
Heawood family
In graph theory the term Heawood family refers to either one of the following two related graph families generated via ΔY- and YΔ-transformations: the family generated from the complete graph formula_0, or the larger family generated from both formula_0 and the complete multipartite graph formula_1. In either setting the members of the graph family are collectively known as Heawood graphs, as the Heawood graph is a member. This is in analogy to the Petersen family, which too is named after its member the Petersen graph. The Heawood families play a significant role in topological graph theory. They contain the smallest known examples for graphs that are intrinsically knotted, that are not 4-flat, or that have Colin de Verdière graph invariant formula_2. The formula_0-family. The formula_0-family is generated from the complete graph formula_0 through repeated application of ΔY- and YΔ-transformations. The family consists of 20 graphs, all of which have 21 edges. The unique smallest member, formula_0, has seven vertices. The unique largest member, the Heawood graph, has 14 vertices. Only 14 out of the 20 graphs are intrinsically knotted, all of which are minor minimal with this property. The other six graphs have knotless embeddings. This shows that knotless graphs are not closed under ΔY- and YΔ-transformations. All members of the formula_0-family are intrinsically chiral. The formula_1-family. The formula_1-family is generated from the complete multipartite graph formula_1 through repeated application of ΔY- and YΔ-transformations. The family consists of 58 graphs, all of which have 22 edges. The unique smallest member, formula_1, has eight vertices. The unique largest member has 14 vertices. All graphs in this family are intrinsically knotted and are minor minimal with this property. The formula_3-family. Unsolved problem in mathematics: Is the Heawood family the complete list of excluded minors of the 4-flat graphs and of the graphs with formula_4? The Heawood family generated from both formula_0 and formula_1 through repeated application of ΔY- and YΔ-transformations is the disjoint union of the formula_0-family and the formula_1-family. It consists of 78 graphs. This graph family has significance in the study of 4-flat graphs, i.e., graphs with the property that every 2-dimensional CW complex built on them can be embedded into 4-space. Hein van der Holst (2006) showed that the graphs in the Heawood family are not 4-flat and have Colin de Verdière graph invariant formula_5. In particular, they are neither planar nor linkless. Van der Holst suggested that they might form the complete list of excluded minors for both the 4-flat graphs and the graphs with formula_4. This conjecture can be further motivated from structural similarities to other topologically defined graph classes: the planar graphs are exactly the graphs with formula_8, and their excluded minors are formula_6 and formula_7; the linkless graphs are exactly the graphs with formula_11, and their excluded minors are the seven graphs of the Petersen family, which form the ΔY-family generated from formula_9 and formula_10.
[ { "math_id": 0, "text": "K_7" }, { "math_id": 1, "text": "K_{3,3,1,1}" }, { "math_id": 2, "text": "\\mu=6" }, { "math_id": 3, "text": "\\{K_7,K_{3,3,1,1}\\}" }, { "math_id": 4, "text": "\\mu\\le 5" }, { "math_id": 5, "text": "\\mu = 6" }, { "math_id": 6, "text": "K_5" }, { "math_id": 7, "text": "K_{3,3}" }, { "math_id": 8, "text": "\\mu\\le 3" }, { "math_id": 9, "text": "K_6" }, { "math_id": 10, "text": "K_{3,3,1}" }, { "math_id": 11, "text": "\\mu\\le 4" } ]
https://en.wikipedia.org/wiki?curid=74741552
74743071
YΔ- and ΔY-transformation
An operation on graphs In graph theory, ΔY- and YΔ-transformations (also written delta-wye and wye-delta) are a pair of operations on graphs. A ΔY-transformation replaces a triangle by a vertex of degree three; and conversely, a YΔ-transformation replaces a vertex of degree three by a triangle. The names for the operations derive from the shapes of the involved subgraphs, which look respectively like the letter Y and the Greek capital letter Δ. A YΔ-transformation may create parallel edges, even if applied to a simple graph. For this reason ΔY- and YΔ-transformations are most naturally considered as operations on multigraphs. On multigraphs both operations preserve the edge count and are exact inverses of each other. In the context of simple graphs it is common to combine a YΔ-transformation with a subsequent "normalization step" that reduces parallel edges to a single edge. This may no longer preserve the number of edges, nor be exactly reversible via a ΔY-transformation. Formal definition. Let formula_0 be a graph (potentially a multigraph). Suppose formula_0 contains a triangle formula_1 with vertices formula_2 and edges formula_3. A "ΔY-transformation" of formula_0 at formula_1 deletes the edges formula_3 and adds a new vertex formula_4 adjacent to each of formula_2. Conversely, if formula_4 is a vertex of degree three with neighbors formula_2, then a "YΔ-transformation" of formula_0 at formula_4 deletes formula_4 and adds three new edges formula_3, where formula_5 connects formula_6 and formula_7. If the resulting graph is required to be a simple graph, then any resulting parallel edges are replaced by a single edge. Relevance. ΔY- and YΔ-transformations are a tool both in pure graph theory and in applications. Both operations preserve a number of natural topological properties of graphs. Applying a YΔ-transformation to a 3-vertex of a planar graph, or a ΔY-transformation to a triangle face of a planar graph, results again in a planar graph. Applying ΔY- and YΔ-transformations to a linkless graph results again in a linkless graph. This fact is used to compactly describe the forbidden minors of the associated graph classes as ΔY-families generated from a small number of graphs (see the section on ΔY-families below). A particularly relevant application exists in electrical engineering in the study of three-phase power systems (see Y-Δ transform (electrical engineering)). In this context they are also known as star-triangle transformations and are a special case of star-mesh transformations. ΔY-families. The ΔY-family generated by a graph formula_0 is the smallest family of graphs that contains formula_0 and is closed under YΔ- and ΔY-transformations. Equivalently, it is constructed from formula_0 by recursively applying these transformations until no new graph is generated. If formula_0 is a finite graph it generates a finite ΔY-family, all members of which have the same edge count. The ΔY-family generated by several graphs is the smallest family that contains all these graphs and is closed under YΔ- and ΔY-transformation. Some notable families are generated in this way: the Petersen family is the ΔY-family generated by formula_8, and the Heawood family is the ΔY-family generated by formula_9 together with formula_10. YΔY-reducible graphs. A graph is YΔY-reducible if it can be reduced to a single vertex by a sequence of ΔY- or YΔ-transformations and the following normalization steps: removing loops, removing parallel edges (keeping a single edge), removing vertices of degree zero or one, and suppressing vertices of degree two by replacing their two incident edges with a single edge. The YΔY-reducible graphs form a minor closed family and therefore have a forbidden minor characterization (by the Robertson-Seymour theorem). The graphs of the Petersen family constitute some (but not all) of the excluded minors.
In fact, more than 68 billion excluded minors are already known. The class of YΔY-reducible graphs lies between the classes of planar graphs and linkless graphs: each planar graph is YΔY-reducible, while each YΔY-reducible graph is linkless. Both inclusions are strict: formula_11 is not planar but YΔY-reducible, while there exist linkless graphs that are not YΔY-reducible. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
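To make the formal definition above concrete, the following short sketch implements both operations on a multigraph, assuming the Python library networkx; the function names delta_to_y and y_to_delta are illustrative choices rather than any standard API.

import networkx as nx

def delta_to_y(G, x1, x2, x3, y):
    # ΔY-transformation: delete the triangle edges on x1, x2, x3 and
    # add a new degree-three vertex y adjacent to each of them.
    for u, v in ((x1, x2), (x2, x3), (x3, x1)):
        G.remove_edge(u, v)          # removes one (arbitrary) edge between u and v
    G.add_node(y)
    for x in (x1, x2, x3):
        G.add_edge(y, x)

def y_to_delta(G, y):
    # YΔ-transformation: delete the degree-three vertex y and join its three
    # neighbours (counted with multiplicity) pairwise; may create parallel edges.
    nbrs = [v for _, v in G.edges(y)]
    assert len(nbrs) == 3, "y must have degree three"
    G.remove_node(y)
    G.add_edge(nbrs[0], nbrs[1])
    G.add_edge(nbrs[1], nbrs[2])
    G.add_edge(nbrs[2], nbrs[0])

# Example: turn a triangle on {1, 2, 3} into a claw centred at "y" and back again.
G = nx.MultiGraph([(1, 2), (2, 3), (3, 1)])
delta_to_y(G, 1, 2, 3, "y")
y_to_delta(G, "y")
print(sorted(G.edges()))   # three edges on {1, 2, 3}; the edge count is preserved

On a multigraph the two functions are exact inverses of each other, in line with the remark above; a simple-graph version would additionally merge any parallel edges created by y_to_delta.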
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\Delta" }, { "math_id": 2, "text": "x_1,x_2,x_3" }, { "math_id": 3, "text": "e_{12},e_{23},e_{31}" }, { "math_id": 4, "text": "y" }, { "math_id": 5, "text": "e_{ij}" }, { "math_id": 6, "text": "x_i" }, { "math_id": 7, "text": "x_j" }, { "math_id": 8, "text": "K_6" }, { "math_id": 9, "text": "K_7" }, { "math_id": 10, "text": "K_{3,3,1,1}" }, { "math_id": 11, "text": "K_5" } ]
https://en.wikipedia.org/wiki?curid=74743071
74745482
Differentiable measure
Measure that has a notion of derivative In functional analysis and measure theory, a differentiable measure is a measure that has a notion of a derivative. The theory of differentiable measure was introduced by Russian mathematician Sergei Fomin and proposed at the International Congress of Mathematicians in 1966 in Moscow as an infinite-dimensional analog of the theory of distributions. Besides Fomin's notion of a derivative of a measure, there are also notions due to Anatoliy Skorokhod, to Sergio Albeverio and Raphael Høegh-Krohn, and to Oleg Smolyanov and Heinrich von Weizsäcker. Differentiable measure. Let formula_0 be a real vector space and formula_1 a σ-algebra on formula_0 that is invariant under translation by vectors formula_2, i.e. formula_3 for all formula_4 and formula_5. This setting is rather general on purpose, since for most definitions only linearity and measurability are needed. But usually one chooses formula_0 to be a real Hausdorff locally convex space with the Borel or cylindrical σ-algebra formula_1. For a measure formula_6, let formula_7 denote the measure shifted by formula_2. Fomin differentiability. A measure formula_6 on formula_8 is Fomin differentiable along formula_2 if for every set formula_4 the limit formula_9 exists. We call formula_10 the "Fomin derivative" of formula_6. Equivalently, for every set formula_4 the function formula_11 is differentiable at formula_12. Higher-order derivatives are defined iteratively by formula_13. Skorokhod differentiability. Let formula_6 be a Baire measure and let formula_14 be the space of bounded and continuous functions on formula_0. formula_6 is Skorokhod differentiable (or "S-differentiable") along formula_2 if a Baire measure formula_15 exists such that for all formula_16 the limit formula_17 exists. In shift notation formula_18 The measure formula_15 is called the "Skorokhod derivative" (or "S-derivative" or "weak derivative") of formula_6 along formula_2 and is unique. Albeverio-Høegh-Krohn differentiability. A measure formula_6 is Albeverio-Høegh-Krohn differentiable (or "AHK differentiable") along formula_2 if a measure formula_19 exists such that Example. Let formula_6 be a measure with a continuously differentiable Radon-Nikodým density formula_24; then the Fomin derivative is formula_25
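As a concrete illustration of the example above, take formula_6 to be the standard Gaussian measure on the real line and differentiate along h = 1; the following display is a worked special case written in LaTeX notation, not part of the general theory.

% Standard Gaussian measure on R, direction h = 1 (illustrative special case).
% Density g(x) = (2\pi)^{-1/2} e^{-x^2/2}, hence g'(x) = -x\, g(x).
\[
  \mu(A) = \int_A \frac{e^{-x^2/2}}{\sqrt{2\pi}}\,\mathrm{d}x,
  \qquad
  d_{1}\mu(A) = \int_A g'(x)\,\mathrm{d}x = -\int_A x\,\frac{e^{-x^2/2}}{\sqrt{2\pi}}\,\mathrm{d}x .
\]

Hence the Fomin derivative has density -x with respect to the Gaussian measure itself; this density is the logarithmic derivative of the Gaussian measure along h = 1.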
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\mathcal{A}" }, { "math_id": 2, "text": "h\\in X" }, { "math_id": 3, "text": "A +th\\in \\mathcal{A}" }, { "math_id": 4, "text": "A\\in\\mathcal{A}" }, { "math_id": 5, "text": "t\\in\\R" }, { "math_id": 6, "text": "\\mu" }, { "math_id": 7, "text": "\\mu_h(A):=\\mu(A+h)" }, { "math_id": 8, "text": "(X,\\mathcal{A})" }, { "math_id": 9, "text": "d_{h}\\mu(A):=\\lim\\limits_{t\\to 0}\\frac{\\mu(A+th)-\\mu(A)}{t}" }, { "math_id": 10, "text": "d_{h}\\mu" }, { "math_id": 11, "text": "f_{\\mu}^{A,h}:t\\mapsto \\mu(A+th)" }, { "math_id": 12, "text": "0" }, { "math_id": 13, "text": "d^n_{h}=d_{h}(d^{n-1}_{h})" }, { "math_id": 14, "text": "C_b(X)" }, { "math_id": 15, "text": "\\nu" }, { "math_id": 16, "text": "f\\in C_b(X)" }, { "math_id": 17, "text": "\\lim\\limits_{t\\to 0}\\int_X\\frac{ f(x-th)-f(x)}{t}\\mu(dx)=\\int_X f(x)\\nu(dx)" }, { "math_id": 18, "text": "\\lim\\limits_{t\\to 0}\\int_X\\frac{ f(x-th)-f(x)}{t}\\mu(dx)=\\lim\\limits_{t\\to 0}\\int_Xf\\; d\\left(\\frac{\\mu_{th}-\\mu}{t}\\right)." }, { "math_id": 19, "text": "\\lambda\\geq 0" }, { "math_id": 20, "text": "\\mu_{th}" }, { "math_id": 21, "text": "\\lambda" }, { "math_id": 22, "text": "\\lambda_{th}=f_t\\cdot \\lambda" }, { "math_id": 23, "text": "g:\\R\\to L^2(\\lambda),\\; t\\mapsto f_{t}^{1/2}" }, { "math_id": 24, "text": "g" }, { "math_id": 25, "text": "d_{h}\\mu(A)=\\lim\\limits_{t\\to 0}\\frac{\\mu(A+th)-\\mu(A)}{t}=\\lim\\limits_{t\\to 0}\\int_A\\frac{g(x+th)-g(x)}{t}\\mathrm{d}x=\\int_A g'(x)\\mathrm{d}x." } ]
https://en.wikipedia.org/wiki?curid=74745482
74751833
Lévy's stochastic area
In probability theory, Lévy's stochastic area is a stochastic process that describes the area enclosed by the trajectory of a two-dimensional Brownian motion and its chord. The process was introduced by Paul Lévy in 1940, and in 1950 he computed the characteristic function and conditional characteristic function. The process has many unexpected connections to other objects in mathematics such as the soliton solutions of the Korteweg–de Vries equation and the Riemann zeta function. In the Malliavin calculus, the process can be used to construct a process that is smooth in the sense of Malliavin but that has no continuous modification with respect to the Banach norm. Lévy's stochastic area. Let formula_0 be a two-dimensional Brownian motion in formula_1; then Lévy's stochastic area is the process formula_2 where the integral is understood in the Itō sense. Define the 1-form formula_3; then formula_4 is the stochastic integral of formula_5 along the curve formula_6: formula_7 Area formula. Let formula_8, formula_9, formula_10 and formula_11; then Lévy computed formula_12 and formula_13 where formula_14 denotes the Euclidean norm.
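The characteristic function above is easy to check numerically. The following Monte Carlo sketch, with freely chosen step counts and parameter values, discretizes the Itō integral and compares the empirical value of E[exp(iaS_t)] with 1/cosh(at/2).

import numpy as np

rng = np.random.default_rng(0)
t, a, n_steps, n_paths = 1.0, 2.0, 500, 10000   # illustrative choices
dt = t / n_steps

# Increments of the two independent Brownian components.
dW1 = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
dW2 = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W1 = np.cumsum(dW1, axis=1) - dW1   # values at the left endpoints (Itō convention)
W2 = np.cumsum(dW2, axis=1) - dW2

# Discretized stochastic area S_t = (1/2) * sum(W1 dW2 - W2 dW1).
S = 0.5 * np.sum(W1 * dW2 - W2 * dW1, axis=1)

empirical = np.mean(np.exp(1j * a * S))
exact = 1.0 / np.cosh(a * t / 2.0)
print(empirical.real, exact)   # the two numbers agree up to Monte Carlo error

The same experiment, restricted to paths whose endpoint falls near a chosen point x, can in principle be used to probe the conditional formula as well.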
[ { "math_id": 0, "text": "W=(W_s^{(1)},W_s^{(2)})_{s\\geq 0}" }, { "math_id": 1, "text": "\\mathbb{R}^2" }, { "math_id": 2, "text": "S(t,W)=\\frac{1}{2}\\int_0^t \\left(W_s^{(1)}dW_s^{(2)}-W_s^{(2)}dW_s^{(1)}\\right)," }, { "math_id": 3, "text": "\\vartheta=\\tfrac{1}{2}(x^1dx^2-x^2dx^1)" }, { "math_id": 4, "text": "S(t,W)" }, { "math_id": 5, "text": "\\vartheta" }, { "math_id": 6, "text": "\\varphi:[0,t]\\to \\R^2, s\\mapsto (W_s^{(1)},W_s^{(2)})" }, { "math_id": 7, "text": "S(t,W)=\\int_{W[0,t]} \\vartheta." }, { "math_id": 8, "text": "x=(x_1,x_2)\\in \\R^2" }, { "math_id": 9, "text": "a\\in \\R" }, { "math_id": 10, "text": "b=at/2" }, { "math_id": 11, "text": "S_t=S(t,W)" }, { "math_id": 12, "text": "\\mathbb{E}[\\exp(iaS_t)]=\\frac{1}{\\cosh(b)}" }, { "math_id": 13, "text": "\\mathbb{E}[\\exp(iaS_t)\\mid W_t=x]=\\frac{b}{\\sinh(b)}\\exp\\left(\\frac{\\|x\\|_2^2}{2t}\\left(1-b\\coth\\left(b\\right)\\right)\\right)," }, { "math_id": 14, "text": "\\|x\\|_2" } ]
https://en.wikipedia.org/wiki?curid=74751833
74756339
Optical glass
Special glass type used for optical systems Optical glass refers to a quality of glass suitable for the manufacture of optical systems such as optical lenses, prisms or mirrors. Unlike window glass or crystal, whose formula is adapted to the desired aesthetic effect, optical glass contains additives designed to modify certain optical or mechanical properties of the glass: refractive index, dispersion, transmittance, thermal expansion and other parameters. Lenses produced for optical applications use a wide variety of materials, from silica and conventional borosilicates to elements such as germanium and fluorite, some of which are essential for glass transparency in areas other than the visible spectrum. Various elements can be used to form glass, including silicon, boron, phosphorus, germanium and arsenic, mostly in oxide form, but also in the form of selenides, sulfides, fluorides and more. These materials give glass its characteristic non-crystalline structure. The addition of materials such as alkali metals, alkaline-earth metals or rare earths can change the physico-chemical properties of the whole to give the glass the qualities suited to its function. Some optical glasses use up to twenty different chemical components to obtain the desired optical properties. In addition to optical and mechanical parameters, optical glasses are characterized by their purity and quality, which are essential for their use in precision instruments. Defects are quantified and classified according to international standards: bubbles, inclusions, scratches, index defects, coloring, etc. History. The earliest known optical lenses, dating from before 700 BC, were produced under the Assyrian Empire: they were made of polished crystals, usually quartz, rather than glass. It wasn't until the rise of the Greeks and Romans that glass was used as an optical material. They used it in the form of spheres filled with water to make lenses for lighting fires (burning glass), as described by Aristophanes and Pliny, or to make very small, indistinct characters larger and sharper (magnifying glass), according to Seneca. Although the exact date of their invention is not known, glasses are said to have been described in 1299 by Sandro di Popozo in his "Treatise on Family Conduct": "I am so altered by age, that without these lenses called spectacles, I would no longer be able to read or write. They have recently been invented for the benefit of poor old people whose eyesight has become bad". At the time, however, "glasses" were actually made from beryl or quartz. The only lens available at the time, ordinary soda-lime glass, was unable to compensate for optical aberrations. However, it evolved slowly over the centuries. It was first lightened by the use of ashes, which contain manganese dioxide that transforms ferrous oxide (FeO) into ferric oxide (Fe2O3), which is much less colorful. Then, around 1450, Angelo Barovier invented "crystalline glass" ("vetro cristallino") or "Venetian glass" ("cristallo di Venezia"), improving on the previous process by purifying the ashes by leaching to obtain a purer potash. Lime was introduced, first for economic reasons in the 14th century, then as a technical improvement in Bohemia in the 17th century (Bohemian glass), eliminating a very large proportion of impurities. This practice did not arrive in France until the middle of the eighteenth century. It was at this time that the "Manufacture Royale de Glaces de Miroirs" (Compagnie de Saint-Gobain S.A.) 
began to produce glass composed of 74% silica, 17.5% soda and potash, and 8.5% lime. Thus, the first complex optical instruments, such as Galileo's telescope (1609), used ordinary soda-lime glass (the first "crown" glass), composed of sand, soda, potash and sometimes lime, which, although suitable for glazing or bottles, was hardly suitable for optical applications (distortion, blurred effect, irregularities, etc.). In 1674, the British inventor George Ravenscroft, wishing to rival Venetian and Bohemian crystal while being less dependent on imported raw materials, replaced lime with lead(II) oxide to compensate for glass's lack of resistance to humidity, thus inventing lead crystal (the first "flint" glass, named after the high-purity English siliceous stone used), brighter than ordinary glass, composed of silica, lead oxide and potash. Chester Moore Hall (1703-1771), using the two types of glass available (soda-lime "crown" and lead "flint"), invented the first achromatic doublet. His work was taken up by John Dollond in his "Account of some experiments concerning the different refrangibility of light", published in 1758. The real revolution in optical glass came with the development of industrial chemistry, which facilitated the composition of glass, allowing properties such as refractive index and dispersion coefficient to be varied. Between 1880 and 1886, the German chemist Otto Schott, in collaboration with Ernst Abbe, invented new glasses containing oxides such as "anhydrous baryte" (barium oxide BaO) and anhydrous boric acid (B2O3), with which he developed barium crowns, barium flints and borosilicate crowns. Between 1934 and 1956, other oxides were used. Then, by adding phosphates and fluorides, phosphate crowns and fluorine crowns were obtained. As optics became increasingly complex and diverse, manufacturers' catalogs expanded to include 100 to 200 different lenses; glass melts increasingly included special components such as oxides of heavy elements (high refractive index and low dispersion), chalcogenides (sulfide, selenide, telluride), halides such as fluorides (low refractive index and high dispersion) or phosphides, cerium-doped glasses to obtain radiation-resistant lenses, and so on. Since the 1980s, however, glass catalogs have tended to become increasingly limited. Properties. The most important physical properties of glass for optical applications are refractive index and constringency, which are decisive in the design of optical systems, and transmission, glass strength and non-linear effects. Index and constringency. The refractive index indicates the refractive power of a glass, i.e. its ability to deflect light rays to a greater or lesser extent. This deflection can be deduced from Descartes' law. The refractive index is a wavelength-dependent quantity, creating chromatic aberrations in a system by refracting rays more or less according to their wavelength: This is the phenomenon observed when light is decomposed by a prism. Several laws have approximated this relationship to wavelength, notably Cauchy's law and Sellmeier equation. The refractive index of a glass is given for the yellow line known as the "d" line of helium (then noted nd) or for the green "e" line of mercury (then noted ne), depending on usage and the two main standards used. The dependence of refractive index on wavelength requires a measure of the glass's dispersion, i.e. the difference in deviation between two wavelengths. 
A highly dispersive glass will deflect short wavelengths to a great extent, but long wavelengths to a lesser extent. The measure of dispersion is the Abbe number, or constringence. The main dispersion is the difference nF-nC (helium lines) or nF'-nC' (cadmium lines), and constringencies for the same lines as the refractive index are deduced by formula_0 and formula_1. A high Abbe number means that the glass is not very dispersive, and vice versa. Lenses are usually divided into two groups with the generic names of "crown" and "flint", referring respectively to low-dispersion, low-index lenses and high-dispersion, high-index lenses. Typically, the distinction is made at around νd=50: lenses below this value are flints, the others are crowns. These two parameters alone are needed to differentiate between lenses: two lenses with equal nd and νd are identical. Glasses are represented on the so-called Abbe diagram, a graph with abscissa νd and ordinate nd, where each glass is denoted by a point on the graph. Oxide glasses fall into a range of nd from 1.4 to 2.0 and νd from 20 to 90, with SiO2 being the oxide glass with the highest constringency and lowest index. Fluoride glasses can go up to νd>100 and nd<1.4, BeF2 being the fluoride glass at the highest constringency and lowest index. Chalcogenide glasses have indexes exceeding 2, a large proportion of which cannot be shown on an Abbe diagram due to their absorption in visible wavelengths preventing a relevant νd measurement. For optical materials that are opaque in the visible range, constringence is measured at longer wavelengths. This classification also has its limitations when it comes to active optical glasses (birefringence, acousto-optic effect, and non-linear effects), optical filters or graded-index lenses, so we restrict the term "classical optical glass" to the aforementioned glasses, i.e. those with limited index and dispersion, which can be described essentially by their dispersive behavior and refractive index. Transmission and absorption. Another very important characteristic of optical glass is its absorption and transmission behavior. The use to which the future lens will be put determines its behavior: filters that absorb in certain spectral bands, lenses that are highly transparent in the visible, ultraviolet or infrared, resistance to radiation. As a general rule, the transmittance of the glass is given by the manufacturer, noted τi or Ti, a value that depends on the thickness of the material and whose measurement makes it possible to take into account the loss of transmission due to absorption and scattering by internal defects in the glass. Since the transmittance term takes into account the refractive index via the Fresnel coefficient formula_2, it is also dependent on the wavelength and thickness of the sample via the formula formula_3 where formula_4 is the transmittance and formula_5 the thickness. Transmittance windows are of particular interest when it comes to choosing the right glass for applications such as far-infrared or far-ultraviolet. These windows are the result of the absorption of the materials making up the glass, which increases in the infrared and ultraviolet. Absorption in these two wavelength ranges is due to distinct phenomena, and can evolve differently depending on environmental conditions. Ultraviolet absorption. 
In the ultraviolet, or "UV", the drop in transmission is due to the electronic transitions of the elements making up the glass: valence electrons absorb wavelengths whose energy corresponds to their band gap. According to solid-state band theory, electrons can only take on certain specific energy values in particular energy levels, but with sufficient energy, an electron can move from one of these levels to another. Light waves are charged with an energy "hν", inversely proportional to wavelength ("ν=c/λ"), which can enable an electron to pass from one level to another if this energy is sufficient and therefore if the wavelength is short enough. A silica glass absorbs wavelengths below 160 nm, a glass based on boron trioxide (B2O3) absorbs below 172 nm, a phosphorus pentoxide glass (P2O5) absorbs below 145 nm. There are two types of oxygen in oxide glasses: bridging and non-bridging (possessing an excess electron charge), detectable by photoelectron spectroscopy. Non-bridging oxygen possesses electrons whose kinetic energy after release by monochromatic X-rays is higher than that of bridging oxygen. Bonds between non-bridging oxygens and cations are generally ionic. These characteristics give the glass its energetic band properties, making it more or less effective at transmitting radiation. Depending on the intensity of the bonds with the cations in the glass, the transmission window varies: in the presence of alkali metals, electrons can move from one band to the other more easily, as they are less bound to the non-bridging oxygens. On the other hand, the introduction of aluminium (Al2O3) to replace silica will increase the glass's transmission window, as the tetrahedral configuration of alumina reduces the proportion of non-contacting oxygens and therefore of electrons able to move from the valence band to the conduction band. As a result, glasses containing heavy metals (such as Ti+ or Pb2+) tend to transmit less well than others, since the oxygen will tend to share its electrons with the cation and thus reduce the band gap. The disadvantage is that the addition of these metals results in higher refractive indices. Depending on the heavy metal used, the drop in UV transmission will be more or less rapid, so lead lenses transmit better than niobium or titanium lenses. Attention to crucible and furnace materials is therefore very important, as these can also influence the UV transmission window. Platinum, for example, is widely used in glass melting, but inclusions of platinum particles in the glass paste can cause undesirable transmission losses due to impurities. Another source of variation in UV transmission loss is ambient temperature: the higher the temperature of the glass, the more the UV drop will shift towards longer wavelengths, due to the material's reduced band gap. Solarization, which is the exposure of glass (or paint, for that matter) to electromagnetic radiation, can "yellow" glass depending on the wavelength and intensity of the radiation. Lenses with the best UV transmission are the most affected by solarization, which modifies their transmission window. Lenses can be doped with cerium dioxide (CeO2), which shifts the transmission drop to longer wavelengths and stabilizes it. This doping is one of the processes used to create anti-radiation glasses, since a glass doped in this way has the particular ability to protect against the most energetic types of radiation, such as X-rays and gamma rays. Infrared absorption. 
In infrared, or "IR", the physical phenomena leading to a drop in transmission are different. When a molecule receives a given amount of energy, it begins to vibrate in different modes: fundamental, first harmonic, second harmonic, etc., corresponding to periodic movements of the atoms in the molecule; each frequency associated with the energy of the molecule's vibration mode is absorbed. In silica glass, the Si-O bond has two main modes of vibration, rotation and elongation. Since the frequency of elongation is 0.34 × 10^14 Hz, absorption will take place at 8.8 μm (fundamental), 4.4 μm (harmonic 1), 2.9 μm (harmonic 2), etc. As the absorption due to this vibration is very strong, silica becomes opaque from the first harmonic onwards. Most quartz glasses even show a marked drop in transmission at harmonic 2. Chalcogenide glasses are used to reduce the frequency of molecular vibrations: as sulfur or selenium are heavier, their vibration modes are weaker, and their transmission is, therefore, better in the infrared. However, this comes at the price of visible transmission, since chalcogenide glasses are opaque in the visible. Another solution is to produce halide glasses, in particular fluoride glasses. As fluorine is highly electronegative, the bonds between anions and cations are weakened, and vibrations are therefore weaker. Glass humidity, i.e. the presence of water in the material, has a strong influence on the transmission curve of glasses in the 2.9 μm to 4.2 μm region. Water takes the form of OH- groups, whose O-H bond vibrates at a frequency of around 90 THz, equivalent to an absorption of wavelengths from 2.9 μm to 3.6 μm. The higher the humidity of the sample, the greater the local drop in transmission, with very high humidity even causing absorption at the harmonics of the O-H bond vibration, at around 200 nm. Emission and non-linear phenomena. Lasers are often used at very high illuminance levels. It has been found that in this high illumination range, the refractive index follows a law that deviates from the linear domain and becomes proportional to the intensity of the luminous flux: formula_6 where formula_7 is the refractive index of the material, formula_8 the wavelength, formula_9 the intensity of the light beam, n0(λ) the refractive index at low power, and formula_10 the non-linear index coefficient. For silica, for example, formula_10 is 3.2 × 10^-20 m^2 W^-1 for formula_8=1,060 nm. The most dispersive glasses tend to have the highest non-linear refractive indices, probably due to the metal ions present in the glass. Above the TW mm^-2 level, the fluence (or flux) is sufficient to create higher-order non-linear optical phenomena such as multiphoton absorption and avalanche photo-ionization. The first phenomenon makes the material absorbent through the addition of two photons, which release an electron. The second phenomenon is the acceleration of an electron released by the electromagnetic field, the electron's kinetic energy being transmitted to other neighboring electrons. These two combined effects can cause damage to glass by destroying the vitreous lattice (freed electrons give energy to other electrons which are more easily freed, and the lattice bonds are weakened by electron depletion). The material may be vaporized at sufficient speed that the phonons cannot transmit the energy in the form of heat to the rest of the glass. In 1988, an experiment showed that silica, whose lattice is isotropic, is capable of emitting green radiation when crossed by powerful infrared radiation. 
The generation of a second harmonic in this setting is atypical, but could be explained by the presence of F-center. Fluorescence can appear in optical glasses. Fluorescence is the re-emission of higher-wavelength radiation from an illuminated material. The energy of the incident light excites the material's electrons, which then de-excite and return to their ground state, emitting a photon with a longer wavelength than the original one. This phenomenon is particularly troublesome in applications where the presence of stray light or light of a different wavelength from the reference wavelength poses a problem. In lasers, for example, it's important to agree on a single, precise spectral line. Causes of fluorescence include rare-earth ions, impurities and color centers. Fabrication. The basic materials used to manufacture optical lenses must be particularly pure, as any inclusion or impurity could not only degrade performance but also cause considerable damage to the lens (breakage, darkening, tinting, etc.). For example, the sand used to manufacture silica-based glass must contain an extremely low proportion of ferric oxide (Fe2O3) (10 ppm maximum) and even lower proportions of other oxides and elements (cobalt, copper, nickel, etc.). There are very few geographical sites where the sands are sufficiently pure for these applications. Most glass is melted in a "pot furnace", which is used to melt limited quantities of glass, while certain mass-produced optical glasses (such as borosilicate glass) are melted in "tank furnaces" for continuous glass production. The glassmaking process comprises a number of stages, beginning with the melting of the glass paste, followed by refining and then tempering or annealing, which are two different finishes. Finally, if required, the glass can be polished, particularly in the case of mirrors and lenses, for any application where the objective is high image quality. The materials are placed together in the furnace and gradually heated to their melting point. Chemical reactions of composition or decomposition of molecules take place, resulting in significant off-gassing during this phase. Hydrates, carbonates, nitrates and sulfates recompose to form the glass paste with the vitrifying elements, giving rise to gases such as water vapor, carbon dioxide, sulfur dioxide and others. For example, 1 L of soda-lime glass paste releases around 1,440 L of various gases when heated to 100 °C, of which 70% is carbon dioxide. Refining is an essential stage in the quality of optical lenses, since it involves homogenizing the glass so that the components are evenly distributed throughout the paste and the gas is fully released. Homogenization avoids the problem of streaks appearing in the lens. Chemical agents are used to release the gases, in particular arsenic pentoxide (As2O5), which decomposes into arsenic trioxide (As2O3), releasing oxygen which combines with the other elements and gases released, causing the bubbles remaining in the paste to rise. Defects such as bubbles, streaks, inclusions and discolorations can appear as a result of the glass melting process. Bubbles result from insufficient refining, streaks from glass heterogeneity (the glass has a different refractive index locally, causing distortion), inclusions may come from glass that has crystallized locally or from fragments of the vessels used for melting, glass discoloration originates from insufficient purity of the mixed products. 
The tempering process is reserved for glass whose structure is to be hardened. Glass used for optics is often fragile and thin, so it is not tempered. Optical fibers are tempered after drawing, to give them sufficient mechanical strength. Annealing consists in slowly cooling a glass in a controlled manner from a certain temperature at which it has begun to solidify (around 1,000 °C for silica glass or 450 °C for soda-lime glass, for example). Annealing is necessary to eliminate internal stresses in the material that may have arisen during melting (impurities, streaks, bubbles, etc.) and to prevent uneven cooling in a material, with internal parts taking longer to heat and cool. The annealing time ranges from a hundred to a thousand hours, depending on the quantity of glass to be annealed and its composition. Types of glass. The progressive development of the optical glass industry has led to the creation of new lens families. Lenses can be differentiated by their main components, which give them their mechanical, thermal and optical characteristics. In addition to the two main glass groups, flint and crown, based essentially on SiO2 silica or oxides, other groups exist, such as halide glasses and chalcogenide glasses (excluding oxygen). The following tables summarize most glass families and their composition. Each composition has its own particular properties and defects. Increasing the index often requires sacrificing transmission in the ultraviolet, and although research since the early days of glassmaking has considerably improved this state of affairs, it is not possible to obtain highly dispersive, low-refractive glasses, or low-dispersive, high-refractive glasses. Oxide glass. Flints and crowns are glasses composed of oxides, often SiO2 or TiO2 titanates. Their index ranges from 1.4 to 2.4. This large group can be identified by its characteristic transmission profile ranging from 200 nm to 2.5 μm, due to the high gap energies and photon absorption peaks of hydroxyl groups in the infrared. A variety of oxides are used, the most common being silica-based glasses, but other molecules can also be used to form glassy systems, such as : Phosphate glasses have lower melting temperatures and are more viscous than borosilicate glasses, but they are less resistant to chemical attack and less durable. Glasses based on a phosphate, borate or borophosphate vitreous system are good candidates for athermalization, since their formula_11, i.e. the variation in refractive index with temperature, is generally negative. Athermalization consists in compensating for thermal expansion of the material by changing its index. The family of phosphate glasses is particularly well-suited to these possibilities. Crown glass family. "Borosilicate crowns" are the most widely produced glass family, and the ones with the best control over final homogeneity. This family includes BK7, the glass widely used in optics. Alkali oxides and boron trioxide B2O3 make it easier to melt silicon dioxide SiO2, which requires very high temperatures to liquefy. "Barium crowns" and "dense crowns" were developed for barium's ability to significantly increase the refractive index without significantly reducing the glass's constringency or ultraviolet transmission, which lead oxide tends to do. Some lenses use a mixture of zinc oxide ZnO and barium oxide BaO. 
"Crowns, zinc crowns" and "crown flints" are small families of glasses containing a wide variety of oxides (CaO or BaO, ZnO and PbO respectively) to increase hardness or durability. "Phosphate crowns" are characterized by their relatively low dispersion and medium index, with generally higher dispersion in the blue, making them useful for correcting chromatism in optical combinations. "Fluorine crowns" use fluorine's properties to reduce the dispersion and index of the glass: fluorine's high electronegativity and the smaller radius of fluorine ions are responsible for this. As with phosphate crowns, these lenses are particularly suitable for correcting chromatic aberration, thanks to their partial dispersion in the blue. Flint glass family. Dense or light "flints" are long-established families, such as borosilicate crowns, and are used as optical glass as well as "crystal" for everyday glassmaking. Their main properties derive from the proportion of PbO introduced. PbO increases refractive index while decreasing Abbe number, and also affects partial dispersion. Lead oxide will also increase the density of the glass and reduce its resistance to chemical attack. The ability of the PbO-SiO2 couple to vitrify enables PbO proportions of over 70 mol per 100 to be achieved, which would not be possible if PbO were merely a chemical mesh modifier. Indeed, a high PbO concentration produces tetrahedral PbO4, which can form a glass-like mesh. The inclusion of PbO has several drawbacks. Firstly, the glasses are slightly yellow due to the high concentration of lead oxide. Secondly, inclusions and impurities such as iron(III) oxide Fe2O3 or chromium(III) oxide Cr2O3 degrade glass transmission to a far greater extent than in soda, potash or lime glasses. Thirdly, a chemical equilibrium between Pb2+ and Pb4+ is established and, in the presence of oxygen-saturated glass, leads to the creation of lead dioxide PbO2, a brown compound that darkens glass. However, this latter coloration can be reversed by a redox transformation of the glass paste, since it does not originate from impurities. To overcome these problems, titanium dioxide TiO2 and zirconium dioxide ZrO2 can be added, increasing the glass's chemical stability and preserving its ultraviolet transmission. "Barium flints" crystallize less easily than other glass families due to the presence of lead(II) oxide (PbO) in the mixture. The higher the proportion of PbO, the higher the refractive index and the lower the melting temperature, so these are glasses which, although very useful for their high indexes, present complications during melting. The BaO in these glasses is sometimes replaced by ZnO. "Lanthanum flints" and "lanthanum crowns" are extended families achieving high refractive indices with medium dispersion. The use of SiO2 in the paste creates crystallization instabilities, a pitfall avoided by replacing silica with boron trioxide B2O3 and divalent oxides. To further increase their refractive index, the use of multiple oxides has become widespread, including gadolinium, yttrium, titanium, niobium, tantalum, tungsten and zirconium oxides (Gd2O3, Y2O3, TiO2, Nb2O5, Ta2O5, WO3 and ZrO2). Short flints are a family distinguished not by their index or constringency, but by their partial dispersion. Named for their narrow blue spectrum, short flints are also an asset in optical system design for their low blue impact. They are obtained by replacing the lead oxide in flint glasses with Sb2O3 antimony oxide. Halogenide glass. 
The first fluoride glasses appeared around 1970 to meet a growing need for mid-infrared transmitting glasses. These glasses are composed by replacing the oxygen in oxide glasses with a halogen, fluorine or chlorine, and more rarely with heavy halogens. Their transmission covers the visible and mid-infrared range, from 200 nm to 7 μm, due to the rather high band gap (on average, a fluoride glass has its transmission dip at around 250 nm, due to its band gap of around 5 eV) and the low-frequency vibrations of the heavy-metal fluoride bonds; silica absorption results from vibrations of Si-O bonds at 1.1 × 103 cm−1, whereas fluorozirconate absorption will result from vibrations of Zr-F bonds at a frequency of 0.58 × 103 cm−1, which is why oxide and halide glasses behave so differently in the infrared. By using rare earths instead of heavy metals, we obtain a rare-earth fluoride glass that transmits even further into the infrared. Another way of transmitting further into the infrared is to make chloride glass instead of fluoride glass, but this reduces the stability of the glass. A type of glass recently developed at the University of Rennes uses a tellurium halide. As the energy gap in the visible is greater, the drop in transmission in the visible advances to 700 nm-1.5 μm, while its transmission improves in the far infrared. As the refractive index of such a glass is very high, it behaves like a chalcogenide glass, with a strong reflection that reduces its transmission. Fluoride lenses are also useful for their near-UV transmission. Near-UV transmitting glasses are few in number, but include lithium fluoride, calcium fluoride and magnesium fluoride glasses. Chalcogenide glass. Chalcogenide glasses have been specifically developed since the 1980s to improve the infrared transmission of optical glasses. Oxygen is replaced by another chalcogen (sulfur, selenium, tellurium), and silicon is replaced by heavier metals such as germanium, arsenic, antimony and others. Their index is greater than 2, and they appear black due to their weak gap and multiple absorption bands in the visible range. The transmission of these glasses ranges from 1 μm to 12 μm, but is lower than that of oxide or halide glasses due to their very high refractive index, which results in a high reflection coefficient. This group can be divided into two families: glasses that can be doped with rare-earth ions or not. The former are mainly composed of germanium and gallium sulfides and selenides, while the latter, although not doped, offer the best transmission performance in the far infrared. Classical glass designations. The field of optical lenses encompasses a multitude of materials with extremely diverse properties and equally diverse applications. Nevertheless, it is generally accepted that optical lenses fall into several major families. A large proportion of optical lenses are so-called "classic" lenses, designed for applications such as imaging and filtering. Smaller families of lenses are also part of the optical glass family, such as optical fibers, or so-called "active" lenses for applications in nonlinear optics or acousto-optics, for example. Special glasses. Fused quartz. Quartz glass is distinguished from other optical glass by the source of the material used in its manufacture. Many manufacturers produce quartz glass, but the differences lie mainly in the nature of the impurities and the water content. 
These differences give each quartz glass its own special characteristics, such as transmission and resistance to chemical attack. Quartz glass is made from a single material: silica. Its main properties are low expansion (α≈0.5 × 10−6 K−1), high thermal stability (up to 1,000 K) and transmission in the ultraviolet and infrared, which can be adapted as required. Optical filter. Filters are glasses designed to transmit only certain parts of the spectrum of incident light. A filter can be colorless, a simple optical glass whose transmission drop serves to cut off wavelengths beyond a certain value, or colored in various ways, by the introduction of heavy metal or rare-earth ions, by molecular coloration or even by a colloidal suspension. Filter glasses show noticeable photoluminescence. Optical filters in colored glass take the form of a blade with a parallel face and a thickness that depends on the transmission qualities required; like electronic filters, they are referred to as high-pass, low-pass, band-pass or notch filters. Laser lenses. Several types of glass are used for lasers, including Li2O-CaO-SiO2 glasses for their resistance to thermal shock, and potassium-barium-phosphate glasses, whose effective cross-section is large enough for stimulated emission. The addition of sodium, lithium or aluminum oxides drastically reduces distortion. These glasses are athermalized. In addition to these two types of glass, lithium-aluminum phosphates can be used. These are treated by ion exchange and are particularly resistant, making them ideal for applications where the average laser power is very high (e.g. femtosecond pulsed lasers), or fluorophosphates, which have a slightly non-linear index. These Nd3+-doped glasses are used as active laser medium. Gradient index lenses. Gradient-index lenses exploit the special properties of light propagation in a variable-index medium. In 1854, James Clerk Maxwell invented the "Fisheye lens" in response to a problem from the "Irish Academy" asking for the refractive index of an image-perfect material. This theoretical lens, spherical in shape, has an index of the form formula_12 where formula_7 is the refractive index of the glass at a point on the spherical lens and formula_13 the radius of this lens; it enables any point on its surface to be imaged perfectly at another point diametrically opposite. A generalization of this spherical lens was proposed in 1966 by Rudolf Karl Lüneburg , using a different index profile. In 1905, Robert Williams Wood developed a lens consisting of a blade with a parallel face whose index varies parabolically, with the extremum of the index lying on the axis of revolution of the component. The Wood lens can be used to focus or diverge rays, just like a conventional lens. Since around 1970, glass manufacturing technology has made it possible to develop, qualify and machine graded-index glasses. Two main uses for graded-index glasses are in telecommunications, with optical fibers, and in imaging, with lenses machined from graded-index material. Gradients can also be divided into three types of profile: spherical gradients, cylindrical gradients and axial gradients. There are several techniques for producing graded-index glass: neutron bombardment, ion filling or glass layer superposition. Depending on the technique used, the gradient will be stronger or weaker, and its profile more or less controlled. Injection or ionic filling methods can produce gradients of 10 to 50 mm, with an index amplitude of 0.04. 
Neutron bombardment and chemical vapor deposition methods produce shallow gradients (approx. 100 μm) of low amplitude. For larger gradients, there is partial lens polymerization of a monomer reacting to UV exposure (gradients of around one hundred millimeters for an index amplitude of 0, 01), or superimposing and then partially melting layers of borosilicate or flint glass (lanthanum-containing glasses are not suitable for this technique due to their recrystallization problems and thermal instability). A final technique consists of melting and then rotating the paste so that a material gradient, and therefore an index gradient, is established in the glass. Doped lenses. Certain extreme environments are not conducive to the use of conventional lenses; when the system is exposed to far-field UV radiation (X, gamma, etc.) or particle fluxes such as alpha or beta, a drop in lens transmission is observed, due to discoloration of the material. Generally speaking, electromagnetic radiation causes a drop in blue transmission, a phenomenon known as solarization. As this is detrimental to system performance, it was necessary to develop new types of radiation-resistant lenses. Radiation has a variety of effects: ionization, electron or hole capture, fission of Si-O bonds, etc. These effects can easily be amplified by the presence of impurities that change the valence of molecules or concentrate radiation, causing local degradation of the glass. In order to reduce the drop in glass transmission and performance, they are doped with CeO2, which slightly shifts the glass's transmission drop, but makes it virtually impossible to feel the effects of radiation on the glass's optical performance. Other glasses. In addition to the lenses already mentioned, all of which are specific in their design or use, there are also special glass-like materials. These include "athermalized lenses", which are produced in such a way that the optical path through the lens is independent of temperature. Note that the difference in optical path as a function of temperature is formula_14 determined by formula_15 the thickness of the glass, formula_16 the coefficient of thermal expansion, formula_7 the index, formula_17 the temperature and formula_18 the thermo-optical coefficient. Athermalized glasses can be found in the fluorinated crowns, phosphate crowns, dense crowns, barium and titanium flints and other families. "Glass-ceramics" or ceramic glasses are glasses in which the crystal-forming process has been stimulated over a long, complex heating period. The addition of crystals to initiate crystallization results in a glass with a crystallized proportion ranging from 50 to 90%. Depending on the crystals incorporated and the proportion of glass in the ceramic glass, the properties will differ. Generally speaking, ceramic glass is highly resistant to thermal shock and has near-zero thermal expansion (Schott AG's Zerodur, for example, was used specifically for the Very Large Telescope for these thermal properties). Glass quality. There are numerous standards for optical components, the aim of which is to unify the notations and tolerances applied to components, and to define optical quality standards. There are two main standards: MIL (American military standard) and ISO (international standard). In France, the AFNOR standard is very similar to the ISO standard, as the "Union de normalisation de la mécanique" is keen to conform as closely as possible to ISO publications. 
The MIL and ISO standards cover a very wide field, and both standardize lenses, their defects, surface treatments, test methods and schematic representations. Manufacturers. There are a number of manufacturers of special lenses for the various fields of optics, whose catalogs offer a wide choice of optical lenses and special lenses, sometimes in addition to filters and active lenses and crystals. Since 1980, however, catalogs have tended to reduce the choice, although optical design tools continue to include catalogs that no longer exist. Manufacturers include the following: In addition to catalogs of optical glass and various materials, other manufacturers also sell active or special optical glass. Examples include graded-index glass, used to focus light beams in optical fibers; optical fibers, which in a significant proportion of cases are spun optical glass wires; and optical filters. These products can be found in the catalogs of a larger number of manufacturers, a non-exhaustive but relevant list of which can be found in the same catalogs selected by optical design software: Applications. Optical glasses are mainly used in many optical instruments, as lenses or mirrors. These include, but are not limited to, telescopes, microscopes, photographic lenses and viewfinder lenses. Other possible optical systems include collimators and eyepieces. Optical lenses, especially ophthalmic lenses, are used for prescription glasses. Glasses can also be made of photochromic glass, whose tint changes according to radiation. Optical glasses are used for other, much more diverse and specialized applications, such as high-energy particle detectors (glasses detecting Cherenkov radiation, scintillation effects, etc.) and nuclear applications, such as on-board optics in systems subjected to radiation, for example. Optical glass can be spun to form an optical fiber or form graded-index lenses (SELFOC lens or Geltech lenses) for injection into these same fibers. Optical glass in one form or another, doped or undoped, can be used as an amplifying medium for lasers. Last but not least, microlithography, using ultraviolet-transmitting glasses such as Schott's FK5HT (Flint crown), LF5HT (Flint light) or LLF1HT (Flint extra light), named "i-line" glasses by the company after the ray i of mercury. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
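As a short numerical illustration of the refractive index and constringency definitions given in the Properties section above, the following sketch computes the Abbe number from the d-, F- and C-line indices and applies the approximate crown/flint boundary near νd = 50. The index values are rounded, catalog-style figures for a BK7-like borosilicate crown and an SF-type dense flint, quoted here only as plausible examples rather than authoritative data.

def abbe_number(n_d, n_F, n_C):
    # nu_d = (n_d - 1) / (n_F - n_C); the denominator is the main dispersion.
    return (n_d - 1.0) / (n_F - n_C)

def classify(nu_d, threshold=50.0):
    # Rough crown/flint split at nu_d around 50, as described above.
    return "crown" if nu_d >= threshold else "flint"

glasses = {
    # name: (n_d, n_F, n_C); approximate, illustrative values only
    "borosilicate crown": (1.5168, 1.5224, 1.5143),
    "dense flint": (1.7283, 1.7432, 1.7177),
}

for name, (n_d, n_F, n_C) in glasses.items():
    nu = abbe_number(n_d, n_F, n_C)
    print(f"{name}: nu_d = {nu:.1f} -> {classify(nu)}")

With these numbers the crown comes out near νd ≈ 64 and the flint near νd ≈ 29, on either side of the boundary mentioned above.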
[ { "math_id": 0, "text": "\\nu_d=\\frac{n_d-1}{n_F-n_C}" }, { "math_id": 1, "text": "\\nu_e=\\frac{n_e-1}{n_F'-n_C'}" }, { "math_id": 2, "text": "\\frac{2n}{n^2+1}" }, { "math_id": 3, "text": "\\tau _2 = \\tau _1^{\\frac{d_1}{d_2}}" }, { "math_id": 4, "text": "\\tau" }, { "math_id": 5, "text": "d" }, { "math_id": 6, "text": " n(\\lambda,I)=n_0(\\lambda)+\\gamma I " }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "\\lambda" }, { "math_id": 9, "text": "I" }, { "math_id": 10, "text": "\\gamma" }, { "math_id": 11, "text": "\\frac{dn}{dT}" }, { "math_id": 12, "text": "n(r) = \\frac{2}{1+(r/R)^2}" }, { "math_id": 13, "text": "R" }, { "math_id": 14, "text": "\\Delta L=s(\\alpha(n-1)+dn/dt)\\Delta T=sGT" }, { "math_id": 15, "text": "s" }, { "math_id": 16, "text": "\\alpha" }, { "math_id": 17, "text": "T" }, { "math_id": 18, "text": "G" } ]
https://en.wikipedia.org/wiki?curid=74756339
74760198
Nucleolus (game theory)
Solution in cooperative games In cooperative game theory, the nucleolus of a cooperative game is the solution (i.e., allocation of payments to players) that maximizes the smallest excess of a coalition (where the excess is the difference between the payment given to the coalition and the value the coalition could get by deviating). Subject to that, the nucleolus maximizes the second-smallest excess; and so on, in the leximin order. The nucleolus was introduced by David Schmeidler. Background. In a cooperative game, there is a set "N" of "players", who can cooperate and form "coalitions". Each coalition "S" (subset of players) has a "value", which is the profit that "S" can make if they cooperate on their own, ignoring the other players in "N". The players opt to form the "grand coalition", a coalition containing all players in "N". The question then arises, how should the value of the grand coalition be allocated among the players? Each such allocation of value is called a "solution" or a "payoff vector". The "excess" of any coalition "S" from a given payoff vector "x" is the difference between the total payoff to members of "S" under "x", and the value of "S". Note that the excess can be positive, negative or zero. Intuitively, a solution in which all coalitions have a higher excess is more stable, since coalitions are less incentivized to deviate from the grand coalition. The nucleolus is a single solution, for which the vector of excesses of all coalitions is largest in the leximin order. Intuitively, the nucleolus maximizes the stability of the solution by minimizing the incentives of coalitions to deviate. Definitions. A "cooperative game" is represented by a value function formula_0, which assigns a value to each possible coalition (each subset of players). A "solution" to a cooperative game is a vector formula_1, which assigns a payoff to each player in "N". A solution should satisfy the basic requirement of "efficiency": the sum of payoffs should exactly equal "v"("N"), the value of the grand coalition. A payoff solution satisfying this condition is called an "imputation". The "excess" of a payoff vector formula_2 for a coalition formula_3 is the quantity formula_4, that is, the gain (which may be negative) that the players in coalition formula_5 obtain by remaining in the grand coalition formula_6 rather than deviating. Let formula_7 be the vector of excesses of formula_2, arranged in non-decreasing order: formula_8. For two payoff vectors formula_9, we say formula_10 is "lexicographically smaller" than formula_11 if for some index formula_12, we have formula_13 and formula_14. The "nucleolus" of formula_15 is the lexicographically largest imputation, based on this ordering. In other words, the nucleolus is an imputation whose vector of excesses is largest in the leximin order. It can be represented by the following optimization problem: formula_16 where "k" = 2^"n" is the number of possible coalitions and "n" is the number of players. Computation. General games. A general cooperative game among "n" players is characterized by 2^"n" values, one value for each possible coalition. The nucleolus of a general game can be computed by any algorithm for lexicographic max-min optimization. These algorithms usually require solving linear programs with one constraint for each objective value, plus some additional constraints. Therefore, the number of constraints is O(2^"n"). The number of iterations required by the more efficient algorithms is at most "n", so the run-time is O("n" · 2^"n"). 
If the cooperative game is given by enumerating all coalitions' values, then the input size is 2^"n", and so the above algorithms run in time polynomial in the input size. But when "n" is large, even representing such a game is computationally intensive, and there is more interest in classes of cooperative games that have a compact representation. Weighted voting games. In a "weighted voting game", each player has a weight. The weight of a coalition is the sum of weights of its members. A coalition can force a decision if its total weight is above a certain threshold. Therefore, the "value" of a coalition is 1 if its total weight is above the threshold, and 0 otherwise. A weighted voting game can be represented by only "n"+1 values: a weight for each player, and the threshold. In a weighted voting game, the core can be computed in time polynomial in "n". In contrast, computing the least-core is NP-hard, but it has a pseudopolynomial-time algorithm, that is, an algorithm polynomial in "n" and the maximum weight "W". Similarly, computing the nucleolus is NP-hard, but it has a pseudopolynomial-time algorithm. The proof relies on solving successive exponential-sized linear programs, by constructing dynamic-programming based separation oracles. Minimum-cost spanning tree games. In a "minimum-cost spanning-tree game," each player is a node in a complete graph. The graph contains an additional node "s" (the "supply node"). Each edge in the graph has a cost. The cost of each coalition "S" is the minimum cost of a spanning tree connecting all nodes in "S" to the supply node "s". The "value" of "S" is minus the cost of "S". Thus, an MCST game can be represented by O("n"^2) values. Computing the nucleolus on general MCST games is NP-hard, but it can be computed in polynomial time if the underlying network is a tree. Weighted cooperative matching games. In weighted cooperative matching games, the nucleolus can be computed in polynomial time. Implicitly-given value function. In some games, the value of each coalition is not given explicitly, but it can be computed by solving a set of mathematical programming problems. Using a constraint generation approach, it is possible to compute only the values of coalitions that are required for the nucleolus. This makes it possible to compute the nucleolus efficiently in practice, when there are at most 20 players. Potters, Reijnierse and Ansing present a fast algorithm for computing the nucleolus using the prolonged simplex algorithm. Using the prekernel. If the prekernel of a cooperative game contains exactly one core vector, then the nucleolus can be computed efficiently. The algorithm is based on the ellipsoid method and on a scheme of Maschler for approximating the prekernel. Mistakes in computing the nucleolus. Guajardo and Jornsten have found mistakes in the application of linear programming and duality to computing the nucleolus. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
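To connect the definition above with the linear programs discussed in this section, the following sketch solves the first of the successive linear programs used for the nucleolus: it maximizes the minimum excess over all proper nonempty coalitions (the least-core LP) for a small three-player example game, using scipy. In general the full nucleolus is obtained by iterating this LP while fixing the excesses of coalitions that can no longer be improved; in the symmetric example below the first stage already has a unique optimal imputation, (1/3, 1/3, 1/3), which is therefore the nucleolus. The example game and all names are illustrative choices.

from itertools import combinations
import numpy as np
from scipy.optimize import linprog

players = [0, 1, 2]

def v(S):
    # Illustrative game: any coalition of two or more players is worth 1.
    return 1.0 if len(S) >= 2 else 0.0

# All proper, nonempty coalitions.
coalitions = [S for r in range(1, len(players)) for S in combinations(players, r)]

# Variables: x_0, x_1, x_2, eps.  Maximize eps  <=>  minimize -eps.
c = np.array([0.0, 0.0, 0.0, -1.0])

# Excess constraints: sum_{i in S} x_i - v(S) >= eps, rewritten as -sum x_i + eps <= -v(S).
A_ub = np.zeros((len(coalitions), 4))
b_ub = np.zeros(len(coalitions))
for row, S in enumerate(coalitions):
    for i in S:
        A_ub[row, i] = -1.0
    A_ub[row, 3] = 1.0
    b_ub[row] = -v(S)

# Efficiency: the payoffs sum to v(N).
A_eq = np.array([[1.0, 1.0, 1.0, 0.0]])
b_eq = np.array([v(tuple(players))])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(None, None)] * 4)
print(res.x[:3], res.x[3])   # payoffs close to (1/3, 1/3, 1/3), minimum excess -1/3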
[ { "math_id": 0, "text": " v : 2^N \\to \\mathbb{R} " }, { "math_id": 1, "text": " x \\in \\mathbb{R}^N " }, { "math_id": 2, "text": " x " }, { "math_id": 3, "text": " S \\subseteq N " }, { "math_id": 4, "text": " \\mathrm{excess}(x,S) := \\sum_{ i \\in S } x_i - v(S) " }, { "math_id": 5, "text": " S " }, { "math_id": 6, "text": " N " }, { "math_id": 7, "text": " \\theta(x) \\in \\mathbb{R}^{ 2^N } " }, { "math_id": 8, "text": " \\theta_i(x) \\leq \\theta_j(x), \\forall~ i < j " }, { "math_id": 9, "text": " x, y " }, { "math_id": 10, "text": " \\theta(x) " }, { "math_id": 11, "text": " \\theta(y) " }, { "math_id": 12, "text": " k " }, { "math_id": 13, "text": " \\theta_i(x) = \\theta_i(y), \\forall~ i < k " }, { "math_id": 14, "text": " \\theta_k(x) < \\theta_k(y) " }, { "math_id": 15, "text": " v " }, { "math_id": 16, "text": "\n\\begin{align}\n\\operatorname{lex}\n\\max \\min &&\n\\mathrm{excess}(x,S_1), \\mathrm{excess}(x,S_2), \\ldots, \\mathrm{excess}(x,S_k)\n\\\\\n\\text{subject to} && x ~ \\text{is an imputation}\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=74760198
7479116
Nonobtuse mesh
Polygon mesh composed of triangles with all angles ≤ 90° In computer graphics, a nonobtuse triangle mesh is a polygon mesh composed of a set of triangles in which no angle is obtuse, "i.e." greater than 90°. If each (triangle) face angle is strictly less than 90°, then the triangle mesh is said to be acute. Every polygon with formula_0 sides has a nonobtuse triangulation with formula_1 triangles (expressed in big O notation), allowing some triangle vertices to be added to the sides and interior of the polygon. These nonobtuse triangulations can be further refined to produce acute triangulations with formula_1 triangles. Nonobtuse meshes avoid certain problems of nonconvergence or of convergence to the wrong numerical solution as demonstrated by the Schwarz lantern. The immediate benefits of a nonobtuse or acute mesh include more efficient and more accurate geodesic computation using fast marching, and guaranteed validity for planar mesh embeddings via discrete harmonic maps. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
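As a small computational illustration of the definition (a sketch with hypothetical function names, not tied to any particular mesh library), a triangle can be tested for the nonobtuse and acute conditions by checking the sign of the dot product of the two edge vectors at each corner; applying the test to every face then checks a whole triangle mesh.

import numpy as np

def corner_dots(a, b, c):
    # Dot products of the two edge vectors at each corner of the triangle (a, b, c).
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    return [np.dot(b - a, c - a), np.dot(a - b, c - b), np.dot(a - c, b - c)]

def is_nonobtuse(a, b, c):
    # All angles <= 90 degrees: no corner has a negative dot product.
    return all(d >= 0.0 for d in corner_dots(a, b, c))

def is_acute(a, b, c):
    # All angles strictly < 90 degrees: every corner dot product is positive.
    return all(d > 0.0 for d in corner_dots(a, b, c))

# A right triangle is nonobtuse but not acute; flattening it makes it obtuse.
print(is_nonobtuse((0, 0), (1, 0), (0, 1)), is_acute((0, 0), (1, 0), (0, 1)))   # True False
print(is_nonobtuse((0, 0), (3, 0), (1, 1)))                                     # False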
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "O(n)" } ]
https://en.wikipedia.org/wiki?curid=7479116
7479239
Implicit solvation
Implicit solvation (sometimes termed continuum solvation) is a method to represent solvent as a continuous medium instead of individual “explicit” solvent molecules, most often used in molecular dynamics simulations and in other applications of molecular mechanics. The method is often applied to estimate free energy of solute-solvent interactions in structural and chemical processes, such as folding or conformational transitions of proteins, DNA, RNA, and polysaccharides, association of biological macromolecules with ligands, or transport of drugs across biological membranes. The implicit solvation model is justified in liquids, where the potential of mean force can be applied to approximate the averaged behavior of many highly dynamic solvent molecules. However, the interfaces and the interiors of biological membranes or proteins can also be considered as media with specific solvation or dielectric properties. These media are not necessarily uniform, since their properties can be described by different analytical functions, such as “polarity profiles” of lipid bilayers. There are two basic types of implicit solvent methods: models based on accessible surface areas (ASA) that were historically the first, and more recent continuum electrostatics models, although various modifications and combinations of the different methods are possible. The accessible surface area (ASA) method is based on experimental linear relations between Gibbs free energy of transfer and the surface area of a solute molecule. This method operates directly with free energy of solvation, unlike molecular mechanics or electrostatic methods that include only the enthalpic component of free energy. The continuum representation of solvent also significantly improves the computational speed and reduces errors in statistical averaging that arise from incomplete sampling of solvent conformations, so that the energy landscapes obtained with implicit and explicit solvent are different. Although the implicit solvent model is useful for simulations of biomolecules, this is an approximate method with certain limitations and problems related to parameterization and treatment of ionization effects. Accessible surface area-based method. The free energy of solvation of a solute molecule in the simplest ASA-based method is given by: formula_0 where formula_1 is the accessible surface area of atom "i", and formula_2 is "solvation parameter" of atom "i", i.e., a contribution to the free energy of solvation of the particular atom i per surface unit area. The needed solvation parameters for different types of atoms (carbon (C), nitrogen (N), oxygen (O), sulfur (S), etc.) are usually determined by a least squares fit of the calculated and experimental transfer free energies for a series of organic compounds. The experimental energies are determined from partition coefficients of these compounds between different solutions or media using standard mole concentrations of the solutes. Notably, "solvation energy" is the free energy needed to transfer a solute molecule from a solvent to "vacuum" (gas phase). This energy can supplement the intramolecular energy in vacuum calculated in molecular mechanics. Thus, the needed atomic solvation parameters were initially derived from water-gas partition data. However, the dielectric properties of proteins and lipid bilayers are much more similar to those of nonpolar solvents than to vacuum. Newer parameters have thus been derived from octanol-water partition coefficients or other similar data. 
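As a minimal illustration of the ASA-based estimate formula_0, the sketch below simply sums per-atom contributions; the atom typing and the numerical solvation parameters are placeholders chosen for illustration, not a published parameter set, and the accessible surface areas would in practice come from an ASA algorithm such as Shrake-Rupley.

```python
def asa_solvation_energy(atoms):
    """ASA-based estimate of the solvation free energy,
    Delta G_solv = sum_i sigma_i * ASA_i.
    `atoms` is a list of (atom_type, asa_in_A2) pairs; the per-area
    parameters below are illustrative placeholders only."""
    sigma = {            # cal / (A^2 mol), placeholder values
        "C": 16.0,
        "N": -6.0,
        "O": -6.0,
        "S": 21.0,
    }
    return sum(sigma[atom_type] * asa for atom_type, asa in atoms)

# Accessible surface areas (A^2) would normally come from an ASA algorithm;
# these numbers are made up.
atoms = [("C", 30.0), ("C", 12.5), ("O", 20.0), ("N", 8.0)]
print(asa_solvation_energy(atoms), "cal/mol")
```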
Such parameters actually describe "transfer" energy between two condensed media or the "difference" of two solvation energies. Poisson-Boltzmann. The Poisson-Boltzmann equation (PB) describes the electrostatic environment of a solute in a solvent containing ions. It can be written in cgs units as: formula_3 or (in mks): formula_4 where formula_5 represents the position-dependent dielectric, formula_6 represents the electrostatic potential, formula_7 represents the charge density of the solute, formula_8 represents the concentration of the ion "i" at a distance of infinity from the solute, formula_9 is the valence of the ion, "q" is the charge of a proton, "k" is the Boltzmann constant, "T" is the temperature, and formula_10 is a factor for the position-dependent accessibility of position "r" to the ions in solution (often set to uniformly 1). If the potential is not large, the equation can be linearized to be solved more efficiently. Although this equation has solid theoretical justification, it is computationally expensive to calculate without approximations. A number of numerical Poisson-Boltzmann equation solvers of varying generality and efficiency have been developed, including one application with a specialized computer hardware platform. However, performance from PB solvers does not yet equal that from the more commonly used generalized Born approximation. Generalized Born model. The "Generalized Born" (GB) model is an approximation to the exact (linearized) Poisson-Boltzmann equation. It is based on modeling the solute as a set of spheres whose internal dielectric constant differs from the external solvent. The model has the following functional form: formula_11 where formula_12 and formula_13 where formula_14 is the permittivity of free space, formula_15 is the dielectric constant of the solvent being modeled, formula_16 is the electrostatic charge on particle "i", formula_17 is the distance between particles "i" and "j", and formula_18 is a quantity (with the dimension of length) termed the "effective Born radius". The effective Born radius of an atom characterizes its degree of burial inside the solute; qualitatively it can be thought of as the distance from the atom to the molecular surface. Accurate estimation of the effective Born radii is critical for the GB model. With accessible surface area. The Generalized Born (GB) model augmented with the hydrophobic solvent accessible surface area (SA) term is GBSA. It is among the most commonly used implicit solvent model combinations. The use of this model in the context of molecular mechanics is termed MM/GBSA. Although this formulation has been shown to successfully identify the native states of short peptides with well-defined tertiary structure, the conformational ensembles produced by GBSA models in other studies differ significantly from those produced by explicit solvent and do not identify the protein's native state. In particular, salt bridges are overstabilized, possibly due to insufficient electrostatic screening, and a higher-than-native alpha helix population was observed. Variants of the GB model have also been developed to approximate the electrostatic environment of membranes, which have had some success in folding the transmembrane helixes of integral membrane proteins. Ad hoc fast solvation models. Another possibility is to use ad hoc quick strategies to estimate solvation free energy. A first generation of fast implicit solvents is based on the calculation of a per-atom solvent accessible surface area. 
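A minimal sketch of the GB energy defined above (formula_11 with formula_12 and formula_13) is given below. It assumes the effective Born radii are supplied from elsewhere (their estimation is the hard part of any GB implementation) and works in units where the Coulomb prefactor 1/(4πε0) equals one (e.g. atomic units); the numbers in the example are arbitrary.

```python
import numpy as np

def gb_energy(charges, coords, born_radii, eps_solvent=78.5):
    """Generalized Born polarization energy,
        G = -(1/2) * (1 - 1/eps) * sum_{i,j} q_i q_j / f_GB(i, j),
    with f_GB = sqrt(r_ij^2 + a_i a_j * exp(-r_ij^2 / (4 a_i a_j))).
    Works in units where 1/(4*pi*eps0) = 1 (charges in e, distances in
    bohr give energies in hartree).  The i = j terms reproduce the Born
    self-energies because f_GB(i, i) = a_i."""
    q = np.asarray(charges, float)
    x = np.asarray(coords, float)
    a = np.asarray(born_radii, float)
    r2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)   # r_ij^2
    aij2 = a[:, None] * a[None, :]                               # a_i * a_j
    f_gb = np.sqrt(r2 + aij2 * np.exp(-r2 / (4.0 * aij2)))
    return -0.5 * (1.0 - 1.0 / eps_solvent) * np.sum(np.outer(q, q) / f_gb)

# Two opposite unit charges, toy geometry and Born radii.
print(gb_energy([1.0, -1.0], [[0.0, 0.0, 0.0], [0.0, 0.0, 4.0]], [2.0, 2.0]))
```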
For each group of atom types, a different parameter scales its contribution to solvation (the "ASA-based model" described above). Another strategy is implemented for the CHARMM19 force-field and is called EEF1. EEF1 is based on a Gaussian-shaped solvent exclusion. The solvation free energy is formula_19 The reference solvation free energy of "i" corresponds to a suitably chosen small molecule in which group "i" is essentially fully solvent-exposed. The integral is over the volume "Vj" of group "j" and the summation is over all groups "j" around "i". EEF1 additionally uses a distance-dependent (non-constant) dielectric, and ionic side-chains of proteins are simply neutralized. It is only 50% slower than a vacuum simulation. This model was later augmented with the hydrophobic effect and called Charmm19/SASA. Hybrid implicit-explicit solvation models. It is possible to include a layer or sphere of water molecules around the solute, and model the bulk with an implicit solvent. Such an approach is proposed by M. J. Frisch and coworkers and by other authors. For instance, in one such implementation the bulk solvent is modeled with a Generalized Born approach and the multi-grid method is used for Coulombic pairwise particle interactions. It is reported to be faster than a full explicit solvent simulation with the particle mesh Ewald summation (PME) method of electrostatic calculation. A range of hybrid methods is available for accessing and acquiring information on solvation. Effects unaccounted for. The hydrophobic effect. Models like PB and GB allow estimation of the mean electrostatic free energy but do not account for the (mostly) entropic effects arising from solute-imposed constraints on the organization of the water or solvent molecules. This is termed the hydrophobic effect and is a major factor in the folding process of globular proteins with hydrophobic cores. Implicit solvation models may be augmented with a term that accounts for the hydrophobic effect. The most popular way to do this is by taking the solvent accessible surface area (SASA) as a proxy of the extent of the hydrophobic effect. Most authors place the extent of this effect between 5 and 45 cal/(Å2 mol). Note that this surface area pertains to the solute, while the hydrophobic effect is mostly entropic in nature at physiological temperatures and occurs on the side of the solvent. Viscosity. Implicit solvent models such as PB, GB, and SASA lack the viscosity that water molecules impart by randomly colliding and impeding the motion of solutes through their van der Waals repulsion. In many cases, this is desirable because it makes sampling of configurations and phase space much faster. This acceleration means that more configurations are visited per simulated time unit, on top of whatever CPU acceleration is achieved in comparison to explicit solvent. It can, however, lead to misleading results when kinetics are of interest. Viscosity may be added back by using Langevin dynamics instead of Hamiltonian mechanics and choosing an appropriate damping constant for the particular solvent. In practical biomolecular simulations one can often speed up the conformational search significantly (up to 100 times in some cases) by using a much lower collision frequency formula_20. Recent work has also been done developing thermostats based on fluctuating hydrodynamics to account for momentum transfer through the solvent and related thermal fluctuations. 
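To make the Langevin option concrete, here is a schematic single-particle BAOAB-style Langevin step in reduced units (a toy sketch, not a production integrator); the collision frequency formula_20 enters through the exponential damping factor, and lowering it weakens the solvent's viscous drag, which is exactly the sampling speed-up discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def baoab_step(x, v, force, dt, gamma, kT, m=1.0):
    """One BAOAB Langevin step for a single 1D particle (schematic, reduced
    units).  `gamma` is the collision frequency / friction: large gamma
    mimics a viscous solvent, small gamma speeds up conformational sampling."""
    c1 = np.exp(-gamma * dt)
    c2 = np.sqrt((1.0 - c1 * c1) * kT / m)
    v += 0.5 * dt * force(x) / m              # B: half kick
    x += 0.5 * dt * v                         # A: half drift
    v = c1 * v + c2 * rng.standard_normal()   # O: friction + random "collision"
    x += 0.5 * dt * v                         # A: half drift
    v += 0.5 * dt * force(x) / m              # B: half kick
    return x, v

# Toy solute in a harmonic well.
force = lambda q: -q
x, v = 1.0, 0.0
for _ in range(10_000):
    x, v = baoab_step(x, v, force, dt=0.01, gamma=0.1, kT=1.0)
print(x, v)    # one sample from the thermalized trajectory
```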
One should keep in mind, though, that the folding rate of proteins does not depend linearly on viscosity for all regimes. Hydrogen bonds with solvent. Solute-solvent hydrogen bonds in the first solvation shell are important for the solubility of organic molecules and especially ions. Their average energetic contribution can be reproduced with an implicit solvent model. Problems and limitations. All implicit solvation models rest on the simple idea that nonpolar atoms of a solute tend to cluster together or occupy nonpolar media, whereas polar and charged groups of the solute tend to remain in water. However, it is important to properly balance the opposite energy contributions from different types of atoms. Several important points have been discussed and investigated over the years. Choice of model solvent. It has been noted that wet 1-octanol solution is a poor approximation of proteins or biological membranes because it contains ~2M of water, and that cyclohexane would be a much better approximation. Investigation of passive permeability barriers for different compounds across lipid bilayers led to the conclusion that 1,9-decadiene can serve as a good approximation of the bilayer interior, whereas 1-octanol was a very poor approximation. A set of solvation parameters derived for the protein interior from protein engineering data was also different from the octanol scale: it was close to the cyclohexane scale for nonpolar atoms but intermediate between the cyclohexane and octanol scales for polar atoms. Thus, different atomic solvation parameters should be applied for modeling of protein folding and protein-membrane binding. This issue remains controversial. The original idea of the method was to derive all solvation parameters directly from experimental partition coefficients of organic molecules, which allows calculation of solvation free energy. However, some of the recently developed electrostatic models use "ad hoc" values of 20 or 40 cal/(Å2 mol) for "all" types of atoms. The non-existent “hydrophobic” interactions of polar atoms are overridden by large electrostatic energy penalties in such models. Solid-state applications. Strictly speaking, ASA-based models should only be applied to describe "solvation", i.e., energetics of transfer between liquid or uniform media. It is possible to express van der Waals interaction energies in the solid state in surface energy units. This was sometimes done for interpreting protein engineering and ligand binding energetics. Such fitting leads to a “solvation” parameter for aliphatic carbon of ~40 cal/(Å2 mol), which is twice the ~20 cal/(Å2 mol) obtained for transfer from water to liquid hydrocarbons, because the fitted parameters represent the sum of the hydrophobic energy (i.e., 20 cal/(Å2 mol)) and the energy of van der Waals attractions of aliphatic groups in the solid state, the latter corresponding to the fusion enthalpy of alkanes. Unfortunately, the simplified ASA-based model cannot capture the "specific" distance-dependent interactions between different types of atoms in the solid state, which are responsible for the clustering of atoms with similar polarities in protein structures and molecular crystals. Parameters of such interatomic interactions, together with atomic solvation parameters for the protein interior, have been approximately derived from protein engineering data. 
The implicit solvation model breaks down when solvent molecules associate strongly with binding cavities in a protein, so that the protein and the solvent molecules form a continuous solid body. On the other hand, this model can be successfully applied for describing transfer from water to the "fluid" lipid bilayer. Importance of extensive testing. More testing is needed to evaluate the performance of different implicit solvation models and parameter sets. They are often tested only for a small set of molecules with very simple structure, such as hydrophobic and amphiphilic α-helices. These methods have rarely been tested on hundreds of protein structures. Treatment of ionization effects. Ionization of charged groups has been neglected in continuum electrostatic models of implicit solvation, as well as in standard molecular mechanics and molecular dynamics. The transfer of an ion from water to a nonpolar medium with a dielectric constant of ~3 (lipid bilayer) or 4 to 10 (interior of proteins) costs significant energy, as follows from the Born equation and from experiments. However, since the charged protein residues are ionizable, they simply lose their charges in the nonpolar environment, which costs relatively little at neutral pH: ~4 to 7 kcal/mol for Asp, Glu, Lys, and Arg amino acid residues, according to the Henderson-Hasselbalch equation, "ΔG = 2.3RT (pH - pK)". The low energetic costs of such ionization effects have indeed been observed for protein mutants with buried ionizable residues and hydrophobic α-helical peptides in membranes with a single ionizable residue in the middle. However, all electrostatic methods, such as PB, GB, or GBSA, assume that ionizable groups remain charged in the nonpolar environments, which leads to grossly overestimated electrostatic energy. In the simplest accessible surface area-based models, this problem was treated using different solvation parameters for charged atoms or the Henderson-Hasselbalch equation with some modifications. However, even the latter approach does not solve the problem. Charged residues can remain charged even in the nonpolar environment if they are involved in intramolecular ion pairs and H-bonds. Thus, the energetic penalties can be overestimated even using the Henderson-Hasselbalch equation. More rigorous theoretical methods describing such ionization effects have been developed, and there are ongoing efforts to incorporate such methods into the implicit solvation models. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
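The Henderson-Hasselbalch estimate quoted above is easy to reproduce numerically; in the sketch below the model-compound pK values are typical textbook numbers (an assumption, not taken from the cited references), and the output lands in the quoted ~4 to 7 kcal/mol range at pH 7.

```python
R = 1.987e-3   # gas constant, kcal / (mol K)
T = 298.0      # K

def neutralization_cost(pK, pH=7.0):
    """Free-energy cost of discharging an ionizable group at a given pH,
    |Delta G| = 2.3 R T |pH - pK| (Henderson-Hasselbalch estimate).
    For an acid this is the cost of protonating above its pK; for a base,
    the cost of deprotonating below its pK."""
    return 2.3 * R * T * abs(pH - pK)

# Approximate model-compound pK values (textbook-level assumptions).
for residue, pK in [("Asp", 3.9), ("Glu", 4.1), ("Lys", 10.5), ("Arg", 12.5)]:
    print(f"{residue}: {neutralization_cost(pK):.1f} kcal/mol")
```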
[ { "math_id": 0, "text": "\n\\Delta G_\\mathrm{solv} = \\sum_{i} \\sigma_{i} \\ ASA_{i}\n" }, { "math_id": 1, "text": " ASA_{i}" }, { "math_id": 2, "text": " \\sigma_{i}" }, { "math_id": 3, "text": "\n\\vec{\\nabla}\\cdot\\left[\\epsilon(\\vec{r})\\vec{\\nabla}\\Psi(\\vec{r})\\right] = -4\\pi\\rho^{f}(\\vec{r}) - 4\\pi\\sum_{i}c_{i}^{\\infty}z_{i}q\\lambda(\\vec{r})e^{\\frac{-z_{i}q\\Psi(\\vec{r})}{kT}}\n" }, { "math_id": 4, "text": "\n\\vec{\\nabla}\\cdot\\left[\\epsilon(\\vec{r})\\vec{\\nabla}\\Psi(\\vec{r})\\right] = -\\rho^{f}(\\vec{r}) - \\sum_{i}c_{i}^{\\infty}z_{i}q\\lambda(\\vec{r})e^{\\frac{-z_{i}q\\Psi(\\vec{r})}{kT}}\n" }, { "math_id": 5, "text": "\\epsilon(\\vec{r})" }, { "math_id": 6, "text": "\\Psi(\\vec{r})" }, { "math_id": 7, "text": "\\rho^{f}(\\vec{r})" }, { "math_id": 8, "text": "c_{i}^{\\infty}" }, { "math_id": 9, "text": "z_{i}" }, { "math_id": 10, "text": "\\lambda(\\vec{r})" }, { "math_id": 11, "text": "\nG_{s} = - \\frac{1}{8\\pi\\epsilon_{0}}\\left(1-\\frac{1}{\\epsilon}\\right)\\sum_{i,j}^{N}\\frac{q_{i}q_{j}}{f_{GB}}\n" }, { "math_id": 12, "text": "\nf_{GB} = \\sqrt{r_{ij}^{2} + a_{ij}^{2}e^{-D}}\n" }, { "math_id": 13, "text": "\nD = \\left(\\frac{r_{ij}}{2a_{ij}}\\right)^{2}, a_{ij} = \\sqrt{a_{i}a_{j}}\n" }, { "math_id": 14, "text": "\\epsilon_{0}" }, { "math_id": 15, "text": "\\epsilon" }, { "math_id": 16, "text": "q_{i}" }, { "math_id": 17, "text": "r_{ij}" }, { "math_id": 18, "text": "a_{i}" }, { "math_id": 19, "text": "\n\\Delta G_{i}^{solv} = \\Delta G_{i}^{ref} - \\sum_{j} \\int_{Vj} f_i(r) dr\n" }, { "math_id": 20, "text": "\\gamma" } ]
https://en.wikipedia.org/wiki?curid=7479239
7480
Cross section (physics)
Probability of a given process occurring in a particle collision In physics, the cross section is a measure of the probability that a specific process will take place in a collision of two particles. For example, the Rutherford cross-section is a measure of probability that an alpha particle will be deflected by a given angle during an interaction with an atomic nucleus. Cross section is typically denoted "σ" (sigma) and is expressed in units of area, more specifically in barns. In a way, it can be thought of as the size of the object that the excitation must hit in order for the process to occur, but more exactly, it is a parameter of a stochastic process. When two discrete particles interact in classical physics, their mutual cross section is the area transverse to their relative motion within which they must meet in order to scatter from each other. If the particles are hard inelastic spheres that interact only upon contact, their scattering cross section is related to their geometric size. If the particles interact through some action-at-a-distance force, such as electromagnetism or gravity, their scattering cross section is generally larger than their geometric size. When a cross section is specified as the differential limit of a function of some final-state variable, such as particle angle or energy, it is called a differential cross section (see detailed discussion below). When a cross section is integrated over all scattering angles (and possibly other variables), it is called a total cross section or integrated total cross section. For example, in Rayleigh scattering, the intensity scattered at the forward and backward angles is greater than the intensity scattered sideways, so the forward differential scattering cross section is greater than the perpendicular differential cross section, and by adding all of the infinitesimal cross sections over the whole range of angles with integral calculus, we can find the total cross section. Scattering cross sections may be defined in nuclear, atomic, and particle physics for collisions of accelerated beams of one type of particle with targets (either stationary or moving) of a second type of particle. The probability for any given reaction to occur is in proportion to its cross section. Thus, specifying the cross section for a given reaction is a proxy for stating the probability that a given scattering process will occur. The measured reaction rate of a given process depends strongly on experimental variables such as the density of the target material, the intensity of the beam, the detection efficiency of the apparatus, or the angle setting of the detection apparatus. However, these quantities can be factored away, allowing measurement of the underlying two-particle collisional cross section. Differential and total scattering cross sections are among the most important measurable quantities in nuclear, atomic, and particle physics. With light scattering off of a particle, the cross section specifies the amount of optical power scattered from light of a given irradiance (power per area). It is important to note that although the cross section has the same units as area, the cross section may not necessarily correspond to the actual physical size of the target given by other forms of measurement. It is not uncommon for the actual cross-sectional area of a scattering object to be much larger or smaller than the cross section relative to some physical process. 
For example, plasmonic nanoparticles can have light scattering cross sections for particular frequencies that are much larger than their actual cross-sectional areas. Collision among gas particles. In a gas of finite-sized particles there are collisions among particles that depend on their cross-sectional size. The average distance that a particle travels between collisions depends on the density of gas particles. These quantities are related by formula_0 where "σ" is the cross section of a two-particle collision (SI unit: m2), "λ" is the mean free path between collisions (SI unit: m), "n" is the number density of the target particles (SI unit: m−3). If the particles in the gas can be treated as hard spheres of radius "r" that interact by direct contact, as illustrated in Figure 1, then the effective cross section for the collision of a pair is formula_1 If the particles in the gas interact by a force with a larger range than their physical size, then the cross section is a larger effective area that may depend on a variety of variables such as the energy of the particles. Cross sections can be computed for atomic collisions but also are used in the subatomic realm. For example, in nuclear physics a "gas" of low-energy neutrons collides with nuclei in a reactor or other nuclear device, with a cross section that is energy-dependent and hence also with a well-defined mean free path between collisions. Attenuation of a beam of particles. If a beam of particles enters a thin layer of material of thickness d"z", the flux Φ of the beam will decrease by dΦ according to formula_2 where "σ" is the total cross section of "all" events, including scattering, absorption, or transformation to another species. The volumetric number density of scattering centers is designated by "n". Solving this equation exhibits the exponential attenuation of the beam intensity: formula_3 where Φ0 is the initial flux, and "z" is the total thickness of the material. For light, this is called the Beer–Lambert law. Differential cross section. Consider a classical measurement where a single particle is scattered off a single stationary target particle. Conventionally, a spherical coordinate system is used, with the target placed at the origin and the "z" axis of this coordinate system aligned with the incident beam. The angle "θ" is the scattering angle, measured between the incident beam and the scattered beam, and "φ" is the azimuthal angle. The impact parameter "b" is the perpendicular offset of the trajectory of the incoming particle, and the outgoing particle emerges at an angle "θ". For a given interaction (coulombic, magnetic, gravitational, contact, etc.), the impact parameter and the scattering angle have a definite one-to-one functional dependence on each other. Generally the impact parameter can neither be controlled nor measured from event to event and is assumed to take all possible values when averaging over many scattering events. The differential size of the cross section is the area element in the plane of the impact parameter, i.e. d"σ" = "b" d"φ" d"b". The differential angular range of the scattered particle at angle "θ" is the solid angle element dΩ = sin "θ" d"θ" d"φ". The differential cross section is the quotient of these quantities, d"σ"/dΩ. It is a function of the scattering angle (and therefore also the impact parameter), plus other observables such as the momentum of the incoming particle. 
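A short numerical sketch of the relations just introduced - the hard-sphere cross section formula_1, the mean free path relation formula_0, and the exponential attenuation formula_3 - is given below; the gas numbers are rough, nitrogen-like values chosen only for illustration.

```python
import numpy as np

# Hard-sphere cross section, mean free path, and beam attenuation,
# using rough nitrogen-like numbers (illustrative assumptions only).
r = 1.8e-10          # molecular radius, m (assumed)
n = 2.5e25           # number density, m^-3 (room-temperature gas, roughly)

sigma = np.pi * (2 * r) ** 2        # sigma = pi (2r)^2
lam = 1.0 / (n * sigma)             # lambda = 1 / (n sigma)
print(f"cross section  {sigma:.2e} m^2")
print(f"mean free path {lam:.2e} m")

# Exponential attenuation of a beam crossing a thickness z of the gas:
z = np.array([0.0, lam, 5 * lam])
phi = np.exp(-n * sigma * z)        # Phi / Phi_0 = exp(-n sigma z)
print("transmitted fraction:", phi)  # 1, 1/e, e^-5
```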
The differential cross section is always taken to be positive, even though larger impact parameters generally produce less deflection. In cylindrically symmetric situations (about the beam axis), the azimuthal angle "φ" is not changed by the scattering process, and the differential cross section can be written as formula_4. In situations where the scattering process is not azimuthally symmetric, such as when the beam or target particles possess magnetic moments oriented perpendicular to the beam axis, the differential cross section must also be expressed as a function of the azimuthal angle. For scattering of particles of incident flux "F"inc off a stationary target consisting of many particles, the differential cross section at an angle ("θ","φ") is related to the flux of scattered particle detection "F"out("θ","φ") in particles per unit time by formula_5 Here ΔΩ is the finite angular size of the detector (SI unit: sr), "n" is the number density of the target particles (SI unit: m−3), and "t" is the thickness of the stationary target (SI unit: m). This formula assumes that the target is thin enough that each beam particle will interact with at most one target particle. The total cross section "σ" may be recovered by integrating the differential cross section over the full solid angle (4π steradians): formula_6 It is common to omit the "differential" qualifier when the type of cross section can be inferred from context. In this case, "σ" may be referred to as the "integral cross section" or "total cross section". The latter term may be confusing in contexts where multiple events are involved, since "total" can also refer to the sum of cross sections over all events. The differential cross section is an extremely useful quantity in many fields of physics, as measuring it can reveal a great amount of information about the internal structure of the target particles. For example, the differential cross section of Rutherford scattering provided strong evidence for the existence of the atomic nucleus. Instead of the solid angle, the momentum transfer may be used as the independent variable of differential cross sections. Differential cross sections in inelastic scattering contain resonance peaks that indicate the creation of metastable states and contain information about their energy and lifetime. Quantum scattering. In the time-independent formalism of quantum scattering, the initial wave function (before scattering) is taken to be a plane wave with definite momentum "k": formula_7 where "z" and "r" are the "relative" coordinates between the projectile and the target. The arrow indicates that this only describes the "asymptotic behavior" of the wave function when the projectile and target are too far apart for the interaction to have any effect. After scattering takes place it is expected that the wave function takes on the following asymptotic form: formula_8 where "f" is some function of the angular coordinates known as the scattering amplitude. This general form is valid for any short-ranged, energy-conserving interaction. It is not true for long-ranged interactions, so there are additional complications when dealing with electromagnetic interactions. The full wave function of the system behaves asymptotically as the sum formula_9 The differential cross section is related to the scattering amplitude: formula_10 This has a simple interpretation as the probability density for finding the scattered projectile at a given angle. 
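The recovery of the total cross section from the differential one, formula_6, can be checked numerically; the sketch below integrates an azimuthally symmetric (1 + cos²"θ") dipole-like pattern (an arbitrary test function, normalized so the answer should equal a chosen σ0).

```python
import numpy as np
from scipy import integrate

def total_cross_section(dsigma_dOmega):
    """Integrate a differential cross section over the full solid angle:
        sigma = int_0^2pi int_0^pi (dsigma/dOmega) sin(theta) dtheta dphi."""
    integrand = lambda theta, phi: dsigma_dOmega(theta, phi) * np.sin(theta)
    sigma, _ = integrate.dblquad(integrand, 0.0, 2.0 * np.pi,   # phi limits
                                 0.0, np.pi)                    # theta limits
    return sigma

# Example: an azimuthally symmetric (1 + cos^2 theta) dipole-like pattern,
# normalized so that the total cross section should come out as sigma0.
sigma0 = 2.5
dsdo = lambda theta, phi: sigma0 * 3.0 / (16.0 * np.pi) * (1.0 + np.cos(theta) ** 2)
print(total_cross_section(dsdo))   # ~2.5
```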
A cross section is therefore a measure of the effective surface area seen by the impinging particles, and as such is expressed in units of area. The cross section of two particles (i.e. observed when the two particles are colliding with each other) is a measure of the interaction event between the two particles. The cross section is proportional to the probability that an interaction will occur; for example in a simple scattering experiment the number of particles scattered per unit of time (current of scattered particles "I"r) depends only on the number of incident particles per unit of time (current of incident particles "I"i), the characteristics of the target (for example the number of particles per unit of surface "N"), and the type of interaction. For "Nσ" ≪ 1 we have formula_11 Relation to the S-matrix. If the reduced masses and momenta of the colliding system are "m"i, pi and "m"f, pf before and after the collision respectively, the differential cross section is given by formula_12 where the on-shell "T" matrix is defined by formula_13 in terms of the S-matrix. Here "δ" is the Dirac delta function. The computation of the S-matrix is the main goal of scattering theory. Units. Although the SI unit of total cross sections is m2, a smaller unit is usually used in practice. In nuclear and particle physics, the conventional unit is the barn b, where 1 b = 10−28 m2 = 100 fm2. Smaller prefixed units such as mb and μb are also widely used. Correspondingly, the differential cross section can be measured in units such as mb/sr. When the scattered radiation is visible light, it is conventional to measure the path length in centimetres. To avoid the need for conversion factors, the scattering cross section is expressed in cm2, and the number concentration in cm−3. The measurement of the scattering of visible light is known as nephelometry, and is effective for particles of 2–50 μm in diameter: as such, it is widely used in meteorology and in the measurement of atmospheric pollution. The scattering of X-rays can also be described in terms of scattering cross sections, in which case the square ångström is a convenient unit: 1 Å2 = 10−20 m2 = 108 b. The sum of the scattering, photoelectric, and pair-production cross-sections (in barns) is charted as the "atomic attenuation coefficient" (narrow-beam), in barns. Scattering of light. For light, as in other settings, the scattering cross section for particles is generally different from the geometrical cross section of the particle, and it depends upon the wavelength of light and the permittivity, shape, and size of the particle. The total amount of scattering in a sparse medium is proportional to the product of the scattering cross section and the number of particles present. In the interaction of light with particles, many processes occur, each with their own cross sections, including absorption, scattering, and photoluminescence. The sum of the absorption and scattering cross sections is sometimes referred to as the attenuation or extinction cross section. formula_14 The total extinction cross section is related to the attenuation of the light intensity through the Beer–Lambert law, which says that attenuation is proportional to particle concentration: formula_15 where "Aλ" is the attenuation at a given wavelength "λ", "C" is the particle concentration as a number density, and "l" is the path length. 
The absorbance of the radiation is the logarithm (decadic or, more usually, natural) of the reciprocal of the transmittance T: formula_16 Combining the scattering and absorption cross sections in this manner is often necessitated by the inability to distinguish them experimentally, and much research effort has been put into developing models that allow them to be distinguished, the Kubelka-Munk theory being one of the most important in this area. Cross section and Mie theory. Cross sections commonly calculated using Mie theory are expressed through efficiency coefficients for extinction formula_17, scattering formula_18, and absorption formula_19. These are normalized by the geometrical cross section of the particle formula_20 as formula_21 The cross section is defined by formula_22 where formula_23 is the energy flow through the surrounding surface, and formula_24 is the intensity of the incident wave. For a plane wave the intensity is going to be formula_25, where formula_26 is the impedance of the host medium. The main approach is based on the following. Firstly, we construct an imaginary sphere of radius formula_27 (surface formula_28) around the particle (the scatterer). The net rate at which electromagnetic energy crosses the surface formula_28 is formula_29 where formula_30 is the time averaged Poynting vector. If formula_31, energy is absorbed within the sphere; otherwise, energy is being created within the sphere. We will not consider this case here. If the host medium is non-absorbing, the energy must be absorbed by the particle. We decompose the total field into incident and scattered parts formula_32, and the same for the magnetic field formula_33. Thus, we can decompose formula_34 into the three terms formula_35, where formula_36 where formula_37, formula_38, and formula_39. All the fields can be decomposed into series of vector spherical harmonics (VSH). After that, all the integrals can be taken. In the case of a uniform sphere of radius formula_40, permittivity formula_41, and permeability formula_42, the problem has a precise solution. The scattering and extinction coefficients are formula_43 formula_44 where formula_45. These are connected as formula_46 Dipole approximation for the scattering cross section. Let us assume that a particle supports only electric and magnetic dipole modes with polarizabilities formula_47 and formula_48 (here we use the notation of magnetic polarizability in the manner of Bekshaev et al. rather than the notation of Nieto-Vesperinas et al.) expressed through the Mie coefficients as formula_49 Then the cross sections are given by formula_50 formula_51 and, finally, the electric and magnetic absorption cross sections formula_52 are formula_53 and formula_54 For the case of a no-inside-gain particle, i.e. no energy is emitted by the particle internally (formula_55), we have a particular case of the optical theorem formula_56 Equality occurs for non-absorbing particles, i.e. for formula_57. Scattering of light on extended bodies. In the context of scattering light on extended bodies, the scattering cross section, "σ"sc, describes the likelihood of light being scattered by a macroscopic particle. In general, the scattering cross section is different from the geometrical cross section of a particle, as it depends upon the wavelength of light and the permittivity in addition to the shape and size of the particle. 
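A small sketch of the efficiency sums formula_43 and formula_44 is shown below; it assumes the Mie coefficients "a""n" and "b""n" have already been obtained from a Mie solver (the values used here are placeholders, not a real particle), and it also reports formula_19 via the relation formula_46.

```python
import numpy as np

def mie_efficiencies(a, b, x):
    """Scattering and extinction efficiencies from Mie coefficients a_n, b_n
    (n = 1..N) for size parameter x = k * sphere_radius:
        Q_sc  = (2/x^2) sum (2n+1) (|a_n|^2 + |b_n|^2)
        Q_ext = (2/x^2) sum (2n+1) Re(a_n + b_n)
    The coefficients themselves must come from a Mie solver; the numbers
    below are placeholders, not a real particle."""
    a = np.asarray(a, complex)
    b = np.asarray(b, complex)
    n = np.arange(1, len(a) + 1)
    q_sc = 2.0 / x**2 * np.sum((2 * n + 1) * (np.abs(a) ** 2 + np.abs(b) ** 2))
    q_ext = 2.0 / x**2 * np.sum((2 * n + 1) * np.real(a + b))
    q_abs = q_ext - q_sc          # Q_ext = Q_sc + Q_abs
    return q_sc, q_ext, q_abs

# Placeholder coefficients for a small, weakly absorbing particle.
a_n = [0.1 + 0.05j, 0.01 + 0.002j]
b_n = [0.02 + 0.01j, 0.001 + 0.0005j]
print(mie_efficiencies(a_n, b_n, x=0.8))
```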
The total amount of scattering in a sparse medium is determined by the product of the scattering cross section and the number of particles present. In terms of area, the "total cross section" ("σ") is the sum of the cross sections due to absorption, scattering, and luminescence: formula_14 The total cross section is related to the absorbance of the light intensity through the Beer–Lambert law, which says that absorbance is proportional to concentration: "Aλ" = "Clσ", where "Aλ" is the absorbance at a given wavelength "λ", "C" is the concentration as a number density, and "l" is the path length. The extinction or absorbance of the radiation is the logarithm (decadic or, more usually, natural) of the reciprocal of the transmittance T: formula_58 Relation to physical size. There is no simple relationship between the scattering cross section and the physical size of the particles, as the scattering cross section depends on the wavelength of radiation used. This can be seen when looking at a halo surrounding the Moon on a decently foggy evening: Red light photons experience a larger cross-sectional area of water droplets than photons of higher energy. The halo around the Moon thus has a perimeter of red light due to lower energy photons being scattered further from the center of the Moon. Photons from the rest of the visible spectrum are left within the center of the halo and perceived as white light. Meteorological range. The scattering cross section is related to the meteorological range "L"V: formula_59 The quantity "Cσ"scat is sometimes denoted "b"scat, the scattering coefficient per unit length. Examples. Elastic collision of two hard spheres. The following equations apply to two hard spheres that undergo a perfectly elastic collision. Let "R" and "r" denote the radii of the scattering center and scattered sphere, respectively. The differential cross section is formula_60 and the total cross section is formula_61 In other words, the total scattering cross section is equal to the area of the circle (with radius "r" + "R") within which the center of mass of the incoming sphere has to arrive for it to be deflected. Rutherford scattering. In Rutherford scattering, an incident particle with charge "q" and energy "E" scatters off a fixed particle with charge "Q". The differential cross section is formula_62 where formula_63 is the vacuum permittivity. The total cross section is infinite unless a cutoff for small scattering angles formula_64 is applied. This is due to the long range of the formula_65 Coulomb potential. Scattering from a 2D circular mirror. The following example deals with a beam of light scattering off a circle with radius "r" and a perfectly reflecting boundary. The beam consists of a uniform density of parallel rays, and the beam-circle interaction is modeled within the framework of geometric optics. Because the problem is genuinely two-dimensional, the cross section has units of length (e.g., metre). Let "α" be the angle between the light ray and the radius joining the reflection point of the ray with the center point of the mirror. 
Then the increase of the length element perpendicular to the beam is formula_66 The reflection angle of this ray with respect to the incoming ray is 2"α", and the scattering angle is formula_67 The differential relationship between incident and reflected intensity "I" is formula_68 The differential cross section is therefore (dΩ = d"θ") formula_69 Its maximum at "θ" = π corresponds to backward scattering, and its minimum at "θ" = 0 corresponds to scattering from the edge of the circle directly forward. This expression confirms the intuitive expectation that the mirror circle acts like a diverging lens. The total cross section is equal to the diameter of the circle: formula_70 Scattering from a 3D spherical mirror. The result from the previous example can be used to solve the analogous problem in three dimensions, i.e., scattering from a perfectly reflecting sphere of radius "a". The plane perpendicular to the incoming light beam can be parameterized by cylindrical coordinates "r" and "φ". In any plane of the incoming and the reflected ray we can write (from the previous example): formula_71 while the impact area element is formula_72 In spherical coordinates, formula_73 Together with the trigonometric identity formula_74 we obtain formula_75 The total cross section is formula_76 See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
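Both mirror results can be verified by direct numerical integration; the sketch below integrates formula_69 over 0 to 2π to recover the 2D result formula_70, and formula_75 over the full solid angle to recover formula_76 (the radii are chosen arbitrarily).

```python
import numpy as np
from scipy import integrate

r, a = 1.3, 0.7   # 2D mirror circle radius and 3D mirror sphere radius

# 2D circular mirror: sigma = int_0^{2 pi} (r/2) sin(theta/2) dtheta = 2 r
sigma_2d, _ = integrate.quad(lambda th: 0.5 * r * np.sin(0.5 * th), 0.0, 2.0 * np.pi)
print(sigma_2d, 2 * r)          # both ~2.6

# 3D spherical mirror: sigma = int (a^2 / 4) dOmega = pi a^2
sigma_3d, _ = integrate.dblquad(lambda th, ph: 0.25 * a**2 * np.sin(th),
                                0.0, 2.0 * np.pi,   # phi limits
                                0.0, np.pi)         # theta limits
print(sigma_3d, np.pi * a**2)   # both ~1.54
```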
[ { "math_id": 0, "text": "\\sigma = \\frac{1}{n \\lambda}," }, { "math_id": 1, "text": "\\sigma = \\pi \\left(2r\\right)^2" }, { "math_id": 2, "text": "\\frac{\\mathrm d \\Phi}{\\mathrm d z} = -n \\sigma \\Phi," }, { "math_id": 3, "text": "\\Phi = \\Phi_0 e^{-n \\sigma z}," }, { "math_id": 4, "text": " \\frac{\\mathrm{d} \\sigma}{\\mathrm{d}(\\cos \\theta)} =\\int_0^{2\\pi} \\frac{\\mathrm{d} \\sigma}{\\mathrm{d} \\Omega} \\,\\mathrm{d}\\varphi " }, { "math_id": 5, "text": "\\frac{\\mathrm d \\sigma}{\\mathrm d \\Omega}(\\theta,\\varphi) = \\frac{1}{n t \\Delta\\Omega} \\frac{F_\\text{out}(\\theta,\\varphi)}{F_\\text{inc}}." }, { "math_id": 6, "text": "\\sigma = \\oint_{4\\pi} \\frac{\\mathrm d \\sigma}{\\mathrm d \\Omega} \\, \\mathrm d \\Omega = \\int_0^{2\\pi} \\int_0^\\pi \\frac{\\mathrm d \\sigma}{\\mathrm d \\Omega} \\sin \\theta \\, \\mathrm d \\theta \\, \\mathrm d \\varphi." }, { "math_id": 7, "text": "\\phi_-(\\mathbf r) \\;\\stackrel{r \\to \\infty}{\\longrightarrow}\\; e^{i k z}," }, { "math_id": 8, "text": "\\phi_+(\\mathbf r) \\;\\stackrel{r \\to \\infty}{\\longrightarrow}\\; f(\\theta,\\phi) \\frac{e^{i k r}}{r}," }, { "math_id": 9, "text": "\\phi(\\mathbf r) \\;\\stackrel{r \\to \\infty}{\\longrightarrow}\\; \\phi_-(\\mathbf r) + \\phi_+(\\mathbf r)." }, { "math_id": 10, "text": "\\frac{\\mathrm d \\sigma}{\\mathrm d \\Omega}(\\theta, \\phi) = \\bigl|f(\\theta, \\phi)\\bigr|^2." }, { "math_id": 11, "text": "\\begin{align}\nI_\\text{r} &= I_\\text{i}N\\sigma, \\\\\n\\sigma &= \\frac{I_\\text{r}}{I_\\text{i}} \\frac{1}{N} \\\\\n&= \\text{probability of interaction} \\times \\frac{1}{N}.\n\\end{align}" }, { "math_id": 12, "text": "\\frac{\\mathrm d\\sigma}{\\mathrm d\\Omega} = \\left(2\\pi\\right)^4 m_\\text{i} m_\\text{f} \\frac{p_\\text{f}}{p_\\text{i}} \\bigl|T_{\\text{f}\\text{i}}\\bigr|^2," }, { "math_id": 13, "text": "S_{\\text{f}\\text{i}} = \\delta_{\\text{f}\\text{i}} - 2\\pi i \\delta\\left(E_\\text{f} - E_\\text{i}\\right) \\delta\\left(\\mathbf{p}_\\text{i} - \\mathbf{p}_\\text{f}\\right) T_{\\text{f}\\text{i}}" }, { "math_id": 14, "text": "\\sigma = \\sigma_\\text{abs} + \\sigma_\\text{sc} + \\sigma_\\text{lum}." }, { "math_id": 15, "text": "A_\\lambda = C l \\sigma," }, { "math_id": 16, "text": "A_\\lambda = -\\log \\mathcal{T}." 
}, { "math_id": 17, "text": "Q_\\text{ext}" }, { "math_id": 18, "text": "Q_\\text{sc}" }, { "math_id": 19, "text": "Q_\\text{abs}" }, { "math_id": 20, "text": "\\sigma_\\text{geom} = \\pi a^2" }, { "math_id": 21, "text": "\nQ_\\alpha = \\frac{\\sigma_\\alpha}{\\sigma_\\text{geom}}, \\qquad \\alpha = \\text{ext}, \\text{sc}, \\text{abs}.\n" }, { "math_id": 22, "text": "\n\\sigma_\\alpha = \\frac{W_\\alpha}{I_{\\text{inc}}}\n" }, { "math_id": 23, "text": "\\left[W_\\alpha \\right] = \\left[ \\text{W} \\right]" }, { "math_id": 24, "text": " \\left[I_{\\text{inc}}\\right] = \\left[ \\frac{\\text{W}}{\\text{m}^2} \\right]" }, { "math_id": 25, "text": "I_{\\text{inc}} = |\\mathbf{E}|^2 / (2 \\eta)" }, { "math_id": 26, "text": "\\eta = \\sqrt{\\mu \\mu_0 / (\\varepsilon \\varepsilon_0)}" }, { "math_id": 27, "text": "r" }, { "math_id": 28, "text": "A" }, { "math_id": 29, "text": "\nW_\\text{a} = - \\oint_A \\mathbf{\\Pi} \\cdot \\hat{\\mathbf{r}} dA\n" }, { "math_id": 30, "text": "\\mathbf{\\Pi} = \\frac{1}{2} \\operatorname{Re} \\left[ \\mathbf{E}^* \\times \\mathbf{H} \\right]" }, { "math_id": 31, "text": "W_\\text{a} > 0" }, { "math_id": 32, "text": "\\mathbf{E} = \\mathbf{E}_\\text{i} + \\mathbf{E}_\\text{s}" }, { "math_id": 33, "text": "\\mathbf{H}" }, { "math_id": 34, "text": "W_a" }, { "math_id": 35, "text": " W_\\text{a} = W_\\text{i} - W_\\text{s} + W_{\\text{ext}} " }, { "math_id": 36, "text": "\nW_\\text{i} = - \\oint_A \\mathbf{\\Pi}_\\text{i} \\cdot \\hat{\\mathbf{r}} dA \\equiv 0, \\qquad\nW_\\text{s} = \\oint_A \\mathbf{\\Pi}_\\text{s} \\cdot \\hat{\\mathbf{r}} dA, \\qquad\nW_{\\text{ext}} = \\oint_A \\mathbf{\\Pi}_{\\text{ext}} \\cdot \\hat{\\mathbf{r}} dA.\n" }, { "math_id": 37, "text": "\\mathbf{\\Pi}_\\text{i} = \\frac{1}{2} \\operatorname{Re} \\left[ \\mathbf{E}_\\text{i}^* \\times \\mathbf{H}_\\text{i} \\right] " }, { "math_id": 38, "text": "\\mathbf{\\Pi}_\\text{s} = \\frac{1}{2} \\operatorname{Re} \\left[ \\mathbf{E}_\\text{s}^* \\times \\mathbf{H}_\\text{s} \\right] " }, { "math_id": 39, "text": "\\mathbf{\\Pi}_{\\text{ext}} = \\frac{1}{2} \\operatorname{Re} \\left[ \\mathbf{E}_s^* \\times \\mathbf{H}_i + \\mathbf{E}_i^* \\times \\mathbf{H}_s \\right] " }, { "math_id": 40, "text": "a" }, { "math_id": 41, "text": "\\varepsilon" }, { "math_id": 42, "text": "\\mu" }, { "math_id": 43, "text": "\nQ_\\text{sc} = \\frac{2}{k^2a^2}\\sum_{n=1}^\\infty (2n+1)(|a_{n}|^2+|b_{n}|^2)\n" }, { "math_id": 44, "text": "\nQ_\\text{ext} = \\frac{2}{k^2a^2}\\sum_{n=1}^\\infty (2n+1)\\Re(a_{n}+b_{n}) \n" }, { "math_id": 45, "text": "k = n_\\text{host} k_0" }, { "math_id": 46, "text": "\n\\sigma_\\text{ext} = \\sigma_\\text{sc} + \\sigma_\\text{abs} \\qquad \\text{or} \\qquad Q_\\text{ext} = Q_\\text{sc} + Q_\\text{abs}\n" }, { "math_id": 47, "text": "\\mathbf{p} = \\alpha^e \\mathbf{E}" }, { "math_id": 48, "text": "\\mathbf{m} = (\\mu \\mu_0)^{-1}\\alpha^m \\mathbf{H}" }, { "math_id": 49, "text": "\n\\alpha^e = 4 \\pi \\varepsilon_0 \\cdot i \\frac{3 \\varepsilon}{2 k^3} a_1, \\qquad \\alpha^m = 4 \\pi \\mu_0 \\cdot i \\frac{3 \\mu}{2 k^3} b_1.\n" }, { "math_id": 50, "text": "\n\\sigma_{\\text{ext}} = \\sigma_{\\text{ext}}^{\\text{(e)}} + \\sigma_{\\text{ext}}^{\\text{(m)}} =\n\\frac{1}{4\\pi \\varepsilon \\varepsilon_0} \\cdot 4\\pi k \\Im(\\alpha^e) + \n\\frac{1}{4\\pi \\mu \\mu_0} \\cdot 4\\pi k \\Im(\\alpha^m)\n" }, { "math_id": 51, "text": "\n\\sigma_{\\text{sc}} = \\sigma_{\\text{sc}}^{\\text{(e)}} + \\sigma_{\\text{sc}}^{\\text{(m)}} = \n\\frac{1}{(4\\pi \\varepsilon \\varepsilon_0)^2} 
\\cdot \\frac{8\\pi}{3} k^4 |\\alpha^e|^2 +\n\\frac{1}{(4\\pi \\mu \\mu_0)^2} \\cdot \\frac{8\\pi}{3} k^4 |\\alpha^m|^2\n" }, { "math_id": 52, "text": "\\sigma_{\\text{abs}} = \\sigma_{\\text{abs}}^{\\text{(e)}} + \\sigma_{\\text{abs}}^{\\text{(m)}}" }, { "math_id": 53, "text": "\n\\sigma_{\\text{abs}}^{\\text{(e)}} = \\frac{1}{4 \\pi \\varepsilon \\varepsilon_0} \\cdot 4\\pi k \\left[ \\Im(\\alpha^e) - \\frac{k^3}{6 \\pi \\varepsilon \\varepsilon_0} |\\alpha^e|^2\\right]\n" }, { "math_id": 54, "text": "\n\\sigma_{\\text{abs}}^{\\text{(m)}} = \\frac{1}{4 \\pi \\mu \\mu_0} \\cdot 4\\pi k \\left[ \\Im(\\alpha^m) - \\frac{k^3}{6 \\pi \\mu \\mu_0} |\\alpha^m|^2\\right]\n" }, { "math_id": 55, "text": "\\sigma_{\\text{abs}} > 0" }, { "math_id": 56, "text": "\n\\frac{1}{4\\pi \\varepsilon \\varepsilon_0} \\Im(\\alpha^e) + \\frac{1}{4\\pi \\mu \\mu_0} \\Im(\\alpha^m) \\geq \\frac{2 k^3}{3} \\left[ \\frac{|\\alpha^e|^2}{(4\\pi \\varepsilon \\varepsilon_0)^2} + \\frac{|\\alpha^m|^2}{(4\\pi \\mu \\mu_0)^2} \\right]\n" }, { "math_id": 57, "text": "\\Im(\\varepsilon) = \\Im(\\mu) = 0" }, { "math_id": 58, "text": "A_\\lambda = - \\log \\mathcal{T}." }, { "math_id": 59, "text": "L_\\text{V} = \\frac{3.9}{C \\sigma_\\text{scat}}." }, { "math_id": 60, "text": "\n\\frac{d\\sigma}{d\\Omega} = \\frac{R^2}{4},\n" }, { "math_id": 61, "text": "\n\\sigma_\\text{tot} = \\pi \\left(r + R\\right)^2.\n" }, { "math_id": 62, "text": "\n\\frac{d \\sigma}{d \\Omega} = \\left(\\frac{q \\, Q}{16\\pi\\varepsilon_0 E \\sin^2(\\theta/2)} \\right)^2\n" }, { "math_id": 63, "text": "\\varepsilon_0" }, { "math_id": 64, "text": "\\theta" }, { "math_id": 65, "text": "1/r" }, { "math_id": 66, "text": "\\mathrm dx = r \\cos \\alpha \\,\\mathrm d \\alpha." }, { "math_id": 67, "text": "\\theta = \\pi - 2 \\alpha." }, { "math_id": 68, "text": "I \\,\\mathrm d \\sigma = I \\,\\mathrm dx(x) = I r \\cos \\alpha \\,\\mathrm d \\alpha = I \\frac{r}{2} \\sin \\left(\\frac{\\theta}{2}\\right) \\,\\mathrm d \\theta = I \\frac{\\mathrm d \\sigma}{\\mathrm d \\theta} \\,\\mathrm d \\theta." }, { "math_id": 69, "text": "\\frac{\\mathrm d \\sigma}{\\mathrm d \\theta} = \\frac{r}{2} \\sin \\left(\\frac{\\theta}{2}\\right)." }, { "math_id": 70, "text": "\\sigma = \\int_0^{2 \\pi} \\frac{\\mathrm d \\sigma}{\\mathrm d \\theta} \\,\\mathrm d \\theta = \\int_0^{2 \\pi} \\frac{r}{2} \\sin \\left(\\frac{\\theta}{2}\\right) \\,\\mathrm d \\theta = 2 r." }, { "math_id": 71, "text": "\\begin{align}\nr &= a \\sin \\alpha,\\\\\n\\mathrm dr &= a \\cos \\alpha \\,\\mathrm d \\alpha,\n\\end{align}" }, { "math_id": 72, "text": " \\mathrm d \\sigma = \\mathrm d r(r) \\times r \\,\\mathrm d \\varphi = \\frac{a^2}{2} \\sin \\left(\\frac{\\theta}{2}\\right) \\cos \\left(\\frac{\\theta}{2}\\right) \\,\\mathrm d \\theta \\,\\mathrm d \\varphi." }, { "math_id": 73, "text": "\\mathrm d\\Omega = \\sin \\theta \\,\\mathrm d \\theta \\,\\mathrm d \\varphi." }, { "math_id": 74, "text": "\\sin \\theta = 2 \\sin \\left(\\frac{\\theta}{2}\\right) \\cos \\left(\\frac{\\theta}{2}\\right)," }, { "math_id": 75, "text": "\\frac{\\mathrm d \\sigma}{\\mathrm d \\Omega} = \\frac{a^2}{4}." }, { "math_id": 76, "text": "\\sigma = \\oint_{4 \\pi} \\frac{\\mathrm d \\sigma}{\\mathrm d \\Omega} \\,\\mathrm d \\Omega = \\pi a^2." } ]
https://en.wikipedia.org/wiki?curid=7480
74800
Torus
Doughnut-shaped surface of revolution In geometry, a torus (pl.: tori or toruses) is a surface of revolution generated by revolving a circle in three-dimensional space one full revolution about an axis that is coplanar with the circle. The main types of toruses include ring toruses, horn toruses, and spindle toruses. A ring torus is sometimes colloquially referred to as a donut or doughnut. If the axis of revolution does not touch the circle, the surface has a ring shape and is called a torus of revolution, also known as a ring torus. If the axis of revolution is tangent to the circle, the surface is a horn torus. If the axis of revolution passes twice through the circle, the surface is a spindle torus (or "self-crossing torus" or "self-intersecting torus"). If the axis of revolution passes through the center of the circle, the surface is a degenerate torus, a double-covered sphere. If the revolved curve is not a circle, the surface is called a "toroid", as in a square toroid. Real-world objects that approximate a torus of revolution include swim rings, inner tubes and ringette rings. A torus should not be confused with a "solid torus", which is formed by rotating a disk, rather than a circle, around an axis. A solid torus is a torus plus the volume inside the torus. Real-world objects that approximate a "solid torus" include O-rings, non-inflatable lifebuoys, ring doughnuts, and bagels. In topology, a ring torus is homeomorphic to the Cartesian product of two circles: formula_0, and the latter is taken to be the definition in that context. It is a compact 2-manifold of genus 1. The ring torus is one way to embed this space into Euclidean space, but another way to do this is the Cartesian product of the embedding of formula_1 in the plane with itself. This produces a geometric object called the Clifford torus, a surface in 4-space. In the field of topology, a torus is any topological space that is homeomorphic to a torus. The surface of a coffee cup and a doughnut are both topological tori with genus one. An example of a torus can be constructed by taking a rectangular strip of flexible material such as rubber, and joining the top edge to the bottom edge, and the left edge to the right edge, without any half-twists (compare Klein bottle). Etymology. "Torus" is a Latin word for "a round, swelling, elevation, protuberance". Geometry. A torus can be parametrized as: formula_2 using angular coordinates formula_3 representing rotation around the tube and rotation around the torus' axis of revolution, respectively, where the "major radius" formula_4 is the distance from the center of the tube to the center of the torus and the "minor radius" formula_5 is the radius of the tube. The ratio formula_6 is called the "aspect ratio" of the torus. The typical doughnut confectionery has an aspect ratio of about 3 to 2. An implicit equation in Cartesian coordinates for a torus radially symmetric about the formula_7-axis is formula_8 Algebraically eliminating the square root gives a quartic equation, formula_9 The three classes of standard tori correspond to the three possible aspect ratios between "R" and "r": a ring torus when "R" &gt; "r", a horn torus when "R" = "r", and a self-intersecting spindle torus when "R" &lt; "r". When "R" ≥ "r", the interior formula_10 of this torus is diffeomorphic (and, hence, homeomorphic) to a product of a Euclidean open disk and a circle. 
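A quick way to see that the parametrization formula_2 and the implicit equation formula_8 describe the same surface is to sample random angles and check the residual numerically; the sketch below uses the standard convention in which "θ" runs around the tube and "φ" around the axis of revolution (a sketch consistent with the formulas above, not a quotation of them).

```python
import numpy as np

R, r = 3.0, 1.0                       # major and minor radii (R > r: ring torus)

def torus_point(theta, phi, R=R, r=r):
    """Standard angular parametrization: theta runs around the tube,
    phi around the axis of revolution."""
    x = (R + r * np.cos(theta)) * np.cos(phi)
    y = (R + r * np.cos(theta)) * np.sin(phi)
    z = r * np.sin(theta)
    return np.array([x, y, z])

# Every parametrized point should satisfy the implicit equation
# (sqrt(x^2 + y^2) - R)^2 + z^2 = r^2.
rng = np.random.default_rng(0)
for theta, phi in rng.uniform(0.0, 2.0 * np.pi, size=(5, 2)):
    x, y, z = torus_point(theta, phi)
    residual = (np.hypot(x, y) - R) ** 2 + z ** 2 - r ** 2
    print(f"{residual: .2e}")          # ~0 up to rounding error
```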
The volume of this solid torus and the surface area of its torus are easily computed using Pappus's centroid theorem, giving: formula_11 These formulas are the same as for a cylinder of length 2π"R" and radius "r", obtained from cutting the tube along the plane of a small circle, and unrolling it by straightening out (rectifying) the line running around the center of the tube. The losses in surface area and volume on the inner side of the tube exactly cancel out the gains on the outer side. Expressing the surface area and the volume by the distance "p" of an outermost point on the surface of the torus to the center, and the distance "q" of an innermost point to the center (so that "R" = ("p" + "q")/2 and "r" = ("p" − "q")/2), yields formula_12 As a torus is the product of two circles, a modified version of the spherical coordinate system is sometimes used. In traditional spherical coordinates there are three measures, R, the distance from the center of the coordinate system, and θ and φ, angles measured from the center point. As a torus has, effectively, two center points, the centerpoints of the angles are moved; φ measures the same angle as it does in the spherical system, but is known as the "toroidal" direction. The center point of θ is moved to the center of r, and is known as the "poloidal" direction. These terms were first used in a discussion of the Earth's magnetic field, where "poloidal" was used to denote "the direction toward the poles". In modern use, toroidal and poloidal are more commonly used to discuss magnetic confinement fusion devices. Topology. Topologically, a torus is a closed surface defined as the product of two circles: "S"1 × "S"1. This can be viewed as lying in C2 and is a subset of the 3-sphere "S"3 of radius √2. This topological torus is also often called the Clifford torus. In fact, "S"3 is filled out by a family of nested tori in this manner (with two degenerate circles), a fact which is important in the study of "S"3 as a fiber bundle over "S"2 (the Hopf bundle). The surface described above, given the relative topology from formula_13, is homeomorphic to a topological torus as long as it does not intersect its own axis. A particular homeomorphism is given by stereographically projecting the topological torus into formula_13 from the north pole of "S"3. The torus can also be described as a quotient of the Cartesian plane under the identifications formula_14 or, equivalently, as the quotient of the unit square by pasting the opposite edges together, described as a fundamental polygon "ABA"−1"B"−1. The fundamental group of the torus is just the direct product of the fundamental group of the circle with itself: formula_15 Intuitively speaking, this means that a closed path that circles the torus' "hole" (say, a circle that traces out a particular latitude) and then circles the torus' "body" (say, a circle that traces out a particular longitude) can be deformed to a path that circles the body and then the hole. So, strictly 'latitudinal' and strictly 'longitudinal' paths commute. An equivalent statement may be imagined as two shoelaces passing through each other, then unwinding, then rewinding. If a torus is punctured and turned inside out then another torus results, with lines of latitude and longitude interchanged. This is equivalent to building a torus from a cylinder, by joining the circular ends together, in two ways: around the outside like joining two ends of a garden hose, or through the inside like rolling a sock (with the toe cut off). 
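The Pappus results formula_11 above can be cross-checked numerically: integrating the magnitude of the cross product of the parametrization's partial derivatives over the parameter square gives the surface area, and a Monte Carlo count of points satisfying the implicit inequality gives the enclosed volume (both approximate, with arbitrarily chosen radii).

```python
import numpy as np

R, r = 3.0, 1.0
theta, phi = np.meshgrid(np.linspace(0, 2 * np.pi, 400, endpoint=False),
                         np.linspace(0, 2 * np.pi, 400, endpoint=False))

# Surface area: integrate |dX/dtheta x dX/dphi| over the parameter square.
Xt = np.stack([-r * np.sin(theta) * np.cos(phi),
               -r * np.sin(theta) * np.sin(phi),
                r * np.cos(theta)], axis=-1)
Xp = np.stack([-(R + r * np.cos(theta)) * np.sin(phi),
                (R + r * np.cos(theta)) * np.cos(phi),
                np.zeros_like(phi)], axis=-1)
dA = np.linalg.norm(np.cross(Xt, Xp), axis=-1)
area = dA.mean() * (2 * np.pi) ** 2
print(area, 4 * np.pi**2 * R * r)        # both ~118.4

# Volume of the solid torus: Monte Carlo over a bounding box.
rng = np.random.default_rng(1)
pts = rng.uniform([-R - r, -R - r, -r], [R + r, R + r, r], size=(400_000, 3))
inside = (np.hypot(pts[:, 0], pts[:, 1]) - R) ** 2 + pts[:, 2] ** 2 <= r ** 2
box_volume = (2 * (R + r)) ** 2 * (2 * r)
print(inside.mean() * box_volume, 2 * np.pi**2 * R * r**2)   # both ~59.2
```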
Additionally, if the cylinder was made by gluing two opposite sides of a rectangle together, choosing the other two sides instead will cause the same reversal of orientation. The first homology group of the torus is isomorphic to the fundamental group (this follows from the Hurewicz theorem since the fundamental group is abelian). Two-sheeted cover. The 2-torus double-covers the 2-sphere, with four ramification points. Every conformal structure on the 2-torus can be represented as a two-sheeted cover of the 2-sphere. The points on the torus corresponding to the ramification points are the Weierstrass points. In fact, the conformal type of the torus is determined by the cross-ratio of the four points. "n"-dimensional torus. The torus has a generalization to higher dimensions, the "n"-dimensional torus, often called the "n"-torus or hypertorus for short. (This is the more typical meaning of the term ""n"-torus", the other referring to a surface with "n" holes, i.e. of genus "n".) Recalling that the torus is the product space of two circles, the "n"-dimensional torus is the product of "n" circles. That is: formula_16 The standard 1-torus is just the circle: formula_17. The torus discussed above is the standard 2-torus, formula_18. Similarly to the 2-torus, the "n"-torus, formula_19, can be described as a quotient of formula_20 under integral shifts in any coordinate. That is, the "n"-torus is formula_20 modulo the action of the integer lattice formula_21 (with the action being taken as vector addition). Equivalently, the "n"-torus is obtained from the "n"-dimensional hypercube by gluing the opposite faces together. An "n"-torus in this sense is an example of an "n"-dimensional compact manifold. It is also an example of a compact abelian Lie group. This follows from the fact that the unit circle is a compact abelian Lie group (when identified with the unit complex numbers with multiplication). Group multiplication on the torus is then defined by coordinate-wise multiplication. Toroidal groups play an important part in the theory of compact Lie groups. This is due in part to the fact that in any compact Lie group "G" one can always find a maximal torus; that is, a closed subgroup which is a torus of the largest possible dimension. Such maximal tori "T" have a controlling role to play in the theory of connected "G". Toroidal groups are examples of protori, which (like tori) are compact connected abelian groups, which are not required to be manifolds. Automorphisms of "T" are easily constructed from automorphisms of the lattice formula_21, which are classified by invertible integral matrices of size "n" with an integral inverse; these are just the integral matrices with determinant ±1. Making them act on formula_20 in the usual way, one has the typical "toral automorphism" on the quotient. The fundamental group of an "n"-torus is a free abelian group of rank "n". The "k"-th homology group of an "n"-torus is a free abelian group of rank "n" choose "k". It follows that the Euler characteristic of the "n"-torus is 0 for all "n". The cohomology ring "H"•(formula_19, Z) can be identified with the exterior algebra over the Z-module formula_21 whose generators are the duals of the "n" nontrivial cycles. Configuration space. As the "n"-torus is the "n"-fold product of the circle, the "n"-torus is the configuration space of "n" ordered, not necessarily distinct points on the circle. Symbolically, formula_22. 
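The quotient description lends itself to a very small sketch: a point of formula_20 is represented on the "n"-torus by reducing each coordinate mod 1, and the Betti numbers "n" choose "k" quoted above give Euler characteristic 0 by the binomial theorem (illustrative code, with made-up sample points).

```python
from math import comb

import numpy as np

def to_n_torus(x):
    """Represent a point of R^n on the n-torus R^n / Z^n by reducing each
    coordinate mod 1; two points map to the same torus point exactly when
    they differ by an integer vector."""
    return np.mod(np.asarray(x, float), 1.0)

print(to_n_torus([2.3, -0.7, 5.0]))
print(np.allclose(to_n_torus([0.25, 0.5]),
                  to_n_torus([3.25, -1.5])))   # True: they differ by (3, -2)

def euler_characteristic(n):
    """Alternating sum of the Betti numbers of the n-torus: the k-th homology
    group is free abelian of rank C(n, k), so chi = sum_k (-1)^k C(n, k) = 0."""
    return sum((-1) ** k * comb(n, k) for k in range(n + 1))

print([euler_characteristic(n) for n in range(1, 6)])   # [0, 0, 0, 0, 0]
```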
The configuration space of "unordered", not necessarily distinct points is accordingly the orbifold formula_23, which is the quotient of the torus by the symmetric group on "n" letters (by permuting the coordinates). For "n" = 2, the quotient is the Möbius strip, the edge corresponding to the orbifold points where the two coordinates coincide. For "n" = 3 this quotient may be described as a solid torus with cross-section an equilateral triangle, with a twist; equivalently, as a triangular prism whose top and bottom faces are connected with a 1/3 twist (120°): the 3-dimensional interior corresponds to the points on the 3-torus where all 3 coordinates are distinct, the 2-dimensional face corresponds to points with 2 coordinates equal and the 3rd different, while the 1-dimensional edge corresponds to points with all 3 coordinates identical. These orbifolds have found significant applications to music theory in the work of Dmitri Tymoczko and collaborators (Felipe Posada, Michael Kolinas, et al.), being used to model musical triads. Flat torus. A flat torus is a torus with the metric inherited from its representation as the quotient, formula_24/L, where L is a discrete subgroup of formula_24 isomorphic to formula_25. This gives the quotient the structure of a Riemannian manifold, as well as the structure of an abelian Lie group. Perhaps the simplest example of this is when L = formula_25: formula_26, which can also be described as the Cartesian plane under the identifications ("x", "y") ~ ("x" + 1, "y") ~ ("x", "y" + 1). This particular flat torus (and any uniformly scaled version of it) is known as the "square" flat torus. This metric of the square flat torus can also be realised by specific embeddings of the familiar 2-torus into Euclidean 4-space or higher dimensions. Its surface has zero Gaussian curvature everywhere. It is flat in the same sense that the surface of a cylinder is flat. In 3 dimensions, one can bend a flat sheet of paper into a cylinder without stretching the paper, but this cylinder cannot be bent into a torus without stretching the paper (unless some regularity and differentiability conditions are given up, see below). A simple 4-dimensional Euclidean embedding of a rectangular flat torus (more general than the square one) is as follows: formula_27 where "R" and "P" are positive constants determining the aspect ratio. It is diffeomorphic to a regular torus but not isometric. It can not be analytically embedded (smooth of class "Ck", 2 ≤ "k" ≤ ∞) into Euclidean 3-space. Mapping it into "3"-space requires one to stretch it, in which case it looks like a regular torus. For example, in the following map: formula_28 If "R" and "P" in the above flat torus parametrization form a unit vector ("R", "P") = (cos("η"), sin("η")) then "u", "v", and 0 < "η" < π/2 parameterize the unit 3-sphere as Hopf coordinates. In particular, for certain very specific choices of a square flat torus in the 3-sphere "S"3, where "η" = π/4 above, the torus will partition the 3-sphere into two congruent solid tori subsets with the aforesaid flat torus surface as their common boundary. One example is the torus T defined by formula_29 Other tori in "S"3 having this partitioning property include the square tori of the form "Q"⋅T, where "Q" is a rotation of 4-dimensional space formula_30, or in other words "Q" is a member of the Lie group SO(4). It is known that there exists no "C"2 (twice continuously differentiable) embedding of a flat torus into 3-space.
(The idea of the proof is to take a large sphere containing such a flat torus in its interior, and shrink the radius of the sphere until it just touches the torus for the first time. Such a point of contact must be a tangency. But that would imply that part of the torus, since it has zero curvature everywhere, must lie strictly outside the sphere, which is a contradiction.) On the other hand, according to the Nash-Kuiper theorem, which was proven in the 1950s, an isometric "C"1 embedding exists. This is solely an existence proof and does not provide explicit equations for such an embedding. In April 2012, an explicit "C"1 (continuously differentiable) embedding of a flat torus into 3-dimensional Euclidean space formula_13 was found. It is a flat torus in the sense that, as a metric space, it is isometric to a flat square torus. It is similar in structure to a fractal as it is constructed by repeatedly corrugating an ordinary torus. Like fractals, it has no defined Gaussian curvature. However, unlike fractals, it does have defined surface normals, yielding a so-called "smooth fractal". The key to obtain the smoothness of this corrugated torus is to have the amplitudes of successive corrugations decreasing faster than their "wavelengths". (These infinitely recursive corrugations are used only for embedding into three dimensions; they are not an intrinsic feature of the flat torus.) This is the first time that any such embedding was defined by explicit equations or depicted by computer graphics. Conformal classification of flat tori. In the study of Riemann surfaces, one says that any two smooth compact geometric surfaces are "conformally equivalent" when there exists a smooth homeomorphism between them that is both angle-preserving and orientation-preserving. The Uniformization theorem guarantees that every Riemann surface is conformally equivalent to one that has constant Gaussian curvature. In the case of a torus, the constant curvature must be zero. Then one defines the "moduli space" of the torus to contain one point for each conformal equivalence class, with the appropriate topology. It turns out that this moduli space "M" may be identified with a punctured sphere that is smooth except for two points that have less angle than 2π (radians) around them: One has total angle = π and the other has total angle = 2π/3. "M" may be turned into a compact space "M*" by adding one additional point that represents the limiting case as a rectangular torus approaches an aspect ratio of 0 in the limit. The result is that this compactified moduli space is a sphere with "three" points each having less than 2π total angle around them. (Such points are termed "cusps".) This additional point will have zero total angle around it. Due to symmetry, "M*" may be constructed by glueing together two congruent geodesic triangles in the hyperbolic plane along their (identical) boundaries, where each triangle has angles of π/2, π/3, and 0. As a result the area of each triangle can be calculated as π - (π/2 + π/3 + 0) = π/6, so it follows that the compactified moduli space "M*" has area equal to π/3. The other two cusps occur at the points corresponding in "M*" to a) the square torus (total angle = π) and b) the hexagonal torus (total angle = 2π/3). These are the only conformal equivalence classes of flat tori that have any conformal automorphisms other than those generated by translations and negation. Genus "g" surface. In the theory of surfaces there is a more general family of objects, the "genus" "g" surfaces. 
A genus "g" surface is the connected sum of "g" two-tori. (And so the torus itself is the surface of genus 1.) To form a connected sum of two surfaces, remove from each the interior of a disk and "glue" the surfaces together along the boundary circles. (That is, merge the two boundary circles so they become just one circle.) To form the connected sum of more than two surfaces, successively take the connected sum of two of them at a time until they are all connected. In this sense, a genus "g" surface resembles the surface of "g" doughnuts stuck together side by side, or a 2-sphere with "g" handles attached. As examples, a genus zero surface (without boundary) is the two-sphere while a genus one surface (without boundary) is the ordinary torus. The surfaces of higher genus are sometimes called "n"-holed tori (or, rarely, "n"-fold tori). The terms double torus and triple torus are also occasionally used. The classification theorem for surfaces states that every compact connected surface is topologically equivalent to either the sphere or the connect sum of some number of tori, disks, and real projective planes. Toroidal polyhedra. Polyhedra with the topological type of a torus are called toroidal polyhedra, and have Euler characteristic "V" − "E" + "F" = 0. For any number of holes, the formula generalizes to "V" − "E" + "F" = 2 − 2"N", where "N" is the number of holes. The term "toroidal polyhedron" is also used for higher-genus polyhedra and for immersions of toroidal polyhedra. Automorphisms. The homeomorphism group (or the subgroup of diffeomorphisms) of the torus is studied in geometric topology. Its mapping class group (the connected components of the homeomorphism group) is surjective onto the group formula_31 of invertible integer matrices, which can be realized as linear maps on the universal covering space formula_20 that preserve the standard lattice formula_21 (this corresponds to integer coefficients) and thus descend to the quotient. At the level of homotopy and homology, the mapping class group can be identified as the action on the first homology (or equivalently, first cohomology, or on the fundamental group, as these are all naturally isomorphic; also the first cohomology group generates the cohomology algebra: formula_32 Since the torus is an Eilenberg–MacLane space "K"("G", 1), its homotopy equivalences, up to homotopy, can be identified with automorphisms of the fundamental group); all homotopy equivalences of the torus can be realized by homeomorphisms – every homotopy equivalence is homotopic to a homeomorphism. Thus the short exact sequence of the mapping class group splits (an identification of the torus as the quotient of formula_20 gives a splitting, via the linear maps, as above): formula_33 The mapping class group of higher genus surfaces is much more complicated, and an area of active research. Coloring a torus. The torus's Heawood number is seven, meaning every graph that can be embedded on the torus has a chromatic number of at most seven. (Since the complete graph formula_34 can be embedded on the torus, and formula_35, the upper bound is tight.) Equivalently, in a torus divided into regions, it is always possible to color the regions using no more than seven colors so that no neighboring regions are the same color. (Contrast with the four color theorem for the plane.) de Bruijn torus. In combinatorial mathematics, a "de Bruijn torus" is an array of symbols from an alphabet (often just 0 and 1) that contains every "m"-by-"n" matrix exactly once. 
It is a torus because the edges are considered wraparound for the purpose of finding matrices. Its name comes from the De Bruijn sequence, which can be considered a special case where "n" is 1 (one dimension). Cutting a torus. A solid torus of revolution can be cut by "n" (> 0) planes into at most formula_36 parts. (This assumes the pieces may not be rearranged but must remain in place for all cuts.) The first 11 numbers of parts, for 0 ≤ "n" ≤ 10 (including the case of "n" = 0, not covered by the above formulas), are as follows: 1, 2, 6, 13, 24, 40, 62, 91, 128, 174, 230, ... (sequence in the OEIS). References.
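As a quick check of the cutting formula above, the following sketch evaluates the closed form ("n"3 + 3"n"2 + 8"n")/6 for 1 ≤ "n" ≤ 10 and compares the results, together with the separately listed "n" = 0 case, against the sequence given in the text.

```python
def max_torus_pieces(n):
    """Maximum number of pieces from cutting a solid torus of revolution with n > 0 planes."""
    return (n**3 + 3 * n**2 + 8 * n) // 6   # the numerator is always divisible by 6

expected = [1, 2, 6, 13, 24, 40, 62, 91, 128, 174, 230]
computed = [1] + [max_torus_pieces(n) for n in range(1, 11)]  # n = 0 handled separately
assert computed == expected
print(computed)
```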
[ { "math_id": 0, "text": "S^{1}\\times S^{1}" }, { "math_id": 1, "text": "S^{1}" }, { "math_id": 2, "text": "\\begin{align}\nx(\\theta, \\varphi) &= (R + r \\cos \\theta) \\cos{\\varphi}\\\\\ny(\\theta, \\varphi) &= (R + r \\cos \\theta) \\sin{\\varphi}\\\\\nz(\\theta, \\varphi) &= r \\sin \\theta\\\\\n\\end{align}" }, { "math_id": 3, "text": "\\theta, \\varphi \\in [0,2\\pi)," }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "r" }, { "math_id": 6, "text": "R/r" }, { "math_id": 7, "text": "z" }, { "math_id": 8, "text": "{\\textstyle \\bigl(\\sqrt{x^2 + y^2} - R\\bigr)^2} + z^2 = r^2." }, { "math_id": 9, "text": "\\left(x^2 + y^2 + z^2 + R^2 - r^2\\right)^2 = 4R^2\\left(x^2+y^2\\right)." }, { "math_id": 10, "text": "{\\textstyle \\bigl(\\sqrt{x^2 + y^2} - R\\bigr)^2} + z^2 < r^2" }, { "math_id": 11, "text": "\\begin{align}\nA &= \\left( 2\\pi r \\right) \\left(2 \\pi R \\right) = 4 \\pi^2 R r, \\\\[5mu]\nV &= \\left ( \\pi r^2 \\right ) \\left( 2 \\pi R \\right) = 2 \\pi^2 R r^2.\n\\end{align}" }, { "math_id": 12, "text": "\\begin{align}\nA &= 4 \\pi^2 \\left(\\frac{p+q}{2}\\right) \\left(\\frac{p-q}{2}\\right) = \\pi^2 (p+q) (p-q), \\\\[5mu]\nV &= 2 \\pi^2 \\left(\\frac{p+q}{2}\\right) \\left(\\frac{p-q}{2}\\right)^2 = \\tfrac14 \\pi^2 (p+q) (p-q)^2.\n\\end{align}" }, { "math_id": 13, "text": "\\mathbb{R}^{3}" }, { "math_id": 14, "text": "(x,y) \\sim (x+1,y) \\sim (x,y+1), \\," }, { "math_id": 15, "text": "\\pi_1(\\mathbb{T}^2) = \\pi_1(\\mathbb{S}^1) \\times \\pi_1(\\mathbb{S}^1) \\cong \\mathbb{Z} \\times \\mathbb{Z}." }, { "math_id": 16, "text": "\\mathbb{T}^n = \\underbrace{\\mathbb{S}^1 \\times \\cdots \\times \\mathbb{S}^1}_n." }, { "math_id": 17, "text": "\\mathbb{T}^{1}=\\mathbb{S}^{1} " }, { "math_id": 18, "text": "\\mathbb{T}^2" }, { "math_id": 19, "text": "\\mathbb{T}^{n}" }, { "math_id": 20, "text": "\\mathbb{R}^{n}" }, { "math_id": 21, "text": "\\mathbb{Z}^{n}" }, { "math_id": 22, "text": "\\mathbb{T}^{n}= (\\mathbb{S}^{1})^{n}" }, { "math_id": 23, "text": "\\mathbb{T}^{n} / \\mathbb{S}_{n}" }, { "math_id": 24, "text": "\\mathbb{R}^{2}" }, { "math_id": 25, "text": "\\mathbb{Z}^{2}" }, { "math_id": 26, "text": "\\mathbb{R}^{2}/\\mathbb{Z}^{2}" }, { "math_id": 27, "text": "(x,y,z,w) = (R\\cos u, R\\sin u, P\\cos v, P\\sin v)" }, { "math_id": 28, "text": "(x,y,z) = ((R+P\\sin v)\\cos u, (R+P\\sin v)\\sin u, P\\cos v)." }, { "math_id": 29, "text": "T = \\left\\{ (x,y,z,w) \\in \\mathbb{S}^3 \\mid x^2+y^2 = \\frac 1 2, \\ z^2+w^2 = \\frac 1 2 \\right\\}." }, { "math_id": 30, "text": "\\mathbb{R}^{4}" }, { "math_id": 31, "text": "\\operatorname{GL}(n,\\mathbb{Z})" }, { "math_id": 32, "text": "\\operatorname{MCG}_{\\operatorname{Ho}}(\\mathbb{T}^n) = \\operatorname{Aut}(\\pi_1(X)) = \\operatorname{Aut}(\\mathbb{Z}^n) = \\operatorname{GL}(n,\\mathbb{Z})." }, { "math_id": 33, "text": "1 \\to \\operatorname{Homeo}_0(\\mathbb{T}^n) \\to \\operatorname{Homeo}(\\mathbb{T}^n) \\to \\operatorname{MCG}_{\\operatorname{TOP}}(\\mathbb{T}^n) \\to 1." }, { "math_id": 34, "text": "\\mathsf{K_7}" }, { "math_id": 35, "text": "\\chi (\\mathsf{K_7}) = 7" }, { "math_id": 36, "text": "\\begin{pmatrix}n+2 \\\\ n-1\\end{pmatrix} +\\begin{pmatrix}n \\\\ n-1\\end{pmatrix} = \\tfrac{1}{6}(n^3 + 3n^2 + 8n)" } ]
https://en.wikipedia.org/wiki?curid=74800
74803
Doughnut
Sweet food made from deep-fried dough A doughnut or donut is a type of pastry made from leavened fried dough. It is popular in many countries and is prepared in various forms as a sweet snack that can be homemade or purchased in bakeries, supermarkets, food stalls, and franchised specialty vendors. "Doughnut" is the traditional spelling, while "donut" is the simplified version; the terms are used interchangeably. Doughnuts are usually deep fried from a flour dough, but other types of batters can also be used. Various toppings and flavors are used for different types, such as sugar, chocolate or maple glazing. Doughnuts may also include water, leavening, eggs, milk, sugar, oil, shortening, and natural or artificial flavors. The two most common types are the ring doughnut and the filled doughnut, which is injected with fruit preserves (the jelly doughnut), cream, custard, or other sweet fillings. Small pieces of dough are sometimes cooked as doughnut holes. Once fried, doughnuts may be glazed with a sugar icing, spread with icing or chocolate, or topped with powdered sugar, cinnamon, sprinkles or fruit. Other shapes include balls, flattened spheres, twists, and other forms. Doughnut varieties are also divided into cake (including the old-fashioned) and yeast-risen doughnuts. Doughnuts are often accompanied by coffee or milk. They are sold at doughnut shops, convenience stores, petrol/gas stations, cafes or fast food restaurants. History. Forerunner. A recipe for a deep-fried dough ball was recorded by Cato the Elder in his "de agri cultura", using cheese, honey, and poppy seeds, called "globi". Similar types of fried dough recipes have either spread to, or originated in, other parts of Europe and the world. The cookbook "Küchenmeisterei" ("Mastery of the Kitchen"), published in Nuremberg in 1485, offers a recipe for "Gefüllte Krapfen", stuffed, fried dough cakes. The Spanish and Portuguese churro is a choux pastry dough that would also be served in a ring-shape. The recipe may have been brought from, or introduced to, China in the 16th century. England and North America. Dutch settlers brought "olykoek" ("oil(y) cake") to New York (or New Amsterdam) in the early 18th century. These doughnuts closely resembled later ones but did not yet have their current ring shape. A recipe for fried dough "nuts" was published in England in 1750, under the title "How to make Hertfordshire Cakes, Nuts and Pincushions", in "The Country Housewife’s Family Companion" by William Ellis. A recipe labelled "dow nuts", again from Hertfordshire, was found in a book of recipes and domestic tips written around 1800, by the wife of Baron Thomas Dimsdale, the recipe being given to the dowager Baroness by an acquaintance who transcribed for her the cooking instructions for a "dow nut". The first cookbook using the near conventional "dough nuts" spelling was possibly the 1803 edition of "The Frugal Housewife: Or, Complete Woman Cook", which included dough nuts in an appendix of American recipes.
One of the earliest mentions of "dough-nut" was in Washington Irving's 1809 book "A History of New York, from the Beginning of the World to the End of the Dutch Dynasty": Sometimes the table was graced with immense apple-pies, or saucers full of preserved peaches and pears; but it was always sure to boast of an enormous dish of balls of sweetened dough, fried in hog’s fat, and called dough-nuts, or oly koeks: a delicious kind of cake, at present scarce known in this city, excepting in genuine Dutch families. The name "oly koeks" was almost certainly related to the "oliekoek": a Dutch delicacy of "sweetened cake fried in fat." Etymology. "Dough nut". One of the earliest known literary usages of the term dates to an 1808 short story describing a spread of "fire-cakes and dough-nuts". Washington Irving described "dough-nuts", in his 1809 "History of New York", as "balls of sweetened dough, fried in hog's fat, and called dough-nuts, or "olykoeks"." These "nuts" of fried dough might now be called doughnut holes. The word "nut" is here used in the earlier sense of "small rounded cake or cookie", also seen in ginger nut. "Doughnut" is the traditional spelling and still dominates even in the United States, though "donut" is often used. At present, "doughnut" and the shortened form "donut" are both pervasive in American English. "Donut". The first known printed use of "donut" was in "Peck's Bad Boy and his Pa" by George W. Peck, published in 1900, in which a character is quoted as saying, "Pa said he guessed he hadn't got much appetite, and he would just drink a cup of coffee and eat a donut." According to author John T. Edge, the alternative spelling "donut" was invented in the 1920s when the New York–based Display Doughnut Machine Corporation abbreviated the word to make it more pronounceable by the foreigners they hoped would buy their automated doughnut making equipment. The "donut" spelling also showed up in a "Los Angeles Times" article dated August 10, 1929, in which Bailey Millard jokingly complains about the decline of spelling, and that he "can't swallow the 'wel-dun donut' nor the ever so 'gud bred'". The interchangeability of the two spellings can be found in a series of "National Donut Week" articles in "The New York Times" that covered the 1939 World's Fair. In four articles beginning 9 October, two mention the "donut" spelling. Dunkin' Donuts, which was so-named in 1950, following its 1948 founding under the name Open Kettle (Quincy, Massachusetts), is the oldest surviving company to use the "donut" variation; other chains, such as the defunct Mayflower Doughnut Corporation (1931), did not use that spelling. According to the Oxford Dictionaries, while "doughnut" is used internationally, the spelling "donut" is American. The spelling "donut" remained rare until the 1950s, and has since grown significantly in popularity. Types. Rings. Hanson Gregory, an American, claimed to have invented the ring-shaped doughnut in 1847 aboard a lime-trading ship when he was 16 years old. Gregory was dissatisfied with the greasiness of doughnuts twisted into various shapes and with the raw center of regular doughnuts. He claimed to have punched a hole in the center of dough with the ship's tin pepper box, and to have later taught the technique to his mother.
"Smithsonian Magazine" states that his mother, Elizabeth Gregory, "made a deep-fried dough that cleverly used her son's spice cargo of nutmeg and cinnamon, along with lemon rind," and "put hazelnuts or walnuts in the center, where the dough might not cook through", and called the food 'doughnuts'. Ring doughnuts are formed by one of two methods: by joining the ends of a long, skinny piece of dough into a ring, or by using a doughnut cutter, which simultaneously cuts the outside and inside shape, leaving a doughnut-shaped piece of dough and a doughnut hole (the dough removed from the center). This smaller piece of dough can be cooked and served as a "doughnut hole" or added back to the batch to make more doughnuts. A disk-shaped doughnut can also be stretched and pinched into a torus until the center breaks to form a hole. Alternatively, a doughnut depositor can be used to place a circle of liquid dough (batter) directly into the fryer. There are two types of ring doughnuts, those made from a yeast-based dough for raised doughnuts, or those made from a special type of cake batter. Yeast-raised doughnuts contain about 25% oil by weight, whereas cake doughnuts' oil content is around 20%, but have extra fat included in the batter before frying. Cake doughnuts are fried for about 90 seconds at approximately , turning once. Yeast-raised doughnuts absorb more oil because they take longer to fry, about 150 seconds, at . Cake doughnuts typically weigh between , whereas yeast-raised doughnuts average and are generally larger, and taller (due to rising) when finished. Daniela Galarza, for "Eater", wrote that "the now-standard doughnut’s hole is still up for debate. Food writer Michael Krondl surmises that the shape came from recipes that called for the dough to be shaped like a jumble – a once common ring-shaped cookie. In "Cuisine and Culture: A History of Food and People", culinary historian Linda Civitello writes that the hole was invented because it allowed the doughnuts to cook faster. By 1870 doughnut cutters shaped in two concentric circles, one smaller than the other, began to appear in home-shopping catalogues". Topping. After frying, ring doughnuts are often topped. Raised doughnuts are generally covered with a glaze (icing). Cake doughnuts can also be glazed, powdered with confectioner's sugar, or covered with cinnamon and granulated sugar. They are also often topped with cake frosting (top only) and sometimes sprinkled with coconut, chopped peanuts, or sprinkles. Holes. Doughnut holes are small, bite-sized doughnuts that were traditionally made from the dough taken from the center of ring doughnuts. Before long, doughnut sellers saw the opportunity to market "holes" as a novelty and many chains offer their own variety, some with their own brand names such as "Munchkins" from Dunkin' Donuts and "Timbits" from Tim Hortons. Traditionally, doughnut holes are made by frying the dough removed from the center portion of the doughnut. Consequently, they are considerably smaller than a standard doughnut and tend to be spherical. Similar to standard doughnuts, doughnut holes may be topped with confections, such as glaze or powdered sugar. Originally, most varieties of doughnut holes were derivatives of their ring doughnut (yeast-based dough or cake batter) counterparts. However, doughnut holes can also be made by dropping a small ball of dough into hot oil from a specially shaped nozzle or cutter. 
This production method has allowed doughnut sellers to produce bite-sized versions of non-ring doughnuts, such as filled doughnuts, fritters and Dutchies. Filled. Filled doughnuts are flattened spheres injected with fruit preserves, cream, custard, or other sweet fillings, and often dipped into powdered sugar or topped off with frosting. Common varieties include the Boston cream, coconut, key lime, and jelly. Other shapes. Others include the fritter and the Dutchie, which are usually glazed. These have been available on Tim Hortons' doughnut menu since the chain's inception in 1964, and a 1991 "Toronto Star" report found these two were the chain's most popular type of fried dough in Canada. There are many other specialized doughnut shapes such as old-fashioned, bars or Long Johns (a rectangular shape), or twists. Other shapes include balls, flattened spheres, twists, and other forms. In the northeast United States, bars and twists are usually referred to as "crullers". Another is the beignet, a square-shaped doughnut covered with powdered sugar, commonly associated with New Orleans. Science. Cake vs yeast style. Yeast doughnuts and cake doughnuts contain most of the same ingredients, however, their structural differences arise from the type of flour and leavening agent used. In cake doughnuts, cake flour is used, and the resulting doughnut has a different texture because cake flour has a relatively low protein content of about 7 to 8 percent. In yeast doughnuts, a flour with a higher protein content of about 9 to 12 percent is used, resulting in a doughnut that is lighter and more airy. In addition, yeast doughnuts utilize yeast as a leavening agent. Specifically, "Yeast cells are thoroughly distributed throughout the dough and begin to feed on the sugar that is present ... carbon dioxide gas is generated, which raises the dough, making it light and porous." Whereas this process is biological, the leavening process in cake doughnuts is chemical. In cake doughnuts, the most common leavening agent is baking powder. Baking powder is essentially "baking soda with acid added. This neutralizes the base and produces more CO2 according to the following equation: NaHCO3 + H+ → Na+ + H2O + CO2." Physical structure. The physical structure of the doughnut is created by the combination of flour, leavening agent, sugar, eggs, salt, water, shortening, milk solids, and additional components. The most important ingredients for creating the dough network are the flour and eggs. The main protein in flour is gluten, which is overall responsible for creating elastic dough because this protein acts as "coiled springs." The gluten network is composed of two separate molecules named glutenin and gliadin. Specifically, "the backbone of the gluten network likely consists of the largest glutenin molecules, or subunits, aligned and tightly linked to one another. These tightly linked glutenin subunits associate more loosely, along with gliadin, into larger gluten aggregates." The gluten strands then tangle and interact with other strands and other molecules, resulting in networks that provide the elasticity of the dough. In mixing, the gluten is developed when the force of the mixer draws the gluten from the wheat endosperm, allowing the gluten matrix to trap the gas cells. Molecular composition. Eggs function as emulsifiers, foaming agents, and tenderizers in the dough. The egg white proteins, mainly ovalbumin, "function as structure formers. 
Egg solids, chiefly the egg white solids combined with the moisture in the egg, are considered structure-forming materials that help significantly to produce proper volume, grain, and texture." The egg yolk contributes proteins, fats, and emulsifiers to the dough. Emulsifying agents are essential to doughnut formation because they prevent the fat molecules from separating from the water molecules in the dough. The main emulsifier in egg yolk is called lecithin, which is a phospholipid. "The fatty acids are attracted to fats and oils (lipids) in food, while the phosphate group is attracted to water. It is this ability to attract both lipids and water that allow phospholipids such as lecithin to act as emulsifiers." The proteins from both the egg yolk and the egg whites contribute to the structure of the dough through a process called coagulation. When heat is applied to the dough, the egg proteins will begin to unfold, or denature, and then form new bonds with one another, thus creating a gel-like network that can hold water and gas. Shortening is responsible for providing tenderness and aerating the dough. In terms of its molecular structure, "a typical shortening that appears solid [at room temperature] contains 15–20% solids and, hence, 80–85% liquid oil ... this small amount of solids can be made to hold all of the liquid in a matrix of very small, stable, needlelike crystals (beta-prime crystals)." This crystalline structure is considered highly stable due to how tightly its molecules are packed. The sugar used in baking is essentially sucrose, and besides imparting sweetness in the doughnut, sugar also functions in the color and tenderness of the final product. Sucrose is a simple carbohydrate whose structure is made up of a glucose molecule bound to a fructose molecule. Milk is utilized in the making of doughnuts, but in large scale bakeries, one form of milk used is nonfat dry milk solids. These solids are obtained by removing most of the water from skim milk with heat, and this heat additionally denatures the whey proteins and increases the absorption properties of the remaining proteins. The ability of the casein and whey proteins to absorb excess water is essential to prolonging the doughnut's freshness. The major whey protein in the nonfat milk solids is known as beta-lactoglobulin, and a crucial feature of its structure is that there exists a single sulfhydryl group that is protected by the alpha helix, and when heating of the milk solids occurs, these groups participate in disulfide exchanges with other molecules. This interchange prevents the renaturation of the whey proteins. If the crosslinking of the sulfide groups does not occur, the whey proteins can rebond and weaken the gluten network. Water is a necessary ingredient in the production of doughnuts because it activates the other ingredients, allowing them to perform their functions in building the doughnut's structure. For example, sugar and salt crystals must be dissolved in order for them to act in the dough, whereas larger molecules, such as the starches or proteins, must be hydrated in order for them to absorb moisture. Another important consideration of water is its degree of hardness, which measures the amount of impurities in the water source. Pure water consists of two parts hydrogen and one part oxygen, but water used in baking often is not pure. Baker's salt (NaCl) is usually used as an ingredient due to its high purity, whereas the salts in water are derived from varying minerals. 
As an ingredient, "salt is added to enhance the flavour of cakes and breads and to ‘toughen up’ the soft mixture of fat and sugar." If relatively soft water is being used, more salt should be added in order to strengthen the gluten network of the dough, but if not enough salt is added during the baking process, the flavor of the bread will not be appealing to consumers. Health effects. Doughnuts are unhealthful, though some are less so than others. According to "Prevention Magazine", doughnuts made from enriched flour provide some thiamine, riboflavin, and niacin, along with some fiber, but they are high in sugar and calories. Steps to improve the healthfulness of doughnuts include removing trans fats. Dough rheology. An important property of the dough that affects the final product is the dough's rheology. This property measures the ability of the dough to flow. It can be represented by the power law equation: formula_0 where formula_1 is the tangentic stress, formula_2 is the viscosity coefficient, formula_3 is the shear rate, and formula_4 is the flow index. Many factors affect dough rheology including the type and volume of ingredients and the force applied during mixing. Dough is usually described as a viscoelastic material, meaning that its rheology depends on both the viscosity and the elasticity. The viscosity coefficient and the flow index are unique to the type of dough being analyzed, while the tangential stress and the shear rate are measurements which depend on the type of force being applied to the dough. Regional variations. Asia. Cambodia. "Nom kong" (នំបុ័ងកង់), the traditional Cambodian doughnut, is named after its shape – the word ‘កង់’ (pronounced "kong" in Khmer) literally means “wheel”, whilst "nom" (‘នំបុ័ង’) is the general word for pastry or any kind of starchy food. A very inexpensive treat for everyday Cambodians, this sweet pastry consists of a jasmine rice flour dough moulded into a classic ring shape and then deep fried in fat, then drizzled with a palm sugar toffee and sprinkled with sesame seeds. The rice flour gives it a chewy texture that Cambodians are fond of. This childhood snack is what inspired Cambodian-American entrepreneur Ted Ngoy to build his doughnut empire, inspiring the film The Donut King. China. A few sweet, doughnut-style pastries are regional in nature. Cantonese cuisine features an oval-shaped pastry called "ngàuhleisōu" (牛脷酥, lit. "ox-tongue pastry", due to its tongue-like shape). A spherical food called "saa1 jung" (沙翁), which is also similar to a cream puff but denser with a doughnut-like texture and usually prepared with sugar sprinkled on top, is normally available in dim sum Cantonese restaurants. An oilier Beijing variant of this called 高力豆沙, "gaoli dousha", is filled with red bean paste; originally, it was made with egg white instead of dough. Many Chinese cultures make a chewy doughnut known as "shuangbaotai" (雙包胎), which consists of two conjoined balls of dough. Chinese restaurants in the United States sometimes serve small fried pastries similar to doughnut holes with condensed milk as a sauce. Chinese cuisine features long, deep-fried doughnut sticks that are often quite oily, hence their name in Mandarin, "yóutiáo" (油條, "oil strips"); in Cantonese, this doughnut-style pastry is called "yàuhjagwái" (油炸鬼, "ghosts fried in oil"). These pastries are lightly salted and are often served with congee, a traditional rice porridge or soy milk for breakfast. India. 
In India, an old-fashioned sweet called gulgula is made of sweetened, deep-fried flour balls. A leavening agent may or may not be used. There are a couple of unrelated doughnut-shaped food items. A savory, fried, ring-shaped snack called a "vada" is often referred to as the Indian doughnut. The "vada" is made from "dal", lentil or potato flours rather than wheat flour. In North India, it is in the form of a bulging disc called "dahi-vada", and is soaked in curd, sprinkled with spices and sliced vegetables, and topped with a sweet and sour chutney. In South India, a vada is eaten with "sambar" and a coconut chutney. Sweet pastries similar to old-fashioned doughnuts called "badushahi" and "jalebi" are also popular. "Balushahi", also called "badushah", is made from flour, deep fried in clarified butter, and dipped in sugar syrup. Unlike a doughnut, "balushahi" is dense. A "balushahi" is ring-shaped, but the well in the center does not go all the way through to form a hole typical of a doughnut. "Jalebi", which is typically pretzel-shaped, is made by deep frying batter in oil and soaking it in sugar syrup. A variant of "jalebi", called "imarti", is shaped with a small ring in the center around which a geometric pattern is arranged. Along with these Indian variants, typical varieties of doughnuts are also available from U.S. chains such as Krispy Kreme and Dunkin' Donuts retail outlets, as well as local brands such as Mad Over Donuts and the Donut Baker. Indonesia. The Indonesian, "donat kentang" is a potato doughnut, a ring-shaped fritter made from flour and mashed potatoes, coated in powder sugar or icing sugar. Japan. In Japan, "an-doughnut" (あんドーナッツ, "bean paste doughnut") is widely available at bakeries. "An-doughnut" is similar to Germany's "Berliner", except it contains red azuki bean paste. Mister Donut is one of the most popular doughnut chains in Japan. Native to Okinawa is a spheroid pastry similar to doughnuts called "sata andagi". Mochi donuts are "a cross between a traditional cake-like doughnut and chewy mochi dough similar to what’s wrapped around ice cream". This hybrid confection was originally popularized in Japan by Mister Donut before spreading to the United States via Hawaii. The Mister Donut style, also known as "pon de ring", uses tapioca flour and produces mochi donuts that are easy to pull apart. Another variation developed in the United States uses glutinous rice flour which produces a denser mochi donut akin to Hawaiian-style butter mochi. Mochi donuts made from glutinous rice flour "typically contain half the amount of calories as the standard cake or yeast doughnut". Malaysia. "Kuih keria" is a hole doughnut made from boiled sweet potato that is mashed. The sweet potato mash is shaped into rings and fried. The hot doughnut is then rolled in granulated sugar. The result is a doughnut with a sugar-crusted skin. Nepal. "Sel roti" is a Nepali homemade, ring-shaped, rice doughnut prepared during Tihar, the widely celebrated Hindu festival in Nepal. A semiliquid dough is usually prepared by adding milk, water, sugar, butter, cardamom, and mashed banana to rice flour, which is often left to ferment for up to 24 hours. A "sel roti" is traditionally fried in "ghee". Pakistan. Doughnuts are available at most bakeries across Pakistan. The Navaz Sharif variety, available mainly in the city of Karachi, is covered in chocolate and filled with cream, similar to a Boston cream. Doughnuts can readily be found at the many Dunkin' Donuts branches spread across Pakistan. Philippines. 
Local varieties of doughnuts sold by peddlers and street vendors throughout the Philippines are usually made of plain well-kneaded dough, deep-fried in refined coconut oil and sprinkled with refined (not powdered or confectioner's) sugar. Round versions of this doughnut are known as "buñuelos" (also spelled "bunwelos", and sometimes confusingly known as "bicho-bicho"), similar to the doughnuts in Spain and former Spanish colonies. Indigenous versions of the doughnut also exist, like the "cascaron", which is prepared similarly, but uses ground glutinous rice and coconut milk in place of wheat flour and milk. Other native doughnut recipes include the "shakoy", "kumukunsi", and "binangkal". "Shakoy" or "siyakoy" from the Visayas islands (also known as "lubid-lubid" in the northern Philippines) uses a length of dough twisted into a distinctive rope-like shape before being fried. The preparation is almost exactly the same as doughnuts, though there are variants made from glutinous rice flour. The texture can range from soft and fluffy, to sticky and chewy, to hard and crunchy (in the latter case, they are known as "pilipit"). They are sprinkled with white sugar, but can also be topped with sesame seeds or caramelized sugar. "Kumukunsi" is a "jalebi"-like native doughnut from the Maguindanao people. It is made with rice flour, duck eggs, and sugar that is molded into rope-like strands and then fried in a loose spiral. It has the taste and consistency of a creamy pancake. "Binangkal" are simple fried dough balls covered in sesame seeds. Other fried dough desserts include the mesh-like "lokot-lokot", the fried rice cake "panyalam", and the banana fritter "maruya", among others. Taiwan. In Taiwan, "shuāngbāotāi" (雙胞胎, lit. "twins") is two pieces of dough wrapped together before frying. Thailand. In Thailand, a popular breakfast food is "pa thong ko", also known as Thai donuts, a version of the Chinese "yiu ja guoy/youtiao". Often sold from food stalls in markets or by the side of the road, these doughnuts are small, sometimes X-shaped, and sold by the bag full. They are often eaten in the morning with hot Thai tea. Vietnam. Vietnamese varieties of doughnuts include "bánh tiêu", "bánh cam", and "bánh rán". "Bánh tiêu" is a sesame-topped, deep-fried pastry that is hollow. It can be eaten alone or cut in half and served with "bánh bò", a gelatinous cake, placed inside the pastry. "Bánh cam" is from Southern Vietnam and is a ball-shaped, deep-fried pastry coated entirely in sesame seeds and containing a mung bean paste filling. "Bánh rán" is from Northern Vietnam and is similar to "bánh cam"; however, the difference is that "bánh rán" is covered with a sugar glaze after being deep-fried and its mung bean paste filling includes a jasmine essence. Europe. Austria. In Austria, doughnut equivalents are called "Krapfen". They are especially popular during Carnival season (Fasching), and do not have the typical ring shape, but instead are solid and usually filled with apricot jam (traditional) or vanilla cream ("Vanillekrapfen"). A second variant, called "Bauernkrapfen", is also made of yeast dough, and has a thick outside ring, but is very thin in the middle. Belgium. In Belgium, the "smoutebollen" in Dutch, or "croustillons" in French, are similar to the Dutch kind of "oliebollen", but they usually do not contain any fruit, except for apple chunks sometimes. They are typical carnival and fair snacks and are coated with powdered sugar. Czech Republic.
U.S.-style doughnuts are available in the Czech Republic, but earlier doughnuts there were of a solid shape, filled with jelly (strawberry or peach). The shape is similar to doughnuts in Germany or Poland. They are called "Kobliha" ("Koblihy" in plural). They may be filled with nougat or with vanilla custard. There are now many fillings, as well as doughnuts cut in half and unfilled knots with sugar and cinnamon on top. Denmark. In Denmark, U.S.-style doughnuts may be found at various stores, e.g. McDonald's and most gas stations. The Berliner, however, is also available in bakeries. Finland. In Finland, a sweet doughnut is called a "munkki" (the word also means "monk") and is commonly eaten in cafés and cafeteria restaurants. It is sold cold and sometimes filled with jam (like U.S. jelly donuts) or a vanilla sauce. A ring doughnut is also known as "donitsi". A savory form of doughnut is the "lihapiirakka" (literally "meat pie"). Made from a doughnut mixture and deep fried, the end product is more akin to a savory doughnut than any pie known in the English-speaking world. Former Yugoslavia. Doughnuts similar to the Berliner are prepared in the northern Balkans, particularly in Bosnia and Herzegovina, Croatia, North Macedonia and Serbia ("pokladnice" or "krofne"). They are also called "krofna", "krafna" or "krafne", a name derived from the Austrian "Krapfen" for this pastry. In Croatia, they are especially popular during Carnival season and do not have the typical ring shape, but instead are solid. Traditionally, they are filled with jam (apricot or plum). However, they can be filled with vanilla or chocolate cream. Other types of doughnuts are "uštipci" and "fritule". France. The French "beignet", literally "bump", is the French and New Orleans equivalent of a doughnut: a pastry made from deep-fried choux pastry. Germany. In parts of Germany, the doughnut equivalents are called "Berliner" (sg. and pl.), but not in the capital city of Berlin itself and neighboring areas, where they are called "Pfannkuchen" (which is often found misleading by people in the rest of Germany, who use the word "Pfannkuchen" to describe a pancake, which is also the literal translation of it). Both "Berliner" and "Pfannkuchen" are abbreviations of the term "Berliner Pfannkuchen", however. In middle Germany, doughnuts are called "Kreppel" or "Pfannkuchen". In southern Germany, they are also called "Krapfen" and are especially popular during Carnival season ("Karneval"/"Fasching") in southern and middle Germany and on New Year's Eve in northern Germany. A "Berliner" does not have the typical ring shape of a doughnut, but instead is solid and usually filled with jam, while a ring-shaped variant called "Kameruner" is common in Berlin and eastern Germany. "Bismarcks" and "Berlin doughnuts" are also found in Australia, Canada, Denmark, Finland, Switzerland and the United States. Today, U.S.-style doughnuts are also available in Germany, but are less popular than their native counterparts. Greece. In Greece, a doughnut-like snack called "loukoumas" (λουκουμάς), which is spherical and soaked in honey syrup, is available. It is often served with sprinkled cinnamon and grated walnuts or sesame seeds. Hungary. Fánk is a sweet traditional Hungarian cake. The most commonly used ingredients are flour, yeast, butter, egg yolk, rum, salt, milk and oil for frying. The dough is allowed to rise for approximately 30 minutes, resulting in an extremely light pastry. "Fánk" is usually served with powdered sugar and lekvar.
It is supposed that "Fánk" pastry is of the same origin as German Berliner, Dutch "oliebol", and Polish "pączki". Italy. Italian doughnuts include "ciambelle", krapfen from Trentino-Alto Adige, "zippuli" or "zeppole" from Calabria and Campania, "maritozzi" from Latium, above all Rome, "bomboloni" from Tuscany, "frittelle" from Veneto and many others. On the island of Sardinia there is a particular donut, a ring cake called "lorica". Lithuania. In Lithuania, a kind of doughnut called "spurgos" is widely known. Some spurgos are similar to Polish pączki, but some specific recipes, such as cottage cheese doughnuts ("varškės spurgos"), were invented independently. Netherlands. In the Netherlands, "oliebollen", referred to in cookbooks as "Dutch doughnuts", are a type of fritter, with or without raisins or currants, and usually sprinkled with powdered sugar. Variations of the recipe contain slices of apple or other fruits. They are traditionally eaten as part of New Year celebrations. Norway. In Norway, smultring is the prevailing type of doughnut traditionally sold in bakeries, shops, and stalls. However, U.S.-style doughnuts are widely available in larger supermarkets, McDonald's restaurants, 7-elevens and bakeries. The Berliner is more common than the U.S.-style doughnut, and is sold in most supermarkets and bakeries alongside "smultring" doughnuts. Poland. In Poland and parts of the U.S. with a large Polish community, like Chicago and Detroit, the round, jam-filled doughnuts eaten especially—though not exclusively—during the Carnival are called pączki. Pączki have been known in Poland at least since the Middle Ages. Jędrzej Kitowicz described how, during the reign of Augustus III and under the influence of French cooks who came to Poland at that time, the pączki dough fried in Poland was improved, so that pączki became lighter, spongier, and more resilient. Portugal. The malasada is a common type of holeless donut created in Portugal, made of fried dough. In Madeira and the Azores malasadas are eaten on Fat Tuesday. The malasada is also popular in Hawaii and Cape Cod, where it arrived with Portuguese immigrants. Romania. The Romanian dessert "gogoși" consists of fried dough balls similar to filled doughnuts. They are stuffed with chocolate, jam, cheese and other combinations and may be dusted with icing sugar. Russia. In Russia and the other post-Soviet countries, "ponchiki" (, plural form of пончик, "ponchik") or (, especially in St. Petersburg) are a very popular sweet doughnut, with many fast and simple recipes available in Russian cookbooks for making them at home as a breakfast or coffee pastry. Slovenia. In Slovenia, a jam-filled doughnut known as "krofi" is very popular. It is the typical sweet during Carnival time, but is to be found in most bakeries during the whole year. The most famous "krofi" come from the village of Trojane in central Slovenia, and are originally filled with apricot jam. Spain. In Spain, there are two different types of doughnuts. The first one, simply called "donuts", or more traditionally "berlinesas", is a U.S.-style doughnut, i.e., a deep-fried, sweet, soft, ring of flour dough. The second type of doughnut is a traditional pastry called "rosquilla" or "rosquete" (the latter name is typical in the Canary Islands), made of fermented dough and fried or baked in an oven. "Rosquillas" were purportedly introduced in Spain by the Romans. In Spain, there are several variants of them depending on the region where they are prepared and the time of the year they are sold.
In some regions they are considered a special pastry prepared only for Easter. Although overall they are more tightly textured and less sweet than U.S.-style doughnuts, they differ greatly in shape, size and taste from one region to another. The "churro" is a sweet pastry of deep-fried dough similar to a doughnut but shaped as a long, thin, ribbed cylinder rather than a ring or sphere. "Churros" are commonly served dusted in sugar as a snack or with a cup of hot chocolate. Switzerland. In Switzerland, there are "Zigerkrapfen", "Berliner" and "tortelli di San Giuseppe". Sweden. Similar to the Finnish "munkki", the Swedish "munk" is a sweet doughnut commonly eaten as "fika" along with coffee. It is sold cold and is sometimes filled with jam (U.S. jelly) or a vanilla sauce. A ring doughnut is also known as simply "munk". Ukraine. In Ukraine, doughnuts are called "pampushky". "Pampushky" are made of yeast dough containing wheat, rye or buckwheat flour. Traditionally they are baked, but may also be fried. According to William Pokhlyobkin, the technology of making "pampushky" points to German cuisine, and these buns were possibly created by German colonists in Ukraine. United Kingdom. In some parts of Scotland, ring doughnuts are referred to as "doughrings", with the 'doughnut' name being reserved exclusively for the nut-shaped variety. Glazed, twisted rope-shaped doughnuts are known as "yum-yums". It is also possible to buy fudge doughnuts in certain regions of Scotland. Fillings include jam, custard, cream, sweet mincemeat, chocolate and apple. Common ring toppings are sprinkle-iced and chocolate. In Northern Ireland, ring doughnuts are known as "gravy rings", "gravy" being an archaic term for hot cooking oil. North America. Caribbean region. A "kurma" is a small, sweet, fried cube-shaped or rectangular doughnut which originated in Eastern India but is sold in Trinidad and Tobago. Costa Rica. A traditional Puntarenas cream-filled doughnut is round and robust, managing to keep the cream inside liquefied. They are popular in Costa Rica. Mexico. The Mexican "donas" are similar to doughnuts, including the name; the dona is a fried-dough pastry-based snack, commonly covered with powdered brown sugar and cinnamon, white sugar or chocolate. United States and Canada. Frosted, glazed, powdered, Boston cream, coconut, sour cream, cinnamon, chocolate, and jelly are some of the varieties eaten in the United States and Canada. There are also potato doughnuts (sometimes referred to as spudnuts). Doughnuts are ubiquitous in the United States and can be found in most grocery stores, as well as in specialty doughnut shops. They are equally popular in Canada. Canadians eat more doughnuts per capita than any other nation, and Canada has more doughnut shops per capita than any other nation. A popular doughnut in Hawaii is the malasada. Malasadas were brought to the Hawaiian Islands by early Portuguese settlers, and are a variation on Portugal's filhós. They are small, eggy balls of yeast dough deep-fried and coated in sugar. Immigrants have brought various doughnut varieties to the United States. To celebrate Fat Tuesday in eastern Pennsylvania, churches sell a potato starch doughnut called a Fastnacht (or Fasnacht). The treats are so popular there that Fat Tuesday is often called Fastnacht Day. The Polish doughnut, the pączki, is popular in U.S. cities with large Polish communities such as Chicago, Milwaukee, and Detroit.
In regions of the country where apples are widely grown, especially the Northeast and Midwest states, cider doughnuts are a harvest season specialty, especially at orchards open to tourists, where they can be served fresh. Cider doughnuts are a cake doughnut with apple cider in the batter. The use of cider affects both the texture and flavor, resulting in a denser, moister product. They are often coated with either granulated, powdered sugar, or cinnamon sugar. In southern Louisiana, a popular variety of the doughnut is the beignet, a fried, square doughnut served traditionally with powdered sugar. Perhaps the most well-known purveyor of beignets is New Orleans restaurant Cafe Du Monde. In Quebec, homemade doughnuts called "beignes de Noël" are traditional Christmas desserts. Middle East and North Africa. Iran. The Persian "zoolbia" and "bamiyeh" are fritters of various shapes and sizes coated in a sugar syrup. Doughnuts are also made in the home in Iran, referred to as doughnut, even in the plural. Israel. Jelly doughnuts, known as "sufganiyah" (סופגניה, pl. sufganiyot סופגניות) in Israel, have become a traditional Hanukkah food in the recent era, as they are cooked in oil, associated with the holiday account of the miracle of the oil. Traditional "sufganiyot" are filled with red jelly and topped with icing sugar. However, many other varieties exist, with some being filled with "dulce de leche" (particularly common after the South American aliyah early in the 21st century). Morocco. In Morocco, "Sfenj" is a similar pastry eaten sprinkled with sugar or soaked in honey. Tunisia. In Tunisia, traditional pastries similar to doughnuts are "yo-yos". They come in different versions both as balls and in shape of doughnuts. They are deep-fried and covered in a honey syrup or a kind of frosting. Sesame seeds are also used for flavor and decoration along with orange juice and vanilla. Oceania. Australia. In Australia, the doughnut is a popular snack food. Jam doughnuts are particularly popular, especially in Melbourne, Victoria and the Queen Victoria Market, where they are a tradition. Jam doughnuts are similar to a Berliner, but are served hot: red jam (raspberry or strawberry) is injected into the bun before it is deep-fried, and then it is coated with either sugar or sugar mixed with cinnamon as soon as it has been cooked. Jam doughnuts are sometimes also bought frozen. In South Australia, they are known as Berliner or Kitchener and often served in cafes. Popular variants include custard-filled doughnuts, and more recently Nutella-filled doughnuts. Mobile vans that serve doughnuts, traditional or jam, are often seen at spectator events, markets, carnivals and fetes, and by the roadside near high-traffic areas like airports and the car parks of large shopping centres. Traditional cinnamon doughnuts are readily available in Australia from specialized retailers and convenience stores. Doughnuts are a popular choice for schools and other not-for-profit groups to cook and sell as a fundraiser. New Zealand. In New Zealand, the doughnut is a popular food snack available in corner dairies. They are in the form of a long sweet bread roll with a deep cut down its long axis. In this cut is placed a long dollop of sweetened clotted cream and on top of this is a spot of strawberry jam. Doughnuts are of two varieties: fresh cream or mock cream. The rounded variety is widely available as well. South America. Brazil. 
In Brazil, bakeries, grocery stores and pastry shops sell ball-shaped doughnuts popularly known as "sonhos" (lit. dreams). The dessert was brought to Brazil by Portuguese colonizers who had contact with Dutch and German traders. They are the equivalent of today's "bolas de Berlim" (lit. balls of Berlin) in Portugal, but the traditional Portuguese yellow cream was substituted by local dairy and fruit products. They are made of a special type of bread filled with "goiabada" (guava jelly) or milk cream, and covered by white sugar. Chile. The "Berlin" (plural "Berlines") doughnut is popular in Chile because of the large German community. It may be filled with jam or with "manjar", the Chilean version of "dulce de leche". Peru. Peruvian cuisine includes picarones, which are doughnut-shaped fritters made with a squash and sweet potato base. These snacks are almost always served with a drizzle of sweet molasses-based sauce. Sub-Saharan Africa. South Africa. In South Africa, an Afrikaans variation known as the "koeksister" is popular. Another variation, similar in name, is the Cape Malay "koesister", which is soaked in a spiced syrup and coated in coconut. It has a texture similar to more traditional doughnuts as opposed to the Afrikaans variety. A further variation is the "vetkoek", which is also dough deep fried in oil. It is served with mince, syrup, honey or jam. In popular culture. The doughnut has made an appearance in popular culture, particularly in the United States and Australia. References extend to objects or actions that are doughnut-shaped. In film, the doughnut has inspired "Dora's Dunking Doughnuts" (1933), "The Doughnuts" (1963) and . In video games, the doughnut has appeared in games like "The Simpsons Game" and "Donut Dilemma". In the cartoon "¡Mucha Lucha!", there are four things that make up the code of mask wrestling: honor, family, tradition, and doughnuts. Also, in the television sitcom "The Simpsons", Homer Simpson's love affair with doughnuts is a prominent ongoing joke as well as the focal point of more than a few episodes. There is also a children's book "Arnie the Doughnut" and the music album "The Doughnut in Granny's Greenhouse". In films, TV shows, and other popular culture references, police officers are associated with doughnuts, depicted as enjoying them during their coffee break or office hours. This cliché has been parodied in the film "", where Officer Zed instructs new recruits in how to "properly" consume their doughnuts with coffee. It is also parodied in the television series "Twin Peaks", where the police station always has a large supply of doughnuts. In the video game "Neuromancer", there is a "Donut World" shop, where only policemen are allowed. During a citywide "lockdown" after the Boston Marathon bombing, a handful of selected Dunkin' Donuts locations were ordered to remain open to serve police and first responders despite the closing of the vast majority of city businesses. Cops & Doughnuts, a doughnut shop in Clare, Michigan, is notable for being owned and operated by current and former members of the city's police force. Tim Hortons is the most popular Canadian doughnut and coffee franchise, and one of the most successful quick service restaurants in the country. In the Second City Television sketch comedy "The Great White North" featuring the fictional stereotypically Canadian brothers Bob and Doug MacKenzie (and in their film "Strange Brew"), doughnuts play a role in the duo's comedy. Industry by country. Australia.
Donut King is Australia's largest retailer of doughnuts. A Guinness Book of Records largest doughnut made up of 90,000 individual doughnuts was set in Sydney in 2007 as part of a celebration for the release of "The Simpsons Movie". Canada. Per capita, Canadians consume the most doughnuts, and Canada has the most doughnut stores per capita. United States. Within the United States, the Providence metropolitan area was cited as having the most doughnut shops per capita (25.3 doughnut shops per 100,000 people) as of 13 January 2010. National Doughnut Day celebrates the doughnut's history and role in popular culture. There is a race in Staunton, Illinois, featuring doughnuts, called the Tour de Donut. Pink boxes. In the US, especially in Southern California, fresh doughnuts sold by the dozen at local doughnut shops are typically packaged in generic pink boxes. This phenomenon has been attributed to Ted Ngoy and Ning Yen, refugees of the Cambodian genocide who began to transform the local doughnut shop industry in 1976. They proved so adept at the business and in training fellow Chinese Cambodian refugees to follow suit that these local doughnut shops soon dominated native franchises such as Winchell's Donuts. Ngoy and Yen allegedly planned to purchase boxes of a lucky red color rather than the standard white, but settled on a leftover pink stock because of its lower cost. In the mid-1970s, pink doughnut boxes were already a common sight in the eastern and midwestern United States, due to the fact that Dunkin' Donuts used a solid pink color for its boxes at that time. (It switched to a different box design sometime after 1975.) But the chain did not begin to establish a major presence in California until the 2010s. Owing to the success of Ngoy and Yen's business, the color soon became a recognizable standard in California. Due to the locality of Hollywood, the pink boxes frequently appeared as film and television props and were thus transmitted into popular culture. Holidays and festivals. National Doughnut Day. National Doughnut Day, also known as National Donut Day, celebrated in the United States of America, is on the first Friday of June each year, succeeding the Doughnut Day event created by The Salvation Army in 1938 to honor those of their members who served doughnuts to soldiers during World War I. About 250 Salvation Army volunteers went to France. Because of the difficulties of providing freshly baked goods from huts established in abandoned buildings near the front lines, the two Salvation Army volunteers (Ensign Margaret Sheldon and Adjutant Helen Purviance) came up with the idea of providing doughnuts. These are reported to have been an "instant hit", and "soon many soldiers were visiting The Salvation Army huts". Margaret Sheldon wrote of one busy day: "Today I made 22 pies, 300 doughnuts, 700 cups of coffee." Soon, the women who did this work became known by the servicemen as "Doughnut Dollies". See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\tau = k D^n" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "D" }, { "math_id": 4, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=74803
74805735
Consumer-resource model
Class of ecological models In theoretical ecology and nonlinear dynamics, consumer-resource models (CRMs) are a class of ecological models in which a community of consumer species compete for a common pool of resources. Instead of species interacting directly, all species-species interactions are mediated through resource dynamics. Consumer-resource models have served as fundamental tools in the quantitative development of theories of niche construction, coexistence, and biological diversity. These models can be interpreted as a quantitative description of a single trophic level. A general consumer-resource model consists of "M" resources whose abundances are formula_0 and "S" consumer species whose populations are formula_1. A general consumer-resource model is described by the system of coupled ordinary differential equations,formula_2 where formula_3, depending only on resource abundances, is the per-capita growth rate of species formula_4, and formula_5 is the growth rate of resource formula_6. An essential feature of CRMs is that species growth rates and populations are mediated through resources and there are no explicit species-species interactions. Through resource interactions, there are emergent inter-species interactions. Originally introduced by Robert H. MacArthur and Richard Levins, consumer-resource models have found success in formalizing ecological principles and modeling experiments involving microbial ecosystems. Models. Niche models. Niche models are a notable class of CRMs which are described by the system of coupled ordinary differential equations, formula_7 where formula_8 is a vector abbreviation for resource abundances, formula_3 is the per-capita growth rate of species formula_4, formula_9 is the growth rate of resource formula_6 in the absence of consumption, and formula_10 is the rate per unit species population that species formula_4 depletes the abundance of resource formula_6 through consumption. In this class of CRMs, consumer species' impacts on resources are not explicitly coordinated; however, there are implicit interactions. MacArthur consumer-resource model (MCRM). The MacArthur consumer-resource model (MCRM), named after Robert H. MacArthur, is a foundational CRM for the development of niche and coexistence theories. The MCRM is given by the following set of coupled ordinary differential equations:formula_11where formula_12 is the relative preference of species formula_4 for resource formula_6 and also the relative amount by which resource formula_6 is depleted by the consumption of consumer species formula_4; formula_13 is the steady-state carrying capacity of resource formula_6 in the absence of consumption (i.e., when formula_12 is zero); formula_14 and formula_15 are time-scales for species and resource dynamics, respectively; formula_16 is the quality of resource formula_6; and formula_17 is the natural mortality rate of species formula_4. This model is said to have self-replenishing resource dynamics because when formula_18, each resource exhibits independent logistic growth. Given positive parameters and initial conditions, this model approaches a unique uninvadable steady state (i.e., a steady state in which the re-introduction of a species which has been driven to extinction or a resource which has been depleted leads to the re-introduced species or resource dying out again). Steady states of the MCRM satisfy the competitive exclusion principle: the number of coexisting species is less than or equal to the number of non-depleted resources.
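To make the MCRM definition concrete, the following is a minimal Python simulation sketch; the community sizes, parameter ranges, and the helper name mcrm_rhs are illustrative assumptions rather than values from the literature, and any standard ODE integrator could be substituted for scipy's solve_ivp.

```python
# Minimal sketch of the MacArthur consumer-resource model (MCRM) with illustrative parameters.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
S, M = 5, 8                          # numbers of consumer species and resources (assumed)
c = rng.uniform(0.0, 1.0, (S, M))    # preferences c_{i,alpha}
K = rng.uniform(5.0, 10.0, M)        # resource carrying capacities K_alpha
r = np.ones(M)                       # resource rate constants r_alpha
w = np.ones(M)                       # resource qualities w_alpha
m = rng.uniform(1.0, 2.0, S)         # species mortality rates m_i
tau = np.ones(S)                     # species timescales tau_i

def mcrm_rhs(t, y):
    N, R = y[:S], y[S:]
    dN = N / tau * (c @ (w * R) - m)            # dN_i/dt of the MCRM
    dR = r / K * (K - R) * R - R * (N @ c)      # dR_alpha/dt of the MCRM
    return np.concatenate([dN, dR])

y0 = np.concatenate([np.full(S, 0.1), K.copy()])  # start resources at carrying capacity
sol = solve_ivp(mcrm_rhs, (0.0, 500.0), y0, rtol=1e-8, atol=1e-10)
N_ss, R_ss = sol.y[:S, -1], sol.y[S:, -1]
print("surviving species:", int(np.sum(N_ss > 1e-6)),
      "non-depleted resources:", int(np.sum(R_ss > 1e-6)))
```

The printed counts illustrate the competitive exclusion statement above: the number of surviving species should not exceed the number of non-depleted resources.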
In other words, the number of simultaneously occupiable ecological niches is equal to the number of non-depleted resources. Externally supplied resources model. The externally supplied resource model is similar to the MCRM except the resources are provided at a constant rate from an external source instead of being self-replenished. This model is also sometimes called the linear resource dynamics model. It is described by the following set of coupled ordinary differential equations:formula_19where all the parameters shared with the MCRM are the same, and formula_20 is the rate at which resource formula_6 is supplied to the ecosystem. In the eCRM, in the absence of consumption, formula_21 decays to formula_20 exponentially with timescale formula_15. This model is also known as a chemostat model. Tilman consumer-resource model (TCRM). The Tilman consumer-resource model (TCRM), named after G. David Tilman, is similar to the externally supplied resources model except the rate at which a species depletes a resource is no longer proportional to the present abundance of the resource. The TCRM is the foundational model for Tilman's R* rule. It is described by the following set of coupled ordinary differential equations:formula_22where all parameters are shared with the MCRM. In the TCRM, resource abundances can become nonphysically negative. Microbial consumer-resource model (MiCRM). The microbial consumer resource model describes a microbial ecosystem with externally supplied resources where consumption can produce metabolic byproducts, leading to potential cross-feeding. It is described by the following set of coupled ODEs:formula_23where all parameters shared with the MCRM have similar interpretations; formula_24 is the fraction of the byproducts due to consumption of resource formula_25 which are converted to resource formula_6 and formula_26 is the "leakage fraction" of resource formula_6 governing how much of the resource is released into the environment as metabolic byproducts. Symmetric interactions and optimization. MacArthur's Minimization Principle. For the MacArthur consumer resource model (MCRM), MacArthur introduced an optimization principle to identify the uninvadable steady state of the model (i.e., the steady state so that if any species with zero population is re-introduced, it will fail to invade, meaning the ecosystem will return to said steady state). To derive the optimization principle, one assumes resource dynamics become sufficiently fast (i.e., formula_27) that they become entrained to species dynamics and are constantly at steady state (i.e., formula_28) so that formula_21 is expressed as a function of formula_29. With this assumption, one can express species dynamics as, formula_30 where formula_31 denotes a sum over resource abundances which satisfy formula_32. The above expression can be written as formula_33, where,formula_34 At un-invadable steady state formula_35 for all surviving species formula_4 and formula_36 for all extinct species formula_4. Minimum Environmental Perturbation Principle (MEPP). MacArthur's Minimization Principle has been extended to the more general Minimum Environmental Perturbation Principle (MEPP) which maps certain niche CRM models to constrained optimization problems. When the population growth conferred upon a species by consuming a resource is related to the impact the species' consumption has on the resource's abundance through the equation,formula_37 species-resource interactions are said to be "symmetric". 
In the above equation formula_38 and formula_39 are arbitrary functions of resource abundances. When this symmetry condition is satisfied, it can be shown that there exists a function formula_40 such that:formula_41After determining this function formula_42, the steady-state uninvadable resource abundances and species populations are the solution to the constrained optimization problem:formula_43The species populations are the Lagrange multipliers for the constraints on the second line. This can be seen by looking at the KKT conditions, taking formula_29 to be the Lagrange multipliers:formula_44Lines 1, 3, and 4 are the statements of feasibility and uninvadability: if formula_45, then formula_46 must be zero, otherwise the system would not be at steady state, and if formula_47, then formula_46 must be non-positive, otherwise species formula_48 would be able to invade. Line 2 is the stationarity condition and the steady-state condition for the resources in niche CRMs. The function formula_49 can be interpreted as a distance by defining the point in the state space of resource abundances at which it is zero, formula_50, to be its minimum. The Lagrangian for the dual problem which leads to the above KKT conditions is,formula_51 In this picture, the unconstrained value of formula_52 that minimizes formula_40 (i.e., the steady-state resource abundances in the absence of any consumers) is known as the resource supply vector. Geometric perspectives. The steady states of consumer resource models can be analyzed using geometric methods in the space of resource abundances. Zero net-growth isoclines (ZNGIs). For a community to satisfy the uninvadability and steady-state conditions, the steady-state resource abundances (denoted formula_53) must satisfy, formula_54 for all species formula_55. The inequality is saturated if and only if species formula_55 survives. Each of these conditions specifies a region in the space of possible steady-state resource abundances, and the realized steady-state resource abundance is restricted to the intersection of these regions. The boundaries of these regions, specified by formula_56, are known as the zero net-growth isoclines (ZNGIs). If species formula_57 survive, then the steady-state resource abundances must satisfy, formula_58. The structure and locations of the intersections of the ZNGIs thus determine which species can feasibly coexist; the realized steady-state community is dependent on the supply of resources and can be analyzed by examining coexistence cones. Coexistence cones. The structure of ZNGI intersections determines which species can feasibly coexist but does not determine which set of coexisting species will be realized. Coexistence cones determine which species will survive in an ecosystem given a resource supply vector. A coexistence cone generated by a set of species formula_59 is defined to be the set of possible resource supply vectors which will lead to a community containing precisely the species formula_60. To see the cone structure, consider that in the MacArthur or Tilman models, the steady-state non-depleted resource abundances must satisfy,formula_61 where formula_62 is a vector containing the carrying capacities/supply rates, and formula_63 is the formula_64th row of the consumption matrix formula_65, considered as a vector.
As the surviving species are exactly those with positive abundances, the sum term becomes a sum only over surviving species, and the right-hand side resembles the expression for a convex cone with apex formula_66 whose generating vectors are the formula_67 for the surviving species formula_64. Complex ecosystems. In an ecosystem with many species and resources, the behavior of consumer-resource models can be analyzed using tools from statistical physics, particularly mean-field theory and the cavity method. In the large ecosystem limit, there is an explosion of the number of parameters. For example, in the MacArthur model, formula_68 parameters are needed. In this limit, parameters may be considered to be drawn from some distribution which leads to a distribution of steady-state abundances. These distributions of steady-state abundances can then be determined by deriving mean-field equations for random variables representing the steady-state abundances of a randomly selected species and resource. MacArthur consumer resource model cavity solution. In the MCRM, the model parameters can be taken to be random variables with means and variances:formula_69 With this parameterization, in the thermodynamic limit (i.e., formula_70 with formula_71), the steady-state resource and species abundances are modeled as random variables, formula_72, which satisfy the self-consistent mean-field equations,formula_73 where formula_74 are all moments which are determined self-consistently, formula_75 are independent standard normal random variables, and formula_76 and formula_77 are average susceptibilities which are also determined self-consistently. This mean-field framework can determine the moments and exact form of the abundance distribution, the average susceptibilities, and the fraction of species and resources that survive at a steady state. Similar mean-field analyses have been performed for the externally supplied resources model, the Tilman model, and the microbial consumer-resource model. These techniques were first developed to analyze the random generalized Lotka–Volterra model. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
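The cone condition described above can be checked numerically with a non-negative least-squares solve: a supply vector lies in the coexistence cone of a candidate set of species exactly when its offset from the steady-state resource abundances can be written as a non-negative combination of the species' consumption vectors. The following sketch assumes the steady-state abundances are already known (in a full calculation they would come from solving the ZNGI equations); all numerical values are illustrative.

```python
# Sketch: test whether a resource supply vector lies in the coexistence cone of a candidate
# set of surviving species (MacArthur/Tilman form K = R* + sum_i N_i C_i with N_i >= 0).
# R_star, the consumption vectors, and K below are illustrative, not values from the article.
import numpy as np
from scipy.optimize import nnls

C = np.array([[1.0, 0.2, 0.1],       # consumption vectors C_i of the candidate surviving species
              [0.1, 1.0, 0.3]])
R_star = np.array([0.5, 0.4, 0.8])   # assumed steady-state resource abundances
K = np.array([1.7, 2.6, 1.5])        # candidate supply / carrying-capacity vector

N, residual = nnls(C.T, K - R_star)  # find N_i >= 0 with sum_i N_i C_i = K - R*
in_cone = residual < 1e-8 and np.all(N > 0)
print("N =", N, "| in coexistence cone with all candidate species present:", in_cone)
```

Here the supply vector was constructed to lie in the cone, so nnls recovers strictly positive abundances with essentially zero residual; perturbing K so that it leaves the cone makes the residual strictly positive.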
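MacArthur's minimization principle described earlier can also be used directly: minimizing the function formula_34 over non-negative species abundances yields the uninvadable steady state of the MCRM when resource dynamics are fast. The sketch below uses a generic bound-constrained optimizer; the parameter values and the survival tolerance are illustrative assumptions.

```python
# Sketch of MacArthur's minimization principle for the MCRM (illustrative parameters).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
S, M = 5, 8
c = rng.uniform(0.0, 1.0, (S, M))    # c_{i,alpha}
K = rng.uniform(5.0, 10.0, M)
r = np.ones(M)
w = np.ones(M)
m = rng.uniform(1.0, 2.0, S)

def Q(N):
    # Q({N_i}) = 1/2 * sum over non-depleted resources of (K_a w_a / r_a) * (r_a - sum_j c_ja N_j)^2
    #            + sum_i m_i N_i, with the sum restricted to resources whose bracket is positive.
    slack = r - N @ c                            # r_alpha - sum_j N_j c_{j,alpha}
    slack = np.where(slack > 0.0, slack, 0.0)
    return 0.5 * np.sum(K * w / r * slack**2) + np.sum(m * N)

res = minimize(Q, x0=np.full(S, 0.1), bounds=[(0.0, None)] * S)
N_star = res.x
print("surviving species (N_i > 0):", np.where(N_star > 1e-6)[0])
```

At the minimizer, species with positive abundance are the survivors; under the fast-resource assumption the same set should be obtained by integrating the MCRM dynamics directly.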
[ { "math_id": 0, "text": "R_1,\\dots,R_M" }, { "math_id": 1, "text": "N_1,\\dots,N_S" }, { "math_id": 2, "text": "\n\\begin{align}\n\\frac{\\mathrm dN_i}{\\mathrm dt}\n&=\nN_i g_i(R_1,\\dots,R_M),\n&&\\qquad i =1 ,\\dots,S,\n\\\\\n\\frac{\\mathrm{d}R_\\alpha}{\\mathrm{d}t}\n&=\nf_\\alpha(R_1,\\dots,R_M,N_1,\\dots,N_S), &&\\qquad \\alpha = 1,\\dots,M\n\\end{align}\n" }, { "math_id": 3, "text": "g_i" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "f_\\alpha" }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "\n\\begin{align}\n\\frac{\\mathrm dN_i}{\\mathrm dt} \n&=\nN_i g_i(\\mathbf R), &&\\qquad i =1,\\dots,S,\\\\\n\\frac{\\mathrm dR_\\alpha}{\\mathrm dt}\n&=\nh_\\alpha(\\mathbf R) + \\sum_{i=1}^S N_i q_{i\\alpha}(\\mathbf R),\n&&\\qquad \\alpha = 1,\\dots,M,\n\\end{align}\n" }, { "math_id": 8, "text": "\\mathbf R \\equiv (R_1,\\dots,R_M)" }, { "math_id": 9, "text": "h_\\alpha" }, { "math_id": 10, "text": "-q_{i\\alpha}" }, { "math_id": 11, "text": "\\begin{align}\n\\frac{\\mathrm dN_i}{\\mathrm dt} &= \n\\tau_i^{-1} N_i \\left(\n\\sum_{\\alpha = 1}^M w_\\alpha c_{i\\alpha} R_\\alpha - m_i\n\\right),\n&&\\qquad i = 1,\\dots,S,\n\\\\\n\\frac{\\mathrm dR_\\alpha}{\\mathrm dt} &=\n\\frac{r_\\alpha}{K_\\alpha} \\left(\nK_\\alpha - R_\\alpha \n\\right)R_\\alpha\n-\n\\sum_{i=1}^S N_i c_{i\\alpha}R_\\alpha,\n&&\n\\qquad \\alpha = 1,\\dots,M,\n\\end{align}" }, { "math_id": 12, "text": "c_{i\\alpha}" }, { "math_id": 13, "text": "K_\\alpha" }, { "math_id": 14, "text": "\\tau_i" }, { "math_id": 15, "text": "r_\\alpha^{-1}" }, { "math_id": 16, "text": "w_\\alpha" }, { "math_id": 17, "text": "m_i" }, { "math_id": 18, "text": "c_{i\\alpha} = 0" }, { "math_id": 19, "text": "\\begin{align}\n\\frac{\\mathrm dN_i}{\\mathrm dt} &= \n\\tau_i^{-1} N_i \\left(\n\\sum_{\\alpha = 1}^M w_\\alpha c_{i\\alpha} R_\\alpha - m_i\n\\right),\n&&\\qquad i = 1,\\dots,S,\n\\\\\n\\frac{\\mathrm dR_\\alpha}{\\mathrm dt} &=\nr_\\alpha (\\kappa_\\alpha - R_\\alpha)\n-\n\\sum_{i=1}^S N_i c_{i\\alpha}R_\\alpha,\n&&\n\\qquad \\alpha = 1,\\dots,M,\n\\end{align}" }, { "math_id": 20, "text": "\\kappa_\\alpha" }, { "math_id": 21, "text": "R_\\alpha" }, { "math_id": 22, "text": "\\begin{align}\n\\frac{\\mathrm dN_i}{\\mathrm dt} &= \n\\tau_i^{-1} N_i \\left(\n\\sum_{\\alpha = 1}^M w_\\alpha c_{i\\alpha} R_\\alpha - m_i\n\\right),\n&&\\qquad i = 1,\\dots,S,\n\\\\\n\\frac{\\mathrm dR_\\alpha}{\\mathrm dt} &=\nr_\\alpha (K_\\alpha - R_\\alpha)\n-\n\\sum_{i=1}^S N_i c_{i\\alpha},\n&&\n\\qquad \\alpha = 1,\\dots,M,\n\\end{align}" }, { "math_id": 23, "text": "\\begin{align}\n\\frac{\\mathrm dN_i}{\\mathrm dt} &= \n\\tau_i^{-1} N_i \\left(\n\\sum_{\\alpha = 1}^M (1-l_\\alpha) w_\\alpha c_{i\\alpha} R_\\alpha - m_i\n\\right),\n&&\\qquad i = 1,\\dots,S,\n\\\\\n\\frac{\\mathrm dR_\\alpha}{\\mathrm dt} &=\n\\kappa_\\alpha - r R_\\alpha\n-\n\\sum_{i=1}^S N_i c_{i\\alpha}R_\\alpha\n+\n\\sum_{i=1}^S\\sum_{\\beta = 1}^M\nN_i D_{\\alpha\\beta} l_\\beta \\frac{w_\\beta}{w_\\alpha} c_{i\\beta} R_\\beta,\n&&\n\\qquad \\alpha = 1,\\dots,M, \n\\end{align}" }, { "math_id": 24, "text": "D_{\\alpha\\beta}" }, { "math_id": 25, "text": "\\beta" }, { "math_id": 26, "text": "l_\\alpha" }, { "math_id": 27, "text": "r_\\alpha \\gg 1" }, { "math_id": 28, "text": "{\\mathrm d}R_\\alpha/{\\mathrm d}t = 0" }, { "math_id": 29, "text": "N_i" }, { "math_id": 30, "text": "\n\\frac{\\mathrm dN_i}{\\mathrm dt}\n=\n\\tau_i^{-1}\nN_i\n\\left[\n\\sum_{\\alpha \\in M^\\ast} r_\\alpha^{-1} K_\\alpha w_\\alpha c_{i\\alpha}\\left(r_\\alpha - \\sum_{j=1}^S 
N_j c_{j\\alpha}\n\\right)\n-m_i\n\\right],\n" }, { "math_id": 31, "text": "\\sum_{\\alpha \\in M^\\ast}" }, { "math_id": 32, "text": "R_\\alpha = r_\\alpha - \\sum_{j=1}^S N_j c_{j\\alpha} \\geq 0" }, { "math_id": 33, "text": "\\mathrm{d}N_i/\\mathrm{d}t=-\\tau_i^{-1}N_i \\,\\partial Q/\\partial N_i" }, { "math_id": 34, "text": "\nQ(\\{N_i\\})\n=\n\\frac{1}{2}\n\\sum_{\\alpha \\in M^\\ast}\nr_\\alpha^{-1}K_\\alpha w_\\alpha\n\\left(\nr_\\alpha - \\sum_{j=1}^S c_{j\\alpha} N_j\n\\right)^2\n+\n\\sum_{i=1}^S m_i N_i.\n" }, { "math_id": 35, "text": "\\partial Q/\\partial N_i = 0" }, { "math_id": 36, "text": "\\partial Q/\\partial N_i > 0" }, { "math_id": 37, "text": "q_{i\\alpha}(\\mathbf R) = - a_i(\\mathbf R)b_\\alpha(\\mathbf R) \\frac{\\partial g_i}{\\partial R_\\alpha}\n," }, { "math_id": 38, "text": "a_i" }, { "math_id": 39, "text": "b_\\alpha" }, { "math_id": 40, "text": "d(\\mathbf R)" }, { "math_id": 41, "text": "\\frac{\\partial d}{\\partial R_\\alpha}\n=\n-\\frac{h_\\alpha(\\mathbf R)}{b_\\alpha (\\mathbf R)}." }, { "math_id": 42, "text": "d" }, { "math_id": 43, "text": "\\begin{align}\n\\min_{\\mathbf R}& \\; d(\\mathbf R)&&\\\\\n\\text{s.t.,}&\\; g_i(\\mathbf R) \\leq 0,&&\\qquad i=1,\\dots,S,\\\\\n&\\; R_\\alpha \\geq 0,&&\\qquad \\alpha =1 ,\\dots ,M.\n\\end{align}" }, { "math_id": 44, "text": "\\begin{align}\n0 &= N_i g_i(\\mathbf R), && \\qquad i =1,\\dots,S,\\\\\n0 &= \\frac{\\partial d}{\\partial R_\\alpha} - \\sum_{i=1}^S N_i \\frac{\\partial g_i}{\\partial R_\\alpha},&&\\qquad \\alpha = 1,\\dots,M,\\\\\n0 &\\geq g_i(\\mathbf R), && \\qquad i =1,\\dots,S,\\\\\n0 &\\leq N_i ,&& \\qquad i =1,\\dots,S.\n\\end{align}" }, { "math_id": 45, "text": "\\overline N_i > 0" }, { "math_id": 46, "text": "g_i(\\mathbf R)" }, { "math_id": 47, "text": "\\overline N_i = 0 " }, { "math_id": 48, "text": "i " }, { "math_id": 49, "text": "d(\\mathbf R) " }, { "math_id": 50, "text": "\\mathbf R_0 " }, { "math_id": 51, "text": "L(\\mathbf R,\\{N_i\\}) = \nd(\\mathbf R) - \\sum_{i = 1}^S N_i g_i(\\mathbf R). " }, { "math_id": 52, "text": "\\mathbf R" }, { "math_id": 53, "text": "\n\\mathbf R^\\star \n" }, { "math_id": 54, "text": "\ng_i(\\mathbf R^\\star) \\leq 0,\n" }, { "math_id": 55, "text": "\ni\n" }, { "math_id": 56, "text": "\ng_i(\\mathbf R^\\star) = 0\n" }, { "math_id": 57, "text": "\ni = 1,\\dots,S^\\star \n" }, { "math_id": 58, "text": "\ng_1(\\mathbf R^\\star),\\ldots, g_{S^\\star}(\\mathbf R^\\star) = 0 \n" }, { "math_id": 59, "text": " i = 1,\\ldots, S^\\star " }, { "math_id": 60, "text": " i =1,\\ldots,S^\\star " }, { "math_id": 61, "text": " \\mathbf K = \\mathbf R^\\star + \\sum_{i=1}^S N_i \\mathbf C_i," }, { "math_id": 62, "text": " \\mathbf K" }, { "math_id": 63, "text": " \\mathbf C_i = (c_{i1},\\ldots,c_{iM^\\star})" }, { "math_id": 64, "text": " i" }, { "math_id": 65, "text": " c_{i\\alpha\n}" }, { "math_id": 66, "text": " \\mathbf R^\\star" }, { "math_id": 67, "text": " \\mathbf C_i" }, { "math_id": 68, "text": "O(SM)" }, { "math_id": 69, "text": "\\langle c_{i\\alpha}\\rangle = \\mu/M,\\quad \\operatorname{var}(c_{i\\alpha}) = \\sigma^2/M,\n\\quad \\langle m_i \\rangle = m, \\quad \\operatorname{var}(m_i) = \\sigma_m^2,\n\\quad \\langle K_\\alpha\\rangle = K,\\quad\\operatorname{var}(K_\\alpha) = \\sigma_K^2." 
}, { "math_id": 70, "text": "M,S \\to \\infty " }, { "math_id": 71, "text": "S/M = \\Theta(1)" }, { "math_id": 72, "text": "N, R" }, { "math_id": 73, "text": "\\begin{aligned}\n0 &= R(K - \\mu \\tfrac{S}{M} \\langle N\\rangle - R + \\sqrt{\\sigma_K^2 + \\tfrac{S}{M} \\sigma^2 \\langle N^2\\rangle} Z_R + \\sigma^2 \\tfrac{S}{M} \\nu R ), \\\\\n0 &= N(\\mu \\langle R\\rangle - m - \\sigma^2 \\chi N + \\sqrt{\\sigma^2 \\langle R^2\\rangle + \\sigma_m^2} Z_N ),\n\\end{aligned}" }, { "math_id": 74, "text": "\\langle N\\rangle, \\langle N^2\\rangle, \\langle R\\rangle, \\langle R^2\\rangle" }, { "math_id": 75, "text": "Z_R,Z_N" }, { "math_id": 76, "text": "\\nu = \\langle \\partial N/\\partial m \\rangle" }, { "math_id": 77, "text": "\\chi = \\langle \\partial R/\\partial K \\rangle" } ]
https://en.wikipedia.org/wiki?curid=74805735
74806003
Gaoyong Zhang
American mathematician Gaoyong Zhang is an American mathematician. He is a professor at the Courant Institute of Mathematical Sciences at New York University in New York City. His main research interests are convex geometry and its connections with analysis and information theory. Biography. Gaoyong Zhang graduated from Temple University in Philadelphia with a PhD in 1995. His advisor was Eric Grinberg. Before he became a professor at the Courant Institute at NYU, he was a member of the Institute for Advanced Study. Zhang became an Inaugural Fellow of the American Mathematical Society in 2012. He is a member of the editorial board at Advanced Nonlinear Studies (De Gruyter) and at the Proceedings of the American Mathematical Society. Work. Gaoyong Zhang is known for his inverse of the Petty projection inequality, one of the few inequalities in convex geometry where simplices were proved to be extremals. He obtained a positive solution for the Busemann–Petty problem in formula_0. He is known for his contributions (in collaboration with Erwin Lutwak and Deane Yang) to the Lp Brunn–Minkowski theory and, in particular, his solution to the logarithmic Minkowski problem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "R^4" } ]
https://en.wikipedia.org/wiki?curid=74806003
7482029
Completely distributive lattice
In the mathematical area of order theory, a completely distributive lattice is a complete lattice in which arbitrary joins distribute over arbitrary meets. Formally, a complete lattice "L" is said to be completely distributive if, for any doubly indexed family {"x""j","k" : "j" in "J", "k" in "K""j"} of elements of "L", we have formula_0 where "F" is the set of choice functions "f" choosing for each index "j" of "J" some index "f"("j") in "K""j". Complete distributivity is a self-dual property, i.e. dualizing the above statement yields the same class of complete lattices. Alternative characterizations. Various characterizations exist. For example, the following is an equivalent law that avoids the use of choice functions. For any set "S" of sets, we define the set "S"# to be the set of all subsets "X" of the complete lattice that have non-empty intersection with all members of "S". We can then define complete distributivity via the statement formula_1 The operator ( )# might be called the crosscut operator. This version of complete distributivity only implies the original notion when admitting the Axiom of Choice. Properties. In addition, it is known that the following statements are equivalent for any complete lattice "L": Direct products of [0,1], i.e. sets of all functions from some set "X" to [0,1] ordered pointwise, are also called "cubes". Free completely distributive lattices. Every poset "C" can be completed in a completely distributive lattice. A completely distributive lattice "L" is called the free completely distributive lattice over a poset "C" if and only if there is an order embedding formula_2 such that for every completely distributive lattice "M" and monotonic function formula_3, there is a unique complete homomorphism formula_4 satisfying formula_5. For every poset "C", the free completely distributive lattice over a poset "C" exists and is unique up to isomorphism. This is an instance of the concept of free object. Since a set "X" can be considered as a poset with the discrete order, the above result guarantees the existence of the free completely distributive lattice over the set "X".
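For a finite example, the crosscut condition can be verified directly by brute force. The Python sketch below checks the identity on the powerset lattice of a three-element set (a completely distributive lattice), with meet given by intersection and join by union; the particular family "S" is an arbitrary illustrative choice.

```python
# Sketch: check the crosscut form of complete distributivity on the powerset lattice of {0, 1, 2}.
from itertools import chain, combinations

base = frozenset({0, 1, 2})
lattice = [frozenset(c) for r in range(len(base) + 1) for c in combinations(base, r)]

def meet(elems):            # arbitrary meet; the empty meet is the top element
    out = base
    for e in elems:
        out = out & e
    return out

def join(elems):            # arbitrary join; the empty join is the bottom element
    out = frozenset()
    for e in elems:
        out = out | e
    return out

def crosscut(S):
    """S#: all subsets X of the lattice having non-empty intersection with every member of S."""
    subsets = chain.from_iterable(combinations(lattice, r) for r in range(len(lattice) + 1))
    return [X for X in subsets if all(set(X) & set(Y) for Y in S)]

S = [[frozenset({0}), frozenset({1, 2})], [frozenset({0, 1}), frozenset({2})]]
lhs = meet(join(Y) for Y in S)                  # meet of joins over S
rhs = join(meet(Z) for Z in crosscut(S))        # join of meets over S#
print(lhs == rhs, lhs)                          # the two sides agree on this lattice
```

The brute-force enumeration of S# is exponential in the lattice size, so this is only practical for very small lattices, but it makes the role of the crosscut operator explicit.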
[ { "math_id": 0, "text": "\\bigwedge_{j\\in J}\\bigvee_{k\\in K_j} x_{j,k} = \n \\bigvee_{f\\in F}\\bigwedge_{j\\in J} x_{j,f(j)}" }, { "math_id": 1, "text": "\\begin{align}\\bigwedge \\{ \\bigvee Y \\mid Y\\in S\\} = \\bigvee\\{ \\bigwedge Z \\mid Z\\in S^\\# \\}\\end{align}" }, { "math_id": 2, "text": "\\phi:C\\rightarrow L" }, { "math_id": 3, "text": "f:C\\rightarrow M" }, { "math_id": 4, "text": "f^*_\\phi:L\\rightarrow M" }, { "math_id": 5, "text": "f=f^*_\\phi\\circ\\phi" }, { "math_id": 6, "text": "(\\mathcal{P}(X),\\subseteq)" } ]
https://en.wikipedia.org/wiki?curid=7482029
74820340
Random generalized Lotka–Volterra model
Model in theoretical ecology and statistical mechanics The random generalized Lotka–Volterra model (rGLV) is an ecological model and random set of coupled ordinary differential equations where the parameters of the generalized Lotka–Volterra equation are sampled from a probability distribution, analogously to quenched disorder. The rGLV models dynamics of a community of species in which each species' abundance grows towards a carrying capacity but is depleted due to competition from the presence of other species. It is often analyzed in the many-species limit using tools from statistical physics, in particular from spin glass theory. The rGLV has been used as a tool to analyze emergent macroscopic behavior in microbial communities with dense, strong interspecies interactions. The model has served as a context for theoretical investigations studying diversity-stability relations in community ecology and properties of static and dynamic coexistence. Dynamical behavior in the rGLV has been mapped experimentally in community microcosms. The rGLV model has also served as an object of interest for the spin glass and disordered systems physics community to develop new techniques and numerical methods. Definition. The random generalized Lotka–Volterra model is written as the system of coupled ordinary differential equations,formula_0where formula_1 is the abundance of species formula_2, formula_3 is the number of species, formula_4 is the carrying capacity of species formula_2 in the absence of interactions, formula_5 sets a timescale, and formula_6 is a random matrix whose entries are random variables with mean formula_7, variance formula_8, and correlations formula_9 for formula_10 where formula_11. The interaction matrix, formula_6, may be parameterized as,formula_12where formula_13 are standard random variables (i.e., zero mean and unit variance) with formula_14 for formula_10. The matrix entries may have any distribution with common finite first and second moments and will yield identical results in the large formula_3 limit due to the central limit theorem. The carrying capacities may also be treated as random variables with formula_15 Analyses by statistical physics-inspired methods have revealed phase transitions between different qualitative behaviors of the model in the many-species limit. In some cases, this may include transitions between the existence of a unique globally-attractive fixed point and chaotic, persistent fluctuations. Steady-state abundances in the thermodynamic limit. In the thermodynamic limit (i.e., the community has a very large number of species) where a unique globally-attractive fixed point exists, the distribution of species abundances can be computed using the cavity method while assuming the system is self-averaging. The self-averaging assumption means that the distribution of any one species' abundance between samplings of model parameters matches the distribution of species abundances within a single sampling of model parameters. In the cavity method, an additional mean-field species formula_16 is introduced and the response of the system is approximated linearly. The cavity calculation yields a self-consistent equation describing the distribution of species abundances as a mean-field random variable, formula_17. When formula_18, the mean-field equation is,formula_19where formula_20, and formula_21 is a standard normal random variable. 
Only ecologically uninvadable solutions are taken (i.e., the largest solution for formula_22 in the quadratic equation is selected). The relevant susceptibility and moments of formula_22, which has a truncated normal distribution, are determined self-consistently. Dynamical phases. In the thermodynamic limit where there is an asymptotically large number of species (i.e., formula_23), there are three distinct phases: one in which there is a unique fixed point (UFP), another with multiple attractors (MA), and a third with unbounded growth. In the MA phase, small species abundances may be replenished at a small rate, may approach arbitrarily small population sizes, or may be removed from the community when the population falls below some cutoff; depending on which of these is the case, the resulting dynamics may be chaotic with persistent fluctuations or may approach a steady state that depends on the initial conditions. The transition from the UFP to MA phase is signaled by the cavity solution becoming unstable to disordered perturbations. When formula_24, the phase transition boundary occurs when the parameters satisfy,formula_25In the formula_26 case, the phase boundary can still be characterized analytically, but no closed-form expression is known; numerical methods are necessary to solve the self-consistent equations determining the phase boundary. The transition to the unbounded growth phase is signaled by the divergence of formula_27 as computed in the cavity calculation. Dynamical mean-field theory. The cavity method can also be used to derive a dynamical mean-field theory model for the dynamics. The cavity calculation describes the dynamics as a Gaussian process defined by the self-consistent equation (for formula_28),formula_29where formula_30, formula_31 is a zero-mean Gaussian process with autocorrelation formula_32, and formula_33 is the dynamical susceptibility defined in terms of a functional derivative of the dynamics with respect to a time-dependent perturbation of the carrying capacity. Using dynamical mean-field theory, it has been shown that at long times, the dynamics exhibit aging in which the characteristic time scale defining the decay of correlations increases linearly in the duration of the dynamics. That is, formula_34 when formula_35 is large, where formula_36 is the autocorrelation function of the dynamics and formula_37 is a common scaling collapse function. When a small immigration rate formula_38 is added (i.e., a small constant is added to the right-hand side of the equations of motion), the dynamics reach a time-translationally invariant state. In this case, the dynamics exhibit jumps between formula_39 and formula_40 abundances. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
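The phases described above can be explored numerically. The following Python sketch integrates the rGLV with formula_24, uncorrelated Gaussian interactions (so γ = 0), unit growth rates and carrying capacities, and a small immigration term, and uses the late-time coefficient of variation of the total abundance as a crude diagnostic of persistent fluctuations. The community size, integration time, immigration rate, and diagnostic are illustrative assumptions, not values from the literature, and at finite size the crossover is smeared out.

```python
# Sketch: rGLV dynamics on either side of the stated boundary sigma = sqrt(2)/(1+gamma), gamma = 0.
import numpy as np
from scipy.integrate import solve_ivp

def simulate(sigma, S=100, mu=2.0, lam=1e-3, t_final=400.0, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((S, S))              # gamma = 0: alpha_ij and alpha_ji uncorrelated
    alpha = mu / S + sigma / np.sqrt(S) * a      # interaction matrix with the stated mean/variance
    np.fill_diagonal(alpha, 0.0)                 # self-regulation handled by the -N_i term below

    def rhs(t, N):
        return N * (1.0 - N - alpha @ N) + lam   # r_i = K_i = 1, plus small immigration lam

    sol = solve_ivp(rhs, (0.0, t_final), rng.uniform(0.1, 1.0, S),
                    t_eval=np.linspace(0.5 * t_final, t_final, 200), rtol=1e-6, atol=1e-9)
    total = sol.y.sum(axis=0)                    # total abundance over the late-time window
    return np.std(total) / np.mean(total)        # coefficient of variation as a crude diagnostic

for sigma in (0.5, 1.8):                         # below and above sqrt(2) ~ 1.41
    print(f"sigma = {sigma}: late-time CV of total abundance = {simulate(sigma):.3g}")
```

Trajectories with interaction heterogeneity well below the boundary should settle to a fixed point (near-zero coefficient of variation), while those well above it with a small immigration rate are expected to keep fluctuating.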
[ { "math_id": 0, "text": "\\frac{\\mathrm dN_i}{\\mathrm dt} = \\frac{r_i}{K_i}N_i \\left(K_i - N_i - \\sum_{j (\\neq i)} \\alpha_{ij} N_j\\right),\n\\qquad i = 1,\\dots,S," }, { "math_id": 1, "text": "N_i" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "K_i" }, { "math_id": 5, "text": "r_i" }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "\\langle \\alpha_{ij}\\rangle = \\mu_\\alpha/S" }, { "math_id": 8, "text": "\\mathrm{var}(\\alpha_{ij}) = \\sigma_\\alpha^2/S" }, { "math_id": 9, "text": "\\mathrm{corr}(\\alpha_{ij}, \\alpha_{ji}) =\\gamma " }, { "math_id": 10, "text": "i \\neq j" }, { "math_id": 11, "text": "-1\\leq \\gamma \\leq 1" }, { "math_id": 12, "text": "\\alpha_{ij} = \\frac{\\mu_\\alpha}{S} + \\frac{\\sigma_\\alpha}{\\sqrt{S}} a_{ij}," }, { "math_id": 13, "text": "a_{ij}" }, { "math_id": 14, "text": "\\langle a_{ij} a_{ji}\\rangle = \\gamma" }, { "math_id": 15, "text": "\\langle K_i \\rangle = K,\\,\\operatorname{var}(K_i) =\\sigma_K^2. " }, { "math_id": 16, "text": "i = 0" }, { "math_id": 17, "text": "N_0" }, { "math_id": 18, "text": "\\sigma_K =0" }, { "math_id": 19, "text": "\n0 = N_0 \\left( K - \\mu_\\alpha m- N_0 +\\sqrt{q\\left(\\mu_\\alpha^2 + \\gamma \\sigma_\\alpha^2\\right)} Z + \\sigma_\\alpha^2 \\gamma\\chi N_0\\right),\n" }, { "math_id": 20, "text": "\nm = \\langle N_0\\rangle ,\\,q=\\langle N_0^2\\rangle, \\,\\chi = \\langle \\partial N_0/\\partial K_0\\rangle \n" }, { "math_id": 21, "text": "\nZ \\sim \\mathcal{N}(0,1)\n" }, { "math_id": 22, "text": "\nN_0\n" }, { "math_id": 23, "text": "S \\to \\infty" }, { "math_id": 24, "text": "\n\\sigma_K = 0 \n" }, { "math_id": 25, "text": "\n\\sigma_\\alpha = \\frac{\\sqrt{2}}{1+\\gamma}. \n" }, { "math_id": 26, "text": "\\sigma_K > 0" }, { "math_id": 27, "text": "\n\\langle N_0 \\rangle \n" }, { "math_id": 28, "text": "\\sigma_K = 0" }, { "math_id": 29, "text": "\\frac{\\mathrm dN_0}{\\mathrm d t}\n=\nN_0(t)\n\\left[\nK_0 - N_0(t) - \\mu_\\alpha m(t) - \\sigma_\\alpha \\eta(t) \n+ \\gamma \\sigma_\\alpha^2 \\int_0^t\\mathrm dt'\\, \\chi(t,t') N_0(t')\n\\right]," }, { "math_id": 30, "text": "m(t) = \\langle N_0(t)\\rangle " }, { "math_id": 31, "text": "\\eta" }, { "math_id": 32, "text": "\\langle \\eta(t)\\eta(t')\\rangle = \\langle N_0(t)N_0(t')\\rangle " }, { "math_id": 33, "text": "\\chi(t,t') = \\langle \\left.\\delta N_0(t)/\\delta K_0(t')\\right|_{K_0(t') = K_0} \\rangle" }, { "math_id": 34, "text": "C_N(t,t+t\\tau) \\to f(\\tau)" }, { "math_id": 35, "text": "t" }, { "math_id": 36, "text": "C_N(t,t') = \\langle N(t)N(t')\\rangle " }, { "math_id": 37, "text": "f(\\tau)" }, { "math_id": 38, "text": "\\lambda \\ll 1" }, { "math_id": 39, "text": "O(1)" }, { "math_id": 40, "text": "O(\\lambda)" } ]
https://en.wikipedia.org/wiki?curid=74820340
7482797
Morphism of schemes
In algebraic geometry, a morphism of schemes generalizes a morphism of algebraic varieties just as a scheme generalizes an algebraic variety. It is, by definition, a morphism in the category of schemes. A morphism of algebraic stacks generalizes a morphism of schemes. Definition. By definition, a morphism of schemes is just a morphism of locally ringed spaces. A scheme, by definition, has open affine charts and thus a morphism of schemes can also be described in terms of such charts (compare the definition of morphism of varieties). Let ƒ:"X"→"Y" be a morphism of schemes. If "x" is a point of "X", since ƒ is continuous, there are open affine subsets "U" = Spec "A" of "X" containing "x" and "V" = Spec "B" of "Y" such that ƒ("U") ⊆ "V". Then ƒ: "U" → "V" is a morphism of affine schemes and thus is induced by some ring homomorphism "B" → "A" (cf. #Affine case.) In fact, one can use this description to "define" a morphism of schemes; one says that ƒ:"X"→"Y" is a morphism of schemes if it is locally induced by ring homomorphisms between coordinate rings of affine charts. (Some care is needed here: for example, there is a morphism of ringed spaces formula_0 that sends the unique point to "s" and that comes with formula_1, but it is not a morphism of locally ringed spaces and hence not a morphism of schemes.) More conceptually, the definition of a morphism of schemes needs to capture "Zariski-local nature" or localization of rings; this point of view (i.e., a local-ringed space) is essential for a generalization (topos). Let "f" : "X" → "Y" be a morphism of schemes with formula_2. Then, for each point "x" of "X", the homomorphism on the stalks: formula_3 is a local ring homomorphism: i.e., formula_4 and so induces an injective homomorphism of residue fields formula_5. For each scheme "X", there is a natural morphism formula_6 which is an isomorphism if and only if "X" is affine; θ is obtained by gluing the maps "U" → Spec Γ("X", "O""X") which come from restrictions to open affine subsets "U" of "X". This fact can also be stated as follows: for any scheme "X" and a ring "A", there is a natural bijection: formula_7 Moreover, this fact (adjoint relation) can be used to characterize an affine scheme: a scheme "X" is affine if and only if for each scheme "S", the natural map formula_9 is bijective. (Proof: if the maps are bijective, then formula_10 and "X" is isomorphic to formula_11 by Yoneda's lemma; the converse is clear.) A morphism as a relative scheme. Fix a scheme "S", called a base scheme. Then a morphism formula_12 is called a scheme over "S" or an "S"-scheme; the idea of the terminology is that it is a scheme "X" together with a map to the base scheme "S". For example, a vector bundle "E" → "S" over a scheme "S" is an "S"-scheme. An "S"-morphism from "p":"X" →"S" to "q":"Y" →"S" is a morphism ƒ:"X" →"Y" of schemes such that "p" = "q" ∘ ƒ. Given an "S"-scheme formula_13, viewing "S" as an "S"-scheme over itself via the identity map, an "S"-morphism formula_14 is called an "S"-section or just a section. All the "S"-schemes form a category: an object in the category is an "S"-scheme and a morphism in the category is an "S"-morphism. (This category is the slice category of the category of schemes with the base object "S".) Affine case. Let formula_15 be a ring homomorphism and let formula_16 be the induced map. Then formula_17 is continuous and, for every ideal "I" of "A", formula_19. Let "f": Spec "A" → Spec "B" be a morphism of schemes between affine schemes with the pullback map formula_18: "B" → "A". That it is a morphism of locally ringed spaces translates to the following statement: if formula_20 is a point of Spec "A", then formula_21.
Hence, each ring homomorphism "B" → "A" defines a morphism of schemes Spec "A" → Spec "B" and, conversely, all morphisms between them arise in this fashion. Examples. Graph morphism. Given a morphism of schemes formula_39 over a scheme "S", the morphism formula_40 induced by the identity formula_41 and "f" is called the graph morphism of "f". The graph morphism of the identity is called the diagonal morphism. Types of morphisms. Finite type. Morphisms of finite type are one of the basic tools for constructing families of varieties. A morphism formula_42 is of finite type if there exists a cover formula_43 such that the fibers formula_44 can be covered by finitely many affine schemes formula_45, making the induced ring morphisms formula_46 into finite-type ring morphisms. A typical example of a finite-type morphism is a family of schemes. For example, formula_47 is a morphism of finite type. A simple non-example of a morphism of finite-type is formula_48 where formula_49 is a field. Another is an infinite disjoint union formula_50 Closed immersion. A morphism of schemes formula_51 is a closed immersion if the following conditions hold: formula_52 induces a homeomorphism of formula_53 onto a closed subset of "X", and the induced map formula_54 is surjective. These conditions are equivalent to the following: given an affine open formula_55, there exists an ideal formula_56 such that formula_57 Examples. Of course, any (graded) quotient formula_58 defines a subscheme of formula_59 (formula_60). Consider the quasi-affine scheme formula_61 and the subset of the formula_62-axis contained in formula_63. Then if we take the open subset formula_64, the ideal sheaf is formula_65, while on the affine open formula_66 there is no ideal since the subset does not intersect this chart. Separated. Separated morphisms define families of schemes which are "Hausdorff". For example, given a separated morphism formula_13 in formula_67, the associated analytic spaces formula_68 are both Hausdorff. We say a morphism of schemes formula_69 is separated if the diagonal morphism formula_70 is a closed immersion. In topology, an analogous condition for a space formula_63 to be Hausdorff is that the diagonal set formula_71 is a closed subset of formula_72. Nevertheless, most schemes are not Hausdorff as topological spaces, as the Zariski topology is in general highly non-Hausdorff. Examples. Most morphisms encountered in scheme theory will be separated. For example, consider the affine scheme formula_73 over formula_74 Since the product scheme is formula_75, the ideal defining the diagonal is generated by formula_76, showing the diagonal scheme is affine and closed. This same computation can be used to show that projective schemes are separated as well. Non-examples. The only time care must be taken is when you are gluing together a family of schemes. For example, if we take the diagram of inclusions formula_77 then we get the scheme-theoretic analogue of the classical line with two origins. Proper. A morphism formula_42 is called proper if it is separated, of finite type, and universally closed. The last condition means that given a morphism formula_78 the base change morphism formula_79 → "S"' is a closed map. Most known examples of proper morphisms are in fact projective; but, examples of proper varieties which are not projective can be found using toric geometry. Projective. Projective morphisms define families of projective varieties over a fixed base scheme.
Note that there are two definitions: Hartshorne's, which states that a morphism formula_42 is called projective if there exists a closed immersion formula_80, and the EGA definition, which states that a scheme formula_81 is projective if there is a quasi-coherent formula_82-module of finite type such that there is a closed immersion formula_83. The second definition is useful because an exact sequence of formula_82-modules can be used to define projective morphisms. Projective morphism over a point. A projective morphism formula_84 defines a projective scheme. For example, formula_85 defines a projective curve of genus formula_86 over formula_87. Family of projective hypersurfaces. If we let formula_88 then the projective morphism formula_89 defines a family of Calabi-Yau manifolds which degenerate. Lefschetz pencil. Another useful class of examples of projective morphisms is Lefschetz pencils: they are projective morphisms formula_90 over some field formula_49. For example, given smooth hypersurfaces formula_91 defined by the homogeneous polynomials formula_92, there is a projective morphism formula_93 giving the pencil. EGA projective. A nice classical source of examples of projective morphisms is given by morphisms which factor through rational scrolls. For example, take formula_94 and the vector bundle formula_95. This can be used to construct a formula_96-bundle formula_97 over formula_98. If we want to construct a projective morphism using this sheaf, we can take an exact sequence, such as formula_99 which defines the structure sheaf of the projective scheme formula_63 in formula_100 Flat. Intuition. Flat morphisms have an algebraic definition but have a very concrete geometric interpretation: flat families correspond to families of varieties which vary "continuously". For example, formula_101 is a family of smooth affine quadric curves which degenerate to the normal crossing divisor formula_102 at the origin. Properties. One important property that a flat morphism must satisfy is that the dimensions of the fibers should be the same. A simple non-example of a flat morphism then is a blowup since the fibers are either points or copies of some formula_103. Definition. Let formula_69 be a morphism of schemes. We say that formula_104 is flat at a point formula_105 if the induced morphism formula_106 yields an exact functor formula_107 Then, formula_104 is flat if it is flat at every point of formula_63. It is also faithfully flat if it is a surjective morphism. Non-example. Using our geometric intuition, it is obvious that formula_108 is not flat, since the fiber over formula_109 is formula_110 while the rest of the fibers are just a point. But we can also check this using the definition with local algebra: Consider the ideal formula_111 Since formula_112, we get a local algebra morphism formula_113 If we tensor formula_114 with formula_115, the map formula_116 has a non-zero kernel due to the vanishing of formula_117. This shows that the morphism is not flat. Unramified. A morphism formula_118 of affine schemes is unramified if formula_119. We can use this for the general case of a morphism of schemes formula_120. We say that formula_104 is unramified at formula_121 if there is an affine open neighborhood formula_122 and an affine open formula_123 such that formula_124 and formula_125 Then, the morphism is unramified if it is unramified at every point in formula_63. Geometric example.
One example of a morphism which is flat and generically unramified, except at a point, is formula_126. We can compute the relative differentials using the sequence formula_127 showing formula_128. If we take the fiber formula_129, then the morphism is ramified since formula_130; otherwise we have formula_131, showing that it is unramified everywhere else. Etale. A morphism of schemes formula_118 is called étale if it is flat and unramified. These are the algebro-geometric analogue of covering spaces. The two main examples to think of are covering spaces and finite separable field extensions. Examples in the first case can be constructed by looking at branched coverings and restricting to the unramified locus. Morphisms as points. By definition, if "X", "S" are schemes (over some base scheme or ring "B"), then a morphism from "S" to "X" (over "B") is an "S"-point of "X" and one writes: formula_132 for the set of all "S"-points of "X". This notion generalizes the notion of solutions to a system of polynomial equations in classical algebraic geometry. Indeed, let "X" = Spec("A") with formula_133. For a "B"-algebra "R", to give an "R"-point of "X" is to give an algebra homomorphism "A" →"R", which in turn amounts to giving a homomorphism formula_134 that kills the "f""i"'s. Thus, there is a natural identification: formula_135 Example: If "X" is an "S"-scheme with structure map π: "X" → "S", then an "S"-point of "X" (over "S") is the same thing as a section of π. In category theory, Yoneda's lemma says that, given a category "C", the contravariant functor formula_136 is fully faithful (where formula_137 means the category of presheaves on "C"). Applying the lemma to "C" = the category of schemes over "B", this says that a scheme over "B" is determined by its various points. It turns out that in fact it is enough to consider "S"-points with only affine schemes "S", precisely because schemes and morphisms between them are obtained by gluing affine schemes and morphisms between them. Because of this, one usually writes "X"("R") = "X"(Spec "R") and views "X" as a functor from the category of commutative "B"-algebras to Sets. Example: Given "S"-schemes "X", "Y" with structure maps "p", "q", formula_138. Example: With "B" still denoting a ring or scheme, for each "B"-scheme "X", there is a natural bijection formula_139 { the isomorphism classes of line bundles "L" on "X" together with "n" + 1 global sections generating "L". }; in fact, the sections "s""i" of "L" define a morphism formula_140. (See also Proj construction#Global Proj.) Remark: The above point of view (which goes under the name functor of points and is due to Grothendieck) has had a significant impact on the foundations of algebraic geometry. For example, working with a category-valued (pseudo-)functor instead of a set-valued functor leads to the notion of a stack, which allows one to keep track of morphisms between points (i.e., morphisms between morphisms). Rational map. A rational map of schemes is defined in the same way as for varieties. Thus, a rational map from a reduced scheme "X" to a separated scheme "Y" is an equivalence class of a pair formula_141 consisting of an open dense subset "U" of "X" and a morphism formula_142. If "X" is irreducible, a rational function on "X" is, by definition, a rational map from "X" to the affine line formula_110 or the projective line formula_143 A rational map is dominant if and only if it sends the generic point to the generic point.
A ring homomorphism between function fields need not induce a dominant rational map (even just a rational map). For example, Spec "k"["x"] and Spec "k"("x") have the same function field (namely, "k"("x")) but there is no rational map from the former to the latter. However, it is true that any inclusion of function fields of algebraic varieties induces a dominant rational map (see morphism of algebraic varieties#Properties.) Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
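To illustrate the identification of "R"-points with solution sets described above, the following Python sketch counts the points of an affine scheme over the finite rings Z/"p"Z by brute force; the particular equation "y"^2 − "x"^3 − "x" = 0 and the primes used are arbitrary illustrative choices.

```python
# Sketch: X(R) as a solution set, for X = Spec Z[x, y]/(y^2 - x^3 - x) and R = Z/pZ.
# The defining polynomial and the primes below are arbitrary illustrative choices.
from itertools import product

def count_points(p):
    """Number of pairs (x, y) in (Z/pZ)^2 with y^2 = x^3 + x, i.e. the size of X(Z/pZ)."""
    return sum(1 for x, y in product(range(p), repeat=2) if (y * y - x * x * x - x) % p == 0)

for p in (2, 3, 5, 7, 11, 13):
    print(f"|X(Z/{p}Z)| = {count_points(p)}")
```

Each printed count is the number of ring homomorphisms from Z["x", "y"]/("y"^2 − "x"^3 − "x") to Z/"p"Z, which is exactly the set of Z/"p"Z-points in the functor-of-points sense.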
[ { "math_id": 0, "text": "\\operatorname{Spec} k(x) \\to \\operatorname{Spec} k[y]_{(y)} = \\{ \\eta = (0), s = (y) \\}" }, { "math_id": 1, "text": "k[y]_{(y)} \\to k(x), \\, y \\mapsto x" }, { "math_id": 2, "text": "\\phi: \\mathcal{O}_Y \\to f_*\\mathcal{O}_X" }, { "math_id": 3, "text": "\\phi: \\mathcal{O}_{Y, f(x)} \\to \\mathcal{O}_{X, x}" }, { "math_id": 4, "text": "\\phi(\\mathfrak{m}_{f(x)}) \\subseteq \\mathfrak{m}_x" }, { "math_id": 5, "text": "\\phi: k(f(x)) \\hookrightarrow k(x)" }, { "math_id": 6, "text": "\\theta: X \\to \\operatorname{Spec} \\Gamma(X, \\mathcal{O}_X)," }, { "math_id": 7, "text": "\\operatorname{Mor}(X, \\operatorname{Spec}(A)) \\cong \\operatorname{Hom}(A, \\Gamma(X, \\mathcal O_X))." }, { "math_id": 8, "text": "\\phi \\mapsto \\operatorname{Spec}(\\phi) \\circ \\theta" }, { "math_id": 9, "text": "\\operatorname{Mor}(S, X) \\to \\operatorname{Hom}(\\Gamma(X, \\mathcal{O}_X), \\Gamma(S, \\mathcal{O}_S))" }, { "math_id": 10, "text": "\\operatorname{Mor}(-, X) \\simeq \\operatorname{Mor}(-, \\operatorname{Spec} \\Gamma(X, \\mathcal{O}_X))" }, { "math_id": 11, "text": "\\operatorname{Spec} \\Gamma(X, \\mathcal{O}_X)" }, { "math_id": 12, "text": "p: X \\to S" }, { "math_id": 13, "text": "X \\to S" }, { "math_id": 14, "text": "S \\to X" }, { "math_id": 15, "text": "\\varphi: B \\to A" }, { "math_id": 16, "text": "\\begin{cases} \\varphi^a: \\operatorname{Spec} A \\to \\operatorname{Spec} B, \\\\ \\mathfrak{p} \\mapsto \\varphi^{-1}(\\mathfrak{p})\\end{cases}" }, { "math_id": 17, "text": "\\varphi^a" }, { "math_id": 18, "text": "\\varphi" }, { "math_id": 19, "text": "\\overline{\\varphi^a(V(I))} = V(\\varphi^{-1}(I))." }, { "math_id": 20, "text": "x = \\mathfrak{p}_x" }, { "math_id": 21, "text": "\\mathfrak{p}_{f(x)} = \\varphi^{-1}(\\mathfrak{p}_x)" }, { "math_id": 22, "text": "\\mathfrak{p}_x" }, { "math_id": 23, "text": "\\mathfrak{m}_x" }, { "math_id": 24, "text": "g(f(x)) = 0 \\Rightarrow \\varphi(g) \\in \\varphi(\\mathfrak{m}_{f(x))}) \\subseteq \\mathfrak{m}_x \\Rightarrow g \\in \\varphi^{-1}(\\mathfrak{m}_x)" }, { "math_id": 25, "text": "g(f(x)) \\ne 0" }, { "math_id": 26, "text": "g" }, { "math_id": 27, "text": "\\varphi(g)" }, { "math_id": 28, "text": "\\Z." 
}, { "math_id": 29, "text": "R[t] \\to A" }, { "math_id": 30, "text": "t \\mapsto f" }, { "math_id": 31, "text": "A = \\operatorname{Hom}_{R-\\text{alg}}(R[t], A)" }, { "math_id": 32, "text": "A = \\Gamma(X, \\mathcal{O}_X)" }, { "math_id": 33, "text": "\\Gamma(X, \\mathcal{O}_X) = \\operatorname{Mor}_S(X, \\mathbb{A}^1_S)" }, { "math_id": 34, "text": "\\mathbb{A}^1_S = \\operatorname{Spec}(R[t])" }, { "math_id": 35, "text": "\\Gamma(X, \\mathcal{O}_X^*) = \\operatorname{Mor}_S(X, \\mathbb{G}_m)" }, { "math_id": 36, "text": "\\mathbb{G}_m = \\operatorname{Spec}(R[t, t^{-1}])" }, { "math_id": 37, "text": "\\text{Proj}\\left(\\frac{\\Complex[x,y][a,b,c]}{(ax^2 +bxy + cy^2)}\\right) \\to \\text{Proj}(\\Complex[a,b,c]) = \\mathbb{P}^2_{a,b,c}" }, { "math_id": 38, "text": "\\mathbb{P}^1" }, { "math_id": 39, "text": "f: X \\to Y" }, { "math_id": 40, "text": "X \\to X \\times_S Y" }, { "math_id": 41, "text": "1_X : X \\to X" }, { "math_id": 42, "text": "f:X\\to S" }, { "math_id": 43, "text": "\\operatorname{Spec}(A_i) \\to S" }, { "math_id": 44, "text": "X \\times_S \\operatorname{Spec}(A_i)" }, { "math_id": 45, "text": "\\operatorname{Spec}(B_{ij})" }, { "math_id": 46, "text": "A_i \\to B_{ij}" }, { "math_id": 47, "text": "\\operatorname{Spec}\\left(\\frac{\\Z[x,y,z]}{(x^n + zy^n, z^5-1)} \\right) \\to \\operatorname{Spec}\\left(\\frac{\\Z[z]}{(z^5-1)} \\right)" }, { "math_id": 48, "text": "\\operatorname{Spec}(k[x_1,x_2,x_3,\\ldots])) \\to \\operatorname{Spec}(k)" }, { "math_id": 49, "text": "k" }, { "math_id": 50, "text": "\\coprod^\\infty X \\to X" }, { "math_id": 51, "text": "i:Z \\to X" }, { "math_id": 52, "text": "i" }, { "math_id": 53, "text": "Z" }, { "math_id": 54, "text": "i^\\#: \\mathcal{O}_X \\to i_*\\mathcal{O}_Z" }, { "math_id": 55, "text": "\\operatorname{Spec}(R) = U \\subseteq X" }, { "math_id": 56, "text": "I \\subseteq R" }, { "math_id": 57, "text": "i^{-1}(U) = \\operatorname{Spec}(R/I)" }, { "math_id": 58, "text": "R/I" }, { "math_id": 59, "text": "\\operatorname{Spec}(R)" }, { "math_id": 60, "text": "\\operatorname{Proj}(R)" }, { "math_id": 61, "text": "\\mathbb{A}^2 - \\{0 \\}" }, { "math_id": 62, "text": "x" }, { "math_id": 63, "text": "X" }, { "math_id": 64, "text": "\\operatorname{Spec}(k[x,y,y^{-1}])" }, { "math_id": 65, "text": "(x)" }, { "math_id": 66, "text": "\\operatorname{Spec}(k[x,y,x^{-1}])" }, { "math_id": 67, "text": "\\text{Sch}/\\Complex" }, { "math_id": 68, "text": "X(\\Complex)^{an} \\to S(\\Complex)^{an}" }, { "math_id": 69, "text": "f:X \\to S" }, { "math_id": 70, "text": "\\Delta_{X/S}:X \\to X\\times_SX" }, { "math_id": 71, "text": "\\Delta = \\{(x,x) \\in X\\times X \\}" }, { "math_id": 72, "text": "X\\times X" }, { "math_id": 73, "text": "X = \\operatorname{Spec}\\left( \\frac{\\Complex [x,y]}{(f)} \\right)" }, { "math_id": 74, "text": "\\operatorname{Spec}(\\Complex)." 
}, { "math_id": 75, "text": "X \\times_\\Complex X = \\operatorname{Spec}\\left(\\frac{\\Complex[x,y]}{(f)}\\otimes_\\Complex \\frac{\\Complex[x,y]}{(f)} \\right)" }, { "math_id": 76, "text": "x\\otimes 1 - 1 \\otimes x, y \\otimes 1 - 1 \\otimes y" }, { "math_id": 77, "text": "\\begin{matrix}\n\\operatorname{Spec}(R[x,x^{-1}]) & &\\\\\n &\\searrow & \\\\\n& & \\operatorname{Spec}(R[x]) \\\\\n &\\nearrow & \\\\\n\\operatorname{Spec}(R[x,x^{-1}]) & &\n\\end{matrix}" }, { "math_id": 78, "text": "S' \\to S" }, { "math_id": 79, "text": "S'\\times_SX" }, { "math_id": 80, "text": "X \\to \\mathbb{P}^n_S = \\mathbb{P}^n\\times S" }, { "math_id": 81, "text": "X \\in \\text{Sch}/S" }, { "math_id": 82, "text": "\\mathcal{O}_S" }, { "math_id": 83, "text": "X \\to \\mathbb{P}_S(\\mathcal{E})" }, { "math_id": 84, "text": "f:X \\to \\{ *\\}" }, { "math_id": 85, "text": "\\text{Proj}\\left( \\frac{\\Complex[x,y,z]}{(x^n + y^n - z^n)} \\right) \\to \\operatorname{Spec}(\\Complex )" }, { "math_id": 86, "text": "(n-1)(n-1)/2" }, { "math_id": 87, "text": "\\Complex " }, { "math_id": 88, "text": "S = \\mathbb{A}^1_t" }, { "math_id": 89, "text": "\\underline{\\operatorname{Proj}}_S\\left( \\frac{\\mathcal{O}_S[x_0,x_1,x_2,x_3,x_4]}{\\left(x_0^5 + \\cdots + x_4^5 - t x_0 x_1 x_2 x_3 x_4\\right)} \\right) \\to S" }, { "math_id": 90, "text": "\\pi:X \\to \\mathbb{P}^1_k = \\operatorname{Proj}(k[s,t])" }, { "math_id": 91, "text": "X_1, X_2 \\subseteq \\mathbb{P}^n_k" }, { "math_id": 92, "text": "f_1,f_2" }, { "math_id": 93, "text": "\\underline{\\operatorname{Proj}}_{\\mathbb{P}^1}\\left( \\frac{\\mathcal{O}_{\\mathbb{P}^1}[x_0,\\ldots,x_n]}{(sf_1 + tf_2)} \\right)\\to \\mathbb{P}^1 " }, { "math_id": 94, "text": "S = \\mathbb{P}^1" }, { "math_id": 95, "text": "\\mathcal{E} = \\mathcal{O}_S \\oplus \\mathcal{O}_S \\oplus \\mathcal{O}_S(3)" }, { "math_id": 96, "text": "\\mathbb{P}^2" }, { "math_id": 97, "text": "\\mathbb{P}_S(\\mathcal{E})" }, { "math_id": 98, "text": "S" }, { "math_id": 99, "text": "\\mathcal{O}_S(-d)\\oplus\\mathcal{O}_S(-e) \\to \\mathcal{E} \\to \\mathcal{O}_X \\to 0" }, { "math_id": 100, "text": "\\mathbb{P}_S(\\mathcal{E})." }, { "math_id": 101, "text": "\\operatorname{Spec}\\left(\\frac{\\Complex[x,y,t]}{(xy-t))} \\right) \\to \\operatorname{Spec}(\\Complex[t])" }, { "math_id": 102, "text": "\\operatorname{Spec}\\left(\\frac{\\Complex[x,y]}{(xy)} \\right)" }, { "math_id": 103, "text": "\\mathbb{P}^n" }, { "math_id": 104, "text": "f" }, { "math_id": 105, "text": "x \\in X" }, { "math_id": 106, "text": "\\mathcal{O}_{f(x)} \\to \\mathcal{O}_x" }, { "math_id": 107, "text": "-\\otimes_{\\mathcal{O}_{f(x)}} \\mathcal{O}_x." }, { "math_id": 108, "text": "f:\\operatorname{Spec}(\\Complex [x,y]/(xy)) \\to \\operatorname{Spec}(\\Complex [x])" }, { "math_id": 109, "text": "0" }, { "math_id": 110, "text": "\\mathbb{A}^1" }, { "math_id": 111, "text": "\\mathfrak{p} = (x) \\in \\operatorname{Spec}(\\Complex [x,y]/(xy))." 
}, { "math_id": 112, "text": "f(\\mathfrak{p}) = (x) \\in \\operatorname{Spec}(\\Complex[x])" }, { "math_id": 113, "text": "f_{\\mathfrak{p}}: \\left( \\Complex [x] \\right)_{(x)} \\to \\left(\\Complex [x,y]/(xy) \\right)_{(x)}" }, { "math_id": 114, "text": "0 \\to \\Complex[x]_{(x)} \\overset{\\cdot x}{\\longrightarrow} \\Complex[x]_{(x)}" }, { "math_id": 115, "text": "(\\Complex[x,y]/(xy))_{(x)}" }, { "math_id": 116, "text": "(\\Complex[x,y]/(xy)))_{(x)} \\xrightarrow{\\cdot x} (\\Complex[x,y]/(xy))_{(x)}" }, { "math_id": 117, "text": "xy" }, { "math_id": 118, "text": "f:X \\to Y" }, { "math_id": 119, "text": "\\Omega_{X/Y} = 0" }, { "math_id": 120, "text": "f:X\\to Y" }, { "math_id": 121, "text": "x\\in X" }, { "math_id": 122, "text": "x \\in U" }, { "math_id": 123, "text": "V\\subseteq Y" }, { "math_id": 124, "text": "f(U)\\subseteq V" }, { "math_id": 125, "text": "\\Omega_{U/V} = 0." }, { "math_id": 126, "text": " \\operatorname{Spec}\\left( \\frac{\\Complex [t,x]}{(x^n-t)} \\right) \\to \\operatorname{Spec}(\\Complex[t])" }, { "math_id": 127, "text": "\\frac{\\Complex[t,x]}{(x^n-t)}\\otimes_{\\Complex[t]}\\Complex[t]dt \\to \\left(\\frac{\\Complex[t,x]}{(x^n-t)}dt \\oplus \\frac{\\Complex[t,x]}{(x^n-t)}dx\\right) / (nx^{n-1}dx - dt) \\to \\Omega_{X/Y} \\to 0" }, { "math_id": 128, "text": "\\Omega_{X/Y} \\cong \\left(\\frac{\\Complex [t,x]}{(x^n-t)}dx\\right)/(x^{n-1}dx) \\neq 0" }, { "math_id": 129, "text": "t = 0" }, { "math_id": 130, "text": "\\Omega_{X_0/\\Complex } = \\left(\\frac{\\Complex[x]}{x^{n}}dx\\right)/(x^{n-1}dx)" }, { "math_id": 131, "text": "\\Omega_{X_\\alpha/\\Complex} = \\left(\\frac{\\Complex[x]}{(x^n-\\alpha)}dx\\right)/(x^{n-1}dx) \\cong \\frac{\\Complex[x]}{(\\alpha)}dx \\cong 0" }, { "math_id": 132, "text": "X(S) = \\{ f \\mid f: S \\to X \\text{ over } B \\}" }, { "math_id": 133, "text": "A =B[t_1, \\dots, t_n]/(f_1, \\dots, f_m)" }, { "math_id": 134, "text": "B[t_1, \\dots, t_n] \\to R, \\, t_i \\mapsto r_i" }, { "math_id": 135, "text": "X(\\operatorname{Spec} R) = \\{ (r_1, \\dots, r_n) \\in R^n | f_1(r_1, \\dots, r_n) = \\cdots = f_m(r_1, \\dots, r_n) = 0 \\}." }, { "math_id": 136, "text": "C \\to \\mathcal{P}(C) = \\operatorname{Fct}(C^{\\text{op}}, \\mathbf{Sets}), \\, X \\mapsto \\operatorname{Mor}(-, X)" }, { "math_id": 137, "text": "\\mathcal{P}(C)" }, { "math_id": 138, "text": "(X \\times_S Y)(R) = X(R) \\times_{S(R)} Y(R) = \\{ (x, y) \\in X(R) \\times Y(R) \\mid p(x) = q(y) \\}" }, { "math_id": 139, "text": "\\mathbf{P}^n_B(X) = " }, { "math_id": 140, "text": "X \\to \\mathbf{P}^n_B, \\, x \\mapsto (s_0(x) : \\dots : s_n(x))" }, { "math_id": 141, "text": "(U, f_U)" }, { "math_id": 142, "text": "f_U : U \\to Y" }, { "math_id": 143, "text": "\\mathbb{P}^1." } ]
https://en.wikipedia.org/wiki?curid=7482797
7483985
Atkin–Lehner theory
Part of the theory of modular forms In mathematics, Atkin–Lehner theory is part of the theory of modular forms describing when they arise at a given integer "level" "N" in such a way that the theory of Hecke operators can be extended to higher levels. Atkin–Lehner theory is based on the concept of a newform, which is a cusp form 'new' at a given "level" "N", where the levels are the nested congruence subgroups: formula_0 of the modular group, with "N" ordered by divisibility. That is, if "M" divides "N", Γ0("N") is a subgroup of Γ0("M"). The oldforms for Γ0("N") are those modular forms "f(τ)" of level "N" of the form "g"("d τ") for modular forms "g" of level "M" with "M" a proper divisor of "N", where "d" divides "N/M". The newforms are defined as a vector subspace of the modular forms of level "N", complementary to the space spanned by the oldforms, i.e. the orthogonal space with respect to the Petersson inner product. The Hecke operators, which act on the space of all cusp forms, preserve the subspace of newforms and are self-adjoint and commuting operators (with respect to the Petersson inner product) when restricted to this subspace. Therefore, the algebra of operators on newforms they generate is a finite-dimensional C*-algebra that is commutative; and by the spectral theory of such operators, there exists a basis for the space of newforms consisting of eigenforms for the full Hecke algebra. Atkin–Lehner involutions. Consider a Hall divisor "e" of "N", which means that not only does "e" divide "N", but also "e" and "N"/"e" are relatively prime (often denoted "e"||"N"). If "N" has "s" distinct prime divisors, there are 2^"s" Hall divisors of "N"; for example, if "N" = 360 = 2^3⋅3^2⋅5^1, the 8 Hall divisors of "N" are 1, 2^3, 3^2, 5^1, 2^3⋅3^2, 2^3⋅5^1, 3^2⋅5^1, and 2^3⋅3^2⋅5^1. For each Hall divisor "e" of "N", choose an integral matrix "W""e" of the form formula_1 with det "W""e" = "e". These matrices have the following properties: We can summarize these properties as follows. Consider the subgroup of GL(2,Q) generated by Γ0("N") together with the matrices "W""e"; let Γ0("N")+ denote its quotient by positive scalar matrices. Then Γ0("N") is a normal subgroup of Γ0("N")+ of index 2^"s" (where "s" is the number of distinct prime factors of "N"); the quotient group is isomorphic to (Z/2Z)^"s" and acts on the cusp forms via the Atkin–Lehner involutions.
[ { "math_id": 0, "text": "\\Gamma_0(N) = \\left\\{ \\begin{pmatrix} a & b \\\\ c & d \\end{pmatrix} \\in \\text{SL}(2, \\mathbf{Z}): c \\equiv 0 \\pmod{N} \\right\\}" }, { "math_id": 1, "text": "W_e = \\begin{pmatrix}ae & b \\\\ cN & de \\end{pmatrix}" } ]
https://en.wikipedia.org/wiki?curid=7483985
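As a concrete illustration of the Hall-divisor combinatorics described in the Atkin–Lehner article above, the following Python sketch enumerates the Hall divisors of an integer "N" from its prime factorization. It is an illustrative addition, not part of the article, and the function names are arbitrary.

from itertools import combinations

def prime_factorization(n):
    # Return {prime: exponent} for n >= 2, by trial division.
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def hall_divisors(n):
    # A Hall divisor e of n satisfies e | n and gcd(e, n/e) = 1, so e is a
    # product of some subset of the full prime-power factors of n.
    prime_powers = [p ** k for p, k in prime_factorization(n).items()]
    result = []
    for r in range(len(prime_powers) + 1):
        for combo in combinations(prime_powers, r):
            e = 1
            for q in combo:
                e *= q
            result.append(e)
    return sorted(result)

print(hall_divisors(360))   # [1, 5, 8, 9, 40, 45, 72, 360]: 2**3 = 8 divisors, matching s = 3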
7485373
Aitoff projection
Pseudoazimuthal compromise map projection The Aitoff projection is a modified azimuthal map projection proposed by David A. Aitoff in 1889. Based on the equatorial form of the azimuthal equidistant projection, Aitoff first halves longitudes, then projects according to the azimuthal equidistant, and then stretches the result horizontally into a 2:1 ellipse to compensate for having halved the longitudes. Expressed simply: formula_0 where azeq_"x" and azeq_"y" are the "x" and "y" components of the equatorial azimuthal equidistant projection. Written out explicitly, the projection is: formula_1 where formula_2 and sinc "α" is the unnormalized sinc function with the discontinuity removed. In all of these formulas, "λ" is the longitude from the central meridian and "φ" is the latitude. Three years later, Ernst Hermann Heinrich Hammer suggested the use of the Lambert azimuthal equal-area projection in the same manner as Aitoff, producing the Hammer projection. While Hammer was careful to cite Aitoff, some authors have mistakenly referred to the Hammer projection as the Aitoff projection.
[ { "math_id": 0, "text": "x = 2 \\operatorname{azeq}_x\\left(\\frac{\\lambda}{2}, \\varphi\\right), \\qquad\ny = \\operatorname{azeq}_y \\left(\\frac\\lambda 2, \\varphi \\right)" }, { "math_id": 1, "text": "x = \\frac{2 \\cos\\varphi \\sin\\frac{\\lambda}{2}}{\\operatorname{sinc}\\alpha}, \\qquad\ny = \\frac{\\sin\\varphi}{\\operatorname{sinc}\\alpha}" }, { "math_id": 2, "text": "\\alpha = \\arccos\\left(\\cos\\varphi\\cos\\frac{\\lambda}{2}\\right)\\," } ]
https://en.wikipedia.org/wiki?curid=7485373
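The explicit formulas in the Aitoff projection article above translate directly into code. The sketch below is a minimal Python rendering of those formulas, working in radians with the longitude measured from the central meridian; it is an illustrative addition, and the function name is an arbitrary choice.

import math

def aitoff(lam, phi):
    # lam: longitude from the central meridian, phi: latitude (both in radians).
    alpha = math.acos(math.cos(phi) * math.cos(lam / 2.0))
    # Unnormalized sinc with the removable singularity at alpha = 0 filled in.
    sinc_alpha = 1.0 if alpha == 0.0 else math.sin(alpha) / alpha
    x = 2.0 * math.cos(phi) * math.sin(lam / 2.0) / sinc_alpha
    y = math.sin(phi) / sinc_alpha
    return x, y

print(aitoff(0.0, 0.0))          # (0.0, 0.0): center of the map
print(aitoff(math.pi, 0.0))      # x = pi on the equator at the map edge
print(aitoff(0.0, math.pi / 2))  # y = pi/2 at the north pole
# The whole graticule fits inside the 2:1 ellipse |x| <= pi, |y| <= pi/2.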
748686
Lubell–Yamamoto–Meshalkin inequality
In combinatorial mathematics, the Lubell–Yamamoto–Meshalkin inequality, more commonly known as the LYM inequality, is an inequality on the sizes of sets in a Sperner family, proved by Bollobás, Lubell, Meshalkin, and Yamamoto. It is named for the initials of three of its discoverers. To include the initials of all four discoverers, it is sometimes referred to as the YBLM inequality. This inequality belongs to the field of combinatorics of sets, and has many applications in combinatorics. In particular, it can be used to prove Sperner's theorem. Its name is also used for similar inequalities. Statement of the theorem. Let "U" be an "n"-element set, let "A" be a family of subsets of "U" such that no set in "A" is a subset of another set in "A", and let "ak" denote the number of sets of size "k" in "A". Then formula_0 Lubell's proof. Lubell proves the Lubell–Yamamoto–Meshalkin inequality by a double counting argument in which he counts the permutations of "U" in two different ways. First, by counting all permutations of "U" identified with {1, …, "n" } directly, one finds that there are "n"! of them. But secondly, one can generate a permutation (i.e., an order) of the elements of "U" by selecting a set "S" in "A" and choosing a map that sends {1, … , |"S" | } to "S". If |"S" | = "k", the set "S" is associated in this way with "k"!("n" − "k")! permutations, and in each of them the image of the first "k" elements of "U" is exactly "S". Each permutation may only be associated with a single set in "A", for if two prefixes of a permutation both formed sets in "A" then one would be a subset of the other. Therefore, the number of permutations that can be generated by this procedure is formula_1 Since this number is at most the total number of all permutations, formula_2 Finally, dividing the above inequality by "n"! leads to the result.
[ { "math_id": 0, "text": "\\sum_{k=0}^n\\frac{a_k}{{n \\choose k}} \\le 1." }, { "math_id": 1, "text": "\\sum_{S\\in A}|S|!(n-|S|)!=\\sum_{k=0}^n a_k k! (n-k)!." }, { "math_id": 2, "text": "\\sum_{k=0}^n a_k k! (n-k)!\\le n!." } ]
https://en.wikipedia.org/wiki?curid=748686
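The LYM inequality above is easy to check numerically for small examples. The following Python sketch verifies it for one arbitrarily chosen, illustrative Sperner family over a 5-element set; neither the example family nor the helper names come from the article.

from itertools import combinations
from math import comb

def is_sperner_family(family):
    # No set in the family may be a proper subset of another ('<' on frozensets).
    return not any(a < b for a in family for b in family)

def lym_sum(family, n):
    # Left-hand side of the LYM inequality for subsets of an n-element set.
    return sum(1 / comb(n, len(s)) for s in family)

n = 5
family = [frozenset(c) for c in combinations(range(n), 2) if 0 in c]  # the four 2-sets containing 0
family.append(frozenset({1, 2, 3}))
assert is_sperner_family(family)
print(lym_sum(family, n))   # 4/10 + 1/10 = 0.5 <= 1, as the inequality requires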
748730
Sperner family
In combinatorics, a Sperner family (or Sperner system; named in honor of Emanuel Sperner), or clutter, is a family F of subsets of a finite set "E" in which none of the sets contains another. Equivalently, a Sperner family is an antichain in the inclusion lattice over the power set of "E". A Sperner family is also sometimes called an independent system or irredundant set. Sperner families are counted by the Dedekind numbers, and their size is bounded by Sperner's theorem and the Lubell–Yamamoto–Meshalkin inequality. They may also be described in the language of hypergraphs rather than set families, where they are called clutters. Dedekind numbers. The number of different Sperner families on a set of "n" elements is counted by the Dedekind numbers, the first few of which are 2, 3, 6, 20, 168, 7581, 7828354, 2414682040998, 56130437228687557907788 (sequence in the OEIS). Although accurate asymptotic estimates are known for larger values of "n", it is unknown whether there exists an exact formula that can be used to compute these numbers efficiently. The collection of all Sperner families on a set of "n" elements can be organized as a free distributive lattice, in which the join of two Sperner families is obtained from the union of the two families by removing sets that are a superset of another set in the union. Bounds on the size of a Sperner family. Sperner's theorem. The "k"-element subsets of an "n"-element set form a Sperner family, the size of which is maximized when "k" = "n"/2 (or the nearest integer to it). Sperner's theorem states that these families are the largest possible Sperner families over an "n"-element set. Formally, the theorem states that, for every Sperner family "S" over an "n"-element set, formula_0 LYM inequality. The Lubell–Yamamoto–Meshalkin inequality provides another bound on the size of a Sperner family, and can be used to prove Sperner's theorem. It states that, if "ak" denotes the number of sets of size "k" in a Sperner family over a set of "n" elements, then formula_1 Clutters. A clutter is a family of subsets of a finite set such that none contains any other; that is, it is a Sperner family. The difference is in the questions typically asked. Clutters are an important structure in the study of combinatorial optimization. In more complicated language, a clutter is a hypergraph formula_2 with the added property that formula_3 whenever formula_4 and formula_5 (i.e. no edge properly contains another). An opposite notion to a clutter is an abstract simplicial complex, where every subset of an edge is contained in the hypergraph; this is an order ideal in the poset of subsets of "V". If formula_6 is a clutter, then the blocker of "H", denoted by formula_7, is the clutter with vertex set "V" and edge set consisting of all minimal sets formula_8 so that formula_9 for every formula_10. It can be shown that formula_11 , so blockers give us a type of duality. We define formula_12 to be the size of the largest collection of disjoint edges in "H" and formula_13 to be the size of the smallest edge in formula_7. It is easy to see that formula_14. Minors. There is a minor relation on clutters which is similar to the minor relation on graphs. If formula_6 is a clutter and formula_19, then we may delete "v" to get the clutter formula_20 with vertex set formula_21 and edge set consisting of all formula_10 which do not contain "v". We contract "v" to get the clutter formula_22. 
These two operations commute, and if "J" is another clutter, we say that "J" is a minor of "H" if a clutter isomorphic to "J" may be obtained from "H" by a sequence of deletions and contractions.
[ { "math_id": 0, "text": "|S| \\le \\binom{n}{\\lfloor n/2\\rfloor}." }, { "math_id": 1, "text": "\\sum_{k=0}^n\\frac{a_k}{{n \\choose k}} \\le 1." }, { "math_id": 2, "text": "(V,E)" }, { "math_id": 3, "text": "A \\not\\subseteq B" }, { "math_id": 4, "text": "A,B \\in E" }, { "math_id": 5, "text": "A \\neq B" }, { "math_id": 6, "text": "H = (V,E)" }, { "math_id": 7, "text": "b(H)" }, { "math_id": 8, "text": "B \\subseteq V" }, { "math_id": 9, "text": "B \\cap A \\neq \\varnothing" }, { "math_id": 10, "text": "A \\in E" }, { "math_id": 11, "text": "b(b(H)) = H" }, { "math_id": 12, "text": "\\nu(H)" }, { "math_id": 13, "text": "\\tau(H)" }, { "math_id": 14, "text": "\\nu(H) \\le \\tau(H)" }, { "math_id": 15, "text": "H = (V(G),E(G))" }, { "math_id": 16, "text": "\\nu(H) = \\tau(H)" }, { "math_id": 17, "text": "s,t \\in V(G)" }, { "math_id": 18, "text": "E(G)" }, { "math_id": 19, "text": "v \\in V" }, { "math_id": 20, "text": "H \\setminus v" }, { "math_id": 21, "text": "\nV \\setminus \\{v\\}" }, { "math_id": 22, "text": "H / v = b(b(H) \\setminus v)" } ]
https://en.wikipedia.org/wiki?curid=748730
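The blocker construction for clutters described above can be computed by brute force for small examples. The Python sketch below does this for a two-edge clutter and checks the stated identity b(b(H)) = H; the example and the helper names are illustrative choices, not taken from the article.

from itertools import combinations

def blocker(vertices, edges):
    # All inclusion-minimal vertex sets that meet every edge of the clutter.
    edges = [frozenset(e) for e in edges]
    hitting = [frozenset(c)
               for r in range(len(vertices) + 1)
               for c in combinations(sorted(vertices), r)
               if all(frozenset(c) & e for e in edges)]
    return {h for h in hitting if not any(g < h for g in hitting)}

V = {1, 2, 3, 4}
H = {frozenset({1, 2}), frozenset({3, 4})}      # a clutter with two disjoint edges
bH = blocker(V, H)
print(sorted(map(sorted, bH)))                  # [[1, 3], [1, 4], [2, 3], [2, 4]]
print(blocker(V, bH) == H)                      # True: b(b(H)) = H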
74883647
Conley conjecture
Mathematical conjecture The Conley conjecture, named after mathematician Charles Conley, is a mathematical conjecture in the field of symplectic geometry, a branch of differential geometry. Background. Let formula_0 be a compact symplectic manifold. A vector field formula_1 on formula_2 is called a Hamiltonian vector field if the 1-form formula_3 is exact (i.e., equals the differential of a function formula_4). A Hamiltonian diffeomorphism formula_5 is the integration of a 1-parameter family of Hamiltonian vector fields formula_6. In dynamical systems one would like to understand the distribution of fixed points or periodic points. A periodic point of a Hamiltonian diffeomorphism formula_7 (of period formula_8) is a point formula_9 such that formula_10. A feature of Hamiltonian dynamics is that Hamiltonian diffeomorphisms tend to have infinitely many periodic points. Conley first made such a conjecture for the case that formula_2 is a torus. The Conley conjecture is false in many simple cases. For example, a rotation of a round sphere formula_11 by an angle equal to an irrational multiple of formula_12, which is a Hamiltonian diffeomorphism, has only 2 geometrically different periodic points. On the other hand, it has been proved for various types of symplectic manifolds. History of studies. The Conley conjecture was proved by Franks and Handel for surfaces with positive genus. The case of the higher-dimensional torus was proved by Hingston. Hingston's proof inspired the proof of Ginzburg of the Conley conjecture for symplectically aspherical manifolds. Later Ginzburg–Gurel and Hein proved the Conley conjecture for manifolds whose first Chern class vanishes on spherical classes. Finally, Ginzburg–Gurel proved the Conley conjecture for negatively monotone symplectic manifolds.
[ { "math_id": 0, "text": "(M, \\omega)" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "\\omega( V, \\cdot)" }, { "math_id": 4, "text": "H" }, { "math_id": 5, "text": "\\phi: M \\to M" }, { "math_id": 6, "text": "V_t, t \\in [0, 1]" }, { "math_id": 7, "text": "\\phi" }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "x \\in M" }, { "math_id": 10, "text": "\\phi^k(x) = x " }, { "math_id": 11, "text": "S^2" }, { "math_id": 12, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=74883647
748844
Sperner's theorem
Sperner's theorem, in discrete mathematics, describes the largest possible families of finite sets none of which contain any other sets in the family. It is one of the central results in extremal set theory. It is named after Emanuel Sperner, who published it in 1928. This result is sometimes called Sperner's lemma, but the name "Sperner's lemma" also refers to an unrelated result on coloring triangulations. To differentiate the two results, the result on the size of a Sperner family is now more commonly known as Sperner's theorem. Statement. A family of sets in which none of the sets is a strict subset of another is called a Sperner family, or an antichain of sets, or a clutter. For example, the family of "k"-element subsets of an "n"-element set is a Sperner family. No set in this family can contain any of the others, because a containing set has to be strictly bigger than the set it contains, and in this family all sets have equal size. The value of "k" that makes this example have as many sets as possible is "n"/2 if "n" is even, or either of the nearest integers to "n"/2 if "n" is odd. For this choice, the number of sets in the family is formula_0. Sperner's theorem states that these examples are the largest possible Sperner families over an "n"-element set. Formally, the theorem states that, for every Sperner family "S" over an "n"-element set, formula_1 with equality if and only if "S" consists of all subsets of size formula_2 or of all subsets of size formula_3. Partial orders. Sperner's theorem can also be stated in terms of partial order width. The family of all subsets of an "n"-element set (its power set) can be partially ordered by set inclusion; in this partial order, two distinct elements are said to be incomparable when neither of them contains the other. The width of a partial order is the largest number of elements in an antichain, a set of pairwise incomparable elements. Translating this terminology into the language of sets, an antichain is just a Sperner family, and the width of the partial order is the maximum number of sets in a Sperner family. Thus, another way of stating Sperner's theorem is that the width of the inclusion order on a power set is formula_4. A graded partially ordered set is said to have the Sperner property when one of its largest antichains is formed by a set of elements that all have the same rank. In this terminology, Sperner's theorem states that the partially ordered set of all subsets of a finite set, partially ordered by set inclusion, has the Sperner property. Proof. There are many proofs of Sperner's theorem, each leading to different generalizations. The following proof is due to Lubell. Let "sk" denote the number of "k"-sets in "S". For all 0 ≤ "k" ≤ "n", formula_5 and, thus, formula_6 Since "S" is an antichain, we can sum over the above inequality from "k" = 0 to "n" and then apply the LYM inequality to obtain formula_7 which means formula_8 This completes the proof of part 1. To have equality, all the inequalities in the preceding proof must be equalities. Since formula_9 if and only if formula_10 or formula_11 we conclude that equality implies that "S" consists only of sets of sizes formula_12 or formula_13 For even "n" that concludes the proof of part 2. For odd "n" there is more work to do, which we omit here because it is complicated. Generalizations. There are several generalizations of Sperner's theorem for subsets of formula_14 the poset of all subsets of "E". No long chains. A chain is a subfamily formula_15 that is totally ordered, i.e., formula_16 (possibly after renumbering). The chain has "r" + 1 members and length "r".
An "r"-chain-free family (also called an "r"-family) is a family of subsets of "E" that contains no chain of length "r". Erdös proved that the largest size of an "r"-chain-free family is the sum of the "r" largest binomial coefficients formula_17. The case "r" = 1 is Sperner's theorem. "p"-compositions of a set. In the set formula_18 of "p"-tuples of subsets of "E", we say a "p"-tuple formula_19 is ≤ another one, formula_20, if formula_21 for each "i" = 1, 2, ..., "p". We call formula_19 a "p"-composition of "E" if the sets formula_22 form a partition of "E". Meshalkin proved that the maximum size of an antichain of "p"-compositions is the largest "p"-multinomial coefficient formula_23 that is, the coefficient in which all "n""i" are as nearly equal as possible (i.e., they differ by at most 1). He proved this by proving a generalized LYM inequality. The case "p" = 2 is Sperner's theorem, because then formula_24 and the assumptions reduce to the sets formula_25 being a Sperner family. No long chains in "p"-compositions of a set. Later authors combined the Erdös and Meshalkin theorems by adapting Meshalkin's proof of his generalized LYM inequality. They showed that the largest size of a family of "p"-compositions such that the sets in the "i"-th position of the "p"-tuples, ignoring duplications, are "r"-chain-free, for every formula_26 (but not necessarily for "i" = "p"), is not greater than the sum of the formula_27 largest "p"-multinomial coefficients. Projective geometry analog. In the finite projective geometry PG("d", "F""q") of dimension "d" over a finite field of order "q", let formula_28 be the family of all subspaces. When partially ordered by set inclusion, this family is a lattice. Rota and Harper proved that the largest size of an antichain in formula_28 is the largest Gaussian coefficient formula_29 this is the projective-geometry analog, or "q"-analog, of Sperner's theorem. They further proved that the largest size of an "r"-chain-free family in formula_28 is the sum of the "r" largest Gaussian coefficients. Their proof is by a projective analog of the LYM inequality. No long chains in "p"-compositions of a projective space. A Meshalkin-like generalization of the Rota–Harper theorem has also been obtained. In PG("d", "F""q"), a Meshalkin sequence of length "p" is a sequence formula_30 of projective subspaces such that no proper subspace of PG("d", "F""q") contains them all and their dimensions sum to formula_31. The theorem is that a family of Meshalkin sequences of length "p" in PG("d", "F""q"), such that the subspaces appearing in position "i" of the sequences contain no chain of length "r" for each formula_32 has size not more than the sum of the largest formula_27 of the quantities formula_33 where formula_34 (in which we assume that formula_35) denotes the "p"-Gaussian coefficient formula_36 and formula_37 the second elementary symmetric function of the numbers formula_38
[ { "math_id": 0, "text": "\\tbinom{n}{\\lfloor n/2\\rfloor}" }, { "math_id": 1, "text": "|S| \\le \\binom{n}{\\lfloor n/2\\rfloor}," }, { "math_id": 2, "text": "\\lfloor n/2\\rfloor" }, { "math_id": 3, "text": "\\lceil n/2\\rceil" }, { "math_id": 4, "text": "\\binom{n}{\\lfloor n/2\\rfloor}" }, { "math_id": 5, "text": "{n \\choose \\lfloor{n/2}\\rfloor} \\ge {n \\choose k}" }, { "math_id": 6, "text": "{s_k \\over {n \\choose \\lfloor{n/2}\\rfloor}} \\le {s_k \\over {n \\choose k}}." }, { "math_id": 7, "text": "\\sum_{k=0}^n{s_k \\over {n \\choose \\lfloor{n/2}\\rfloor}} \\le \\sum_{k=0}^n{s_k \\over {n \\choose k}} \\le 1," }, { "math_id": 8, "text": " |S| = \\sum_{k=0}^n s_k \\le {n \\choose \\lfloor{n/2}\\rfloor}." }, { "math_id": 9, "text": "{n \\choose \\lfloor{n/2}\\rfloor} = {n \\choose k}" }, { "math_id": 10, "text": "k = \\lfloor{n/2}\\rfloor" }, { "math_id": 11, "text": "\\lceil{n/2}\\rceil," }, { "math_id": 12, "text": "\\lfloor{n/2}\\rfloor" }, { "math_id": 13, "text": "\\lceil{n/2}\\rceil." }, { "math_id": 14, "text": "\\mathcal P(E)," }, { "math_id": 15, "text": "\\{S_0,S_1,\\dots,S_r\\} \\subseteq \\mathcal P(E)" }, { "math_id": 16, "text": "S_0 \\subset S_1\\subset \\dots\\subset S_r" }, { "math_id": 17, "text": "\\binom{n}{i}" }, { "math_id": 18, "text": "\\mathcal P(E)^p" }, { "math_id": 19, "text": "(S_1,\\dots,S_p)" }, { "math_id": 20, "text": "(T_1,\\dots,T_p)," }, { "math_id": 21, "text": "S_i \\subseteq T_i" }, { "math_id": 22, "text": "S_1,\\dots,S_p" }, { "math_id": 23, "text": "\\binom{n}{n_1\\ n_2\\ \\dots\\ n_p}," }, { "math_id": 24, "text": "S_2 = E \\setminus S_1" }, { "math_id": 25, "text": "S_1" }, { "math_id": 26, "text": "i = 1,2,\\dots,p-1" }, { "math_id": 27, "text": "r^{p-1}" }, { "math_id": 28, "text": "\\mathcal L(p,F_q)" }, { "math_id": 29, "text": "\\begin{bmatrix} d+1 \\\\ k\\end{bmatrix};" }, { "math_id": 30, "text": "(A_1,\\ldots,A_p)" }, { "math_id": 31, "text": "d-p+1" }, { "math_id": 32, "text": "i = 1,2,\\dots,p-1," }, { "math_id": 33, "text": "\\begin{bmatrix} d+1 \\\\ n_1\\ n_2\\ \\dots\\ n_p \\end{bmatrix} q^{s_2(n_1,\\ldots,n_p)}," }, { "math_id": 34, "text": "\\begin{bmatrix} d+1 \\\\ n_1\\ n_2\\ \\dots\\ n_p \\end{bmatrix}" }, { "math_id": 35, "text": "d+1 = n_1+\\cdots+n_p" }, { "math_id": 36, "text": "\\begin{bmatrix} d+1 \\\\ n_1 \\end{bmatrix} \\begin{bmatrix} d+1-n_1 \\\\ n_2 \\end{bmatrix} \\cdots \\begin{bmatrix} d+1-(n_1+\\cdots+n_{p-1} )\\\\ n_p \\end{bmatrix}" }, { "math_id": 37, "text": "s_2(n_1,\\ldots,n_p) := n_1n_2 + n_1n_3 + n_2n_3 + n_1n_4 + \\cdots + n_{p-1}n_p," }, { "math_id": 38, "text": "n_1, n_2, \\dots, n_p." } ]
https://en.wikipedia.org/wiki?curid=748844
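For very small "n", Sperner's theorem can be confirmed by exhaustive search. The Python sketch below finds the width of the inclusion order on the subsets of a 4-element set and compares it with the central binomial coefficient; it is an illustrative brute-force check rather than an efficient method, and it is not part of the article.

from itertools import combinations
from math import comb

def is_antichain(family):
    # No member of the family is a proper subset of another member.
    return not any(a < b for a in family for b in family)

n = 4
subsets = [frozenset(c) for r in range(n + 1) for c in combinations(range(n), r)]

# Search downward for the largest antichain among the 2**n subsets; this is
# feasible only for tiny n, but enough to see the bound C(n, n//2) attained.
width = next(r for r in range(len(subsets), 0, -1)
             if any(is_antichain(f) for f in combinations(subsets, r)))
print(width, comb(n, n // 2))   # 6 6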
74890981
Kreiss matrix theorem
In matrix analysis, the Kreiss matrix theorem relates the so-called Kreiss constant of a matrix with the power iterates of this matrix. It was originally introduced by Heinz-Otto Kreiss to analyze the stability of finite difference methods for partial differential equations. Kreiss constant of a matrix. Given a matrix "A", the Kreiss constant 𝒦("A") (with respect to the closed unit circle) of "A" is defined as formula_0 while the Kreiss constant 𝒦lhp("A") with respect to the left-half plane is given by formula_1 Statement of Kreiss matrix theorem. Let "A" be a square matrix of order "n" and "e" be Euler's number. The modern and sharp version of the Kreiss matrix theorem states that the following inequality is tight: formula_4 and it follows from an application of Spijker's lemma. There also exists an analogous result in terms of the Kreiss constant with respect to the left-half plane and the matrix exponential: formula_5 Consequences and applications. The value formula_6 (respectively, formula_7) can be interpreted as the "maximum transient growth" of the discrete-time system formula_8 (respectively, continuous-time system formula_9). Thus, the Kreiss matrix theorem gives both upper and lower bounds on the transient behavior of the system with dynamics given by the matrix "A": a large (and finite) Kreiss constant indicates that the system will have an accentuated transient phase before decaying to zero.
[ { "math_id": 0, "text": "\\mathcal{K}(\\mathbf{A})=\\sup _{|z|>1}(|z|-1)\\left\\|(z-\\mathbf{A})^{-1}\\right\\|," }, { "math_id": 1, "text": "\\mathcal{K}_{\\textrm{lhp}}(\\mathbf{A})=\\sup _{\\Re(z)>0}(\\Re(z))\\left\\|(z-\\mathbf{A})^{-1}\\right\\|." }, { "math_id": 2, "text": "\\mathcal{K}(A)=\\sup _{\\varepsilon>0} \\frac{\\rho_{\\varepsilon}(A)-1}{\\varepsilon}" }, { "math_id": 3, "text": "\\mathcal{K}_{\\textrm{lhp}}(A)=\\sup _{\\varepsilon>0} \\frac{\\alpha_{\\varepsilon}(A)}{\\varepsilon}" }, { "math_id": 4, "text": "\\mathcal{K}(\\mathbf{A}) \\leq \\sup_{k \\geq 0}\\left\\|\\mathbf{A}^k\\right\\| \\leq e\\, n\\, \\mathcal{K}(\\mathbf{A})," }, { "math_id": 5, "text": "\\mathcal{K}_{\\mathrm{lhp}}(\\mathbf{A}) \\leq \\sup _{t \\geq 0}\\left\\|\\mathrm{e}^{t \\mathbf{A}}\\right\\| \\leq e \\, n \\, \\mathcal{K}_{\\mathrm{lhp}}(\\mathbf{A})" }, { "math_id": 6, "text": "\\sup_{k \\geq 0}\\left\\|\\mathbf{A}^k\\right\\|" }, { "math_id": 7, "text": "\\sup _{t \\geq 0}\\left\\|\\mathrm{e}^{t \\mathbf{A}}\\right\\|" }, { "math_id": 8, "text": "x_{k+1}=A x_k" }, { "math_id": 9, "text": "\\dot{x}=A x" } ]
https://en.wikipedia.org/wiki?curid=74890981
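The quantities in the Kreiss matrix theorem can be estimated numerically. The NumPy sketch below samples the resolvent on a grid outside the unit disk to obtain a crude lower estimate of the Kreiss constant of a small matrix and compares it with the observed power growth; the grid, the choice of the spectral norm, and the example matrix are illustrative assumptions, not prescribed by the article.

import numpy as np

def kreiss_constant_estimate(A, radii=None, angles=None):
    # Crude estimate of sup_{|z|>1} (|z| - 1) * ||(zI - A)^(-1)|| by grid sampling.
    radii = np.linspace(1.001, 3.0, 200) if radii is None else radii
    angles = np.linspace(0.0, 2.0 * np.pi, 200) if angles is None else angles
    n = A.shape[0]
    K = 0.0
    for r in radii:
        for t in angles:
            z = r * np.exp(1j * t)
            resolvent = np.linalg.inv(z * np.eye(n) - A)
            K = max(K, (abs(z) - 1.0) * np.linalg.norm(resolvent, 2))
    return K

# A stable matrix (spectral radius 0.9 < 1) with strong transient growth.
A = np.array([[0.9, 5.0],
              [0.0, 0.9]])
K = kreiss_constant_estimate(A)
peak = max(np.linalg.norm(np.linalg.matrix_power(A, k), 2) for k in range(200))
n = A.shape[0]
print(K, peak, np.e * n * K)   # illustrates K <= sup_k ||A^k|| <= e*n*K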
748976
Ernst Heinrich Weber
German psychologist Ernst Heinrich Weber (24 June 1795 – 26 January 1878) was a German physician who is considered one of the founders of experimental psychology. He was an influential and important figure in the areas of physiology and psychology during his lifetime and beyond. His studies on sensation and touch, along with his emphasis on good experimental techniques, led to new directions and areas of study for future psychologists, physiologists, and anatomists. Ernst Weber was born into an academic background, with his father serving as a professor at the University of Wittenberg. Weber became a doctor, specializing in anatomy and physiology. Two of his younger brothers, Wilhelm and Eduard, were also influential in academia, both as scientists, one specializing in physics and the other in anatomy. Ernst became a lecturer and a professor at the University of Leipzig and stayed there until his retirement. Early life and education. Ernst Heinrich Weber was born on 24 June 1795 in Wittenberg, Saxony, Holy Roman Empire. He was the son of Michael Weber, a professor of theology at the University of Wittenberg. At a young age, Weber became interested in physics and the sciences after being heavily influenced by Ernst Chladni, a physicist often referred to as the “father of acoustics”. Weber completed secondary school at Meissen and began studying medicine at the University of Wittenberg in 1811. He went on to receive his MD in 1815 from the University of Halle. University career. Weber spent his entire academic career at the University of Leipzig. He completed his "Habilitation" in 1817 and became an assistant in J.C. Clarus’ medical clinic in the same year. He became professor of comparative anatomy in 1818 and chair of human anatomy at the university in 1821. Ernst Weber’s first direct contribution to psychology came in 1834 when trying to describe the sensation of touch ("De Pulsu, Resorptione, Auditu et Tactu". Leipzig 1834). He was professor of physiology and anatomy from 1840 to 1866, and returned to the position of professor of anatomy from 1866 to 1871. In his later life, Weber became less involved in testing and experimenting, although he was still interested in sensory physiology. Ernst Heinrich Weber retired from the University of Leipzig in 1871. He continued to work with his brother Eduard, and their work on nerve stimulation and muscle suppression led to inhibitory responses becoming a popular therapy of the time. Ernst Weber died in 1878 in Leipzig, Germany. Contributions. Just-Noticeable Difference. Weber described the just-noticeable difference or jnd as follows: “in observing the disparity between things that are compared, we perceive not the difference between the things, but the ratio of this difference to the magnitude of things compared.” In other words, we are able to distinguish the relative difference, not the absolute difference between items. Or, we can distinguish between stimuli having a constant ratio, not a constant difference. This ratio is known as the Weber fraction. Weber’s first work with the jnd had to do with differences in weight. He stated that the jnd is the "minimum amount of difference between two weights necessary to tell them apart". He found that the finest discrimination between weights was when they differed by 8–10%. For example, if you were holding a 100 g block, the second block would need to weigh at least 108 g in order to be distinguishable. Weber also suspected that a constant fraction applied for all senses, but that it is different for each sense.
When comparing the differences in line length, there must be at least a 0.01 difference in order to distinguish the two. When comparing musical pitch, there must be a difference of at least 0.006 vibrations per second. So for every sense, some increase in intensity is needed in order to tell a difference. Weber's Law. Weber’s Law, as labeled by Gustav Theodor Fechner, established that sensory events can be related mathematically to measurable relative changes in physical stimulus values. formula_0 formula_1: amount of stimulation that needs to be added to produce a jnd formula_2: amount of existing stimulation (formula_2 from the German "Reiz", meaning stimulus) formula_3: constant (different for each sense) Weber’s law is invalid when the stimulus approaches the upper or lower limits of a sensory modality. Fechner took inspiration from Weber’s Law and developed what we know today as Fechner’s Law, claiming that there was a logarithmic relation between stimulus intensity and perceived intensity. Fechner’s Law was more advanced than Weber's Law, partly because Fechner had developed new methods for measuring just-noticeable differences in different sense modalities, making the measured results more accurate. Experimental Psychology. For most of his career, Weber worked with his brothers, Wilhelm and Eduard, and partner Gustav Theodor Fechner. Throughout these working relationships, Weber completed research on the central nervous system, the auditory system, the anatomy and function of the brain, circulation, etc., and a large portion of research on sensory physiology and psychology. The following items are part of Weber’s contributions to experimental psychology: Experimental Wave Theory. Studied flow and movement of waves in liquids and elastic tubes. Hydrodynamics. Weber discovered laws and applied them to circulation. In 1821, Weber launched a series of experiments on the physics of fluids with his younger brother Wilhelm. This research was the first detailed account of hydrodynamic principles in the circulation of blood. Weber continued his research on blood and in 1827, he made another significant finding. Weber explained the elasticity of blood vessels in the movement of blood in the aorta in a continuous flow to the capillaries and arterioles. Two-point Threshold Technique. This technique helped map sensitivity and touch acuity on the body using a compass technique. Points of a compass would be set at varying distances in order to see at what distance the points of the compass are perceived as two separate points instead of one single point. Weber also wrote about and tested other ideas on sensation including a terminal threshold, which is the highest intensity an individual could sense before the sensation could not be detected any longer. Weber’s Illusion. Weber’s Illusion is an "experience of divergence of two points when stimulation is moved over insensitive areas and convergence of two points when moved over sensitive areas". Weber’s use of multivariate experiments, precise measurements, and research on sensory psychology and sensory physiology laid the groundwork for accepting experimental psychology as a field and provided new ideas for fellow 19th century psychologists to expand on. Publications. Weber's work on the tactile senses was published in Latin as "De Subtilitate Tactus" (1834), and in German as "Der Tastsinn und das Gemeingefühl" in 1846.
Both works were translated into English by Ross and Murray as "E.H.Weber: The Sense of Touch" (Academic Press, 1978) and reprinted as "E.H.Weber on the Tactile Senses" (Erlbaum, Taylor & Francis, 1996). Weber proposed there was a threshold of sensation in each individual. The two-point threshold, the smallest distance between two points where a person determines that it is two points and not one, was Weber’s first discovery. Weber’s work made a significant impact on the field of experimental psychology, as he was one of the first scientists to test his ideas on humans. His meticulous notes and new ideas of testing subjects described in his book "Der tastsinn und das gemeingefühl" (English: "The sense of touch and the common sensibility") led E. B. Titchener to call the work "the foundation stone of experimental psychology". The book that described the blood circulation research, "Wellenlehre, auf Experimenten gegründet" (English: "Wave Theory, Founded on Experiments"), became instantly recognized as very important to physics and physiology. This research led the way for future investigation, although it was not formally published until 1850 with the culmination of the rest of his research on blood in a book entitled "Ueber die Anwendung der Wellenlehre auf die Lehre vom Kreislauf des Blutes und insbesondere auf die Pulslehre" (English: "Concerning the application of the wave theory to the theory of the circulation of the blood and, in particular, on the pulse teaching"). Joint works with his brothers Wilhelm Eduard Weber and Eduard Friedrich Weber: Legacy and influence. Weber is often cited as the pioneer or father of experimental psychology. He was the first to conduct true psychological experiments that held validity. While most psychologists of the time conducted work from behind a desk, Weber was actively conducting experiments, manipulating only one variable at a time in order to gain more accurate results. This paved the way for the field of psychology as an experimental science and opened the way for the development of even more accurate and intense research methods. One of Weber’s greatest influences was on Gustav Fechner. Weber was appointed the Dozent of Psychology at the University of Leipzig the same year that Fechner enrolled. Weber’s work with sensation inspired Fechner to further the work and go on to develop Weber’s law. At the time of his sensation work, Weber did not fully realize the implications that his experiments would have on the understanding of sensory stimulus and response.
[ { "math_id": 0, "text": "\\frac{\\Delta R}{R} = k" }, { "math_id": 1, "text": "\\Delta R" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=748976
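Weber's law as stated above is simple enough to express directly in code. The short Python sketch below computes the just-noticeable increment for a few reference weights using roughly the 8% Weber fraction for weight quoted in the article; the function name and the exact value of k are illustrative assumptions.

def just_noticeable_difference(R, k):
    # Weber's law: the smallest detectable increment is a fixed fraction k
    # of the existing stimulus magnitude R (delta_R / R = k).
    return k * R

k_weight = 0.08   # roughly the 8-10% discrimination threshold for lifted weights
for R in (100, 200, 400):   # reference weights in grams
    print(R, just_noticeable_difference(R, k_weight))
# 100 -> 8.0, 200 -> 16.0, 400 -> 32.0: the ratio, not the absolute difference, stays constant.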
749012
Current source
Electronic component delivering stable electric current regardless of voltage A current source is an electronic circuit that delivers or absorbs an electric current which is independent of the voltage across it. A current source is the dual of a voltage source. The term "current sink" is sometimes used for sources fed from a negative voltage supply. Figure 1 shows the schematic symbol for an ideal current source driving a resistive load. There are two types. An "independent current source" (or sink) delivers a constant current. A "dependent current source" delivers a current which is proportional to some other voltage or current in the circuit. Background. An ideal current source generates a current that is independent of the voltage changes across it. An ideal current source is a mathematical model, which real devices can approach very closely. If the current through an ideal current source can be specified independently of any other variable in a circuit, it is called an "independent" current source. Conversely, if the current through an ideal current source is determined by some other voltage or current in a circuit, it is called a dependent or controlled current source. Symbols for these sources are shown in Figure 2. The internal resistance of an ideal current source is infinite. An independent current source with zero current is identical to an ideal open circuit. The voltage across an ideal current source is completely determined by the circuit it is connected to. When connected to a short circuit, there is zero voltage and thus zero power delivered. When connected to a load resistance, the current source manages the voltage in such a way as to keep the current constant; so in an ideal current source the voltage across the source approaches infinity as the load resistance approaches infinity (an open circuit). No physical current source is ideal. For example, no physical current source can operate when applied to an open circuit. There are two characteristics that define a current source in real life. One is its internal resistance and the other is its compliance voltage. The compliance voltage is the maximum voltage that the current source can supply to a load. Over a given load range, it is possible for some types of real current sources to exhibit nearly infinite internal resistance. However, when the current source reaches its compliance voltage, it abruptly stops being a current source. In circuit analysis, a current source having finite internal resistance is modeled by placing the value of that resistance across an ideal current source (the Norton equivalent circuit). However, this model is only useful when a current source is operating within its compliance voltage. Implementations. Passive current source. The simplest non-ideal current source consists of a voltage source in series with a resistor. The amount of current available from such a source is given by the ratio of the voltage across the voltage source to the resistance of the resistor (Ohm's law; "I" = "V"/"R"). This value of current will only be delivered to a load with zero voltage drop across its terminals (a short circuit, an uncharged capacitor, a charged inductor, a virtual ground circuit, etc.) The current delivered to a load with nonzero voltage (drop) across its terminals (a linear or nonlinear resistor with a finite resistance, a charged capacitor, an uncharged inductor, a voltage source, etc.) will always be different. 
It is given by the ratio of the voltage drop across the resistor (the difference between the exciting voltage and the voltage across the load) to its resistance. For a nearly ideal current source, the value of the resistor should be very large but this implies that, for a specified current, the voltage source must be very large (in the limit as the resistance and the voltage go to infinity, the current source will become ideal and the current will not depend at all on the voltage across the load). Thus, efficiency is low (due to power loss in the resistor) and it is usually impractical to construct a 'good' current source this way. Nonetheless, it is often the case that such a circuit will provide adequate performance when the specified current and load resistance are small. For example, a 5 V voltage source in series with a 4.7 kΩ resistor will provide an "approximately" constant current of 1 mA ± 5% to a load resistance in the range of 50 to 450 Ω. A Van de Graaff generator is an example of such a high voltage current source. It behaves as an almost constant current source because of its very high output voltage coupled with its very high output resistance and so it supplies the same few microamperes at any output voltage up to hundreds of thousands of volts (or even tens of megavolts) for large laboratory versions. Active current sources without negative feedback. In these circuits the output current is not monitored and controlled by means of negative feedback. Current-stable nonlinear implementation. They are implemented by active electronic components (transistors) having current-stable nonlinear output characteristic when driven by steady input quantity (current or voltage). These circuits behave as dynamic resistors changing their present resistance to compensate current variations. For example, if the load increases its resistance, the transistor decreases its present output resistance (and "vice versa") to keep up a constant total resistance in the circuit. Active current sources have many important applications in electronic circuits. They are often used in place of ohmic resistors in analog integrated circuits (e.g., a differential amplifier) to generate a current that depends slightly on the voltage across the load. The common emitter configuration driven by a constant input current or voltage and common source (common cathode) driven by a constant voltage naturally behave as current sources (or sinks) because the output impedance of these devices is naturally high. The output part of the simple current mirror is an example of such a current source widely used in integrated circuits. The common base, common gate and common grid configurations can serve as constant current sources as well. A JFET can be made to act as a current source by tying its gate to its source. The current then flowing is the "I"DSS of the FET. These can be purchased with this connection already made and in this case the devices are called current regulator diodes or constant current diodes or current limiting diodes (CLD). Alternatively, an enhancement-mode N-channel MOSFET (metal–oxide–semiconductor field-effect transistor) could be used instead of a JFET in the circuits listed below for similar functionality. Following voltage implementation. An example: bootstrapped current source. Voltage compensation implementation. 
The simple resistor passive current source is ideal only when the voltage across it is zero; so voltage compensation by applying parallel negative feedback might be considered to improve the source. Operational amplifiers with feedback effectively work to minimise the voltage across their inputs. This results in making the inverting input a virtual ground, with the current running through the feedback, or load, and the passive current source. The input voltage source, the resistor, and the op-amp constitute an "ideal" current source with value "I"OUT = "V"IN/"R". The transimpedance amplifier and an op-amp inverting amplifier are typical implementations of this idea. The floating load is a serious disadvantage of this circuit solution. Current compensation implementation. Typical examples are the Howland current source and its derivative, the Deboo integrator. In the last example (Fig. 1), the Howland current source consists of an input voltage source, "V"IN, a positive resistor, R, a load (the capacitor, C, acting as impedance "Z") and a negative impedance converter INIC (R1 = R2 = R3 = R and the op-amp). The input voltage source and the resistor R constitute an imperfect current source passing current "I"R through the load (Fig. 3 in the source). The INIC acts as a second current source passing "helping" current, "I"−R, through the load. As a result, the total current flowing through the load is constant and the circuit impedance seen by the input source is increased. However, the Howland current source isn't widely used because it requires the four resistors to be perfectly matched, and its impedance drops at high frequencies. The grounded load is an advantage of this circuit solution. Current sources with negative feedback. They are implemented as a voltage follower with series negative feedback driven by a constant input voltage source (i.e., a "negative feedback voltage stabilizer"). The voltage follower is loaded by a constant (current sensing) resistor acting as a simple current-to-voltage converter connected in the feedback loop. The external load of this current source is connected somewhere in the path of the current supplying the current sensing resistor but out of the feedback loop. The voltage follower adjusts its output current "I"OUT flowing through the load so as to make the voltage drop "V"R = "I"OUT"R" across the current sensing resistor R equal to the constant input voltage "V"IN. Thus the voltage stabilizer keeps up a constant voltage drop across a constant resistor; so, a constant current "I"OUT = "V"R/"R" = "V"IN/"R" flows through the resistor and respectively through the load. If the input voltage varies, this arrangement will act as a voltage-to-current converter (voltage-controlled current source, VCCS); it can be thought of as a reversed (by means of negative feedback) current-to-voltage converter. The resistance R determines the transfer ratio (transconductance). Current sources implemented as circuits with series negative feedback have the disadvantage that the voltage drop across the current sensing resistor decreases the maximal voltage across the load (the "compliance voltage"). Simple transistor current sources. Constant current diode. The simplest constant-current source or sink is formed from one component: a JFET with its gate attached to its source. Once the drain-source voltage reaches a certain minimum value, the JFET enters saturation where current is approximately constant.
This configuration is known as a constant-current diode, as it behaves much like a dual to the constant voltage diode (Zener diode) used in simple voltage sources. Due to the large variability in saturation current of JFETs, it is common to also include a source resistor (shown in the adjacent image) which allows the current to be tuned down to a desired value. Zener diode current source. In this bipolar junction transistor (BJT) implementation (Figure 4) of the general idea above, a "Zener voltage stabilizer" (R1 and DZ1) drives an "emitter follower" (Q1) loaded by a "constant emitter resistor" (R2) sensing the load current. The external (floating) load of this current source is connected to the collector so that almost the same current flows through it and the emitter resistor (they can be thought of as connected in series). The transistor, Q1, adjusts the output (collector) current so as to keep the voltage drop across the constant emitter resistor, R2, almost equal to the relatively constant voltage drop across the Zener diode, DZ1. As a result, the output current is almost constant even if the load resistance and/or voltage vary. The operation of the circuit is considered in details below. A Zener diode, when reverse biased (as shown in the circuit) has a constant voltage drop across it irrespective of the current flowing through it. Thus, as long as the Zener current ("I"Z) is above a certain level (called holding current), the voltage across the Zener diode ("V"Z) will be constant. Resistor, R1, supplies the Zener current and the base current ("I"B) of NPN transistor (Q1). The constant Zener voltage is applied across the base of Q1 and emitter resistor, R2. Voltage across "R"2 ("V"R2) is given by "V"Z − "V"BE, where "V"BE is the base-emitter drop of Q1. The emitter current of Q1 which is also the current through R2 is given by formula_0 Since "V"Z is constant and "V"BE is also (approximately) constant for a given temperature, it follows that "V"R2 is constant and hence "I"E is also constant. Due to transistor action, emitter current, "I"E, is very nearly equal to the collector current, "I"C, of the transistor (which in turn, is the current through the load). Thus, the load current is constant (neglecting the output resistance of the transistor due to the Early effect) and the circuit operates as a constant current source. As long as the temperature remains constant (or doesn't vary much), the load current will be independent of the supply voltage, R1 and the transistor's gain. R2 allows the load current to be set at any desirable value and is calculated by formula_1 where "V"BE is typically 0.65 V for a silicon device. ("I"R2 is also the emitter current and is assumed to be the same as the collector or required load current, provided "h"FE is sufficiently large). Resistance "R"1 is calculated as formula_2 where "K" = 1.2 to 2 (so that "R"R1 is low enough to ensure adequate "I"B), formula_3 and "h"FE,min is the lowest acceptable current gain for the particular transistor type being used. LED current source. The Zener diode can be replaced by any other diode; e.g., a light-emitting diode LED1 as shown in Figure 5. The LED voltage drop ("V"D) is now used to derive the constant voltage and also has the additional advantage of tracking (compensating) "V"BE changes due to temperature. "R"2 is calculated as formula_4 and "R"1 as formula_5, where "I"D is the LED current Transistor current source with diode compensation. 
Temperature changes will change the output current delivered by the circuit of Figure 4 because VBE is sensitive to temperature. Temperature dependence can be compensated using the circuit of Figure 6, which includes a standard diode, D (of the same semiconductor material as the transistor), in series with the Zener diode as shown in the image on the left. The diode drop ("V"D) tracks the "V"BE changes due to temperature and thus significantly counteracts the temperature dependence of the CCS. Resistance "R"2 is now calculated as formula_6 Since "V"D = "V"BE = 0.65 V, formula_7 "R"1 is calculated as formula_8 Note that this only works well if DZ1 is a reference diode or another stable voltage source. Together with 'normal' Zener diodes, especially those with lower Zener voltages (below 5 V), the diode might even worsen the overall temperature dependency. Current mirror with emitter degeneration. Series negative feedback is also used in the current mirror with emitter degeneration. Negative feedback is a basic feature in some current mirrors using multiple transistors, such as the Widlar current source and the Wilson current source. Constant current source with thermal compensation. One limitation with the circuits in Figures 5 and 6 is that the thermal compensation is imperfect. In bipolar transistors, as the junction temperature increases, the Vbe drop (voltage drop from base to emitter) decreases. In the two previous circuits, a decrease in Vbe will cause an increase in voltage across the emitter resistor, which in turn will cause an increase in collector current drawn through the load. The end result is that the amount of 'constant' current supplied is at least somewhat dependent on temperature. This effect is mitigated to a large extent, but not completely, by corresponding voltage drops for the diode, D1, in Figure 6, and the LED, LED1, in Figure 5. If the power dissipation in the active device of the CCS is not small and/or insufficient emitter degeneration is used, this can become a non-trivial issue. Imagine in Figure 5, at power up, that the LED has 1 V across it driving the base of the transistor. At room temperature there is about 0.6 V drop across the "V"be junction and hence 0.4 V across the emitter resistor, giving an approximate collector (load) current of 0.4/Re amps. Now imagine that the power dissipation in the transistor causes it to heat up. This causes the "V"be drop (which was 0.6 V at room temperature) to drop to, say, 0.2 V. Now the voltage across the emitter resistor is 0.8 V, twice what it was before the warmup. This means that the collector (load) current is now twice the design value! This is an extreme example of course, but serves to illustrate the issue. The circuit to the left overcomes the thermal problem (see also current limiting). To see how the circuit works, assume the voltage has just been applied at V+. Current runs through R1 to the base of Q1, turning it on and causing current to begin to flow through the load into the collector of Q1. This same load current then flows out of Q1's emitter and consequently through Rsense to ground. When this current through Rsense to ground is sufficient to cause a voltage drop that is equal to the Vbe drop of Q2, Q2 begins to turn on. As Q2 turns on, it pulls more current through its collector resistor, R1, which diverts some of the injected current in the base of Q1, causing Q1 to conduct less current through the load. This creates a negative feedback loop within the circuit, which keeps the voltage at Q1's emitter almost exactly equal to the Vbe drop of Q2.
Since Q2 is dissipating very little power compared to Q1 (since all the load current goes through Q1, not Q2), Q2 will not heat up any significant amount and the reference (current setting) voltage across Rsense will remain steady at ≈0.6 V, or one diode drop above ground, regardless of the thermal changes in the Vbe drop of Q1. The circuit is still sensitive to changes in the ambient temperature in which the device operates as the BE voltage drop in Q2 varies slightly with temperature. Op-amp current sources. The simple transistor current source from Figure 4 can be improved by inserting the base-emitter junction of the transistor in the feedback loop of an op-amp (Figure 7). Now the op-amp increases its output voltage to compensate for the VBE drop. The circuit is actually a buffered non-inverting amplifier driven by a constant input voltage. It keeps up this constant voltage across the constant sense resistor. As a result, the current flowing through the load is constant as well; it is exactly the Zener voltage divided by the sense resistor. The load can be connected either in the emitter (Figure 7) or in the collector (Figure 4) but in both the cases it is floating as in all the circuits above. The transistor is not needed if the required current doesn't exceed the sourcing ability of the op-amp. The article on current mirror discusses another example of these so-called "gain-boosted" current mirrors. Voltage regulator current sources. The general negative feedback arrangement can be implemented by an IC voltage regulator (LM317 voltage regulator on Figure 8). As with the bare emitter follower and the precise op-amp follower above, it keeps up a constant voltage drop (1.25 V) across a constant resistor (1.25 Ω); so, a constant current (1 A) flows through the resistor and the load. The LED is on when the voltage across the load exceeds 1.8 V (the indicator circuit introduces some error). The grounded load is an important advantage of this solution. Curpistor tubes. Nitrogen-filled glass tubes with two electrodes and a calibrated Becquerel (decays per second) amount of 226Ra offer a constant number of charge carriers per second for conduction, which determines the maximum current the tube can pass over a voltage range from 25 to 500 V. Current and voltage source comparison. Most sources of electrical energy (mains electricity, a battery, etc.) are best modeled as voltage sources, however some (notably solar cells) are better modeled using current sources. Sometimes it is easier to view a current source as a voltage source and vice versa (see conversion in Figure 9) using Norton's and Thévenin's theorems. Voltage sources provide an almost-constant output voltage as long as the current drawn from the source is within the source's capabilities. An ideal voltage source loaded by an open circuit (i.e., an infinite impedance) will provide no current (and hence no power). But when the load resistance approaches zero (a short circuit), the current (and thus power) approach infinity. Such a theoretical device has a zero ohm output impedance in series with the source. Real-world voltage sources instead have a non-zero output impedance, which is preferably very low (often much less than 1 ohm). Conversely, a current source provides a constant current, as long as the impedance of the load is sufficiently lower than the current source's parallel impedance (which is preferably very high and ideally infinite). 
In the case of transistor current sources, impedances of a few megohms (at low frequencies) are typical. Because power is current squared times resistance, as a load resistance connected to a current source approaches zero (a short circuit), the power delivered approaches zero even though the current stays constant. "Ideal" current sources don't exist. Hypothetically connecting one to an "ideal" open circuit would create the paradox of running a constant, non-zero current (from the current source) through an element with a defined zero current (the open circuit). As the load resistance of an ideal current source approaches infinity (an open circuit), the voltage across the load would approach infinity (because voltage equals current times resistance), and hence the power drawn would also approach infinity. The current of a real current source connected to an open circuit would instead flow through the current source's internal parallel impedance (and be wasted as heat). Similarly, "ideal" voltage sources don't exist. Hypothetically connecting one to an "ideal" short circuit would result in a similar paradox of a finite non-zero voltage across an element with defined zero voltage (the short circuit). Just as voltage sources should not be connected in parallel with another voltage source of a different voltage, a current source should not be connected in series with another current source. Note that some circuits use elements that are similar "but not identical" to voltage or current sources and may work when connected in these ways that are disallowed for actual current or voltage sources. Also, just as voltage sources may be connected in series to add their voltages, current sources may be connected in parallel to add their currents. Charging a capacitor. Because the charge on a capacitor is equal to the integral of current with respect to time, an ideal constant current source charges a capacitor linearly with time, regardless of any series resistance. The Wilkinson analog-to-digital converter, for instance, uses this linear behavior to measure an unknown voltage by measuring the amount of time it takes a current source to charge a capacitor to that voltage. A voltage source instead charges a capacitor through a resistor non-linearly with time, because the charging current from the voltage source decreases exponentially with time. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
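The linear-versus-exponential charging behaviour described in the capacitor paragraph above is easy to check numerically. The following sketch is illustrative only; the component values (1 µF capacitor, 1 mA ideal current source, 5 V supply through 5 kΩ) are assumed for the example and are not taken from the article.

import numpy as np

C = 1e-6     # farads (assumed)
I = 1e-3     # amps, ideal constant current source (assumed)
VS = 5.0     # volts, voltage source (assumed)
R = 5e3      # ohms, series resistor for the voltage-source case (assumed)

t = np.linspace(0, 5 * R * C, 6)                      # a few time points up to 5 RC
v_current_source = I * t / C                          # linear ramp: v = I*t/C
v_voltage_source = VS * (1 - np.exp(-t / (R * C)))    # exponential RC charging

for ti, v1, v2 in zip(t, v_current_source, v_voltage_source):
    print(f"t = {ti * 1e3:6.2f} ms  current source: {v1:6.2f} V   voltage source: {v2:6.2f} V")

The ideal current source keeps ramping the capacitor voltage without bound, while the voltage source's charge curve levels off at the supply voltage, exactly as the text describes.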
[ { "math_id": 0, "text": "I_\\text{R2} (= I_\\text{E} = I_\\text{C}) = \\frac{V_\\text{R2}}{R_\\text{2}} = \\frac{V_\\text{Z} - V_\\text{BE}}{R_\\text{2}}." }, { "math_id": 1, "text": "R_\\text{2} = \\frac{V_\\text{Z} - V_\\text{BE}}{I_\\text{R2}}" }, { "math_id": 2, "text": "R_\\text{1} = \\frac{V_\\text{S} - V_\\text{Z}}{I_\\text{Z} + K \\cdot I_\\text{B}}" }, { "math_id": 3, "text": "I_\\text{B} = \\frac{I_\\text{C}}{h_{FE,\\text{min}}}" }, { "math_id": 4, "text": "R_\\text{2} = \\frac {V_\\text{D} - V_\\text{BE}}{I_\\text{R2}}" }, { "math_id": 5, "text": "R_\\text{1} = \\frac{V_\\text{S} - V_\\text{D}}{I_\\text{D} + K \\cdot I_\\text{B}}" }, { "math_id": 6, "text": "R_2 = \\frac{V_\\text{Z} + V_\\text{D} - V_{BE}}{I_\\text{R2}}" }, { "math_id": 7, "text": "R_2 = \\frac{V_\\text{Z}}{I_\\text{R2}}" }, { "math_id": 8, "text": "R_1 = \\frac{V_\\text{S} - V_\\text{Z} - V_\\text{D}}{I_\\text{Z} + K \\cdot I_\\text{B}}" } ]
https://en.wikipedia.org/wiki?curid=749012
749033
Dilworth's theorem
In mathematics, in the areas of order theory and combinatorics, Dilworth's theorem states that, in any finite partially ordered set, the maximum size of an antichain of incomparable elements equals the minimum number of chains needed to cover all elements. This number is called the width of the partial order. The theorem is named for the mathematician Robert P. Dilworth, who published it in 1950. A version of the theorem for infinite partially ordered sets states that, when there exists a decomposition into finitely many chains, or when there exists a finite upper bound on the size of an antichain, the sizes of the largest antichain and of the smallest chain decomposition are again equal. Statement. An antichain in a partially ordered set is a set of elements no two of which are comparable to each other, and a chain is a set of elements every two of which are comparable. A chain decomposition is a partition of the elements of the order into disjoint chains. Dilworth's theorem states that, in any finite partially ordered set, the largest antichain has the same size as the smallest chain decomposition. Here, the size of the antichain is its number of elements, and the size of the chain decomposition is its number of chains. The width of the partial order is defined as the common size of the antichain and chain decomposition. Inductive proof. The following proof by induction on the size of the partially ordered set formula_0 is based on that of Galvin (1994). Let formula_0 be a finite partially ordered set. The theorem holds trivially if formula_0 is empty. So, assume that formula_0 has at least one element, and let formula_1 be a maximal element of formula_0. By induction, we assume that for some integer formula_2 the partially ordered set formula_3 can be covered by formula_2 disjoint chains formula_4 and has at least one antichain formula_5 of size formula_2. Clearly, formula_6 for formula_7. For formula_7, let formula_8 be the maximal element in formula_9 that belongs to an antichain of size formula_2 in formula_10, and set formula_11. We claim that formula_12 is an antichain. Let formula_13 be an antichain of size formula_2 that contains formula_8. Fix arbitrary distinct indices formula_14 and formula_15. Then formula_16. Let formula_17. Then formula_18, by the definition of formula_19. This implies that formula_20, since formula_21. By interchanging the roles of formula_14 and formula_15 in this argument we also have formula_22. This verifies that formula_12 is an antichain. We now return to formula_0. Suppose first that formula_23 for some formula_24. Let formula_25 be the chain formula_26. Then by the choice of formula_8, formula_27 does not have an antichain of size formula_2. Induction then implies that formula_27 can be covered by formula_28 disjoint chains since formula_29 is an antichain of size formula_30 in formula_31. Thus, formula_0 can be covered by formula_2 disjoint chains, as required. Next, if formula_32 for each formula_24, then formula_33 is an antichain of size formula_34 in formula_0 (since formula_1 is maximal in formula_0). Now formula_0 can be covered by the formula_34 chains formula_35, completing the proof. Proof via Kőnig's theorem. Like a number of other results in combinatorics, Dilworth's theorem is equivalent to Kőnig's theorem on bipartite graph matching and several other related theorems including Hall's marriage theorem. 
To prove Dilworth's theorem for a partial order "S" with "n" elements, using Kőnig's theorem, define a bipartite graph "G" = ("U","V","E") where "U" = "V" = "S" and where ("u","v") is an edge in "G" when "u" &lt; "v" in "S". By Kőnig's theorem, there exists a matching "M" in "G", and a set of vertices "C" in "G", such that each edge in the graph contains at least one vertex in "C" and such that "M" and "C" have the same cardinality "m". Let "A" be the set of elements of "S" that do not correspond to any vertex in "C"; then "A" has at least "n" - "m" elements (possibly more if "C" contains vertices corresponding to the same element on both sides of the bipartition) and no two elements of "A" are comparable to each other. Let "P" be a family of chains formed by including "x" and "y" in the same chain whenever there is an edge ("x","y") in "M"; then "P" has "n" - "m" chains. Therefore, we have constructed an antichain and a partition into chains with the same cardinality. To prove Kőnig's theorem from Dilworth's theorem, for a bipartite graph "G" = ("U","V","E"), form a partial order on the vertices of "G" in which "u" &lt; "v" exactly when "u" is in "U", "v" is in "V", and there exists an edge in "E" from "u" to "v". By Dilworth's theorem, there exists an antichain "A" and a partition into chains "P" both of which have the same size. But the only nontrivial chains in the partial order are pairs of elements corresponding to the edges in the graph, so the nontrivial chains in "P" form a matching in the graph. The complement of "A" forms a vertex cover in "G" with the same cardinality as this matching. This connection to bipartite matching allows the width of any partial order to be computed in polynomial time. More precisely, "n"-element partial orders of width "k" can be recognized in time "O"("kn"2). Extension to infinite partially ordered sets. Dilworth's theorem for infinite partially ordered sets states that a partially ordered set has finite width "w" if and only if it may be partitioned into "w" chains. For, suppose that an infinite partial order "P" has width "w", meaning that there are at most a finite number "w" of elements in any antichain. For any subset "S" of "P", a decomposition into "w" chains (if it exists) may be described as a coloring of the incomparability graph of "S" (a graph that has the elements of "S" as vertices, with an edge between every two incomparable elements) using "w" colors; every color class in a proper coloring of the incomparability graph must be a chain. By the assumption that "P" has width "w", and by the finite version of Dilworth's theorem, every finite subset "S" of "P" has a "w"-colorable incomparability graph. Therefore, by the De Bruijn–Erdős theorem, "P" itself also has a "w"-colorable incomparability graph, and thus has the desired partition into chains. However, the theorem does not extend so simply to partially ordered sets in which the width, and not just the cardinality of the set, is infinite. In this case the size of the largest antichain and the minimum number of chains needed to cover the partial order may be very different from each other. In particular, for every infinite cardinal number κ there is an infinite partially ordered set of width ℵ0 whose partition into the fewest chains has κ chains. discusses analogues of Dilworth's theorem in the infinite setting. Dual of Dilworth's theorem (Mirsky's theorem). 
A dual of Dilworth's theorem states that the size of the largest chain in a partial order (if finite) equals the smallest number of antichains into which the order may be partitioned. This is called Mirsky's theorem. Its proof is much simpler than the proof of Dilworth's theorem itself: for any element "x", consider the chains that have "x" as their largest element, and let "N"("x") denote the size of the largest of these "x"-maximal chains. Then each set "N"−1("i"), consisting of elements that have equal values of "N", is an antichain, and these antichains partition the partial order into a number of antichains equal to the size of the largest chain. Perfection of comparability graphs. A comparability graph is an undirected graph formed from a partial order by creating a vertex per element of the order, and an edge connecting any two comparable elements. Thus, a clique in a comparability graph corresponds to a chain, and an independent set in a comparability graph corresponds to an antichain. Any induced subgraph of a comparability graph is itself a comparability graph, formed from the restriction of the partial order to a subset of its elements. An undirected graph is perfect if, in every induced subgraph, the chromatic number equals the size of the largest clique. Every comparability graph is perfect: this is essentially just Mirsky's theorem, restated in graph-theoretic terms. By the perfect graph theorem of , the complement of any perfect graph is also perfect. Therefore, the complement of any comparability graph is perfect; this is essentially just Dilworth's theorem itself, restated in graph-theoretic terms . Thus, the complementation property of perfect graphs can provide an alternative proof of Dilworth's theorem. Width of special partial orders. The Boolean lattice "B""n" is the power set of an "n"-element set "X"—essentially {1, 2, …, "n"}—ordered by inclusion or, notationally, (2["n"], ⊆). Sperner's theorem states that a maximum antichain of "B""n" has size at most formula_36 In other words, a largest family of incomparable subsets of "X" is obtained by selecting the subsets of "X" that have median size. The Lubell–Yamamoto–Meshalkin inequality also concerns antichains in a power set and can be used to prove Sperner's theorem. If we order the integers in the interval [1, 2"n"] by divisibility, the subinterval ["n" + 1, 2"n"] forms an antichain with cardinality "n". A partition of this partial order into "n" chains is easy to achieve: for each odd integer "m" in [1,2"n"], form a chain of the numbers of the form "m"2"i". Therefore, by Dilworth's theorem, the width of this partial order is "n". The Erdős–Szekeres theorem on monotone subsequences can be interpreted as an application of Dilworth's theorem to partial orders of order dimension two. The "convex dimension" of an antimatroid is defined as the minimum number of chains needed to define the antimatroid, and Dilworth's theorem can be used to show that it equals the width of an associated partial order; this connection leads to a polynomial time algorithm for convex dimension. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
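The Kőnig-based construction described earlier translates directly into a small program: build the bipartite comparability graph, find a maximum matching, and read off the width as the number of elements minus the matching size. The Python sketch below checks this on the divisibility example from the preceding section; the value n = 6 and the simple augmenting-path matcher are illustrative choices, not anything prescribed by the article.

def maximum_matching(n, adj):
    # Kuhn's augmenting-path algorithm for bipartite matching.
    match_right = [-1] * n            # right vertex -> matched left vertex (or -1)
    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False
    size = 0
    for u in range(n):
        if try_augment(u, [False] * n):
            size += 1
    return size

n = 6                                   # interval [1, 2n] ordered by divisibility (n assumed)
elems = list(range(1, 2 * n + 1))
index = {x: i for i, x in enumerate(elems)}
# Edge (u, v) whenever u < v in the order, i.e. u properly divides v.
adj = [[index[v] for v in elems if v != u and v % u == 0] for u in elems]
matching = maximum_matching(len(elems), adj)
print("width =", len(elems) - matching)  # expected: n = 6, matching Dilworth's theorem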
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "P':=P\\setminus\\{a\\}" }, { "math_id": 4, "text": "C_1,\\dots,C_k" }, { "math_id": 5, "text": "A_0" }, { "math_id": 6, "text": "A_0\\cap C_i\\ne\\emptyset" }, { "math_id": 7, "text": "i=1,2,\\dots,k" }, { "math_id": 8, "text": "x_i" }, { "math_id": 9, "text": "C_i" }, { "math_id": 10, "text": "P'" }, { "math_id": 11, "text": "A:=\\{x_1,x_2,\\dots,x_k\\}" }, { "math_id": 12, "text": "A" }, { "math_id": 13, "text": "A_i" }, { "math_id": 14, "text": "i" }, { "math_id": 15, "text": "j" }, { "math_id": 16, "text": "A_i\\cap C_j\\ne\\emptyset" }, { "math_id": 17, "text": "y\\in A_i\\cap C_j" }, { "math_id": 18, "text": "y\\le x_j" }, { "math_id": 19, "text": "x_j" }, { "math_id": 20, "text": "x_i\\not \\ge x_j" }, { "math_id": 21, "text": "x_i\\not\\ge y" }, { "math_id": 22, "text": "x_j\\not\\ge x_i" }, { "math_id": 23, "text": "a\\ge x_i" }, { "math_id": 24, "text": "i\\in\\{1,2,\\dots,k\\}" }, { "math_id": 25, "text": "K" }, { "math_id": 26, "text": "\\{a\\}\\cup\\{z\\in C_i:z\\le x_i\\}" }, { "math_id": 27, "text": "P\\setminus K" }, { "math_id": 28, "text": "k-1" }, { "math_id": 29, "text": "A \\setminus \\{x_i \\}" }, { "math_id": 30, "text": "k - 1" }, { "math_id": 31, "text": "P \\setminus K" }, { "math_id": 32, "text": "a\\not\\ge x_i" }, { "math_id": 33, "text": "A\\cup\\{a\\}" }, { "math_id": 34, "text": "k+1" }, { "math_id": 35, "text": "\\{a\\},C_1,C_2,\\dots,C_k" }, { "math_id": 36, "text": "\\operatorname{width}(B_n) = {n \\choose \\lfloor{n/2}\\rfloor}." } ]
https://en.wikipedia.org/wiki?curid=749033
7490775
Shaping codes
Improved efficiency encoding In digital communications shaping codes are a method of encoding that changes the distribution of signals to improve efficiency. Description. Typical digital communication systems use M-ary Quadrature Amplitude Modulation (QAM) to communicate through an analog channel (specifically, a communication channel with Gaussian noise). For higher bit rates (larger "M"), the minimum signal-to-noise ratio (SNR) required by a QAM system with error-correcting codes is about 1.53 dB higher than the minimum SNR required by a Gaussian source (>30% more transmitter power), as given by the Shannon–Hartley theorem formula_0 where "C" is the channel capacity in bits per second; "B" is the bandwidth of the channel in hertz; "S" is the total signal power over the bandwidth and "N" is the total noise power over the bandwidth. "S/N" is the signal-to-noise ratio of the communication signal to the Gaussian noise interference expressed as a straight power ratio (not as decibels). This 1.53 dB difference is called the "shaping gap". Digital systems typically encode bits with uniform probability to maximize the entropy. A shaping code acts as a buffer between the digital source and the modulator of the communication system: it receives uniformly distributed data and converts it to a Gaussian-like distribution before presenting it to the modulator. Shaping codes are helpful in reducing transmit power, and thus reduce the cost of the power amplifier and the interference caused to other users in the vicinity. Application. Some of the methods used for shaping are described in the trellis shaping paper by G. D. Forney Jr. Shell mapping is used in V.34 modems to obtain a shaping gain of 0.8 dB. All the shaping schemes in the literature try to reduce the transmitted signal power. In the future, this may find application in wireless networks, where interference from other nodes is becoming a major issue. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
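The 1.53 dB figure quoted above is the well-known ultimate shaping gain, which equals 10·log10(πe/6); that identity is standard in the shaping literature, although the article itself does not state it. The short Python sketch below reproduces the number and also evaluates the Shannon–Hartley capacity for an assumed example channel (the 1 MHz bandwidth and 20 dB SNR are illustrative assumptions).

import math

# Ultimate shaping gain: the power gap between a uniform (square QAM) input
# distribution and a Gaussian input distribution, expressed in dB.
shaping_gap_db = 10 * math.log10(math.pi * math.e / 6)
print(f"shaping gap ~ {shaping_gap_db:.2f} dB")       # ~1.53 dB

# Shannon-Hartley capacity C = B * log2(1 + S/N) for an assumed channel.
B = 1e6                       # hertz (assumed)
snr_db = 20.0                 # dB (assumed)
snr = 10 ** (snr_db / 10)     # convert to a straight power ratio
C = B * math.log2(1 + snr)
print(f"capacity ~ {C / 1e6:.2f} Mbit/s")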
[ { "math_id": 0, "text": " C = B \\log_2 \\left( 1+\\frac{S}{N} \\right) " } ]
https://en.wikipedia.org/wiki?curid=7490775
7491259
Hypothesis Theory
Hypothesis Theory is a psychological theory of learning developed during the 1960s and 1970s. Experimental Framework. In the basic experimental framework, the subject is presented with a series of multidimensional stimuli and provided feedback about the class of the stimulus on each trial. (Two-class problems are typical.) The framework is thus in many ways similar to that of concept learning. In contrast to earlier association-type theories, Hypothesis Theory argues that subjects solve this problem (i.e., learn the correct response to each stimulus) by testing a series of "hypotheses" about the relation of the cue values (stimulus features) to the class. For example, a candidate hypothesis for stimuli that vary along the three dimensions of shape, colour, and size might be formula_0 Because the subject is proposed to learn through the successive testing of hypotheses, the rate of learning should be highly dependent on the order in which hypotheses are tested, and on the particular hypotheses which are available to the learner. (It is conceivable that a given learner may not be able to formulate the hypothesis that would correctly classify the stimuli.) It is argued that, as a result of this feature, Hypothesis Theory can account for instances of poor learning that occur in some cases even when the statistical associational strength is high. Formal Theories. The process by which a subject is proposed to go about forming such rules or hypotheses has been the topic of formal probabilistic modeling, a discussion of which can be found in the references. A conceptual framework for formal probabilistic modeling of hypotheses in cognitive research has been given by Rudolf Groner. Status of Research. Hypothesis theory has fallen out of favor (along with many other rule-based models) in the wake of prototype and exemplar theories, both of which employ a notion of graded similarity rather than crisp set membership. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
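As a concrete illustration of the hypothesis-testing account described above, the following Python sketch implements a very simple "win-stay, lose-shift" learner: it holds one candidate single-feature rule at a time, keeps it while it classifies correctly, and samples a new one after an error. The stimulus dimensions, the target rule, and the win-stay/lose-shift strategy are illustrative assumptions, not details taken from the literature summarized here.

import random

random.seed(0)

# Stimuli vary along three dimensions, as in the shape/colour/size example above.
DIMENSIONS = {
    "shape": ["square", "circle"],
    "color": ["blue", "red"],
    "size": ["small", "large"],
}
# Candidate hypotheses: "the stimulus is 'good' iff dimension d has value v".
HYPOTHESES = [(d, v) for d, vals in DIMENSIONS.items() for v in vals]
TARGET = ("color", "blue")          # assumed true rule enforced by the feedback

def random_stimulus():
    return {d: random.choice(vals) for d, vals in DIMENSIONS.items()}

current = random.choice(HYPOTHESES)  # learner starts with an arbitrary hypothesis
for trial in range(40):
    stim = random_stimulus()
    guess = stim[current[0]] == current[1]
    truth = stim[TARGET[0]] == TARGET[1]
    if guess != truth:
        # lose-shift: discard the failed hypothesis and sample another one
        current = random.choice([h for h in HYPOTHESES if h != current])
print("hypothesis held after training:", current)

Because the correct hypothesis never produces an error, it is retained once sampled, which is the basic mechanism the theory uses to explain learning curves.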
[ { "math_id": 0, "text": " \\{\\mathbf{Shape}=square, \\mathbf{Color}=blue, \\mathbf{Size}=small\\} \\Longrightarrow \\; \\{\\mathbf{Class}=good\\} " } ]
https://en.wikipedia.org/wiki?curid=7491259
74922764
Swarmalators
Mathematical model of swarms Swarmalators are generalizations of phase oscillators that swarm around in space as they synchronize in time. They were introduced to model the diverse real-world systems which both sync and swarm, such as vinegar eels, magnetic domain walls, and Japanese tree frogs. More formally, they are dynamical units with spatial degrees of freedom and internal degrees of freedom whose dynamics are coupled. Real world examples. Swarmalation occurs in diverse parts of Nature and technology, some of which are discussed below. The Figure to the right plots some examples in a (discipline, number of particles) plot. Biological microswimmers. Sperm, vinegar eels and potentially other swimmers such as C. elegans swarm through space via the rhythmic beating of their tails. This beating may synchronize with the beating of a neighboring swimmer via hydrodynamic coupling, which in turn causes spatial attraction; sync links to self-assembly. This can lead to vortex arrays, trains, metachronal waves and other collective effects. Magnetic domain walls are key features in the field of magnetism and materials science, defined by the boundary between different magnetic domains in ferromagnetic materials. These domains are regions within a material where the magnetic moments of atoms are aligned in the same direction, creating a uniform magnetic field. They hold great promise as memory devices in next-generation spintronics. In a simplified formula_0 model, a domain wall can be described by its center of mass formula_1 and the in-plane angle formula_2 of its magnetic dipole vector, thereby classifying domain walls as swarmalators. Experiments reveal that the interaction between two such domain walls leads to rich spatiotemporal behaviors, some of which are captured by the 1D swarmalator model presented below. Japanese Tree frogs. During courtship rituals, male Japanese tree frogs attract the attention of females by croaking rhythmically. Neighboring males tend to alternate their croaking (croaking formula_3 radians out of phase) so as to avoid "speaking over each other". Evidence suggests this (anti)-synchronization influences the inter-frog spatial dynamics, making them swarmalators. Janus particles are spherical particles with one hemisphere coated in a magnetic substance, the other remaining non-magnetic. They are named after the Roman god Janus, who has two faces. This anisotropy gives the particles unusual magnetic properties. When subject to external magnetic fields, their magnetic dipole vectors begin to oscillate, which induces and couples to movement (thus qualifying them as swarmalators). The resultant "sync-selected self-assembly" gives rise to novel superstructures with potential uses in biomedical contexts such as targeted drug delivery, bio-imaging, and bio-sensing. Quincke rollers are a class of active particle that exhibits self-propelled motion in a fluid due to an electrohydrodynamic phenomenon known as the Quincke effect. This effect occurs when a dielectric (non-conducting) particle is subject to an electric field. The rotation of the particle, combined with frictional interactions with the surrounding fluid and surface, leads to a rolling motion. Thus, the particle has a phase formula_4 and a position formula_5 which couple, as required of a swarmalator. Collections of Quincke rollers produce rich emergent behavior such as activity waves and shock waves. 
Embryonic cells are the foundational building blocks of an embryo, undergoing division and differentiation to form the complex structures of an organism. These cells exhibit remarkable plasticity, allowing them to transform into a wide range of specialized cell types. In the context of swarmalators, embryonic cells display a unique blend of synchronization and swarming behaviors. They coordinate their movements and genetic expression patterns in response to various cues, a process essential for proper tissue formation and organ development. This linking of sync and self-assembly makes embryonic cells a compelling example of real-world swarmalators. Robot swarms. Land-based rovers as well as aerial drones programmed with swarmalator models have been created and have recreated the five collective states of the swarmalator model (see the 2D swarmalator model section below for a description of these states). The linking of sync and swarming defines a new kind of bio-inspired algorithm with several potential applications. 2D swarmalator model. A mathematical model for swarmalators moving in 2D has been proposed. The 2D swarmalator model in generic form is formula_6 The spatial dynamics combine pairwise attraction formula_7 with pairwise repulsion formula_8, which produces swarming/aggregation. The novelty is that the attraction is modified by a phase term formula_9; thus the aggregation becomes phase-dependent. Likewise, the phase dynamics contain a sync term formula_10 modified by a spatial term, so the synchronization becomes position-dependent. In short, swarmalators model the interaction between self-synchronization and self-assembly in space. While in general the position could be in 2D or 3D, the instance of the swarmalator model originally introduced is a 2D model formula_11, and the choices for formula_12 etc. were formula_13 There are two parameters, formula_14 and formula_15: formula_14 controls the strength of phase-space attraction/repulsion, while formula_15 describes the phase coupling strength. The above can be considered a blending of the aggregation model from biological swarming (the spatial part) and the Kuramoto model of phase oscillators (the phase part). Phenomena. The model above produces five collective states, depicted in Figure 1 below. To demarcate where each state arises and disappears as the parameters are changed, the rainbow order parameters formula_16, where formula_18, are used. Figure 2 plots formula_19 versus formula_15 for fixed formula_14. As can be seen, formula_20 in the rainbow-like static phase wave state (at formula_15 = 0), and then declines as formula_15 decreases. A second order parameter formula_17, defined as the fraction of swarmalators that have completed at least one cycle in space and phase after transients, is also plotted; it can distinguish between the active phase wave and splintered phase wave states. Puzzles. There are several unresolved puzzles and open questions related to swarmalators. 1D swarmalator model. A simpler swarmalator model, in which the spatial motion is confined to a 1D ring formula_26, has also been proposed: formula_27 where formula_28 are the (random) natural frequencies of the i-th swarmalator and are drawn from certain distributions formula_29. This 1D model corresponds to the angular component of the 2D swarmalator model. The restriction to this simpler topology allows for a more complete analysis. 
For instance, the model with natural frequencies can be solved by defining the sum/difference coordinates formula_30, under which the model simplifies into a pair of linearly coupled Kuramoto models formula_31 where formula_32, formula_33, and the rainbow order parameters are the equivalent of those of the 2D model: formula_34 For unimodal distributions of formula_35, such as the Cauchy distribution, the model exhibits four collective states, depicted in the figure on the right. Note that in each state the swarmalators split into locked/drifting sub-populations, just as in the Kuramoto model. The locked population corresponds to the denser regions in the figure, and the drifters to the light grey regions. The figure to the right compares the bifurcations of the Kuramoto model to those of the 1D swarmalator model. For the Kuramoto model (top row), the sync order parameter formula_43 bifurcates from the async state (formula_44) and then increases monotonically in the sync state (formula_45). For the 1D swarmalator model, the bifurcations are richer. Starting with the phase coupling formula_46 and increasing it, formula_47 bifurcate from the async state (formula_48) to the phase wave (formula_49), then to the mixed state (formula_50), before finally ending up in the sync state (formula_51). Note that formula_52 is taken without loss of generality, and formula_53 are constants that depend on formula_54. Expressions for formula_55 have been worked out; those for formula_56 in the mixed state are unknown (see ref [25]). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
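For readers who want to experiment, here is a minimal numerical sketch of the 1D ring model defined above, using forward-Euler integration with zero natural frequencies and reporting the rainbow order parameters S±. The particle number, time step, coupling values, and the choice of Euler integration are assumptions made for illustration; they are not taken from the papers summarized here.

import numpy as np

rng = np.random.default_rng(1)
N = 200                     # number of swarmalators (assumed)
J, K = 0.8, -0.2            # space and phase couplings (assumed values)
dt, steps = 0.05, 2000      # Euler step size and number of steps (assumed)

x = rng.uniform(0, 2 * np.pi, N)        # positions on the ring
theta = rng.uniform(0, 2 * np.pi, N)    # phases

for _ in range(steps):
    dx = x[None, :] - x[:, None]            # pairwise differences x_j - x_i
    dtheta = theta[None, :] - theta[:, None]
    # dx_i/dt = (J/N) sum_j sin(x_j - x_i) cos(theta_j - theta_i)   (nu_i = 0 assumed)
    x += dt * (J / N) * np.sum(np.sin(dx) * np.cos(dtheta), axis=1)
    # dtheta_i/dt = (K/N) sum_j sin(theta_j - theta_i) cos(x_j - x_i)  (omega_i = 0 assumed)
    theta += dt * (K / N) * np.sum(np.sin(dtheta) * np.cos(dx), axis=1)

# Rainbow order parameters S± = |< exp(i(x ± theta)) >|
S_plus = np.abs(np.mean(np.exp(1j * (x + theta))))
S_minus = np.abs(np.mean(np.exp(1j * (x - theta))))
print(f"S+ = {S_plus:.3f}, S- = {S_minus:.3f}")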
[ { "math_id": 0, "text": "q,\\phi" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "\\phi" }, { "math_id": 3, "text": "\\pi" }, { "math_id": 4, "text": "\\theta" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "\n\\begin{array}{rcl}\n\\dot{x}_i & = & \\frac{1}{N} \\sum_j I_{att}(x_j-x_i) F(\\theta_j - \\theta_i) - I_{rep}(x_j - x_i) \\\\\n\\dot{\\theta}_i & = & \\frac{1}{N} \\sum_j G_{sync}(\\theta_j - \\theta_i) H(x_j - x_i)\n\\end{array}\n" }, { "math_id": 7, "text": " I_{att} " }, { "math_id": 8, "text": " I_{rep} " }, { "math_id": 9, "text": " F " }, { "math_id": 10, "text": " G_{sync} " }, { "math_id": 11, "text": " x \\in \\mathbb{R}^2 " }, { "math_id": 12, "text": " I_{att}(x), I_{rep}(x) " }, { "math_id": 13, "text": "\n\\begin{array}{rcl}\n\\dot{x}_i & = & \\frac{1}{N} \\sum_j \\frac{(x_j - x_i)}{|x_j - x_i|} \\left(1 + J \\cos(\\theta_j - \\theta_i)\\right) - \\frac{x_j - x_i}{|x_j - x_i|^2} \\\\\n\\dot{\\theta}_i & = & \\frac{K}{N} \\sum_j \\frac{\\sin (\\theta_j - \\theta_i)}{|x_j - x_i|}\n\\end{array}\n" }, { "math_id": 14, "text": " J " }, { "math_id": 15, "text": " K " }, { "math_id": 16, "text": "\n\\begin{array}{rcl}\nW_{\\pm} & = & S_{\\pm} e^{i \\phi_{\\pm}} = &\\frac{1}{N} \\sum_j e^{i(\\phi_j - \\theta_i)}\n\\end{array}\n" }, { "math_id": 17, "text": " \\gamma " }, { "math_id": 18, "text": " \\phi_i := \\arctan(y_i / x_i) " }, { "math_id": 19, "text": " S_{\\pm} " }, { "math_id": 20, "text": " S_+ = 1 " }, { "math_id": 21, "text": " K_m(J) " }, { "math_id": 22, "text": " K_s(J) " }, { "math_id": 23, "text": " K_s " }, { "math_id": 24, "text": " S_{\\pm}(K) " }, { "math_id": 25, "text": " N_c " }, { "math_id": 26, "text": " x_i \\in \\mathbb{S}^1 " }, { "math_id": 27, "text": "\n\\begin{array}{rcl}\n\\dot{x}_i & = \\nu_i + \\frac{J}{N} \\sum_j \\sin(x_j - x_i) \\cos(\\theta_j - \\theta_i) \\\\\n\\dot{\\theta}_i & = \\omega_i + \\frac{K}{N} \\sum_j \\sin(\\theta_j - \\theta_i) \\cos(x_j - x_i)\n\\end{array}\n" }, { "math_id": 28, "text": " \\nu_i, \\omega_i " }, { "math_id": 29, "text": " g(.) 
" }, { "math_id": 30, "text": " \\xi, \\eta = x_i \\pm \\theta_i " }, { "math_id": 31, "text": "\n\\begin{array}{rcl}\n\\dot{\\xi}_i &= \\omega_{+,i} + J_+ S_{+} \\sin(\\phi_+ - \\xi) + J_- S_- \\sin( \\phi_- - \\eta) \\\\\n\\dot{\\eta}_i &= \\omega_{-,i} + J_- S_{+} \\sin(\\phi_+ - \\xi) + J_+ S_- \\sin( \\phi_- - \\eta)\n\\end{array}\n" }, { "math_id": 32, "text": " J_{\\pm} = (J \\pm K)/2 " }, { "math_id": 33, "text": " \\omega_{\\pm} = \\nu \\pm \\omega " }, { "math_id": 34, "text": "\n\\begin{array}{rcl}\nW_{\\pm} = S_{\\pm} e^{i \\phi_{\\pm}} := N^{-1} \\sum_j e^{i (x_j \\pm \\theta_j)} \\\\\n\\end{array}\n" }, { "math_id": 35, "text": " \\omega, \\nu " }, { "math_id": 36, "text": " (S_+, S_-) = (0,0) " }, { "math_id": 37, "text": "S_+ = S_- = 0" }, { "math_id": 38, "text": " (S_+, S_-) = (S,0) " }, { "math_id": 39, "text": " x_i " }, { "math_id": 40, "text": " \\theta_i " }, { "math_id": 41, "text": " (S_+, S_-) = (S_1,S_2) " }, { "math_id": 42, "text": " (S_+, S_-) = (S,S) " }, { "math_id": 43, "text": "R :=\n |\\langle e^{i \\theta} \\rangle|" }, { "math_id": 44, "text": " R = 0 " }, { "math_id": 45, "text": " R > 0 " }, { "math_id": 46, "text": " K < K_c " }, { "math_id": 47, "text": " S_+, S_- " }, { "math_id": 48, "text": " S_+ = S_- = 0 " }, { "math_id": 49, "text": " S_+= S_{pw}, S_- = 0 " }, { "math_id": 50, "text": " S_+, S_- = S_1, S_2 " }, { "math_id": 51, "text": " S_+ = S_- = S_{sync} " }, { "math_id": 52, "text": " S_+ > S_- " }, { "math_id": 53, "text": " S, S_1, S_2 " }, { "math_id": 54, "text": " J,K \\Delta " }, { "math_id": 55, "text": " S_{pw}, S_{sync} " }, { "math_id": 56, "text": " S_1, S_2 " } ]
https://en.wikipedia.org/wiki?curid=74922764
7494162
KeeLoq
Block cipher KeeLoq is a proprietary hardware-dedicated block cipher that uses a non-linear feedback shift register (NLFSR). The uni-directional command transfer protocol was designed by Frederick Bruwer of Nanoteq (Pty) Ltd., the cryptographic algorithm was created by Gideon Kuhn at the University of Pretoria, and the silicon implementation was by Willem Smit at Nanoteq (Pty) Ltd (South Africa) in the mid-1980s. KeeLoq was sold to Microchip Technology Inc in 1995 for $10 million. It is used in 'hopping code' encoders and decoders such as NTQ105/106/115/125D/129D, HCS101/2XX/3XX/4XX/5XX and MCS31X2. KeeLoq has been used in many remote keyless entry systems by companies such as Chrysler, Daewoo, Fiat, Ford, GM, Honda, Mercedes-Benz, Toyota, Volvo, Volkswagen Group, Clifford, Shurlok, and Jaguar. Description. KeeLoq "code hopping" encoders encrypt a 0-filled 32-bit block with the KeeLoq cipher to produce a 32-bit "hopping code". A 32-bit initialization vector is linearly added (XORed) to the 32 least significant bits of the key prior to encryption and after decryption. The KeeLoq cipher accepts 64-bit keys and encrypts 32-bit blocks by executing its single-bit NLFSR for 528 rounds. The NLFSR feedback function is codice_0 or formula_0 KeeLoq uses bits 1, 9, 20, 26 and 31 of the NLFSR state as its inputs during encryption and bits 0, 8, 19, 25 and 30 during decryption. Its output is linearly combined (XORed) with two of the bits of the NLFSR state (bits 0 and 16 on encryption and bits 31 and 15 on decryption) and with a key bit (bit 0 of the key state on encryption and bit 15 of the key state on decryption) and is fed back into the NLFSR state on every round. Versions. This article describes the Classic KeeLoq protocol, but newer versions have been developed. The Ultimate KeeLoq system is a timer-based algorithm enhancing the Classic KeeLoq system. This newer version uses the stronger, industry-standard AES-128 cipher in place of the KeeLoq cipher algorithm and has a timer-driven counter which increments continuously, in contrast to Classic KeeLoq, where the counter increments on each button press. This provides protection against brute-force attacks and capture-and-replay attacks, such as the RollJam attack demonstrated by Samy Kamkar. Attacks. Replay attack. For simplicity, individual "code hopping" implementations typically do not use cryptographic nonces or timestamping. This makes the protocol inherently vulnerable to replay attacks: for example, by jamming the channel while intercepting the code, a thief can obtain a code that may still be usable at a later stage. This sort of "code grabber," while theoretically interesting, does not appear to be widely used by car thieves. A detailed description of an inexpensive prototype device designed and built by Samy Kamkar to exploit this technique appeared in 2015. The device, about the size of a wallet, could be concealed on or near a locked vehicle to capture a single keyless entry code to be used at a later time to unlock the vehicle. The device transmits a jamming signal to block the vehicle's reception of rolling code signals from the owner's fob, while recording these signals from both of the owner's two attempts needed to unlock the vehicle. The first recorded code is forwarded to the vehicle only when the owner makes the second attempt, while the second recorded code is retained for future use. A demonstration was announced for DEF CON 23. Cryptanalysis. 
KeeLoq was first cryptanalyzed by Andrey Bogdanov using sliding techniques and efficient linear approximations. Nicolas Courtois attacked KeeLoq using sliding and algebraic methods. The attacks by Bogdanov and Courtois do not pose any threat to actual implementations, which seem to be much more vulnerable to simple brute force of the key space, since the effective key space is reduced in all the code-hopping implementations of the cipher known to date. Some KeeLoq "code grabbers" use FPGA-based devices to break KeeLoq-based keys by brute force within about two weeks, owing to the reduced key length in real-world implementations. In 2007, researchers in the COSIC group at the Katholieke Universiteit Leuven (K.U.Leuven), Belgium, in cooperation with colleagues from Israel, found a new attack against the system. Using the details of the algorithm that were leaked in 2006, the researchers started to analyze its weaknesses. After determining the part of the key common to cars of a specific model, the unique bits of the key can be cracked with only sniffed communication between the key and the car. Microchip introduced in 1996 a version of KeeLoq ICs which use a 60-bit seed. If a 60-bit seed is being used, an attacker would require approximately 10¹¹ days of processing on a dedicated parallel brute-force attacking machine before the system is broken. Side-channel attacks. In March 2008, researchers from the Chair for Embedded Security of Ruhr University Bochum, Germany, presented a complete break of remote keyless entry systems based on the KeeLoq RFID technology. Their attack works on all known car and building access control systems that rely on the KeeLoq cipher. The attack by the Bochum team allows recovering the secret cryptographic keys embedded in both the receiver and the remote control. It is based on measuring the electric power consumption of a device during an encryption. Applying what are called side-channel analysis methods to the power traces, the researchers can extract the manufacturer key from the receivers, which can be regarded as a master key for generating valid keys for the remote controls of one particular manufacturer. Unlike the cryptanalytic attack described above, which requires about 65536 chosen plaintext–ciphertext pairs and days of calculation on a PC to recover the key, the side-channel attack can also be applied to the so-called KeeLoq Code Hopping mode of operation (a.k.a. rolling code) that is widely used for keyless entry systems (cars, garages, buildings, etc.). The most devastating practical consequence of the side-channel analysis is an attack in which an attacker, having previously learned the system's master key, can clone any legitimate encoder by intercepting only two messages from this encoder from a distance of up to . Another attack allows one to reset the internal counter of the receiver (garage door, car door, etc.), which makes it impossible for a legitimate user to open the door. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
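To make the round structure concrete, here is a small Python sketch of KeeLoq encryption assembled from the description in this article. The algebraic feedback function and the tap positions are taken from the text; the remaining details (the register shifting right with the new bit entering at position 31, the key register rotating so that round i uses key bit i mod 64, and the mapping of state bits 31, 26, 20, 9, 1 to the arguments a, b, c, d, e) are assumptions of this sketch rather than facts stated above, so it should not be treated as a verified reference implementation.

def nlf(a, b, c, d, e):
    # Feedback function from the article:
    # F(a,b,c,d,e) = d ^ e ^ ac ^ ae ^ bc ^ be ^ cd ^ de ^ ade ^ ace ^ abd ^ abc
    return (d ^ e ^ (a & c) ^ (a & e) ^ (b & c) ^ (b & e) ^ (c & d) ^ (d & e)
            ^ (a & d & e) ^ (a & c & e) ^ (a & b & d) ^ (a & b & c)) & 1

def keeloq_encrypt(block32, key64):
    def bit(value, n):
        return (value >> n) & 1
    x = block32 & 0xFFFFFFFF
    for i in range(528):                                  # 528 single-bit rounds
        fb = nlf(bit(x, 31), bit(x, 26), bit(x, 20), bit(x, 9), bit(x, 1))  # assumed bit order
        fb ^= bit(x, 16) ^ bit(x, 0)                      # XOR with state bits 16 and 0
        fb ^= bit(key64, i % 64)                          # assumed rotating key schedule
        x = ((x >> 1) | (fb << 31)) & 0xFFFFFFFF          # assumed shift direction
    return x

# Example usage with arbitrary, purely illustrative inputs.
print(hex(keeloq_encrypt(0x00000000, 0x0123456789ABCDEF)))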
[ { "math_id": 0, "text": "F(a,b,c,d,e) = d \\oplus e \\oplus ac \\oplus ae \\oplus bc \\oplus be \\oplus cd \\oplus de \\oplus ade \\oplus ace \\oplus abd \\oplus abc" } ]
https://en.wikipedia.org/wiki?curid=7494162
7495999
Adaptive grammar
Formal grammar An adaptive grammar is a formal grammar that explicitly provides mechanisms within the formalism to allow its own production rules to be manipulated. Overview. John N. Shutt defines adaptive grammar as a grammatical formalism that allows rule sets (aka sets of production rules) to be explicitly manipulated within a grammar. Types of manipulation include rule addition, deletion, and modification. Early history. The first description of grammar adaptivity (though not under that name) in the literature is generally taken to be in a paper by Alfonso Caracciolo di Forino published in 1963. The next generally accepted reference to an adaptive formalism ("extensible context-free grammars") came from Wegbreit in 1970 in the study of extensible programming languages, followed by the "dynamic syntax" of Hanford and Jones in 1973. Collaborative efforts. Until fairly recently, much of the research into the formal properties of adaptive grammars was uncoordinated between researchers, only first being summarized by Henning Christiansen in 1990 in response to a paper in "ACM SIGPLAN Notices" by Boris Burshteyn. The Department of Engineering at the University of São Paulo has its Adaptive Languages and Techniques Laboratory, specifically focusing on research and practice in adaptive technologies and theory. The LTA also maintains a page naming researchers in the field. Terminology and taxonomy. While early efforts made reference to "dynamic syntax" and "extensible", "modifiable", "dynamic", and "adaptable" grammars, more recent usage has tended towards the use of the term "adaptive" (or some variant such as "adaptativa", depending on the publication language of the literature). Iwai refers to her formalism as "adaptive grammars", but this specific use of simply "adaptive grammars" is not typically currently used in the literature without name qualification. Moreover, no standardization or categorization efforts have been undertaken between various researchers, although several have made efforts in this direction. The Shutt classification (and extensions). Shutt categorizes adaptive grammar models into two main categories: Jackson refines Shutt's taxonomy, referring to changes over time as global and changes over space as local, and adding a hybrid "time-space" category: Adaptive formalisms in the literature. Adaptive formalisms may be divided into two main categories: full grammar formalisms (adaptive grammars), and adaptive machines, upon which some grammar formalisms have been based. Adaptive grammar formalisms. The following is a list (by no means complete) of grammar formalisms that, by Shutt's definition above, are considered to be (or have been classified by their own inventors as being) adaptive grammars. They are listed in their historical order of first mention in the literature. Extensible context-free grammars (Wegbreit). Described in Wegbreit's doctoral dissertation in 1970, an extensible context-free grammar consists of a context-free grammar whose rule set is modified according to instructions output by a finite state transducer when reading the terminal prefix during a leftmost derivation. Thus, the rule set varies over position in the generated string, but this variation ignores the hierarchical structure of the syntax tree. Extensible context-free grammars were classified by Shutt as "imperative". Christiansen grammars (Christiansen). 
First introduced in 1985 as "Generative Grammars" and later more elaborated upon, Christiansen grammars (apparently dubbed so by Shutt, possibly due to conflict with Chomsky generative grammars) are an adaptive extension of attribute grammars. Christiansen grammars were classified by Shutt as "declarative". The redoubling language formula_0 is demonstrated as follows: &lt;program↓"G"&gt; → &lt;dcl↓"G"↑"w"&gt; &lt;body↓{"w-rule"}&gt; where "w-rule" = &lt;body↓"G"’&gt; → "w" &lt;dcl↓"G"↑"ch"•"w"&gt; → &lt;char↓"G"↑"ch"&gt; &lt;dcl↓"G"↑"w"&gt; &lt;dcl↓G↑&lt;» → &lt;ε&gt; &lt;char↓G↑a&gt; → a Bottom-up modifiable grammars, top-down modifiable grammars, and USSA (Burshteyn). First introduced in May 1990 and later expanded upon in December 1990, "modifiable grammars" explicitly provide a mechanism for the addition and deletion of rules during a parse. In response to the "ACM SIGPLAN Notices" responses, Burshteyn later modified his formalism and introduced his adaptive "Universal Syntax and Semantics Analyzer" (USSA) in 1992. These formalisms were classified by Shutt as "imperative". Recursive adaptive grammars (Shutt). Introduced in 1993, Recursive Adaptive Grammars (RAGs) were an attempt to introduce a Turing powerful formalism that maintained much of the elegance of context-free grammars. Shutt self-classifies RAGs as being a "declarative" formalism. Dynamic grammars (Boullier). Boullier's "dynamic grammars", introduced in 1994, appear to be the first adaptive grammar family of grammars to rigorously introduce the notion of a time continuum of a parse as part of the notation of the grammar formalism itself. Dynamic grammars are a sequence of grammars, with each grammar "Gi" differing in some way from other grammars in the sequence, over time. Boullier's main paper on dynamic grammars also defines a dynamic parser, the machine that effects a parse against these grammars, and shows examples of how his formalism can handle such things as type checking, extensible languages, polymorphism, and other constructs typically considered to be in the semantic domain of programming language translation. Adaptive grammars (Iwai). The work of Iwai in 2000 takes the adaptive automata of Neto further by applying adaptive automata to context-sensitive grammars. Iwai's adaptive grammars (note the qualifier by name) allow for three operations during a parse: ? query (similar in some respects to a syntactic predicate, but tied to inspection of rules from which modifications are chosen), + addition, and - deletion (which it shares with its predecessor adaptive automata). §-calculus (Jackson). Introduced in 2000 and most fully discussed in 2006, the §-Calculus (§ here pronounced "meta-ess") allows for the explicit addition, deletion, and modification of productions within a grammar, as well as providing for syntactic predicates. This formalism is self-classified by its creator as both "imperative" and "adaptive", or, more specifically, as a "time-space" adaptive grammar formalism, and was further classified by others as being an analytic formalism. The redoubling language formula_1 is demonstrated as follows: grammar ww { S ::= #phi(A.X&lt;-"") R; R ::= $C('[ab]') #phi(A.X&lt;-A.X C) #phi(N&lt;=A.X) N | R; Adaptive devices (Neto &amp; Pistori). First described by Neto in 2001, adaptive devices were later enhanced and expanded upon by Pistori in 2003. Adapser (Carmi). In 2002, Adam Carmi introduced an LALR(1)-based adaptive grammar formalism known as "Adapser". 
Specifics of the formalism do not appear to have been released. Adaptive CFGs with appearance checking (Bravo). In 2004, César Bravo introduced the notion of merging the concept of "appearance checking" with "adaptive context-free grammars", a restricted form of Iwai's adaptive grammars, showing these new grammars, called "Adaptive CFGs with Appearance Checking" to be Turing powerful. Adaptive machine formalisms. The formalisms listed below, while not grammar formalisms, either serve as the basis of full grammar formalisms, or are included here because they are adaptive in nature. They are listed in their historical order of first mention in the literature. Introduced in 1994 by Shutt and Rubinstein, Self-Modifying Finite State Automata (SMFAs) are shown to be, in a restricted form, Turing powerful. In 1994, Neto introduced the machine he called a "structured pushdown automaton", the core of adaptive automata theory as pursued by Iwai, Pistori, Bravo and others. This formalism allows for the operations of "inspection" (similar to syntactic predicates, as noted above relating to Iwai's adaptive grammars), "addition", and "deletion" of rules. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
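The redoubling language used in the examples above gives a compact way to show the core idea of adaptivity: a production is created during the parse itself. The Python sketch below is a toy illustration of that idea only; it is not an implementation of Christiansen grammars, the §-calculus, or any other formalism named in this article, and the rule-table representation is an assumption of the sketch.

# Toy illustration of grammar adaptivity: while parsing, a new production
# body -> w is added to the rule set, so the second half of the input must
# repeat the first half (the redoubling language {ww}).
rules = {"char": {"a", "b"}}           # static part of the grammar

def parse_ww(s):
    if len(s) % 2 != 0:
        return False
    w, rest = s[:len(s) // 2], s[len(s) // 2:]
    if not w or not all(ch in rules["char"] for ch in w):
        return False                    # the declaration must be built from <char>
    rules["body"] = {w}                 # the grammar adapts: add the rule  body -> w
    return rest in rules["body"]        # the body must now derive the declared w

print(parse_ww("abab"))   # True
print(parse_ww("abba"))   # False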
[ { "math_id": 0, "text": "L = \\{ww | w \\mbox{ is a letter}\\}" }, { "math_id": 1, "text": "L = \\{ww | w \\in \\{a,b\\}^+\\}" } ]
https://en.wikipedia.org/wiki?curid=7495999
74964
Gauss's law
Foundational law of electromagnetism relating electric field and charge distributions In physics (specifically electromagnetism), Gauss's law, also known as Gauss's flux theorem (or sometimes Gauss's theorem), is one of Maxwell's equations. It is an application of the divergence theorem, and it relates the distribution of electric charge to the resulting electric field. Definition. In its integral form, it states that the flux of the electric field out of an arbitrary closed surface is proportional to the electric charge enclosed by the surface, irrespective of how that charge is distributed. Even though the law alone is insufficient to determine the electric field across a surface enclosing any charge distribution, this may be possible in cases where symmetry mandates uniformity of the field. Where no such symmetry exists, Gauss's law can be used in its differential form, which states that the divergence of the electric field is proportional to the local density of charge. The law was first formulated by Joseph-Louis Lagrange in 1773, followed by Carl Friedrich Gauss in 1835, both in the context of the attraction of ellipsoids. It is one of Maxwell's equations, which forms the basis of classical electrodynamics. Gauss's law can be used to derive Coulomb's law, and vice versa. Qualitative description. In words, Gauss's law states: The net electric flux through any hypothetical closed surface is equal to 1/"ε"0 times the net electric charge enclosed within that closed surface. The closed surface is also referred to as Gaussian surface. Gauss's law has a close mathematical similarity with a number of laws in other areas of physics, such as Gauss's law for magnetism and Gauss's law for gravity. In fact, any inverse-square law can be formulated in a way similar to Gauss's law: for example, Gauss's law itself is essentially equivalent to the Coulomb's law, and Gauss's law for gravity is essentially equivalent to the Newton's law of gravity, both of which are inverse-square laws. The law can be expressed mathematically using vector calculus in integral form and differential form; both are equivalent since they are related by the divergence theorem, also called Gauss's theorem. Each of these forms in turn can also be expressed two ways: In terms of a relation between the electric field E and the total electric charge, or in terms of the electric displacement field D and the "free" electric charge. Equation involving the E field. Gauss's law can be stated using either the electric field E or the electric displacement field D. This section shows some of the forms with E; the form with D is below, as are other forms with E. Integral form. Gauss's law may be expressed as: formula_0 where Φ"E" is the electric flux through a closed surface S enclosing any volume V, Q is the total charge enclosed within V, and "ε"0 is the electric constant. The electric flux Φ"E" is defined as a surface integral of the electric field: formula_1 formula_2 formula_3 where E is the electric field, dA is a vector representing an infinitesimal element of area of the surface, and · represents the dot product of two vectors. 
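As a quick numerical illustration of the integral form, the following Python sketch approximates the flux of a point charge's field through the surface of a cube centred on the charge and compares it with Q/ε0. The cube size, grid resolution, and the 1 C charge are arbitrary illustrative choices; the point-charge field used is simply Coulomb's law, whose relation to Gauss's law is discussed later in the article.

import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
q = 1.0                   # coulombs (illustrative)
L = 1.0                   # half-side of the cube in metres (illustrative)
n = 400                   # grid points per face edge (illustrative)

# Midpoint grid on the face z = +L; by symmetry all six faces carry equal flux.
u = (np.arange(n) + 0.5) / n * 2 * L - L
X, Y = np.meshgrid(u, u)
Z = np.full_like(X, L)
r2 = X**2 + Y**2 + Z**2
E_dot_n = q / (4 * np.pi * eps0) * Z / r2**1.5   # E · n on that face (n = z-hat)
dA = (2 * L / n) ** 2
flux = 6 * np.sum(E_dot_n) * dA                  # six identical faces

print(f"numerical flux = {flux:.6e} V·m")
print(f"Q / eps0       = {q / eps0:.6e} V·m")

Despite the surface not being spherical, the numerically integrated flux matches Q/ε0 to within the discretization error, which is exactly what the integral form of the law asserts.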
In a curved spacetime, the flux of an electromagnetic field through a closed surface is expressed as formula_4 formula_5 formula_6 where formula_7 is the speed of light; formula_8 denotes the time components of the electromagnetic tensor; formula_9 is the determinant of metric tensor; formula_10 is an orthonormal element of the two-dimensional surface surrounding the charge formula_11; indices formula_12 and do not match each other. Since the flux is defined as an "integral" of the electric field, this expression of Gauss's law is called the "integral form". In problems involving conductors set at known potentials, the potential away from them is obtained by solving Laplace's equation, either analytically or numerically. The electric field is then calculated as the potential's negative gradient. Gauss's law makes it possible to find the distribution of electric charge: The charge in any given region of the conductor can be deduced by integrating the electric field to find the flux through a small box whose sides are perpendicular to the conductor's surface and by noting that the electric field is perpendicular to the surface, and zero inside the conductor. The reverse problem, when the electric charge distribution is known and the electric field must be computed, is much more difficult. The total flux through a given surface gives little information about the electric field, and can go in and out of the surface in arbitrarily complicated patterns. An exception is if there is some symmetry in the problem, which mandates that the electric field passes through the surface in a uniform way. Then, if the total flux is known, the field itself can be deduced at every point. Common examples of symmetries which lend themselves to Gauss's law include: cylindrical symmetry, planar symmetry, and spherical symmetry. See the article Gaussian surface for examples where these symmetries are exploited to compute electric fields. Differential form. By the divergence theorem, Gauss's law can alternatively be written in the "differential form": formula_13 where ∇ · E is the divergence of the electric field, "ε"0 is the vacuum permittivity and ρ is the total volume charge density (charge per unit volume). Equivalence of integral and differential forms. The integral and differential forms are mathematically equivalent, by the divergence theorem. Here is the argument more specifically. &lt;templatestyles src="Math_proof/styles.css" /&gt;Outline of proof The integral form of Gauss's law is: formula_14 formula_3formula_15 for any closed surface S containing charge Q. By the divergence theorem, this equation is equivalent to: formula_16 for any volume V containing charge Q. By the relation between charge and charge density, this equation is equivalent to: formula_17 for any volume V. In order for this equation to be "simultaneously true" for "every" possible volume V, it is necessary (and sufficient) for the integrands to be equal everywhere. Therefore, this equation is equivalent to: formula_18 Thus the integral and differential forms are equivalent. Equation involving the D field. Free, bound, and total charge. The electric charge that arises in the simplest textbook situations would be classified as "free charge"—for example, the charge which is transferred in static electricity, or the charge on a capacitor plate. In contrast, "bound charge" arises only in the context of dielectric (polarizable) materials. (All materials are polarizable to some extent.) 
When such materials are placed in an external electric field, the electrons remain bound to their respective atoms, but shift a microscopic distance in response to the field, so that they're more on one side of the atom than the other. All these microscopic displacements add up to give a macroscopic net charge distribution, and this constitutes the "bound charge". Although microscopically all charge is fundamentally the same, there are often practical reasons for wanting to treat bound charge differently from free charge. The result is that the more fundamental Gauss's law, in terms of E (above), is sometimes put into the equivalent form below, which is in terms of D and the free charge only. Integral form. This formulation of Gauss's law states the total charge form: formula_19 where Φ"D" is the D-field flux through a surface S which encloses a volume V, and "Q"free is the free charge contained in V. The flux Φ"D" is defined analogously to the flux Φ"E" of the electric field E through S: formula_20 formula_14 formula_21 Differential form. The differential form of Gauss's law, involving free charge only, states: formula_22 where ∇ · D is the divergence of the electric displacement field, and "ρ"free is the free electric charge density. Equivalence of total and free charge statements. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof that the formulations of Gauss's law in terms of free charge are equivalent to the formulations involving total charge. In this proof, we will show that the equation formula_23 is equivalent to the equation formula_24 Note that we are only dealing with the differential forms, not the integral forms, but that is sufficient since the differential and integral forms are equivalent in each case, by the divergence theorem. We introduce the polarization density P, which has the following relation to E and D: formula_25 and the following relation to the bound charge: formula_26 Now, consider the three equations: formula_27 The key insight is that the sum of the first two equations is the third equation. This completes the proof: The first equation is true by definition, and therefore the second equation is true if and only if the third equation is true. So the second and third equations are equivalent, which is what we wanted to prove. Equation for linear materials. In homogeneous, isotropic, nondispersive, linear materials, there is a simple relationship between E and D: formula_28 where ε is the permittivity of the material. For the case of vacuum (aka free space), "ε" = "ε"0. Under these circumstances, Gauss's law modifies to formula_29 for the integral form, and formula_30 for the differential form. Relation to Coulomb's law. Deriving Gauss's law from Coulomb's law. Strictly speaking, Gauss's law cannot be derived from Coulomb's law alone, since Coulomb's law gives the electric field due to an individual, electrostatic point charge only. However, Gauss's law "can" be proven from Coulomb's law if it is assumed, in addition, that the electric field obeys the superposition principle. The superposition principle states that the resulting field is the vector sum of fields generated by each particle (or the integral, if the charges are distributed smoothly in space). 
&lt;templatestyles src="Math_proof/styles.css" /&gt;Outline of proof Coulomb's law states that the electric field due to a stationary point charge is: formula_31 where Using the expression from Coulomb's law, we get the total field at r by using an integral to sum the field at r due to the infinitesimal charge at each other point s in space, to give formula_32 where ρ is the charge density. If we take the divergence of both sides of this equation with respect to r, and use the known theorem formula_33 where "δ"(r) is the Dirac delta function, the result is formula_34 Using the "sifting property" of the Dirac delta function, we arrive at formula_35 which is the differential form of Gauss's law, as desired. Since Coulomb's law only applies to stationary charges, there is no reason to expect Gauss's law to hold for moving charges based on this derivation alone. In fact, Gauss's law does hold for moving charges, and, in this respect, Gauss's law is more general than Coulomb's law. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof (without Dirac Delta) Let formula_36 be a bounded open set, and formula_37 be the electric field, with formula_38 a continuous function (density of charge). It is true for all formula_39 that formula_40. Consider now a compact set formula_41 having a piecewise smooth boundary formula_42 such that formula_43. It follows that formula_44 and so, for the divergence theorem: formula_45 But because formula_44, formula_46 for the argument above (formula_47 and then formula_40) Therefore the flux through a closed surface generated by some charge density outside (the surface) is null. Now consider formula_48, and formula_49 as the sphere centered in formula_50 having formula_51 as radius (it exists because formula_52 is an open set). Let formula_53 and formula_54 be the electric field created inside and outside the sphere respectively. Then, formula_55, formula_56 and formula_57 formula_58 The last equality follows by observing that formula_59, and the argument above. The RHS is the electric flux generated by a charged sphere, and so: formula_60 with formula_61 Where the last equality follows by the mean value theorem for integrals. Using the squeeze theorem and the continuity of formula_62, one arrives at: formula_63 Deriving Coulomb's law from Gauss's law. Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law does not give any information regarding the curl of E (see Helmholtz decomposition and Faraday's law). However, Coulomb's law "can" be proven from Gauss's law if it is assumed, in addition, that the electric field from a point charge is spherically symmetric (this assumption, like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the charge is in motion). &lt;templatestyles src="Math_proof/styles.css" /&gt;Outline of proof Taking S in the integral form of Gauss's law to be a spherical surface of radius r, centered at the point charge Q, we have formula_64 By the assumption of spherical symmetry, the integrand is a constant which can be taken out of the integral. The result is formula_65 where r̂ is a unit vector pointing radially away from the charge. Again by spherical symmetry, E points in the radial direction, and so we get formula_66 which is essentially equivalent to Coulomb's law. Thus the inverse-square law dependence of the electric field in Coulomb's law follows from Gauss's law. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Phi_E = \\frac{Q}{\\varepsilon_0}" }, { "math_id": 1, "text": "\\Phi_E = " }, { "math_id": 2, "text": "\\scriptstyle _S" }, { "math_id": 3, "text": "\\mathbf{E} \\cdot \\mathrm{d}\\mathbf{A}" }, { "math_id": 4, "text": "\\Phi_E = c " }, { "math_id": 5, "text": " \\scriptstyle _S" }, { "math_id": 6, "text": " F^{\\kappa 0} \\sqrt {-g} \\, \\mathrm{d} S_\\kappa " }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "F^{\\kappa 0}" }, { "math_id": 9, "text": "g" }, { "math_id": 10, "text": " \\mathrm{d} S_\\kappa = \\mathrm{d} S^{ij} = \\mathrm{d}x^i \\mathrm{d}x^j " }, { "math_id": 11, "text": "Q" }, { "math_id": 12, "text": " i,j,\\kappa = 1,2,3" }, { "math_id": 13, "text": "\\nabla \\cdot \\mathbf{E} = \\frac{\\rho}{\\varepsilon_0}" }, { "math_id": 14, "text": "{\\scriptstyle _S}" }, { "math_id": 15, "text": " = \\frac{Q}{\\varepsilon_0}" }, { "math_id": 16, "text": "\\iiint_V \\nabla \\cdot \\mathbf{E} \\, \\mathrm{d}V = \\frac{Q}{\\varepsilon_0}" }, { "math_id": 17, "text": "\\iiint_V \\nabla \\cdot \\mathbf{E} \\, \\mathrm{d}V = \\iiint_V \\frac{\\rho}{\\varepsilon_0} \\, \\mathrm{d}V" }, { "math_id": 18, "text": "\\nabla \\cdot \\mathbf{E} = \\frac{\\rho}{\\varepsilon_0}." }, { "math_id": 19, "text": "\\Phi_D = Q_\\mathrm{free}" }, { "math_id": 20, "text": "\\Phi_D = " }, { "math_id": 21, "text": "\\mathbf{D} \\cdot \\mathrm{d}\\mathbf{A} " }, { "math_id": 22, "text": "\\nabla \\cdot \\mathbf{D} = \\rho_\\mathrm{free}" }, { "math_id": 23, "text": "\\nabla\\cdot \\mathbf{E} = \\dfrac{\\rho}{\\varepsilon_0}" }, { "math_id": 24, "text": "\\nabla\\cdot\\mathbf{D} = \\rho_\\mathrm{free}" }, { "math_id": 25, "text": "\\mathbf{D}=\\varepsilon_0 \\mathbf{E} + \\mathbf{P}" }, { "math_id": 26, "text": "\\rho_\\mathrm{bound} = -\\nabla\\cdot \\mathbf{P}" }, { "math_id": 27, "text": "\\begin{align}\n\\rho_\\mathrm{bound} &= \\nabla\\cdot (-\\mathbf{P}) \\\\\n\\rho_\\mathrm{free} &= \\nabla\\cdot \\mathbf{D} \\\\\n\\rho &= \\nabla \\cdot(\\varepsilon_0\\mathbf{E})\n\\end{align}" }, { "math_id": 28, "text": "\\mathbf{D} = \\varepsilon \\mathbf{E} " }, { "math_id": 29, "text": "\\Phi_E = \\frac{Q_\\mathrm{free}}{\\varepsilon}" }, { "math_id": 30, "text": "\\nabla \\cdot \\mathbf{E} = \\frac{\\rho_\\mathrm{free}}{\\varepsilon}" }, { "math_id": 31, "text": "\\mathbf{E}(\\mathbf{r}) = \\frac{q}{4\\pi \\varepsilon_0} \\frac{\\mathbf{e}_r}{r^2}" }, { "math_id": 32, "text": "\\mathbf{E}(\\mathbf{r}) = \\frac{1}{4\\pi\\varepsilon_0} \\int \\frac{\\rho(\\mathbf{s})(\\mathbf{r}-\\mathbf{s})}{|\\mathbf{r}-\\mathbf{s}|^3} \\, \\mathrm{d}^3 \\mathbf{s}" }, { "math_id": 33, "text": "\\nabla \\cdot \\left(\\frac{\\mathbf{r}}{|\\mathbf{r}|^3}\\right) = 4\\pi \\delta(\\mathbf{r})" }, { "math_id": 34, "text": "\\nabla\\cdot\\mathbf{E}(\\mathbf{r}) = \\frac{1}{\\varepsilon_0} \\int \\rho(\\mathbf{s})\\, \\delta(\\mathbf{r}-\\mathbf{s})\\, \\mathrm{d}^3 \\mathbf{s}" }, { "math_id": 35, "text": "\\nabla\\cdot\\mathbf{E}(\\mathbf{r}) = \\frac{\\rho(\\mathbf{r})}{\\varepsilon_0}," }, { "math_id": 36, "text": "\\Omega \\subseteq R^3 " }, { "math_id": 37, "text": "\\mathbf E_0(\\mathbf r) = \\frac {1}{4 \\pi \\varepsilon_0} \\int_{\\Omega} \\rho(\\mathbf r')\\frac {\\mathbf r - \\mathbf r'} {\\left \\| \\mathbf r - \\mathbf r' \\right \\|^3} \\mathrm{d}\\mathbf{r}'\n\\equiv \\frac {1}{4 \\pi \\varepsilon_0} \\int_{\\Omega} e(\\mathbf{r, \\mathbf{r}'}){\\mathrm{d}\\mathbf{r}'}" }, { "math_id": 38, "text": "\\rho(\\mathbf r')" }, { "math_id": 39, "text": "\\mathbf{r} \\neq \\mathbf{r'}" }, { 
"math_id": 40, "text": "\\nabla_\\mathbf{r} \\cdot \\mathbf{e}(\\mathbf{r, r'}) = 0" }, { "math_id": 41, "text": "V \\subseteq R^3" }, { "math_id": 42, "text": "\\partial V" }, { "math_id": 43, "text": "\\Omega \\cap V = \\emptyset" }, { "math_id": 44, "text": "e(\\mathbf{r, \\mathbf{r}'}) \\in C^1(V \\times \\Omega)" }, { "math_id": 45, "text": "\\oint_{\\partial V} \\mathbf{E}_0 \\cdot d\\mathbf{S} = \\int_V \\mathbf{\\nabla} \\cdot \\mathbf{E}_0 \\, dV" }, { "math_id": 46, "text": "\\mathbf{\\nabla} \\cdot \\mathbf{E}_0(\\mathbf{r}) = \\frac {1}{4 \\pi \\varepsilon_0} \\int_{\\Omega} \\nabla_\\mathbf{r} \\cdot e(\\mathbf{r, \\mathbf{r}'}){\\mathrm{d}\\mathbf{r}'} = 0 " }, { "math_id": 47, "text": "\\Omega \\cap V = \\emptyset \\implies \\forall \\mathbf{r} \\in V \\ \\ \\forall \\mathbf{r'} \\in \\Omega \\ \\ \\ \\mathbf{r} \\neq \\mathbf{r'} " }, { "math_id": 48, "text": "\\mathbf{r}_0 \\in \\Omega" }, { "math_id": 49, "text": "B_R(\\mathbf{r}_0)\\subseteq \\Omega" }, { "math_id": 50, "text": "\\mathbf{r}_0" }, { "math_id": 51, "text": "R" }, { "math_id": 52, "text": "\\Omega" }, { "math_id": 53, "text": "\\mathbf{E}_{B_R}" }, { "math_id": 54, "text": "\\mathbf{E}_C" }, { "math_id": 55, "text": "\\mathbf{E}_{B_R} = \\frac {1}{4 \\pi \\varepsilon_0} \\int_{B_R(\\mathbf{r}_0)} e(\\mathbf{r, \\mathbf{r}'}){\\mathrm{d}\\mathbf{r}'}" }, { "math_id": 56, "text": "\\mathbf{E}_C = \\frac {1}{4 \\pi \\varepsilon_0} \\int_{\\Omega \\setminus B_R(\\mathbf{r}_0)} e(\\mathbf{r, \\mathbf{r}'}){\\mathrm{d}\\mathbf{r}'}" }, { "math_id": 57, "text": "\\mathbf{E}_{B_R} + \\mathbf{E}_C = \\mathbf{E}_0 " }, { "math_id": 58, "text": "\\Phi(R) =\n\\oint_{\\partial B_R(\\mathbf{r}_0)} \\mathbf{E}_0 \\cdot d\\mathbf{S} =\n\\oint_{\\partial B_R(\\mathbf{r}_0)} \\mathbf{E}_{B_R} \\cdot d\\mathbf{S} +\n\\oint_{\\partial B_R(\\mathbf{r}_0)} \\mathbf{E}_C \\cdot d\\mathbf{S} =\n\\oint_{\\partial B_R(\\mathbf{r}_0)} \\mathbf{E}_{B_R} \\cdot d\\mathbf{S} " }, { "math_id": 59, "text": "(\\Omega \\setminus B_R(\\mathbf{r}_0)) \\cap B_R(\\mathbf{r}_0) = \\emptyset" }, { "math_id": 60, "text": "\\Phi(R) =\\frac {Q(R)}{\\varepsilon_0} = \\frac {1}{\\varepsilon_0} \\int_{B_R(\\mathbf{r}_0)} \\rho(\\mathbf r'){\\mathrm{d}\\mathbf{r}'} =\n\\frac {1}{\\varepsilon_0} \\rho(\\mathbf r'_c)|B_R(\\mathbf{r}_0)| " }, { "math_id": 61, "text": " r'_c \\in \\ B_R(\\mathbf{r}_0)" }, { "math_id": 62, "text": " \\rho " }, { "math_id": 63, "text": "\\mathbf{\\nabla} \\cdot \\mathbf{E}_0(\\mathbf{r}_0) =\n\\lim_{R \\to 0} \\frac{1}{|B_R(\\mathbf{r}_0)|}\\Phi(R) = \n\\frac {1}{\\varepsilon_0} \\rho(\\mathbf r_0) " }, { "math_id": 64, "text": "\\oint_S\\mathbf{E}\\cdot d\\mathbf{A} = \\frac{Q}{\\varepsilon_0} " }, { "math_id": 65, "text": "4\\pi r^2\\hat{\\mathbf{r}}\\cdot\\mathbf{E}(\\mathbf{r}) = \\frac{Q}{\\varepsilon_0}" }, { "math_id": 66, "text": "\\mathbf{E}(\\mathbf{r}) = \\frac{Q}{4\\pi \\varepsilon_0} \\frac{\\hat{\\mathbf{r}}}{r^2}" } ]
https://en.wikipedia.org/wiki?curid=74964
7497745
Glare (vision)
Bright light which impairs vision Glare is difficulty of seeing in the presence of bright light such as direct or reflected sunlight or artificial light such as car headlamps at night. Because of this, some cars include mirrors with automatic anti-glare functions and in buildings, blinds or louvers are often used to protect occupants. Glare is caused by a significant ratio of luminance between the task (that which is being looked at) and the glare source. Factors such as the angle between the task and the glare source and eye adaptation have significant impacts on the experience of glare. Discomfort and disability. Glare can be generally divided into two types, discomfort glare and disability glare. Discomfort glare is a psychological sensation caused by high brightness (or brightness contrast) within the field of view, which does not necessarily impair vision. In buildings, discomfort glare can originate from small artificial lights (e.g. ceiling fixtures) that have brightnesses that are significantly greater than their surrounding. When the luminous source occupies a much greater portion of the visual field (e.g. daylit windows), discomfort caused by glare can be linked to a saturating effect. Since observers will not always look directly at a bright illuminated source, discomfort glare usually arises when an observer is focusing on a visual task (e.g. a computer-screen) and the bright source is within their peripheral visual field. Disability glare impairs the vision of objects without necessarily causing discomfort. This could arise for instance when driving westward at sunset. Disability glare is often caused by the inter-reflection of light within the eyeball, reducing the contrast between task and glare source to the point where the task cannot be distinguished. When glare is so intense that vision is completely impaired, it is sometimes called dazzle. Reducing factors. Glare can reduce visibility by: Sunglasses are often worn to reduce glare; polarized sunglasses are designed to reduce glare caused by light reflected from non-metallic surfaces such as water, glossy printed matter or painted surfaces. An anti-reflective treatment on eyeglasses reduces the glare at night and glare from inside lights and computer screens that is caused by light bouncing off the lens. Some types of eyeglasses can reduce glare that occurs because of the imperfections on the surface of the eye. Light field measurements can be taken to reduce glare with digital post-processing. Measurement. Methods. Discomfort glare has often been studied using psychophysics experiments, where the common methods have been the luminance adjustment and category rating procedures. Studies conducted by Petherbridge and Hopkinson and Luckiesh and Guth. were amongst the first to compared subjective assessments given by observers against physical measurements produced by a glare source. Biases. A comprehensive review of the methods used to measure glare showed that there are biases associated with its measurement. Luminance adjustments are sensitive to anchoring (cognitive bias) effects caused when the initial starting luminance viewed influences the final assessment of visual discomfort. Glare is also subject to stimulus range bias effects. This occurs when the luminance range influences the final evaluation of glare given by the observer. A larger range, often results in higher glare evaluations given. Prediction models. Glare from artificial lights is typically measured with luminance meters. 
From daylit windows, cameras are used to convert the pixels into luminance. Both methods are able to determine the luminance of objects within small solid angles. The glare of a scene, i.e. the visual field of view, is then calculated from the luminance data of that scene. The International Commission on Illumination (CIE) defines glare as: "Visual conditions in which there is excessive contrast or an inappropriate distribution of light sources that disturbs the observer or limits the ability to distinguish details and objects." The CIE recommends the "Unified glare rating" (UGR) as a quantitative measure of glare. Other glare calculation methods include the "CIBSE Glare Index", the "IES Glare Index" and the "Daylight Glare Index" (DGI). Unified glare rating. The unified glare rating (UGR) is a measure of the glare in a given environment, proposed by Sorensen in 1987 and adopted by the International Commission on Illumination (CIE). It is essentially the logarithm of the summed glare contribution of all visible lamps, divided by the background luminance formula_0: formula_1 where formula_2 is the common logarithm (base 10), formula_3 is the luminance of each light source numbered formula_4, formula_5 is the solid angle of the light source as seen from the observer and formula_6 is the Guth position index, which depends on the distance of the source from the line of sight of the viewer. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
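As a rough illustration of the UGR formula quoted above, the following Python sketch evaluates it for a hypothetical scene; the luminance, solid-angle and position-index values are invented for the example and are not measurement data.

```python
# Illustrative evaluation of UGR = 8 * log10( (0.25 / L_b) * sum_n L_n^2 * omega_n / p_n^2 ).
import math

def unified_glare_rating(background_luminance, sources):
    """sources: iterable of (L_n, omega_n, p_n) = (luminance in cd/m^2,
    solid angle in steradians, Guth position index) for each visible lamp."""
    s = sum(L**2 * omega / p**2 for L, omega, p in sources)
    return 8 * math.log10(0.25 / background_luminance * s)

# Hypothetical office scene: two ceiling fixtures seen against a 40 cd/m^2 background.
sources = [(2500.0, 0.008, 1.2), (3000.0, 0.005, 2.0)]
print(round(unified_glare_rating(40.0, sources), 1))  # ~19.7 for these made-up values
```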
[ { "math_id": 0, "text": "L_{b}" }, { "math_id": 1, "text": "\\mathrm{UGR} =8 \\log \\frac{0.25}{L_{b}} \\sum_{n}\\left(L_{n}^2 \\frac{\\omega_{n}}{p_{n}^2}\\right)," }, { "math_id": 2, "text": "\\log" }, { "math_id": 3, "text": "L_{n}" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "\\omega_{n}" }, { "math_id": 6, "text": "p_{n}" } ]
https://en.wikipedia.org/wiki?curid=7497745
749827
Polarization mode dispersion
Form of modal dispersion Polarization mode dispersion (PMD) is a form of modal dispersion where two different polarizations of light in a waveguide, which normally travel at the same speed, travel at different speeds due to random imperfections and asymmetries, causing random spreading of optical pulses. Unless it is compensated, which is difficult, this ultimately limits the rate at which data can be transmitted over a fiber. Overview. In an ideal optical fiber, the core has a perfectly circular cross-section. In this case, the fundamental mode has two orthogonal polarizations (orientations of the electric field) that travel at the same speed. The signal that is transmitted over the fiber is randomly polarized, i.e. a random superposition of these two polarizations, but that would not matter in an ideal fiber because the two polarizations would propagate identically (are degenerate). In a realistic fiber, however, there are random imperfections that break the circular symmetry, causing the two polarizations to propagate with different speeds. In this case, the two polarization components of a signal will slowly separate, e.g. causing pulses to spread and overlap. Because the imperfections are random, the pulse spreading effects correspond to a random walk, and thus have a mean polarization-dependent time-differential Δ"τ" (also called the differential group delay, or DGD) proportional to the square root of propagation distance L: formula_0 "D"PMD is the "PMD parameter" of the fiber, typically measured in ps/√km, a measure of the strength and frequency of the imperfections. The symmetry-breaking random imperfections fall into several categories. First, there is geometric asymmetry, e.g. slightly elliptical cores. Second, there are stress-induced material birefringences, in which the refractive index itself depends on the polarization. Both of these effects can stem from either imperfection in manufacturing (which is never perfect or stress-free) or from thermal and mechanical stresses imposed on the fiber in the field — moreover, the latter stresses generally vary over time. Compensating for PMD. A PMD compensation system is a device which uses a polarization controller to compensate for PMD in fibers. Essentially, one splits the output of the fiber into two principal polarizations (usually those with "dτ" "dω" = 0, i.e. no first-order variation of time-delay with frequency), and applies a differential delay to re-synchronize them. Because the PMD effects are random and time-dependent, this requires an active device that responds to feedback over time. Such systems are therefore expensive and complex; combined with the fact that PMD is not yet the limiting factor in the lower data rates still in common use, this means that PMD-compensation systems have seen limited deployment in largescale telecommunications systems. Another alternative would be to use a polarization maintaining fiber (PM fiber), a fiber whose symmetry is so strongly broken (e.g. a highly elliptical core) that an input polarization along a principal axis is maintained all the way to the output. Since the second polarization is never excited, PMD does not occur. Such fibers currently have practical problems, however, such as higher losses than ordinary optical fiber and higher cost. An extension of this idea is a single-polarization fiber in which only a single polarization state is allowed to propagate along the fiber (the other polarization is not guided and escapes). Related phenomena. 
A related effect is polarization-dependent loss (PDL), in which two polarizations suffer different rates of loss in the fiber due, again, to asymmetries. PDL similarly degrades signal quality. Strictly speaking, a circular core is not required in order to have two degenerate polarization states. Rather, one requires a core whose symmetry group admits a two-dimensional irreducible representation. For example, a square or equilateral-triangle core would also have two equal-speed polarization solutions for the fundamental mode; such general shapes also arise in photonic-crystal fibers. Again, any random imperfections that break the symmetry would lead to PMD in such a waveguide.
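The square-root scaling of the mean differential group delay quoted above can be illustrated with a short Python sketch; the PMD parameter used below (0.1 ps/√km) is an assumed illustrative value, not a property of any particular fiber.

```python
# Mean DGD scaling: Delta_tau = D_PMD * sqrt(L).
import math

def mean_dgd_ps(d_pmd_ps_per_sqrt_km, length_km):
    """Mean differential group delay in picoseconds."""
    return d_pmd_ps_per_sqrt_km * math.sqrt(length_km)

# Assumed fiber with D_PMD = 0.1 ps/sqrt(km):
for L in (1, 100, 10000):
    print(L, "km ->", mean_dgd_ps(0.1, L), "ps")
# For comparison, the bit slot of a 10 Gbit/s non-return-to-zero signal is 100 ps,
# so for this assumed fiber the mean DGD only becomes a sizeable fraction of the
# bit period at very long link lengths.
```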
[ { "math_id": 0, "text": "\\Delta\\tau = D_\\text{PMD} \\sqrt{L} \\, " } ]
https://en.wikipedia.org/wiki?curid=749827
74989697
FaceNet
FaceNet is a facial recognition system developed by Florian Schroff, Dmitry Kalenichenko and James Philbina, a group of researchers affiliated with Google. The system was first presented at the 2015 IEEE Conference on Computer Vision and Pattern Recognition. The system uses a deep convolutional neural network to learn a mapping (also called an embedding) from a set of face images to a 128-dimensional Euclidean space, and assesses the similarity between faces based on the square of the Euclidean distance between the images' corresponding normalized vectors in the 128-dimensional Euclidean space. The system uses the triplet loss function as its cost function and introduced a new online triplet mining method. The system achieved an accuracy of 99.63%, which is the highest score to date on the Labeled Faces in the Wild dataset using the "unrestricted with labeled outside data" protocol. Structure. Basic structure. The structure of FaceNet is represented schematically in Figure 1. For training, researchers used input batches of about 1800 images. For each identity represented in the input batches, there were 40 similar images of that identity and several randomly selected images of other identities. These batches were fed to a deep convolutional neural network, which was trained using stochastic gradient descent with standard backpropagation and the Adaptive Gradient Optimizer (AdaGrad) algorithm. The learning rate was initially set at 0.05, which was later lowered while finalizing the model. Structure of the CNN. The researchers used two types of architectures, which they called NN1 and NN2, and explored their trade-offs. The practical differences between the models lie in the difference of parameters and FLOPS. The details of the NN1 model are presented in the table below. Triplet loss function. FaceNet introduced a novel loss function called "triplet loss". This function is defined using triplets of training images of the form formula_0. In each triplet, formula_1 (called an "anchor image") denotes a reference image of a particular identity, formula_2 (called a "positive image") denotes another image of the same identity in image formula_3, and formula_4 (called a "negative image") denotes a randomly selected image of an identity different from the identity in image formula_1 and formula_5. Let formula_6 be some image and let formula_7 be the embedding of formula_8 in the 128-dimensional Euclidean space. It shall be assumed that the L2-norm of formula_7 is unity (the L2 norm of a vector formula_9 in a finite dimensional Euclidean space is denoted by formula_10.) We assemble formula_11 triplets of images from the training dataset. The goal of training here is to ensure that, after learning, the following condition (called the "triplet constraint") is satisfied by all triplets formula_12 in the training data set: formula_13 The variable formula_14 is a hyperparameter called the margin, and its value must be set manually. Its value has been set as 0.2. Thus, the full form of the function to be minimized is the following function, which is officially called the "triplet loss function": formula_15 Selection of triplets. In general, the number of triplets of the form formula_16 is very large. To make computations faster, the Google researchers considered only those triplets which violate the triplet constraint. 
To do this, for a given anchor image formula_17, they chose the positive image formula_18 for which formula_19 is maximum (such a positive image was called a "hard positive image") and the negative image formula_20 for which formula_21 is minimum (such a negative image was called a "hard negative image"). Since using the whole training data set to determine the hard positive and hard negative images was computationally expensive and infeasible, the researchers experimented with several methods for selecting the triplets. Performance. On the widely used Labeled Faces in the Wild (LFW) dataset, the FaceNet system achieved an accuracy of 99.63%, which is the highest score on LFW under the "unrestricted with labeled outside data" protocol. On YouTube Faces DB the system achieved an accuracy of 95.12%. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
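The triplet loss defined above can be written compactly; the following NumPy sketch is an illustrative re-implementation of the published formula with randomly generated stand-in embeddings, not the original FaceNet code.

```python
# Triplet loss: sum over triplets of max(||f(A)-f(P)||^2 - ||f(A)-f(N)||^2 + alpha, 0),
# where the embeddings are L2-normalised 128-dimensional vectors.
import numpy as np

def triplet_loss(f_A, f_P, f_N, alpha=0.2):
    pos = np.sum((f_A - f_P) ** 2, axis=1)   # ||f(A) - f(P)||^2 per triplet
    neg = np.sum((f_A - f_N) ** 2, axis=1)   # ||f(A) - f(N)||^2 per triplet
    return np.sum(np.maximum(pos - neg + alpha, 0.0))

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Stand-in "embeddings" for 8 triplets, just to exercise the formula.
rng = np.random.default_rng(0)
f_A, f_P, f_N = (normalize(rng.normal(size=(8, 128))) for _ in range(3))
print(triplet_loss(f_A, f_P, f_N))
```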
[ { "math_id": 0, "text": " (A, P, N) " }, { "math_id": 1, "text": " A " }, { "math_id": 2, "text": " P " }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": " N " }, { "math_id": 5, "text": "P" }, { "math_id": 6, "text": " x " }, { "math_id": 7, "text": "f(x)" }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "X" }, { "math_id": 10, "text": " \\Vert X\\Vert" }, { "math_id": 11, "text": "m" }, { "math_id": 12, "text": "(A^{(i)}, P^{(i)}, N^{(i)})" }, { "math_id": 13, "text": "\\Vert f(A^{(i)}) - f(P^{(i)})\\Vert_2^2 + \\alpha < \\Vert f(A^{(i)}) - f(N^{(i)})\\Vert_2^2" }, { "math_id": 14, "text": "\\alpha" }, { "math_id": 15, "text": " L = \\sum_{i=1}^m \\max \\Big( \\Vert f(A^{(i)}) - f(P^{(i)})\\Vert_2^2 - \\Vert f(A^{(i)}) - f(N^{(i)})\\Vert_2^2 + \\alpha, 0 \\Big) " }, { "math_id": 16, "text": "(A^{(i)},P^{(i)},N^{(i)})" }, { "math_id": 17, "text": "A^{(i)}" }, { "math_id": 18, "text": "P^{(i)}" }, { "math_id": 19, "text": " \\Vert f(A^{(i)}) - f(P^{(i)})\\Vert_2^2" }, { "math_id": 20, "text": "N^{(i)}" }, { "math_id": 21, "text": " \\Vert f(A^{(i)}) - f(N^{(i)})\\Vert_2^2" } ]
https://en.wikipedia.org/wiki?curid=74989697
7498981
Cross-flow filtration
In chemical engineering, biochemical engineering and protein purification, cross-flow filtration (also known as tangential flow filtration) is a type of filtration (a particular unit operation). Cross-flow filtration is different from dead-end filtration in which the feed is passed through a membrane or bed, the solids being trapped in the filter and the filtrate being released at the other end. Cross-flow filtration gets its name because the majority of the feed flow travels tangentially "across" the surface of the filter, rather than into the filter. The principal advantage of this is that the filter cake (which can blind the filter) is substantially washed away during the filtration process, increasing the length of time that a filter unit can be operational. It can be a continuous process, unlike batch-wise dead-end filtration. This type of filtration is typically selected for feeds containing a high proportion of small particle size solids (where the permeate is of most value) because solid material can quickly block (blind) the filter surface with dead-end filtration. Industrial examples of this include the extraction of soluble antibiotics from fermentation liquors. The main driving force of cross-flow filtration process is transmembrane pressure. Transmembrane pressure is a measure of pressure difference between two sides of the membrane. During the process, the transmembrane pressure might decrease due to an increase of permeate viscosity, therefore filtration efficiency decreases and can be time-consuming for large-scale processes. This can be prevented by diluting permeate or increasing flow rate of the system. Operation. In cross-flow filtration, the feed is passed across the filter membrane (tangentially) at positive pressure relative to the permeate side. A proportion of the material which is smaller than the membrane pore size passes through the membrane as permeate or filtrate; everything else is retained on the feed side of the membrane as retentate. With cross-flow filtration the tangential motion of the bulk of the fluid across the membrane causes trapped particles on the filter surface to be rubbed off. This means that a cross-flow filter can operate continuously at relatively high solids loads without blinding. Industrial applications. Cross-flow membrane filtration technology has been used widely in industry around the globe. Filtration membranes can be polymeric or ceramic, depending upon the application. The principles of cross-flow filtration are used in reverse osmosis, nanofiltration, ultrafiltration and microfiltration. When purifying water, it can be very cost-effective in comparison to the traditional evaporation methods. In protein purification, the term tangential flow filtration (TFF) is used to describe cross-flow filtration with membranes. The process can be used at different stages during purification, depending on the type of membrane selected. In the photograph of an industrial filtration unit (right), it is possible to see that the recycle pipework is considerably larger than either the feed pipework (vertical pipe on the right hand side) or the permeate pipework (small manifolds near to the rows of white clamps). These pipe sizes are directly related to the proportion of liquid that flows through the unit. A dedicated pump is used to recycle the feed several times around the unit before the solids-rich retentate is transferred to the next part of the process. Techniques to improve performance. Backwashing. 
In backwashing, the transmembrane pressure is periodically inverted by the use of a secondary pump, so that permeate flows back into the feed, lifting the fouling layer from the surface of the membrane. Backwashing is not applicable to spirally wound membranes and is not a general practice in most applications. (See Clean-in-place) Alternating tangential flow (ATF). A diaphragm pump is used to produce an alternating tangential flow, helping to dislodge retained particles and prevent membrane fouling. Repligen is the largest producer of ATF systems. Clean-in-place (CIP). Clean-in-place systems are typically used to remove fouling from membranes after extensive use. The CIP process may use detergents, reactive agents such as sodium hypochlorite and acids and alkalis such as citric acid and sodium hydroxide (NaOH). Sodium hypochlorite (bleach) must be removed from the feed in some membrane plants. Bleach oxidizes thin-film membranes. Oxidation will degrade the membranes to a point where they will no longer perform at rated rejection levels and have to be replaced. Bleach can be added to a sodium hydroxide CIP during an initial system start-up before spirally-wound membranes are loaded into the plant to help disinfect the system. Bleach is also used to CIP perforated stainless steel (Graver) membranes, as their tolerance for sodium hypochlorite is much higher than a spirally-wound membrane. Caustics and acids are most often used as primary CIP chemicals. Caustic removes organic fouling and acid removes minerals. Enzyme solutions are also used in some systems for helping remove organic fouling material from the membrane plant. The pH and temperature are important to a CIP program. If pH and temperature are too high the membrane will degrade and flux performance will suffer. If pH and temperature are too low, the system simply will not be cleaned properly. Every application has different CIP requirements. e.g. a dairy reverse osmosis (RO) plant most likely will require a more rigorous CIP program than a water purification RO plant. Each membrane manufacturer has their own guidelines for CIP procedures for their product. Concentration. The volume of the fluid is reduced by allowing permeate flow to occur. Solvent, solutes, and particles smaller than the membrane pore size pass through the membrane, while particles larger than the pore size are retained, and thereby concentrated. In bioprocessing applications, concentration may be followed by diafiltration. Diafiltration. In order to effectively remove permeate components from the slurry, fresh solvent may be added to the feed to replace the permeate volume, at the same rate as the permeate flow rate, such that the volume in the system remains constant. This is analogous to the washing of filter cake to remove soluble components. Dilution and re-concentration is sometimes also referred to as "diafiltration". Process flow disruption (PFD). A technically simpler approach than backwashing is to set the transmembrane pressure to zero by temporarily closing off the permeate outlet, which increases the attrition of the fouling layer without the need for a second pump. PFD is not as effective as backwashing in removing fouling, but can be advantageous. Flow rate calculation. The flux or flow rate in cross-flow filtration systems is given by the equation: formula_0 in which: Note: formula_3 and formula_4 include the inverse of the membrane surface area in their derivation; thus, flux increases with increasing membrane area. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " J = \\frac{\\Delta P}{(R_{\\rm m} + R_{\\rm c}) \\mu} " }, { "math_id": 1, "text": "J" }, { "math_id": 2, "text": "\\Delta P" }, { "math_id": 3, "text": "R_{\\rm m}" }, { "math_id": 4, "text": "R_{\\rm c}" }, { "math_id": 5, "text": "\\mu" } ]
https://en.wikipedia.org/wiki?curid=7498981
7499338
Soviet–American Gallium Experiment
Physics project measuring solar neutrino flux SAGE (Soviet–American Gallium Experiment, or sometimes Russian-American Gallium Experiment) is a collaborative experiment devised by several prominent physicists to measure the solar neutrino flux. The experiment. SAGE was devised to measure the radio-chemical solar neutrino flux based on the inverse beta decay reaction, 71Gaformula_071Ge. The target for the reaction was 50-57 tonnes of liquid gallium metal stored deep (2100 meters) underground at the Baksan Neutrino Observatory in the Caucasus mountains in Russia. The laboratory containing the SAGE experiment is called the gallium-germanium neutrino telescope (GGNT) laboratory, GGNT being the name of the SAGE experimental apparatus. About once a month, the neutrino-induced Ge is extracted from the Ga. 71Ge is unstable with respect to electron capture (formula_1 days) and, therefore, the amount of extracted germanium can be determined from its activity as measured in small proportional counters. The experiment began to measure the solar neutrino capture rate with a target of gallium metal in December 1989 and was still running in August 2011, with only a few brief interruptions in that timespan. As of 2013, the experiment was described as "being continued", with the latest published data from August 2011. As of 2014, it was stated that the SAGE experiment continues the once-a-month extractions. The SAGE experiment continued in 2016. As of 2017, the SAGE experiment continues. The experiment has measured the solar neutrino flux in 168 extractions between January 1990 and December 2007. The result of the experiment based on the whole 1990-2007 set of data is (stat.) (syst.) SNU. This represents only 56%-60% of the capture rate predicted by different Standard Solar Models, which predict 138 SNU. The difference is in agreement with neutrino oscillations. The collaboration has used a 518 kCi 51Cr neutrino source to test the experimental operation. The energy of these neutrinos is similar to that of the solar 7Be neutrinos and thus makes an ideal check on the experimental procedure. The extractions for the Cr experiment took place between January and May 1995, and the counting of the samples lasted until fall. The result, expressed in terms of a ratio of the measured production rate to the expected production rate, is . This indicates that the discrepancy between the solar model predictions and the SAGE flux measurement cannot be an experimental artifact. Calibrations with a 37Ar neutrino source have also been performed. Baksan Experiment on Sterile Transitions (BEST). In 2014, the SAGE experiment's GGNT apparatus (gallium-germanium neutrino telescope) was upgraded to perform a very-short-baseline neutrino oscillation experiment, BEST (Baksan Experiment on Sterile Transitions), with an intense artificial neutrino source based on 51Cr. In 2017, the BEST apparatus was completed, but the artificial neutrino source was missing. As of 2018, the BEST experiment was underway. As of 2018, a follow-up experiment, BEST-2, in which the source would be changed to 65Zn, was under consideration. It uses two gallium chambers instead of one, to better determine whether the anomaly could be explained by the distance from the source of the neutrinos. The gallium anomaly. 
In June 2022, the BEST experiment released two papers reporting a 20-24% deficit in the production of the isotope germanium expected from the reaction formula_2, confirming previous results from SAGE and GALLEX on the so-called "gallium anomaly" and pointing out that a sterile neutrino explanation can be consistent with the data. Further work refined the precision of the neutrino-capture cross section in 2023 and of the half-life of formula_3 in 2024, ruling them out as possible explanations for the anomaly. Members of SAGE. SAGE is led by the following physicists:
[ { "math_id": 0, "text": " + \\nu_e \\rightarrow e^{-}+ " }, { "math_id": 1, "text": "t_{1/2}=11.43" }, { "math_id": 2, "text": " {}^{71}\\text{Ga}+ \\nu_e \\rightarrow e^{-}+{}^{71}\\text{Ge} " }, { "math_id": 3, "text": "^{71}Ge" } ]
https://en.wikipedia.org/wiki?curid=7499338
7500026
Second-order cone programming
A second-order cone program (SOCP) is a convex optimization problem of the form minimize formula_0 subject to formula_1 formula_2 where the problem parameters are formula_3, and formula_4. formula_5 is the optimization variable. formula_6 is the Euclidean norm and formula_7 indicates transpose. The "second-order cone" in SOCP arises from the constraints, which are equivalent to requiring the affine function formula_8 to lie in the second-order cone in formula_9. SOCPs can be solved by interior point methods and in general, can be solved more efficiently than semidefinite programming (SDP) problems. Some engineering applications of SOCP include filter design, antenna array weight design, truss design, and grasping force optimization in robotics. Applications in quantitative finance include portfolio optimization; some market impact constraints, because they are not linear, cannot be solved by quadratic programming but can be formulated as SOCP problems. Second-order cone. The standard or unit second-order cone of dimension formula_10 is defined as formula_11. The second-order cone is also known by quadratic cone or ice-cream cone or Lorentz cone. The standard second-order cone in formula_12 is formula_13. The set of points satisfying a second-order cone constraint is the inverse image of the unit second-order cone under an affine mapping: formula_14 and hence is convex. The second-order cone can be embedded in the cone of the positive semidefinite matrices since formula_15 i.e., a second-order cone constraint is equivalent to a linear matrix inequality (Here formula_16 means formula_17 is semidefinite matrix). Similarly, we also have, formula_18. Relation with other optimization problems. When formula_19 for formula_20, the SOCP reduces to a linear program. When formula_21 for formula_20, the SOCP is equivalent to a convex quadratically constrained linear program. Convex quadratically constrained quadratic programs can also be formulated as SOCPs by reformulating the objective function as a constraint. Semidefinite programming subsumes SOCPs as the SOCP constraints can be written as linear matrix inequalities (LMI) and can be reformulated as an instance of semidefinite program. The converse, however, is not valid: there are positive semidefinite cones that do not admit any second-order cone representation. In fact, while any closed convex semialgebraic set in the plane can be written as a feasible region of a SOCP, it is known that there exist convex semialgebraic sets that are not representable by SDPs, that is, there exist convex semialgebraic sets that can not be written as a feasible region of a SDP. Examples. Quadratic constraint. Consider a convex quadratic constraint of the form formula_22 This is equivalent to the SOCP constraint formula_23 Stochastic linear programming. Consider a stochastic linear program in inequality form minimize formula_24 subject to formula_25 where the parameters formula_26 are independent Gaussian random vectors with mean formula_27 and covariance formula_28 and formula_29. This problem can be expressed as the SOCP minimize formula_24 subject to formula_30 where formula_31 is the inverse normal cumulative distribution function. Stochastic second-order cone programming. We refer to second-order cone programs as deterministic second-order cone programs since data defining them are deterministic. 
Stochastic second-order cone programs are a class of optimization problems that are defined to handle uncertainty in data defining deterministic second-order cone programs. Other examples. Other modeling examples are available at the MOSEK modeling cookbook. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
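The equivalence between the convex quadratic constraint and its second-order cone reformulation stated above can be checked numerically; the following NumPy sketch builds a randomly generated positive definite matrix (an assumption made purely for the test) and verifies that the two constraints agree on random sample points.

```python
# Check: for positive definite A,
#   x^T A x + b^T x + c <= 0  iff  ||A^{1/2} x + 0.5 A^{-1/2} b|| <= sqrt(0.25 b^T A^{-1} b - c).
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)   # positive definite test matrix
b = rng.normal(size=n)
c = -5.0                      # chosen so the right-hand side below is real

# Matrix square roots of A via its eigendecomposition
w, V = np.linalg.eigh(A)
A_half = V @ np.diag(np.sqrt(w)) @ V.T
A_neg_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
rhs = np.sqrt(0.25 * b @ np.linalg.solve(A, b) - c)

agree = 0
for _ in range(1000):
    x = rng.normal(size=n)
    quad = x @ A @ x + b @ x + c <= 0
    soc = np.linalg.norm(A_half @ x + 0.5 * A_neg_half @ b) <= rhs
    agree += int(quad == soc)
print(agree, "of 1000 random points give the same answer for both constraints")
```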
[ { "math_id": 0, "text": "\\ f^T x \\ " }, { "math_id": 1, "text": "\\lVert A_i x + b_i \\rVert_2 \\leq c_i^T x + d_i,\\quad i = 1,\\dots,m" }, { "math_id": 2, "text": "Fx = g \\ " }, { "math_id": 3, "text": "f \\in \\mathbb{R}^n, \\ A_i \\in \\mathbb{R}^{{n_i}\\times n}, \\ b_i \\in \\mathbb{R}^{n_i}, \\ c_i \\in \\mathbb{R}^n, \\ d_i \\in \\mathbb{R}, \\ F \\in \\mathbb{R}^{p\\times n}" }, { "math_id": 4, "text": "g \\in \\mathbb{R}^p" }, { "math_id": 5, "text": "x\\in\\mathbb{R}^n" }, { "math_id": 6, "text": "\\lVert x \\rVert_2 " }, { "math_id": 7, "text": "^T" }, { "math_id": 8, "text": "(A x + b, c^T x + d)" }, { "math_id": 9, "text": "\\mathbb{R}^{n_i + 1}" }, { "math_id": 10, "text": "n+1" }, { "math_id": 11, "text": "\\mathcal{C}_{n+1}=\\left\\{ \\begin{bmatrix} x \\\\ t \\end{bmatrix} \\Bigg| x \\in \\mathbb{R}^n, \nt\\in \\mathbb{R}, \\|x\\|_2\\leq t \\right\\}" }, { "math_id": 12, "text": "\\mathbb{R}^3" }, { "math_id": 13, "text": "\\left\\{(x,y,z) \\Big| \\sqrt{x^2 + y^2} \\leq z \\right\\}" }, { "math_id": 14, "text": "\\lVert A_i x + b_i \\rVert_2 \\leq c_i^T x + d_i \\Leftrightarrow \n\\begin{bmatrix} A_i \\\\ c_i^T \\end{bmatrix} x + \\begin{bmatrix} b_i \\\\ d_i \\end{bmatrix} \\in \n\\mathcal{C}_{n_i+1}" }, { "math_id": 15, "text": "||x||\\leq t \\Leftrightarrow \\begin{bmatrix} tI & x \\\\ x^T & t \\end{bmatrix} \\succcurlyeq 0," }, { "math_id": 16, "text": "M\\succcurlyeq 0 " }, { "math_id": 17, "text": "M " }, { "math_id": 18, "text": "\\lVert A_i x + b_i \\rVert_2 \\leq c_i^T x + d_i \\Leftrightarrow \n\\begin{bmatrix} (c_i^T x+d_i)I & A_i x+b_i \\\\ (A_i x + b_i)^T & c_i^T x + d_i \\end{bmatrix} \\succcurlyeq 0" }, { "math_id": 19, "text": "A_i = 0" }, { "math_id": 20, "text": "i = 1,\\dots,m" }, { "math_id": 21, "text": "c_i = 0 " }, { "math_id": 22, "text": " x^T A x + b^T x + c \\leq 0. " }, { "math_id": 23, "text": " \\lVert A^{1/2} x + \\frac{1}{2}A^{-1/2}b \\rVert \\leq \\left(\\frac{1}{4}b^T A^{-1} b - c \\right)^{\\frac{1}{2}} " }, { "math_id": 24, "text": "\\ c^T x \\ " }, { "math_id": 25, "text": "\\mathbb{P}(a_i^Tx \\leq b_i) \\geq p, \\quad i = 1,\\dots,m " }, { "math_id": 26, "text": "a_i \\ " }, { "math_id": 27, "text": "\\bar{a}_i" }, { "math_id": 28, "text": "\\Sigma_i \\ " }, { "math_id": 29, "text": "p\\geq0.5" }, { "math_id": 30, "text": "\\bar{a}_i^T x + \\Phi^{-1}(p) \\lVert \\Sigma_i^{1/2} x \\rVert_2 \\leq b_i , \\quad i = 1,\\dots,m " }, { "math_id": 31, "text": "\\Phi^{-1}(\\cdot) \\ " } ]
https://en.wikipedia.org/wiki?curid=7500026
7500116
Hammer projection
Pseudoazimuthal equal-area map projection The Hammer projection is an equal-area map projection described by Ernst Hammer in 1892. Using the same 2:1 elliptical outer shape as the Mollweide projection, Hammer intended to reduce distortion in the regions of the outer meridians, where it is extreme in the Mollweide. Development. Directly inspired by the Aitoff projection, Hammer suggested the use of the equatorial form of the Lambert azimuthal equal-area projection instead of Aitoff's use of the azimuthal equidistant projection: formula_0 where laea"x" and laea"y" are the "x" and "y" components of the equatorial Lambert azimuthal equal-area projection. Written out explicitly: formula_1 The inverse is calculated with the intermediate variable formula_2 The longitude and latitudes can then be calculated by formula_3 where "λ" is the longitude from the central meridian and "φ" is the latitude. Visually, the Aitoff and Hammer projections are very similar. The Hammer has seen more use because of its equal-area property. The Mollweide projection is another equal-area projection of similar aspect, though with straight parallels of latitude, unlike the Hammer's curved parallels. Briesemeister. William A. Briesemeister presented a variant of the Hammer in 1953. In this version, the central meridian is set to 10°E, the coordinate system is rotated to bring the 45°N parallel to the center, and the resulting map is squashed horizontally and reciprocally stretched vertically to achieve a 7:4 aspect ratio instead of the 2:1 of the Hammer. The purpose is to present the land masses more centrally and with lower distortion. Nordic. Before projecting to Hammer, John Bartholomew rotated the coordinate system to bring the 45° north parallel to the center, leaving the prime meridian as the central meridian. He called this variant the "Nordic" projection. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
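The forward and inverse formulas above translate directly into code; the following NumPy sketch implements them and performs a round-trip check for one illustrative coordinate pair (longitude λ and latitude φ in radians).

```python
# Hammer projection: forward map and its inverse, with a round-trip check.
import numpy as np

def hammer_forward(lam, phi):
    d = np.sqrt(1 + np.cos(phi) * np.cos(lam / 2))
    x = 2 * np.sqrt(2) * np.cos(phi) * np.sin(lam / 2) / d
    y = np.sqrt(2) * np.sin(phi) / d
    return x, y

def hammer_inverse(x, y):
    z = np.sqrt(1 - (x / 4) ** 2 - (y / 2) ** 2)
    lam = 2 * np.arctan(z * x / (2 * (2 * z ** 2 - 1)))
    phi = np.arcsin(z * y)
    return lam, phi

lam, phi = np.radians(60.0), np.radians(35.0)
x, y = hammer_forward(lam, phi)
lam2, phi2 = hammer_inverse(x, y)
print(np.degrees(lam2), np.degrees(phi2))  # recovers ~60, ~35
```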
[ { "math_id": 0, "text": "\\begin{align} x &= \\operatorname{laea}_x\\left(\\frac{\\lambda}{2}, \\varphi\\right) \\\\\ny &= \\tfrac12 \\operatorname{laea}_y\\left(\\frac{\\lambda}{2}, \\varphi\\right) \\end{align}" }, { "math_id": 1, "text": "\\begin{align} x &= \\frac{2 \\sqrt 2 \\cos \\varphi \\sin \\frac{\\lambda}{2}}{\\sqrt{1 + \\cos \\varphi \\cos \\frac{\\lambda}{2}}} \\\\\ny &= \\frac{\\sqrt 2\\sin \\varphi}{\\sqrt{1 + \\cos \\varphi \\cos \\frac{\\lambda}{2}}} \\end{align}" }, { "math_id": 2, "text": "z \\equiv \\sqrt{1 - \\left(\\tfrac14 x\\right)^2 - \\left(\\tfrac12 y\\right)^2}" }, { "math_id": 3, "text": "\\begin{align}\n\\lambda &= 2 \\arctan \\frac{zx}{2\\left(2z^2 - 1\\right)} \\\\\n\\varphi &= \\arcsin zy\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=7500116
75002510
Fractional Chern insulator
Fractional Chern insulators (FCIs) are lattice generalizations of the fractional quantum Hall effect that have been studied theoretically since 1993 and have been studied more intensively since early 2010. They were first predicted to exist in topological flat bands carrying Chern numbers. They can appear in topologically non-trivial band structures even in the absence of the large magnetic fields needed for the fractional quantum Hall effect. In principle, they can also occur in partially filled bands with trivial band structures if the inter-electron interaction is unusual. They promise physical realizations at lower magnetic fields, higher temperatures, and with shorter characteristic length scales compared to their continuum counterparts. FCIs were initially studied by adding electron-electron interactions to a fractionally filled Chern insulator, in one-body models where the Chern band is quasi-flat, at zero magnetic field. The FCIs exhibit a fractional quantized Hall conductance. Prior work and experiments with finite magnetic fields. In works predating the theoretical studies of FCIs, the analogue of the Laughlin state was demonstrated in Hofstadter-type models. The essential features of the topology of single-particle states in such models still stem from the presence of a magnetic field. Nevertheless, it was shown that in the presence of a lattice, fractional quantum Hall states can retain their topological character, in the form of fractional Chern numbers. Chern insulators, single-particle states exhibiting an integer quantized anomalous Hall effect at zero field, have been theoretically proposed. Fractionally filling such states, in the presence of repulsive interactions, can lead to the zero-field Fractional Chern Insulator. These FCIs are sometimes not connected to the Fractional Quantum Hall Effect in Landau Levels. This is the case in bands with Chern number formula_0; such states are therefore a new type of state inherent to such lattice models. They have been explored with respect to their quasi-charge excitations, non-Abelian states and the physics of twist defects, which may be conceptually interesting for topological quantum computing. Experimentally, formula_0 Chern insulators have been realized without a magnetic field. FCIs have been claimed to be realized experimentally in van der Waals heterostructures, but with an external magnetic field of order 10–30 T. More recently, FCIs in a formula_1 band have been claimed to be observed in twisted bilayer graphene close to the magic angle, again requiring a magnetic field, of order 5 T, in order to "smoothen" out the Berry curvature of the bands. These states have been called FCIs due to their link to lattice physics, either in Hofstadter bands or in the moiré structure, but they still required a nonzero magnetic field for their stabilization. Zero field fractional Chern insulators. The prerequisite for a zero-field fractional Chern insulator is magnetism. The best way to obtain magnetism is through an exchange interaction that simultaneously polarizes the spins. This phenomenon was first observed in twisted MoTe2, in both integer and fractional states, by a University of Washington group. In 2023, a series of groups reported FCIs at zero magnetic field in twisted MoTe2 samples. The University of Washington group first identified the fractional Chern numbers of the formula_2, formula_3 and formula_4 states using trion emission sensing. 
This was followed by the Cornell group, who performed thermodynamic measurements on the formula_2 and formula_3 states. These samples, where the moiré bands are valley-spin locked, undergo a spin-polarization transition which gives rise to a formula_5 Chern insulator state at integer filling formula_6 of the moiré bands. Upon fractional filling at formula_7 and formula_8, a gapped state develops with a fractional slope in the Streda formula, a hallmark of an FCI. These fractional states are identical to the predicted zero magnetic field FCIs. After the optical sensing measurements, the University of Washington group reported the first transport "smoking-gun" evidence of the fractional quantum anomalous Hall effect that should be exhibited by a zero-field fractional Chern insulator, at formula_2, formula_3 and formula_4. They also identified a possible composite Fermi liquid at formula_9 that mimics the half-filled Landau level of a 2D electron gas. The formula_2 and formula_3 states were also partially reproduced by the Shanghai group, although the formula_3 quantization was not as good. The full matching of FCI physics in MoTe2, using the single particle model proposed in, to experiments still presents intriguing and unresolved puzzles. These have only been partially addressed theoretically, including the issues of model parameters, sample magnetization, and the appearance of some FCI states (at fillings formula_7 and formula_8) alongside the absence of others (so far, at fillings formula_10). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "|C|>1" }, { "math_id": 1, "text": "|C|=1 " }, { "math_id": 2, "text": "\\nu = -1" }, { "math_id": 3, "text": "\\nu = -2/3" }, { "math_id": 4, "text": "\\nu = -3/5" }, { "math_id": 5, "text": "C=1" }, { "math_id": 6, "text": "-1" }, { "math_id": 7, "text": "-2/3" }, { "math_id": 8, "text": "-3/5" }, { "math_id": 9, "text": "\\nu = -1/2" }, { "math_id": 10, "text": "-1/3, -2/5" } ]
https://en.wikipedia.org/wiki?curid=75002510
7500394
Minkowski–Steiner formula
In mathematics, the Minkowski–Steiner formula is a formula relating the surface area and volume of compact subsets of Euclidean space. More precisely, it defines the surface area as the "derivative" of enclosed volume in an appropriate sense. The Minkowski–Steiner formula is used, together with the Brunn–Minkowski theorem, to prove the isoperimetric inequality. It is named after Hermann Minkowski and Jakob Steiner. Statement of the Minkowski-Steiner formula. Let formula_0, and let formula_1 be a compact set. Let formula_2 denote the Lebesgue measure (volume) of formula_3. Define the quantity formula_4 by the Minkowski–Steiner formula formula_5 where formula_6 denotes the closed ball of radius formula_7, and formula_8 is the Minkowski sum of formula_3 and formula_9, so that formula_10 Remarks. Surface measure. For "sufficiently regular" sets formula_3, the quantity formula_4 does indeed correspond with the formula_11-dimensional measure of the boundary formula_12 of formula_3. See Federer (1969) for a full treatment of this problem. Convex sets. When the set formula_3 is a convex set, the lim-inf above is a true limit, and one can show that formula_13 where the formula_14 are some continuous functions of formula_3 (see quermassintegrals) and formula_15 denotes the measure (volume) of the unit ball in formula_16: formula_17 where formula_18 denotes the Gamma function. Example: volume and surface area of a ball. Taking formula_19 gives the following well-known formula for the surface area of the sphere of radius formula_20, formula_21: formula_22 formula_23 formula_24 where formula_15 is as above.
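A concrete way to see the formula at work is the unit square in the plane, for which the volume of the dilated set can be written down exactly; the short Python sketch below (the square example is chosen here for illustration and is not taken from the article) shows the difference quotient approaching the perimeter 4.

```python
# For A = [0,1]^2 in R^2:  mu(A + B_delta) = 1 + 4*delta + pi*delta^2
# (the square, four side rectangles of width delta, and four quarter discs),
# so (mu(A + B_delta) - mu(A)) / delta -> 4, the length of the boundary.
import math

def volume_of_dilated_unit_square(delta):
    return 1.0 + 4.0 * delta + math.pi * delta ** 2

for delta in (0.1, 0.01, 0.001):
    quotient = (volume_of_dilated_unit_square(delta) - 1.0) / delta
    print(delta, "->", quotient)   # 4.314..., 4.031..., 4.003...  -> 4
```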
[ { "math_id": 0, "text": "n \\geq 2" }, { "math_id": 1, "text": "A \\subsetneq \\mathbb{R}^{n}" }, { "math_id": 2, "text": "\\mu (A)" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "\\lambda (\\partial A)" }, { "math_id": 5, "text": "\\lambda (\\partial A) := \\liminf_{\\delta \\to 0} \\frac{\\mu \\left( A + \\overline{B_{\\delta}} \\right) - \\mu (A)}{\\delta}," }, { "math_id": 6, "text": "\\overline{B_{\\delta}} := \\left\\{ x = (x_{1}, \\dots, x_{n}) \\in \\mathbb{R}^{n} \\left| | x | := \\sqrt{x_{1}^{2} + \\dots + x_{n}^{2}} \\leq \\delta \\right. \\right\\}" }, { "math_id": 7, "text": "\\delta > 0" }, { "math_id": 8, "text": "A + \\overline{B_{\\delta}} := \\left\\{ a + b \\in \\mathbb{R}^{n} \\left| a \\in A, b \\in \\overline{B_{\\delta}} \\right. \\right\\}" }, { "math_id": 9, "text": "\\overline{B_{\\delta}}" }, { "math_id": 10, "text": "A + \\overline{B_{\\delta}} = \\left\\{ x \\in \\mathbb{R}^{n} \\mathrel|\\ \\mathopen| x - a \\mathclose| \\leq \\delta \\mbox{ for some } a \\in A \\right\\}." }, { "math_id": 11, "text": "(n - 1)" }, { "math_id": 12, "text": "\\partial A" }, { "math_id": 13, "text": "\\mu \\left( A + \\overline{B_{\\delta}} \\right) = \\mu (A) + \\lambda (\\partial A) \\delta + \\sum_{i = 2}^{n - 1} \\lambda_{i} (A) \\delta^{i} + \\omega_{n} \\delta^{n}," }, { "math_id": 14, "text": "\\lambda_{i}" }, { "math_id": 15, "text": "\\omega_{n}" }, { "math_id": 16, "text": "\\mathbb{R}^{n}" }, { "math_id": 17, "text": "\\omega_{n} = \\frac{2 \\pi^{n / 2}}{n \\Gamma (n / 2)}," }, { "math_id": 18, "text": "\\Gamma" }, { "math_id": 19, "text": "A = \\overline{B_{R}}" }, { "math_id": 20, "text": "R" }, { "math_id": 21, "text": "S_{R} := \\partial B_{R}" }, { "math_id": 22, "text": "\\lambda (S_{R}) = \\lim_{\\delta \\to 0} \\frac{\\mu \\left( \\overline{B_{R}} + \\overline{B_{\\delta}} \\right) - \\mu \\left( \\overline{B_{R}} \\right)}{\\delta}" }, { "math_id": 23, "text": "= \\lim_{\\delta \\to 0} \\frac{[ (R + \\delta)^{n} - R^{n} ] \\omega_{n}}{\\delta}" }, { "math_id": 24, "text": "= n R^{n - 1} \\omega_{n}," } ]
https://en.wikipedia.org/wiki?curid=7500394
7500545
Brunn–Minkowski theorem
In mathematics, the Brunn–Minkowski theorem (or Brunn–Minkowski inequality) is an inequality relating the volumes (or more generally Lebesgue measures) of compact subsets of Euclidean space. The original version of the Brunn–Minkowski theorem (Hermann Brunn 1887; Hermann Minkowski 1896) applied to convex sets; the generalization to compact nonconvex sets stated here is due to Lazar Lyusternik (1935). Statement. Let "n" ≥ 1 and let "μ" denote the Lebesgue measure on R"n". Let "A" and "B" be two nonempty compact subsets of R"n". Then the following inequality holds: formula_0 where "A" + "B" denotes the Minkowski sum: formula_1 The theorem is also true in the setting where formula_2 are only assumed to be measurable and non-empty. Multiplicative version. The multiplicative form of Brunn–Minkowski inequality states that formula_3 for all formula_4. The Brunn–Minkowski inequality is equivalent to the multiplicative version. In one direction, use the inequality formula_5 (exponential is convex), which holds for formula_6. In particular, formula_7. Conversely, using the multiplicative form, we find formula_8 The right side is maximized at formula_9, which gives formula_10. The Prékopa–Leindler inequality is a functional generalization of this version of Brunn–Minkowski. On the hypothesis. Measurability. It is possible for formula_11 to be Lebesgue measurable and formula_12 to not be; a counter example can be found in "Measure zero sets with non-measurable sum." On the other hand, if formula_11 are Borel measurable, then formula_12 is the continuous image of the Borel set formula_13, so analytic and thus measurable. See the discussion in Gardner's survey for more on this, as well as ways to avoid measurability hypothesis. In the case that "A" and "B" are compact, so is "A + B", being the image of the compact set formula_13 under the continuous addition map : formula_14, so the measurability conditions are easy to verify. Non-emptiness. The condition that formula_11 are both non-empty is clearly necessary. This condition is not part of the multiplicative versions of BM stated below. Proofs. We give two well known proofs of Brunn–Minkowski. Important corollaries. The Brunn–Minkowski inequality gives much insight into the geometry of high dimensional convex bodies. In this section we sketch a few of those insights. Concavity of the radius function (Brunn's theorem). Consider a convex body formula_16. Let formula_17 be vertical slices of "K." Define formula_18 to be the radius function; if the slices of K are discs, then "r(x)" gives the radius of the disc "K(x)", up to a constant. For more general bodies this "radius" function does not appear to have a completely clear geometric interpretation beyond being the radius of the disc obtained by packing the volume of the slice as close to the origin as possible; in the case when "K(x)" is not a disc, the example of a hypercube shows that the average distance to the center of mass can be much larger than "r(x)." Sometimes in the context of a convex geometry, the radius function has a different meaning, here we follow the terminology of this lecture. By convexity of "K," we have that formula_19. Applying the Brunn–Minkowski inequality gives formula_20, provided formula_21. This shows that the "radius" function is concave on its support, matching the intuition that a convex body does not dip into itself along any direction. This result is sometimes known as Brunn's theorem. Brunn–Minkowski symmetrization of a convex body. 
Again consider a convex body formula_22. Fix some line formula_23 and for each formula_24 let formula_25 denote the affine hyperplane orthogonal to formula_26 that passes through formula_27. Define, formula_28; as discussed in the previous section, this function is concave. Now, let formula_29. That is, formula_30 is obtained from formula_22 by replacing each slice formula_31 with a disc of the same formula_32-dimensional volume centered formula_23 inside of formula_25. The concavity of the radius function defined in the previous section implies that that formula_30 is convex. This construction is called the Brunn–Minkowski symmetrization. Grunbaum's theorem. Theorem (Grunbaum's theorem): Consider a convex body formula_33. Let formula_34 be any half-space containing the center of mass of formula_35; that is, the expected location of a uniform point sampled from formula_36 Then formula_37. Grunbaum's theorem can be proven using Brunn–Minkowski inequality, specifically the convexity of the Brunn–Minkowski symmetrization. Grunbaum's inequality has the following fair cake cutting interpretation. Suppose two players are playing a game of cutting up an formula_38 dimensional, convex cake. Player 1 chooses a point in the cake, and player two chooses a hyperplane to cut the cake along. Player 1 then receives the cut of the cake containing his point. Grunbaum's theorem implies that if player 1 chooses the center of mass, then the worst that an adversarial player 2 can do is give him a piece of cake with volume at least a formula_39 fraction of the total. In dimensions 2 and 3, the most common dimensions for cakes, the bounds given by the theorem are approximately formula_40 respectively. Note, however, that in formula_41 dimensions, calculating the centroid is formula_42 hard, limiting the usefulness of this cake cutting strategy for higher dimensional, but computationally bounded creatures. Applications of Grunbaum's theorem also appear in convex optimization, specifically in analyzing the converge of the center of gravity method. Isoperimetric inequality. Let formula_43 denote the unit ball. For a convex body, "K", let formula_44 define its surface area. This agrees with the usual meaning of surface area by the Minkowski-Steiner formula. Consider the function formula_45. The isoperimetric inequality states that this is maximized on Euclidean balls. Applications to inequalities between mixed volumes. The Brunn–Minkowski inequality can be used to deduce the following inequality formula_46, where the formula_47 term is a mixed-volume. Equality holds if and only if "K,L" are homothetic. (See theorem 3.4.3 in Hug and Weil's course on convex geometry.) Concentration of measure on the sphere and other strictly convex surfaces. We prove the following theorem on concentration of measure, following notes by Barvinok and notes by Lap Chi Lau. See also Concentration of measure#Concentration on the sphere. Theorem: Let formula_48 be the unit sphere in formula_49. Let formula_50. Define formula_51, where d refers to the Euclidean distance in formula_52. Let formula_53 denote the surface area on the sphere. Then, for any formula_54 we have that formula_55. Version of this result hold also for so-called "strictly convex surfaces," where the result depends on the modulus of convexity. However, the notion of surface area requires modification, see: the aforementioned notes on concentration of measure from Barvinok. Remarks. 
The proof of the Brunn–Minkowski theorem establishes that the function formula_56 is concave in the sense that, for every pair of nonempty compact subsets "A" and "B" of R"n" and every 0 ≤ "t" ≤ 1, formula_57 For convex sets "A" and "B" of positive measure, the inequality in the theorem is strict for 0 < "t" < 1 unless "A" and "B" are positive homothetic, i.e. are equal up to translation and dilation by a positive factor. Examples. Rounded cubes. It is instructive to consider the case where formula_58 is an formula_59 square in the plane and formula_60 is a ball of radius formula_15. In this case, formula_12 is a rounded square, and its volume can be accounted for as the four quarter-discs of radius formula_15 at the corners, the four rectangles of dimensions formula_61 along the sides, and the original square. Thus, formula_62. This example also hints at the theory of mixed-volumes, since the terms that appear in the expansion of the volume of formula_12 correspond to the differently dimensional pieces of "A." In particular, if we rewrite Brunn–Minkowski as formula_63, we see that we can think of the cross terms of the binomial expansion of the latter as accounting, in some fashion, for the mixed volume representation of formula_64. This same phenomenon can also be seen for the sum of an "n"-dimensional formula_59 box and a ball of radius formula_15, where the cross terms in formula_65, up to constants, account for the mixed volumes. This is made precise for the first mixed volume in the section above on the applications to mixed volumes. Examples where the lower bound is loose. The left-hand side of the BM inequality can in general be much larger than the right side. For instance, we can take X to be the x-axis, and Y the y-axis inside the plane; then each has measure zero but the sum has infinite measure. Another example is given by the Cantor set. If formula_66 denotes the middle third Cantor set, then it is an exercise in analysis to show that formula_67. Connections to other parts of mathematics. The Brunn–Minkowski inequality continues to be relevant to modern geometry and algebra. For instance, there are connections to algebraic geometry, and combinatorial versions about counting sets of points inside the integer lattice.
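As a quick numerical illustration, the following Python sketch checks the rounded-square computation above and the Grunbaum constants quoted earlier; the side length and radius are arbitrary choices.

import math

# Rounded square: A is an l-by-l square, B is a disc of radius eps.
l, eps = 2.0, 0.5
vol_A = l * l
vol_B = math.pi * eps ** 2
vol_sum = l ** 2 + 4 * eps * l + math.pi * eps ** 2     # exact area of A + B
bm_bound = (math.sqrt(vol_A) + math.sqrt(vol_B)) ** 2   # Brunn-Minkowski lower bound, n = 2
print(vol_sum, bm_bound, vol_sum >= bm_bound)           # 8.785..., 8.330..., True

# Grunbaum constants (n/(n+1))^n for dimensions 2 and 3, both at least 1/e.
for n in (2, 3):
    print(n, (n / (n + 1)) ** n)                        # 0.444... and 0.4218...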
[ { "math_id": 0, "text": "[ \\mu (A + B) ]^{1/n} \\geq [\\mu (A)]^{1/n} + [\\mu (B)]^{1/n}," }, { "math_id": 1, "text": "A + B := \\{\\, a + b \\in \\mathbb{R}^{n} \\mid a \\in A,\\ b \\in B \\,\\}." }, { "math_id": 2, "text": " A, B, A + B " }, { "math_id": 3, "text": " \\mu(\\lambda A + (1 - \\lambda) B) \\geq \\mu(A)^{\\lambda} \\mu(B)^{1 - \\lambda} " }, { "math_id": 4, "text": " \\lambda \\in [0,1] " }, { "math_id": 5, "text": " \\lambda x + (1 - \\lambda) y \\geq x^{\\lambda} y^{1-\\lambda} " }, { "math_id": 6, "text": " x,y \\geq 0, \\lambda \\in [0,1] " }, { "math_id": 7, "text": " \\mu(\\lambda A + (1 - \\lambda) B) \\geq ( \\lambda \\mu(A)^{1/n} + ( 1 - \\lambda) \\mu(B)^{1/n})^n \\geq \\mu(A)^{\\lambda} \\mu(B)^{1 - \\lambda} " }, { "math_id": 8, "text": " \\mu(A+B) = \\mu(\\lambda \\frac{A}{\\lambda} + (1 - \\lambda) \\frac{B}{1 - \\lambda}) \\geq \\frac{\\mu(A)^{\\lambda} \\mu(B)^{1 - \\lambda}}{\\lambda^{n\\lambda}(1 - \\lambda)^{n(1 - \\lambda)}} " }, { "math_id": 9, "text": " \\lambda = \\frac{1}{1+e^C}, C = \\frac 1 n \\ln\\frac{\\mu(A)}{\\mu(B)} " }, { "math_id": 10, "text": " \\mu(A+B) \\geq (\\mu(A)^{1/n}+\\mu(B)^{1/n})^n " }, { "math_id": 11, "text": " A,B " }, { "math_id": 12, "text": " A + B " }, { "math_id": 13, "text": " A \\times B " }, { "math_id": 14, "text": " + : \\mathbb{R}^n \\times \\mathbb{R}^n \\to \\mathbb{R}^n " }, { "math_id": 15, "text": " \\epsilon " }, { "math_id": 16, "text": " K \\subseteq \\mathbb{R}^n " }, { "math_id": 17, "text": " K(x) = K \\cap \\{ x_1 = x \\} " }, { "math_id": 18, "text": " r(x) = \\mu(K(x))^{\\frac{1}{n-1}} " }, { "math_id": 19, "text": " K( \\lambda x + (1 - \\lambda)y ) \\supseteq \\lambda K(x) + (1 - \\lambda) K(y) " }, { "math_id": 20, "text": "r(K( \\lambda x + (1 - \\lambda)y )) \\geq \\lambda r ( K(x)) + (1 - \\lambda) r ( K(y)) " }, { "math_id": 21, "text": " K(x) \\not = \\emptyset, K(y) \\not = \\emptyset " }, { "math_id": 22, "text": " K " }, { "math_id": 23, "text": " l " }, { "math_id": 24, "text": " t \\in l " }, { "math_id": 25, "text": " H_t " }, { "math_id": 26, "text": " l" }, { "math_id": 27, "text": " t " }, { "math_id": 28, "text": " r(t) = Vol( K \\cap H_t) " }, { "math_id": 29, "text": " K' = \\bigcup_{t \\in l, K \\cap H_t \\not = \\emptyset } B(t, r(t)) \\cap H_t " }, { "math_id": 30, "text": " K' " }, { "math_id": 31, "text": " H_t \\cap K " }, { "math_id": 32, "text": " (n-1) " }, { "math_id": 33, "text": " K \\subseteq \\mathbb{R}^n " }, { "math_id": 34, "text": " H " }, { "math_id": 35, "text": " K " }, { "math_id": 36, "text": " K. 
" }, { "math_id": 37, "text": " \\mu( H \\cap K) \\geq (\\frac{n}{n+1})^n \\mu(K) \\geq \\frac{1}{e} \\mu(K) " }, { "math_id": 38, "text": " n " }, { "math_id": 39, "text": " 1/e " }, { "math_id": 40, "text": " .444, .42 " }, { "math_id": 41, "text": " n " }, { "math_id": 42, "text": " \\# P " }, { "math_id": 43, "text": " B = B(0,1) = \\{ x \\in \\mathbb{R}^n : ||x||_2 \\leq 1 \\} " }, { "math_id": 44, "text": " S(K) = \\lim_{\\epsilon \\to 0} \\frac{ \\mu(K + \\epsilon B) - \\mu(K)}{\\epsilon} " }, { "math_id": 45, "text": " c(X) = \\frac{ \\mu(K)^{1/n} }{ S(K)^{1/(n-1)}} " }, { "math_id": 46, "text": " V(K, \\ldots, K, L)^n \\geq V(K)^{n-1} V(L) " }, { "math_id": 47, "text": " V(K, \\ldots, K,L) " }, { "math_id": 48, "text": " S " }, { "math_id": 49, "text": " \\mathbb{R}^n " }, { "math_id": 50, "text": " X \\subseteq S " }, { "math_id": 51, "text": " X_{\\epsilon} = \\{ z \\in S : d(z,X) \\leq \\epsilon \\} " }, { "math_id": 52, "text": " \\mathbb{R}^n " }, { "math_id": 53, "text": " \\nu " }, { "math_id": 54, "text": " \\epsilon \\in (0,1 ]" }, { "math_id": 55, "text": " \\frac{\\nu(X_{\\epsilon})}{\\nu(S) } \\geq 1 - \\frac{ \\nu(S)}{\\nu(X)} e^{ - \\frac{n \\epsilon^2}{4}} " }, { "math_id": 56, "text": "A \\mapsto [\\mu (A)]^{1/n}" }, { "math_id": 57, "text": "\\left[ \\mu (t A + (1 - t) B ) \\right]^{1/n} \\geq t [ \\mu (A) ]^{1/n} + (1 - t) [ \\mu (B) ]^{1/n}." }, { "math_id": 58, "text": " A " }, { "math_id": 59, "text": " l \\times l " }, { "math_id": 60, "text": " B " }, { "math_id": 61, "text": " l \\times \\epsilon " }, { "math_id": 62, "text": " \\mu( A + B) = l^2 + 4 \\epsilon l + \\frac{4}{4} \\pi \\epsilon^2 = \\mu(A) + 4 \\epsilon l + \\mu(B) \\geq \\mu(A) + 2 \\sqrt{\\pi} \\epsilon l + \\mu(B) = \\mu(A) + 2 \\sqrt{ \\mu(A) \\mu(B) } + \\mu(B) = ( \\mu(A)^{1/2} + \\mu(B)^{1/2})^2 " }, { "math_id": 63, "text": " \\mu( A + B) \\geq ( \\mu(A)^{1/n} + \\mu(B)^{1/n})^n " }, { "math_id": 64, "text": " \\mu(A + B) = V(A, \\ldots, A) + n V(B, A, \\ldots, A) + \\ldots + {n \\choose j} V(B,\\ldots, B, A,\\ldots,A) + \\ldots n V(B,\\ldots, B,A) + \\mu(B) " }, { "math_id": 65, "text": " ( \\mu(A)^{1/n} + \\mu(B)^{1/n})^n " }, { "math_id": 66, "text": " C " }, { "math_id": 67, "text": " C + C = [0,2] " } ]
https://en.wikipedia.org/wiki?curid=7500545
750227
Simple public-key infrastructure
Simple public key infrastructure (SPKI, pronounced "spoo-key") was an attempt to overcome the complexity of traditional X.509 public key infrastructure. It was specified in two Internet Engineering Task Force (IETF) Request for Comments (RFC) specifications— and —from the IETF SPKI working group. These two RFCs never passed the "experimental" maturity level of the IETF's RFC status. The SPKI specification defined an authorization certificate format, providing for the delineation of privileges, rights or other such attributes (called authorizations) and binding them to a public key. In 1996, SPKI was merged with Simple Distributed Security Infrastructure (SDSI, pronounced "sudsy") by Ron Rivest and Butler Lampson. History and overview. The original SPKI had identified principals only as public keys but allowed binding authorizations to those keys and delegation of authorization from one key to another. The encoding used was attribute:value pairing, similar to headers. The original SDSI bound local names (of individuals or groups) to public keys (or other names), but carried authorization only in Access Control Lists (ACLs) and did not allow for delegation of subsets of a principal's authorization. The encoding used was standard S-expression. Sample RSA public key in SPKI in "advanced transport format" (for actual transport the structure would be Base64-encoded): (public-key    (rsa-pkcs1-md5     (e #03#)     (n      |ANHCG85jXFGmicr3MGPj53FYYSY1aWAue6PKnpFErHhKMJa4HrK4WSKTO      YTTlapRznnELD2D7lWd3Q8PD0lyi1NJpNzMkxQVHrrAnIQoczeOZuiz/yY      VDzJ1DdiImixyb/Jyme3D0UiUXhd6VGAz0x0cgrKefKnmjy410Kro3uW1| ))) The combined SPKI/SDSI allows the naming of principals, creation of named groups of principals and the delegation of rights or other attributes from one principal to another. It includes a language for expression of authorization - a language that includes a definition of "intersection" of authorizations. It also includes the notion of threshold subject - a construct granting authorizations (or delegations) only when formula_0 of formula_1 of the listed subjects concur (in a request for access or a delegation of rights). SPKI/SDSI uses S-expression encoding, but specifies a binary form that is extremely easy to parse - an LR(0) grammar - called Canonical S-expressions. SPKI/SDSI does not define a role for a commercial certificate authority (CA). In fact, one premise behind SPKI is that a commercial CA serves no useful purpose. As a result of that, SPKI/SDSI is deployed primarily in closed solutions and in demonstration projects of academic interest. Another side-effect of this design element is that it is difficult to monetize SPKI/SDSI by itself. It can be a component of some other product, but there is no business case for developing SPKI/SDSI tools and services except as part of some other product. The most prominent general deployments of SPKI/SDSI are E-speak, a middleware product from HP that used SPKI/SDSI for access control of web methods, and UPnP Security, that uses an XML dialect of SPKI/SDSI for access control of web methods, delegation of rights among network participants, etc.
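For transport and hashing, SPKI/SDSI relies on the canonical S-expression form mentioned above, in which every byte string is length-prefixed and lists are parenthesized with no whitespace. The following Python sketch is a minimal illustration of that basic encoding; display hints and other details of the full specification are omitted, and the modulus bytes below are placeholders rather than the actual key shown above.

def canonical_sexp(node):
    # Byte-string atoms become "<length>:<bytes>"; lists become "(" + parts + ")".
    if isinstance(node, (bytes, bytearray)):
        return str(len(node)).encode() + b":" + bytes(node)
    return b"(" + b"".join(canonical_sexp(child) for child in node) + b")"

# Same structure as the sample key above, with placeholder values.
key = [b"public-key",
       [b"rsa-pkcs1-md5",
        [b"e", b"\x03"],
        [b"n", b"\x00\xc0\xff\xee"]]]

print(canonical_sexp(key))
# b'(10:public-key(13:rsa-pkcs1-md5(1:e1:\x03)(1:n4:\x00\xc0\xff\xee)))'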
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": "N" } ]
https://en.wikipedia.org/wiki?curid=750227
7503084
Okamoto–Uchiyama cryptosystem
The Okamoto–Uchiyama cryptosystem is a public key cryptosystem proposed in 1998 by Tatsuaki Okamoto and Shigenori Uchiyama. The system works in the multiplicative group of integers modulo n, formula_0, where "n" is of the form "p"2"q" and "p" and "q" are large primes. Background. Let formula_1 be an odd prime. Define formula_2. formula_3 is a subgroup of formula_4 with formula_5 (the elements of formula_3 are formula_6). Define formula_7 by formula_8 formula_9 is a homomorphism between formula_3 and the additive group formula_10: that is, formula_11. Since formula_9 is bijective, it is an isomorphism. One can now show the following as a corollary: Let formula_12 such that formula_13 and formula_14 for formula_15. Then formula_16 The corollary is a direct consequence of formula_17. Operation. Like many public key cryptosystems, this scheme works in the group formula_0. This scheme is homomorphic and hence malleable. Key generation. A public/private key pair is generated as follows: The public key is then formula_23 and the private key is formula_24. Encryption. A message formula_25 can be encrypted with the public key formula_23 as follows. The value formula_28 is the encryption of formula_29. Decryption. An encrypted message formula_28 can be decrypted with the private key formula_24 as follows. The value formula_29 is the decryption of formula_28. Example. Let formula_36 and formula_37. Then formula_38. Select formula_39. Then formula_40. Now to encrypt a message formula_41, we pick a random formula_42 and compute formula_43. To decrypt the message 43, we compute formula_44. formula_45. formula_46. And finally formula_47. Proof of correctness. We wish to prove that the value computed in the last decryption step, formula_48, is equal to the original message formula_29. We have formula_49 So to recover formula_29 we need to take the discrete logarithm with base formula_50. This can be done by applying formula_9, as follows. By Fermat's little theorem, formula_51. Since formula_52 one can write formula_53 with formula_54. Then formula_55 and the corollary from earlier applies: formula_56. Security. Inverting the encryption function can be shown to be as hard as factoring "n", meaning that if an adversary could recover the entire message from the encryption of the message they would be able to factor "n". The semantic security (meaning adversaries cannot recover any information about the message from the encryption) rests on the "p"-subgroup assumption, which assumes that it is difficult to determine whether an element "x" in formula_0 is in the subgroup of order "p". This is very similar to the quadratic residuosity problem and the higher residuosity problem.
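The worked example above can be reproduced in a few lines of Python; the sketch below is illustrative only (it uses the toy parameters from the example and is not a secure implementation).

p, q = 3, 5
n = p * p * q                        # n = 45
g = 22
assert pow(g, p - 1, p * p) != 1     # g^(p-1) must not be 1 modulo p^2
h = pow(g, n, n)                     # h = 37

def L(x, p):                         # L(x) = (x - 1) / p on elements congruent to 1 mod p
    return (x - 1) // p

m, r = 2, 13                         # message and randomness from the example
c = (pow(g, m, n) * pow(h, r, n)) % n
print("ciphertext:", c)              # 43

a = L(pow(c, p - 1, p * p), p)       # 1
b = L(pow(g, p - 1, p * p), p)       # 2
b_inv = pow(b, -1, p)                # 2  (inverse of b modulo p; requires Python 3.8+)
print("plaintext:", (a * b_inv) % p) # 2, recovering the original message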
[ { "math_id": 0, "text": "(\\mathbb{Z}/n\\mathbb{Z})^*" }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "\\Gamma = \\{ x \\in (\\mathbb{Z}/p^2 \\mathbb{Z})^* | x \\equiv 1 \\bmod p \\}" }, { "math_id": 3, "text": "\\Gamma" }, { "math_id": 4, "text": "(\\mathbb{Z}/p^2 \\mathbb{Z})^*" }, { "math_id": 5, "text": "|\\Gamma| = p" }, { "math_id": 6, "text": "1, 1+p, 1+2p \\dots 1 + (p-1)p" }, { "math_id": 7, "text": "L: \\Gamma \\to \\mathbb{Z}/p\\mathbb{Z}" }, { "math_id": 8, "text": "L(x) = \\frac{x-1}{p}" }, { "math_id": 9, "text": "L" }, { "math_id": 10, "text": "\\mathbb{Z}/p\\mathbb{Z}" }, { "math_id": 11, "text": "L(ab) = L(a) + L(b) \\bmod p" }, { "math_id": 12, "text": "x \\in \\Gamma" }, { "math_id": 13, "text": "L(x) \\neq 0 \\bmod p" }, { "math_id": 14, "text": "y = x^m \\bmod p^2" }, { "math_id": 15, "text": "0 \\leq m < p" }, { "math_id": 16, "text": "m = \\frac{L(y)}{L(x)} = \\frac{y-1}{x-1} \\bmod p" }, { "math_id": 17, "text": "L(x^m) = m \\cdot L(x)" }, { "math_id": 18, "text": "q" }, { "math_id": 19, "text": "n = p^2 q" }, { "math_id": 20, "text": "g \\in \\{ 2 \\dots n-1 \\}" }, { "math_id": 21, "text": "g^{p-1} \\not\\equiv 1 \\mod p^2" }, { "math_id": 22, "text": "h = g^n \\bmod n" }, { "math_id": 23, "text": "(n,g,h)" }, { "math_id": 24, "text": "(p,q)" }, { "math_id": 25, "text": "m < p" }, { "math_id": 26, "text": "r \\in \\{ 1 \\dots n-1 \\}" }, { "math_id": 27, "text": "c = g^m h^r \\bmod n" }, { "math_id": 28, "text": "c" }, { "math_id": 29, "text": "m" }, { "math_id": 30, "text": "a = L(c^{p-1} \\bmod{p^2})" }, { "math_id": 31, "text": "b = L(g^{p-1} \\bmod{p^2})" }, { "math_id": 32, "text": "a" }, { "math_id": 33, "text": "b" }, { "math_id": 34, "text": "b' = b^{-1} \\bmod p" }, { "math_id": 35, "text": "m = ab' \\bmod p" }, { "math_id": 36, "text": "p = 3" }, { "math_id": 37, "text": "q = 5" }, { "math_id": 38, "text": "n = 3^2 \\cdot 5 = 45" }, { "math_id": 39, "text": "g = 22" }, { "math_id": 40, "text": "h = 22^{45} \\bmod 45 = 37" }, { "math_id": 41, "text": "m = 2" }, { "math_id": 42, "text": "r = 13" }, { "math_id": 43, "text": "c = g^m h^r \\bmod n = 22^{2} 37^{13} \\bmod 45 = 43" }, { "math_id": 44, "text": "a = \\frac{(43^2 \\bmod 3^2) - 1}{3} = 1" }, { "math_id": 45, "text": "b = \\frac{(22^2 \\bmod 3^2) - 1}{3} = 2" }, { "math_id": 46, "text": "b' = 2^{-1} \\bmod 3 = 2" }, { "math_id": 47, "text": "m = ab' = 2" }, { "math_id": 48, "text": "ab' \\bmod p" }, { "math_id": 49, "text": "(g^mh^r)^{p-1} \\equiv (g^m g^{nr})^{p-1} \\equiv (g^{p-1})^m g^{p(p-1)rpq} \\equiv (g^{p-1})^m \\mod p^2" }, { "math_id": 50, "text": "g^{p-1}" }, { "math_id": 51, "text": "g^{p-1} \\equiv 1 \\bmod p" }, { "math_id": 52, "text": "g^{p-1} \\not\\equiv 1 \\bmod p^2" }, { "math_id": 53, "text": "g^{p-1} = 1 + pr" }, { "math_id": 54, "text": "0 < r < p" }, { "math_id": 55, "text": "L(g^{p-1}) \\not\\equiv 0 \\bmod p" }, { "math_id": 56, "text": "m = \\frac{L((g^{p-1})^m)}{L(g^{p-1})} \\bmod p" } ]
https://en.wikipedia.org/wiki?curid=7503084
750326
Compact group
Topological group with compact topology In mathematics, a compact (topological) group is a topological group whose topology realizes it as a compact topological space. Compact groups are a natural generalization of finite groups with the discrete topology and have properties that carry over in significant fashion. Compact groups have a well-understood theory, in relation to group actions and representation theory. In the following we will assume all groups are Hausdorff spaces. Compact Lie groups. Lie groups form a class of topological groups, and the compact Lie groups have a particularly well-developed theory. Basic examples of compact Lie groups include the circle group and the torus groups, the unitary and special unitary groups, the orthogonal and special orthogonal groups, and the compact symplectic groups. The classification theorem of compact Lie groups states that up to finite extensions and finite covers this exhausts the list of examples (which already includes some redundancies). This classification is described in more detail in the next subsection. Classification. Given any compact Lie group "G" one can take its identity component "G"0, which is connected. The quotient group "G"/"G"0 is the group of components π0("G") which must be finite since "G" is compact. We therefore have a finite extension formula_0 Meanwhile, for connected compact Lie groups, we have the following result: Theorem: Every connected compact Lie group is the quotient by a finite central subgroup of a product of a simply connected compact Lie group and a torus. Thus, the classification of connected compact Lie groups can in principle be reduced to knowledge of the simply connected compact Lie groups together with information about their centers. (For information about the center, see the section below on fundamental group and center.) Finally, every compact, connected, simply-connected Lie group "K" is a product of finitely many compact, connected, simply-connected simple Lie groups "K""i" each of which is isomorphic to exactly one of the compact symplectic groups formula_1, the special unitary groups formula_2, the spin groups formula_3, or one of the five exceptional groups G2, F4, E6, E7, and E8. The restrictions on "n" are to avoid special isomorphisms among the various families for small values of "n". For each of these groups, the center is known explicitly. The classification is through the associated root system (for a fixed maximal torus), which in turn is classified by its Dynkin diagram. The classification of compact, simply connected Lie groups is the same as the classification of complex semisimple Lie algebras. Indeed, if "K" is a simply connected compact Lie group, then the complexification of the Lie algebra of "K" is semisimple. Conversely, every complex semisimple Lie algebra has a compact real form isomorphic to the Lie algebra of a compact, simply connected Lie group. Maximal tori and root systems. A key idea in the study of a connected compact Lie group "K" is the concept of a "maximal torus", that is, a subgroup "T" of "K" that is isomorphic to a product of several copies of formula_4 and that is not contained in any larger subgroup of this type. A basic example is the case formula_5, in which case we may take formula_6 to be the group of diagonal elements in formula_7. A basic result is the "torus theorem" which states that every element of formula_7 belongs to a maximal torus and that all maximal tori are conjugate. The maximal torus in a compact group plays a role analogous to that of the Cartan subalgebra in a complex semisimple Lie algebra. 
In particular, once a maximal torus formula_8 has been chosen, one can define a root system and a Weyl group similar to what one has for semisimple Lie algebras. These structures then play an essential role both in the classification of connected compact groups (described above) and in the representation theory of a fixed such group (described below). The root systems associated to the simple compact groups appearing in the classification of simply connected compact groups are as follows: Fundamental group and center. It is important to know whether a connected compact Lie group is simply connected, and if not, to determine its fundamental group. For compact Lie groups, there are two basic approaches to computing the fundamental group. The first approach applies to the classical compact groups formula_9, formula_17, formula_18, and formula_13 and proceeds by induction on formula_19. The second approach uses the root system and applies to all connected compact Lie groups. It is also important to know the center of a connected compact Lie group. The center of a classical group formula_20 can easily be computed "by hand," and in most cases consists simply of whatever roots of the identity are in formula_20. (The group SO(2) is an exception—the center is the whole group, even though most elements are not roots of the identity.) Thus, for example, the center of formula_9 consists of "n"th roots of unity times the identity, a cyclic group of order formula_19. In general, the center can be expressed in terms of the root lattice and the kernel of the exponential map for the maximal torus. The general method shows, for example, that the simply connected compact group corresponding to the exceptional root system formula_21 has trivial center. Thus, the compact formula_21 group is one of very few simple compact groups that are simultaneously simply connected and center free. (The others are formula_22 and formula_23.) Further examples. Amongst groups that are not Lie groups, and so do not carry the structure of a manifold, examples are the additive group "Z""p" of p-adic integers, and constructions from it. In fact any profinite group is a compact group. This means that Galois groups are compact groups, a basic fact for the theory of algebraic extensions in the case of infinite degree. Pontryagin duality provides a large supply of examples of compact commutative groups. These are in duality with abelian discrete groups. Haar measure. Compact groups all carry a Haar measure, which will be invariant by both left and right translation (the modulus function must be a continuous homomorphism to positive reals (R+, ×), and so 1). In other words, these groups are unimodular. Haar measure is easily normalized to be a probability measure, analogous to dθ/2π on the circle. Such a Haar measure is in many cases easy to compute; for example for orthogonal groups it was known to Adolf Hurwitz, and in the Lie group cases can always be given by an invariant differential form. In the profinite case there are many subgroups of finite index, and Haar measure of a coset will be the reciprocal of the index. Therefore, integrals are often computable quite directly, a fact applied constantly in number theory. If formula_7 is a compact group and formula_24 is the associated Haar measure, the Peter–Weyl theorem provides a decomposition of formula_25 as an orthogonal direct sum of finite-dimensional subspaces of matrix entries for the irreducible representations of formula_7. Representation theory. 
The representation theory of compact groups (not necessarily Lie groups and not necessarily connected) was founded by the Peter–Weyl theorem. Hermann Weyl went on to give the detailed character theory of the compact connected Lie groups, based on maximal torus theory. The resulting Weyl character formula was one of the influential results of twentieth century mathematics. The combination of the Peter–Weyl theorem and the Weyl character formula led Weyl to a complete classification of the representations of a connected compact Lie group; this theory is described in the next section. A combination of Weyl's work and Cartan's theorem gives a survey of the whole representation theory of compact groups "G". That is, by the Peter–Weyl theorem the irreducible unitary representations ρ of "G" are into a unitary group (of finite dimension) and the image will be a closed subgroup of the unitary group by compactness. Cartan's theorem states that Im(ρ) must itself be a Lie subgroup in the unitary group. If "G" is not itself a Lie group, there must be a kernel to ρ. Further one can form an inverse system, for the kernel of ρ smaller and smaller, of finite-dimensional unitary representations, which identifies "G" as an inverse limit of compact Lie groups. Here the fact that in the limit a faithful representation of "G" is found is another consequence of the Peter–Weyl theorem. The unknown part of the representation theory of compact groups is thereby, roughly speaking, thrown back onto the complex representations of finite groups. This theory is rather rich in detail, but is qualitatively well understood. Representation theory of a connected compact Lie group. Certain simple examples of the representation theory of compact Lie groups can be worked out by hand, such as the representations of the , the special unitary group SU(2), and the special unitary group SU(3). We focus here on the general theory. See also the parallel theory of representations of a semisimple Lie algebra. Throughout this section, we fix a connected compact Lie group "K" and a maximal torus "T" in "K". Representation theory of "T". Since "T" is commutative, Schur's lemma tells us that each irreducible representation formula_26 of "T" is one-dimensional: formula_27 Since, also, "T" is compact, formula_26 must actually map into formula_28. To describe these representations concretely, we let formula_29 be the Lie algebra of "T" and we write points formula_30 as formula_31 In such coordinates, formula_26 will have the form formula_32 for some linear functional formula_33 on formula_29. Now, since the exponential map formula_34 is not injective, not every such linear functional formula_33 gives rise to a well-defined map of "T" into formula_4. Rather, let formula_35 denote the kernel of the exponential map: formula_36 where formula_37 is the identity element of "T". (We scale the exponential map here by a factor of formula_38 in order to avoid such factors elsewhere.) Then for formula_33 to give a well-defined map formula_26, formula_33 must satisfy formula_39 where formula_40 is the set of integers. A linear functional formula_33 satisfying this condition is called an analytically integral element. This integrality condition is related to, but not identical to, the notion of integral element in the setting of semisimple Lie algebras. Suppose, for example, "T" is just the group formula_4 of complex numbers formula_41 of absolute value 1. 
The Lie algebra is the set of purely imaginary numbers, formula_42 and the kernel of the (scaled) exponential map is the set of numbers of the form formula_43 where formula_19 is an integer. A linear functional formula_33 takes integer values on all such numbers if and only if it is of the form formula_44 for some integer formula_45. The irreducible representations of "T" in this case are one-dimensional and of the form formula_46 Representation theory of "K". We now let formula_47 denote a finite-dimensional irreducible representation of "K" (over formula_48). We then consider the restriction of formula_47 to "T". This restriction is not irreducible unless formula_47 is one-dimensional. Nevertheless, the restriction decomposes as a direct sum of irreducible representations of "T". (Note that a given irreducible representation of "T" may occur more than once.) Now, each irreducible representation of "T" is described by a linear functional formula_33 as in the preceding subsection. If a given formula_33 occurs at least once in the decomposition of the restriction of formula_47 to "T", we call formula_33 a weight of formula_47. The strategy of the representation theory of "K" is to classify the irreducible representations in terms of their weights. We now briefly describe the structures needed to formulate the theorem; more details can be found in the article on weights in representation theory. We need the notion of a root system for "K" (relative to a given maximal torus "T"). The construction of this root system formula_49 is very similar to the construction for complex semisimple Lie algebras. Specifically, the weights are the nonzero weights for the adjoint action of "T" on the complexified Lie algebra of "K". The root system "R" has all the usual properties of a root system, except that the elements of "R" may not span formula_29. We then choose a base formula_50 for "R" and we say that an integral element formula_33 is dominant if formula_51 for all formula_52. Finally, we say that one weight is higher than another if their difference can be expressed as a linear combination of elements of formula_50 with non-negative coefficients. The irreducible finite-dimensional representations of "K" are then classified by a theorem of the highest weight, which is closely related to the analogous theorem classifying representations of a semisimple Lie algebra. The result says that: The theorem of the highest weight for representations of "K" is then almost the same as for semisimple Lie algebras, with one notable exception: The concept of an integral element is different. The weights formula_33 of a representation formula_47 are analytically integral in the sense described in the previous subsection. Every analytically integral element is integral in the Lie algebra sense, but not the other way around. (This phenomenon reflects that, in general, not every representation of the Lie algebra formula_53 comes from a representation of the group "K".) On the other hand, if "K" is simply connected, the set of possible highest weights in the group sense is the same as the set of possible highest weights in the Lie algebra sense. The Weyl character formula. If formula_54 is representation of "K", we define the character of formula_55 to be the function formula_56 given by formula_57. This function is easily seen to be a class function, i.e., formula_58 for all formula_59 and formula_60 in "K". Thus, formula_61 is determined by its restriction to "T". 
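For the circle group, the characters coincide with the one-dimensional representations formula_46 just described, and their orthonormality with respect to the normalized Haar measure dθ/2π (mentioned earlier) can be checked directly. The following Python sketch is a simple numerical illustration; the grid size is an arbitrary choice.

import cmath, math

def inner(k1, k2, steps=2000):
    # <chi_k1, chi_k2> = (1/(2*pi)) * integral over [0, 2*pi) of e^{i k1 t} * conj(e^{i k2 t}) dt
    total = 0j
    for j in range(steps):
        t = 2 * math.pi * j / steps
        total += cmath.exp(1j * k1 * t) * cmath.exp(-1j * k2 * t)
    return total / steps

print(abs(inner(3, 3)))   # ~1.0: each character has norm one
print(abs(inner(3, 5)))   # ~0.0: distinct characters are orthogonal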
The study of characters is an important part of the representation theory of compact groups. One crucial result, which is a corollary of the Peter–Weyl theorem, is that the characters form an orthonormal basis for the set of square-integrable class functions in "K". A second key result is the Weyl character formula, which gives an explicit formula for the character—or, rather, the restriction of the character to "T"—in terms of the highest weight of the representation. In the closely related representation theory of semisimple Lie algebras, the Weyl character formula is an additional result established "after" the representations have been classified. In Weyl's analysis of the compact group case, however, the Weyl character formula is actually a crucial part of the classification itself. Specifically, in Weyl's analysis of the representations of "K", the hardest part of the theorem—showing that every dominant, analytically integral element is actually the highest weight of some representation—is proved in a totally different way from the usual Lie algebra construction using Verma modules. In Weyl's approach, the construction is based on the Peter–Weyl theorem and an analytic proof of the Weyl character formula. Ultimately, the irreducible representations of "K" are realized inside the space of continuous functions on "K". The SU(2) case. We now consider the case of the compact group SU(2). The representations are often considered from the Lie algebra point of view, but we here look at them from the group point of view. We take the maximal torus to be the set of matrices of the form formula_62 According to the example discussed above in the section on representations of "T", the analytically integral elements are labeled by integers, so that the dominant, analytically integral elements are non-negative integers formula_24. The general theory then tells us that for each formula_24, there is a unique irreducible representation of SU(2) with highest weight formula_24. Much information about the representation corresponding to a given formula_24 is encoded in its character. Now, the Weyl character formula says, in this case, that the character is given by formula_63 We can also write the character as sum of exponentials as follows: formula_64 From this last expression and the standard formula for the character in terms of the weights of the representation, we can read off that the weights of the representation are formula_65 each with multiplicity one. (The weights are the integers appearing in the exponents of the exponentials and the multiplicities are the coefficients of the exponentials.) Since there are formula_66 weights, each with multiplicity 1, the dimension of the representation is formula_66. Thus, we recover much of the information about the representations that is usually obtained from the Lie algebra computation. An outline of the proof. We now outline the proof of the theorem of the highest weight, following the original argument of Hermann Weyl. We continue to let formula_7 be a connected compact Lie group and formula_6 a fixed maximal torus in formula_7. We focus on the most difficult part of the theorem, showing that every dominant, analytically integral element is the highest weight of some (finite-dimensional) irreducible representation. The tools for the proof are the following: With these tools in hand, we proceed with the proof. The first major step in the argument is to prove the Weyl character formula. 
The formula states that if formula_55 is an irreducible representation with highest weight formula_33, then the character formula_61 of formula_55 satisfies: formula_67 for all formula_68 in the Lie algebra of formula_6. Here formula_26 is half the sum of the positive roots. (The notation uses the convention of "real weights"; this convention requires an explicit factor of formula_69 in the exponent.) Weyl's proof of the character formula is analytic in nature and hinges on the fact that the formula_70 norm of the character is 1. Specifically, if there were any additional terms in the numerator, the Weyl integral formula would force the norm of the character to be greater than 1. Next, we let formula_71 denote the function on the right-hand side of the character formula. We show that "even if formula_33 is not known to be the highest weight of a representation", formula_71 is a well-defined, Weyl-invariant function on formula_6, which therefore extends to a class function on formula_7. Then using the Weyl integral formula, one can show that as formula_33 ranges over the set of dominant, analytically integral elements, the functions formula_71 form an orthonormal family of class functions. We emphasize that we do not currently know that every such formula_33 is the highest weight of a representation; nevertheless, the expressions on the right-hand side of the character formula gives a well-defined set of functions formula_71, and these functions are orthonormal. Now comes the conclusion. The set of all formula_71—with formula_33 ranging over the dominant, analytically integral elements—forms an orthonormal set in the space of square integrable class functions. But by the Weyl character formula, the characters of the irreducible representations form a subset of the formula_71's. And by the Peter–Weyl theorem, the characters of the irreducible representations form an orthonormal basis for the space of square integrable class functions. If there were some formula_33 that is not the highest weight of a representation, then the corresponding formula_71 would not be the character of a representation. Thus, the characters would be a "proper" subset of the set of formula_71's. But then we have an impossible situation: an orthonormal "basis" (the set of characters of the irreducible representations) would be contained in a strictly larger orthonormal set (the set of formula_71's). Thus, every formula_33 must actually be the highest weight of a representation. Duality. The topic of recovering a compact group from its representation theory is the subject of the Tannaka–Krein duality, now often recast in terms of Tannakian category theory. From compact to non-compact groups. The influence of the compact group theory on non-compact groups was formulated by Weyl in his unitarian trick. Inside a general semisimple Lie group there is a maximal compact subgroup, and the representation theory of such groups, developed largely by Harish-Chandra, uses intensively the restriction of a representation to such a subgroup, and also the model of Weyl's character theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
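"The SU(2) case" above can also be checked numerically: the following Python sketch confirms that the Weyl character formula sin((m+1)θ)/sin(θ) agrees with the sum of exponentials over the weights m, m−2, ..., −m, and that its value near θ = 0 recovers the dimension m + 1. The test values of m and θ are arbitrary.

import cmath, math

def chi_weyl(m, theta):
    return math.sin((m + 1) * theta) / math.sin(theta)

def chi_weights(m, theta):
    # sum of e^{i k theta} over the weights k = m, m - 2, ..., -m
    return sum(cmath.exp(1j * k * theta) for k in range(-m, m + 1, 2)).real

theta = 0.7
for m in (1, 2, 5):
    print(m, chi_weyl(m, theta), chi_weights(m, theta))  # the two values agree
    print(m, chi_weyl(m, 1e-8))                          # ~ m + 1, the dimension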
[ { "math_id": 0, "text": "1\\to G_0 \\to G \\to \\pi_0(G) \\to 1." }, { "math_id": 1, "text": "\\operatorname{Sp}(n),\\,n\\geq 1" }, { "math_id": 2, "text": "\\operatorname{SU}(n),\\,n\\geq 3" }, { "math_id": 3, "text": "\\operatorname{Spin}(n),\\,n\\geq 7" }, { "math_id": 4, "text": "S^1" }, { "math_id": 5, "text": "K = \\operatorname{SU}(n)" }, { "math_id": 6, "text": "T" }, { "math_id": 7, "text": "K" }, { "math_id": 8, "text": "T\\subset K" }, { "math_id": 9, "text": "\\operatorname{SU}(n)" }, { "math_id": 10, "text": "A_{n-1}" }, { "math_id": 11, "text": "\\operatorname{Spin}(2n+1)" }, { "math_id": 12, "text": "B_{n}" }, { "math_id": 13, "text": "\\operatorname{Sp}(n)" }, { "math_id": 14, "text": "C_{n}" }, { "math_id": 15, "text": "\\operatorname{Spin}(2n)" }, { "math_id": 16, "text": "D_{n}" }, { "math_id": 17, "text": "\\operatorname{U}(n)" }, { "math_id": 18, "text": "\\operatorname{SO}(n)" }, { "math_id": 19, "text": "n" }, { "math_id": 20, "text": "G" }, { "math_id": 21, "text": "G_2" }, { "math_id": 22, "text": "F_4" }, { "math_id": 23, "text": "E_8" }, { "math_id": 24, "text": "m" }, { "math_id": 25, "text": "L^2(K,dm)" }, { "math_id": 26, "text": "\\rho" }, { "math_id": 27, "text": "\\rho:T\\rightarrow GL(1;\\mathbb{C})=\\mathbb{C}^* ." }, { "math_id": 28, "text": "S^1\\subset\\mathbb{C}" }, { "math_id": 29, "text": "\\mathfrak{t}" }, { "math_id": 30, "text": "h\\in T" }, { "math_id": 31, "text": "h=e^{H},\\quad H\\in\\mathfrak{t} ." }, { "math_id": 32, "text": "\\rho(e^{H})=e^{i \\lambda(H)}" }, { "math_id": 33, "text": "\\lambda" }, { "math_id": 34, "text": "H\\mapsto e^{H}" }, { "math_id": 35, "text": "\\Gamma" }, { "math_id": 36, "text": "\\Gamma = \\left\\{ H\\in\\mathfrak{t} \\mid e^{2\\pi H}=\\operatorname{Id} \\right\\}," }, { "math_id": 37, "text": "\\operatorname{Id}" }, { "math_id": 38, "text": "2\\pi" }, { "math_id": 39, "text": "\\lambda(H)\\in\\mathbb{Z},\\quad H\\in\\Gamma," }, { "math_id": 40, "text": "\\mathbb{Z}" }, { "math_id": 41, "text": "e^{i\\theta}" }, { "math_id": 42, "text": "H=i\\theta,\\,\\theta\\in\\mathbb{R}," }, { "math_id": 43, "text": "in" }, { "math_id": 44, "text": "\\lambda(i\\theta)= k\\theta" }, { "math_id": 45, "text": "k" }, { "math_id": 46, "text": "\\rho(e^{i\\theta})=e^{ik\\theta},\\quad k \\in \\Z ." }, { "math_id": 47, "text": "\\Sigma" }, { "math_id": 48, "text": "\\mathbb{C}" }, { "math_id": 49, "text": "R\\subset \\mathfrak{t}" }, { "math_id": 50, "text": "\\Delta" }, { "math_id": 51, "text": "\\lambda(\\alpha)\\ge 0" }, { "math_id": 52, "text": "\\alpha\\in\\Delta" }, { "math_id": 53, "text": "\\mathfrak{k}" }, { "math_id": 54, "text": "\\Pi:K\\to\\operatorname{GL}(V)" }, { "math_id": 55, "text": "\\Pi" }, { "math_id": 56, "text": "\\Chi : K \\to \\mathbb{C}" }, { "math_id": 57, "text": "\\Chi(x)=\\operatorname{trace}(\\Pi(x)),\\quad x\\in K" }, { "math_id": 58, "text": "\\Chi(xyx^{-1})=\\Chi(y)" }, { "math_id": 59, "text": "x" }, { "math_id": 60, "text": "y" }, { "math_id": 61, "text": "\\Chi" }, { "math_id": 62, "text": " \\begin{pmatrix}\ne^{i\\theta} & 0\\\\\n0 & e^{-i\\theta}\n\\end{pmatrix} .\n" }, { "math_id": 63, "text": "\\Chi\\left(\\begin{pmatrix}\ne^{i\\theta} & 0\\\\\n0 & e^{-i\\theta}\n\\end{pmatrix}\\right)=\\frac{\\sin((m+1)\\theta)}{\\sin(\\theta)}." }, { "math_id": 64, "text": "\\Chi\\left(\\begin{pmatrix}\ne^{i\\theta} & 0\\\\\n0 & e^{-i\\theta}\n\\end{pmatrix}\\right)=e^{im\\theta}+e^{i(m-2)\\theta}+\\cdots e^{-i(m-2)\\theta}+e^{-im\\theta}." 
}, { "math_id": 65, "text": "m,m-2,\\ldots,-(m-2),-m," }, { "math_id": 66, "text": "m+1" }, { "math_id": 67, "text": "\\Chi(e^H)=\\frac{\\sum_{w\\in W} \\det(w) e^{i\\langle w\\cdot(\\lambda+\\rho),H\\rangle}}{\\sum_{w\\in W} \\det(w) e^{i\\langle w\\cdot\\rho,H\\rangle}}" }, { "math_id": 68, "text": "H" }, { "math_id": 69, "text": "i" }, { "math_id": 70, "text": "L^2" }, { "math_id": 71, "text": "\\Phi_\\lambda" } ]
https://en.wikipedia.org/wiki?curid=750326
7503909
Beta wavelet
Continuous wavelets of compact support can be built, which are related to the beta distribution. The process is derived from probability distributions using the blur derivative. These new wavelets have just one cycle, so they are termed unicycle wavelets. They can be viewed as a "soft variety" of Haar wavelets whose shape is fine-tuned by two parameters formula_0 and formula_1. Closed-form expressions for beta wavelets and scale functions as well as their spectra are derived. Their importance is due to the Central Limit Theorem by Gnedenko and Kolmogorov applied for compactly supported signals. Beta distribution. The beta distribution is a continuous probability distribution defined over the interval formula_2. It is characterised by two parameters, namely formula_0 and formula_1, according to formula_3. The normalising factor is formula_4, where formula_5 is the generalised factorial function of Euler and formula_6 is the Beta function. Gnedenko-Kolmogorov central limit theorem revisited. Let formula_7 be a probability density of the random variable formula_8, formula_9, i.e. formula_10, formula_11 and formula_12. Suppose that all variables are independent. The mean and the variance of a given random variable formula_8 are, respectively, formula_13 formula_14. The mean and variance of formula_15 are therefore formula_16 and formula_17. The density formula_18 of the random variable corresponding to the sum formula_19 is given by the Central Limit Theorem for distributions of compact support (Gnedenko and Kolmogorov). Let formula_20 be distributions such that formula_21. Let formula_22, and formula_23. Without loss of generality assume that formula_24 and formula_25. As formula_26, the density of the random variable formula_15 satisfies formula_27 formula_28 where formula_29 and formula_30 Beta wavelets. Since formula_31 is unimodal, the wavelet generated by formula_32 has only one cycle (a negative half-cycle and a positive half-cycle). The main features of beta wavelets of parameters formula_0 and formula_1 are: formula_33 formula_34 The parameter formula_35 is referred to as the “cyclic balance”, and is defined as the ratio between the lengths of the causal and non-causal pieces of the wavelet. The instant of transition formula_36 from the first to the second half-cycle is given by formula_37 The (unimodal) scale function associated with the wavelets is given by formula_38 formula_39. A closed-form expression for first-order beta wavelets can easily be derived. Within their support, formula_40 Beta wavelet spectrum. The beta wavelet spectrum can be derived in terms of the Kummer hypergeometric function. Let formula_41 denote the Fourier transform pair associated with the wavelet. This spectrum is also denoted by formula_42 for short. It can be proved by applying properties of the Fourier transform that formula_43 where formula_44. Only symmetrical formula_45 cases have zeroes in the spectrum. Curiously, asymmetric formula_46 beta wavelets are parameter-symmetrical in the sense that they satisfy formula_47 Higher derivatives may also generate further beta wavelets. Higher-order beta wavelets are defined by formula_48 This is henceforth referred to as an formula_49-order beta wavelet. They exist for order formula_50. After some algebraic manipulation, their closed-form expression can be found: formula_51 Application. Wavelet theory is applicable to several subjects. 
All wavelet transforms may be considered forms of time-frequency representation for continuous-time (analog) signals and so are related to harmonic analysis. Almost all practically useful discrete wavelet transforms use discrete-time filter banks. Similarly, the beta wavelet and its derivatives are used in several real-time engineering applications such as image compression, bio-medical signal compression, and image recognition.[9]
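The closed-form expressions above lend themselves to a quick numerical check. In the Python sketch below (the parameter values and the grid are arbitrary), the first-order beta wavelet is verified to be the negative derivative of the scale function, the scale function integrates to one, and the wavelet integrates to zero over its support.

import math

alpha, beta = 3.0, 4.0
T = (alpha + beta) * math.sqrt((alpha + beta + 1) / (alpha * beta))   # support length
a = -math.sqrt(alpha / beta) * math.sqrt(alpha + beta + 1)            # left end of support
b = math.sqrt(beta / alpha) * math.sqrt(alpha + beta + 1)             # right end of support
B = math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)   # Beta function B(alpha, beta)
norm = 1.0 / (B * T ** (alpha + beta - 1))

def phi(t):   # scale function: a beta density shifted and stretched to [a, b]
    return norm * (t - a) ** (alpha - 1) * (b - t) ** (beta - 1)

def psi(t):   # first-order beta wavelet (closed form quoted above)
    return -norm * ((alpha - 1) / (t - a) - (beta - 1) / (b - t)) \
           * (t - a) ** (alpha - 1) * (b - t) ** (beta - 1)

h, steps = 1e-6, 20000
ts = [a + (b - a) * (j + 0.5) / steps for j in range(steps)]
print(max(abs(psi(t) + (phi(t + h) - phi(t - h)) / (2 * h)) for t in ts))  # ~0: psi = -phi'
print(sum(phi(t) for t in ts) * (b - a) / steps)   # ~1: phi is a probability density
print(sum(psi(t) for t in ts) * (b - a) / steps)   # ~0: one negative and one positive half-cycle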
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "\\beta" }, { "math_id": 2, "text": "0\\leq t\\leq 1" }, { "math_id": 3, "text": "P(t)=\\frac{1}{B(\\alpha ,\\beta )}t^{\\alpha -1}\\cdot (1-t)^{\\beta -1},\\quad 1\\leq \\alpha ,\\beta \\leq +\\infty " }, { "math_id": 4, "text": "B(\\alpha ,\\beta )=\\frac{\\Gamma (\\alpha )\\cdot \\Gamma (\\beta )}{\\Gamma (\\alpha +\\beta )}" }, { "math_id": 5, "text": " \\Gamma (\\cdot )" }, { "math_id": 6, "text": "B(\\cdot ,\\cdot )" }, { "math_id": 7, "text": "p_{i}(t)" }, { "math_id": 8, "text": "t_{i}" }, { "math_id": 9, "text": "i=1,2,3..N" }, { "math_id": 10, "text": "p_{i}(t)\\ge 0" }, { "math_id": 11, "text": "(\\forall t)" }, { "math_id": 12, "text": "\\int_{-\\infty }^{+\\infty }p_{i}(t)dt=1" }, { "math_id": 13, "text": "m_{i}=\\int_{-\\infty }^{+\\infty }\\tau \\cdot p_{i}(\\tau )d\\tau ," }, { "math_id": 14, "text": "\\sigma _{i}^{2}=\\int_{-\\infty }^{+\\infty }(\\tau -m_{i})^{2}\\cdot p_{i}(\\tau )d\\tau " }, { "math_id": 15, "text": "t" }, { "math_id": 16, "text": "m=\\sum_{i=1}^{N}m_{i}" }, { "math_id": 17, "text": "\\sigma^2 =\\sum_{i=1}^{N}\\sigma _{i}^{2}" }, { "math_id": 18, "text": "p(t)" }, { "math_id": 19, "text": "t=\\sum_{i=1}^{N}t_{i}" }, { "math_id": 20, "text": "\\{p_{i}(t)\\}" }, { "math_id": 21, "text": "Supp\\{(p_{i}(t))\\}=(a_{i},b_{i})(\\forall i)" }, { "math_id": 22, "text": "a=\\sum_{i=1}^{N}a_{i}<+\\infty " }, { "math_id": 23, "text": "b=\\sum_{i=1}^{N}b_{i}<+\\infty" }, { "math_id": 24, "text": "a=0" }, { "math_id": 25, "text": "b=1" }, { "math_id": 26, "text": "N\\rightarrow \\infty " }, { "math_id": 27, "text": "p(t)\\approx " }, { "math_id": 28, "text": "\\begin{cases} {k \\cdot t^{\\alpha }(1-t)^{\\beta}}, \\\\otherwise \\end{cases}" }, { "math_id": 29, "text": "\\alpha =\\frac{m(m-m^{2}-\\sigma ^{2})}{\\sigma ^{2}}," }, { "math_id": 30, "text": "\\beta =\\frac{(1-m)(\\alpha +1)}{m}." }, { "math_id": 31, "text": "P(\\cdot |\\alpha ,\\beta )" }, { "math_id": 32, "text": "\\psi _{beta}(t|\\alpha ,\\beta )=(-1)\\frac{dP(t|\\alpha ,\\beta )}{dt}" }, { "math_id": 33, "text": "Supp(\\psi )=[ -\\sqrt{\\frac{\\alpha}{\\beta}}\\sqrt{\\alpha + \\beta +1},\\sqrt{ \\frac{\\beta }{\\alpha }} \\sqrt{\\alpha +\\beta +1}]=[a,b]." }, { "math_id": 34, "text": "lengthSupp(\\psi )=T(\\alpha ,\\beta )=(\\alpha +\\beta )\\sqrt{\\frac{\\alpha +\\beta +1}{\\alpha \\beta }}." }, { "math_id": 35, "text": "R=b/|a| =\\beta / \\alpha" }, { "math_id": 36, "text": "t_{zerocross}" }, { "math_id": 37, "text": "t_{zerocross}=\\frac{(\\alpha -\\beta )}{(\\alpha +\\beta -2)}\\sqrt{\\frac{\\alpha +\\beta +1}{\\alpha \\beta }}." 
}, { "math_id": 38, "text": "\\phi _{beta}(t|\\alpha ,\\beta )=\\frac{1}{B(\\alpha ,\\beta )T^{\\alpha +\\beta -1}}\\cdot (t-a)^{\\alpha -1}\\cdot (b-t)^{\\beta -1}," }, { "math_id": 39, "text": "a\\leq t\\leq b " }, { "math_id": 40, "text": "\\psi_{beta}(t|\\alpha ,\\beta ) =\\frac{-1}{B(\\alpha ,\\beta )T^{\\alpha +\\beta -1}} \\cdot [\\frac{\\alpha -1}{t-a}-\\frac{\\beta -1}{b-t}] \\cdot(t-a)^{\\alpha -1} \\cdot(b-t)^{\\beta -1}" }, { "math_id": 41, "text": "\\psi _{beta}(t|\\alpha ,\\beta )\\leftrightarrow \\Psi _{BETA}(\\omega |\\alpha ,\\beta )" }, { "math_id": 42, "text": "\\Psi _{BETA}(\\omega)" }, { "math_id": 43, "text": "\\Psi _{BETA}(\\omega ) =-j\\omega \\cdot M(\\alpha ,\\alpha +\\beta ,-j\\omega (\\alpha +\\beta )\\sqrt{\\frac{\\alpha +\\beta +1}{\\alpha \\beta}})\\cdot exp\\{(j\\omega \\sqrt{\\frac{\\alpha (\\alpha +\\beta +1)}{\\beta }})\\}" }, { "math_id": 44, "text": "M(\\alpha ,\\alpha +\\beta ,j\\nu )=\\frac{\\Gamma (\\alpha +\\beta )}{\\Gamma (\\alpha )\\cdot \\Gamma (\\beta )}\\cdot \\int_{0}^{1}e^{j\\nu t}t^{\\alpha -1}(1-t)^{\\beta -1}dt" }, { "math_id": 45, "text": "(\\alpha =\\beta )" }, { "math_id": 46, "text": "(\\alpha \\neq \\beta )" }, { "math_id": 47, "text": "|\\Psi _{BETA}(\\omega |\\alpha ,\\beta )|=|\\Psi _{BETA}(\\omega |\\beta ,\\alpha )|." }, { "math_id": 48, "text": "\\psi _{beta}(t|\\alpha ,\\beta )=(-1)^{N}\\frac{d^{N}P(t|\\alpha ,\\beta )}{dt^{N}}." }, { "math_id": 49, "text": "N" }, { "math_id": 50, "text": "N\\leq Min(\\alpha ,\\beta )-1" }, { "math_id": 51, "text": "\\Psi _{beta}(t|\\alpha ,\\beta ) =\\frac{(-1)^{N}}{B(\\alpha ,\\beta ) \\cdot T^{\\alpha +\\beta -1}} \\sum_{n=0}^{N}sgn(2n-N)\\cdot \\frac{\\Gamma (\\alpha )}{\\Gamma (\\alpha -(N-n))}(t-a)^{\\alpha -1-(N-n)} \\cdot \\frac{\\Gamma (\\beta )}{\\Gamma (\\beta -n)}(b-t)^{\\beta -1-n}." } ]
https://en.wikipedia.org/wiki?curid=7503909
7503941
Jung's theorem
Theorem relating the diameter of a point set to the minimum radius of an enclosing ball In geometry, Jung's theorem is an inequality between the diameter of a set of points in any Euclidean space and the radius of the minimum enclosing ball of that set. It is named after Heinrich Jung, who first studied this inequality in 1901. Algorithms also exist to solve the smallest-circle problem explicitly. Statement. Consider a compact set formula_0 and let formula_1 be the diameter of "K", that is, the largest Euclidean distance between any two of its points. Jung's theorem states that there exists a closed ball with radius formula_2 that contains "K". The boundary case of equality is attained by the regular "n"-simplex. Jung's theorem in the plane. The most common case of Jung's theorem is in the plane, that is, when "n" = 2. In this case the theorem states that there exists a circle enclosing all points whose radius satisfies formula_3 and this bound is as tight as possible since when "K" is an equilateral triangle (or its three vertices) one has formula_4 General metric spaces. For any bounded set formula_5 in any metric space, formula_6. The first inequality is implied by the triangle inequality for the center of the ball and the two diametral points, and the second inequality follows since a ball of radius formula_7 centered at any point of formula_5 will contain all of formula_5. Both these inequalities are tight: Versions of Jung's theorem for various non-Euclidean geometries are also known (see e.g. Dekster 1995, 1997).
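For the planar case, the tightness claim can be checked directly on the equilateral triangle, and the general bound is easy to tabulate. The following Python sketch is illustrative only; the side length is an arbitrary choice.

import math

d = 1.0                                                            # side length = diameter of the vertex set
pts = [(0.0, 0.0), (d, 0.0), (d / 2, d * math.sqrt(3) / 2)]        # equilateral triangle
center = (sum(x for x, _ in pts) / 3, sum(y for _, y in pts) / 3)  # centroid = circumcenter
r = max(math.dist(center, p) for p in pts)
print(r, d / math.sqrt(3))                                         # both ~0.57735: the planar bound is attained

for n in range(1, 5):
    print(n, d * math.sqrt(n / (2 * (n + 1))))                     # Jung bound: 0.5, 0.577..., 0.612..., 0.632...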
[ { "math_id": 0, "text": "K \\subset \\mathbb{R}^n" }, { "math_id": 1, "text": "d = \\max_{p,q\\,\\in\\, K} \\| p - q \\|_2" }, { "math_id": 2, "text": "r \\leq d \\sqrt{\\frac{n}{2(n+1)}}" }, { "math_id": 3, "text": "r \\leq \\frac{d}{\\sqrt{3}}," }, { "math_id": 4, "text": "r = \\frac{d}{\\sqrt{3}}." }, { "math_id": 5, "text": "S" }, { "math_id": 6, "text": "d/2\\le r\\le d" }, { "math_id": 7, "text": "d" }, { "math_id": 8, "text": "r=d" }, { "math_id": 9, "text": "r=d/2" }, { "math_id": 10, "text": "d/2" } ]
https://en.wikipedia.org/wiki?curid=7503941
75060288
Analytic combinatorics
Analytic combinatorics uses techniques from complex analysis to solve problems in enumerative combinatorics, specifically to find asymptotic estimates for the coefficients of generating functions. History. One of the earliest uses of analytic techniques for an enumeration problem came from Srinivasa Ramanujan and G. H. Hardy's work on integer partitions, starting in 1918, first using a Tauberian theorem and later the circle method. Walter Hayman's 1956 paper "A Generalisation of Stirling's Formula" is considered one of the earliest examples of the saddle-point method. In 1990, Philippe Flajolet and Andrew Odlyzko developed the theory of singularity analysis. In 2009, Philippe Flajolet and Robert Sedgewick wrote the book Analytic Combinatorics. Some of the earliest work on multivariate generating functions started in the 1970s using probabilistic methods. Development of further multivariate techniques started in the early 2000s. Techniques. Meromorphic functions. If formula_0 is a meromorphic function and formula_1 is its pole closest to the origin with order formula_2, then formula_3 as formula_4 Tauberian theorem. If formula_5 as formula_6 where formula_7 and formula_8 is a slowly varying function, then formula_9 as formula_4 See also the Hardy–Littlewood Tauberian theorem. Circle method. The circle method is used for generating functions with logarithms or roots, which have branch singularities. Darboux's method. If we have a function formula_10 where formula_11 and formula_12 has a radius of convergence greater than formula_13 and a Taylor expansion near 1 of formula_14, then formula_15 See Szegő (1975) for a similar theorem dealing with multiple singularities. Singularity analysis. If formula_12 has a singularity at formula_16 and formula_17 as formula_18 where formula_19 then formula_20 as formula_4 Saddle-point method. The saddle-point method is used for generating functions, including entire functions, that have no singularities. Intuitively, the biggest contribution to the contour integral is around the saddle point, and estimating near the saddle point gives us an estimate for the whole contour. If formula_21 is an admissible function, then formula_22 as formula_4 where formula_23. References. "As of 4th November 2023, this article is derived in whole or in part from "Wikibooks". The copyright holder has licensed the content in a manner that permits reuse under and . All relevant terms must be followed."
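The estimate for meromorphic functions can be illustrated on a generating function whose coefficients are known exactly. In the Python sketch below, the choice h(z) = 1/(1 − z − z²), whose coefficients are Fibonacci numbers, is just a convenient test case: f(z) = 1, g(z) = 1 − z − z², and the pole closest to the origin is a = (√5 − 1)/2 with order m = 1.

import math

def coeff(n):
    # [z^n] 1 / (1 - z - z^2), computed exactly via the Fibonacci recurrence
    c0, c1 = 1, 1
    for _ in range(n):
        c0, c1 = c1, c0 + c1
    return c0

a = (math.sqrt(5) - 1) / 2        # dominant pole, order m = 1
g_prime = -1 - 2 * a              # g'(a) for g(z) = 1 - z - z^2

def estimate(n):
    # (-1)^m * m * f(a) / (a^m * g^(m)(a)) * (1/a)^n * n^(m-1), with m = 1 and f = 1
    return -1.0 / (a * g_prime) * (1 / a) ** n

for n in (5, 10, 20):
    print(n, coeff(n), estimate(n), coeff(n) / estimate(n))   # the ratio tends to 1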
[ { "math_id": 0, "text": "h(z) = \\frac{f(z)}{g(z)}" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "[z^n] h(z) \\sim \\frac{(-1)^m m f(a)}{a^m g^{(m)}(a)} \\left( \\frac{1}{a} \\right)^n n^{m-1} \\quad" }, { "math_id": 4, "text": "n \\to \\infty" }, { "math_id": 5, "text": "f(z) \\sim \\frac{1}{(1 - z)^\\sigma} L(\\frac{1}{1 - z}) \\quad" }, { "math_id": 6, "text": "z \\to 1" }, { "math_id": 7, "text": "\\sigma > 0" }, { "math_id": 8, "text": "L" }, { "math_id": 9, "text": "[z^n]f(z) \\sim \\frac{n^{\\sigma-1}}{\\Gamma(\\sigma)} L(n) \\quad" }, { "math_id": 10, "text": "(1 - z)^\\beta f(z)" }, { "math_id": 11, "text": "\\beta \\notin \\{0, 1, 2, \\ldots\\}" }, { "math_id": 12, "text": "f(z)" }, { "math_id": 13, "text": "1" }, { "math_id": 14, "text": "\\sum_{j\\geq0} f_j (1 - z)^j" }, { "math_id": 15, "text": "[z^n](1 - z)^\\beta f(z) = \\sum_{j=0}^m f_j \\frac{n^{-\\beta-j-1}}{\\Gamma(-\\beta-j)} + O(n^{-m-\\beta-2})" }, { "math_id": 16, "text": "\\zeta" }, { "math_id": 17, "text": "f(z) \\sim \\left(1 - \\frac{z}{\\zeta}\\right)^\\alpha \\left(\\frac{1}{\\frac{z}{\\zeta}}\\log\\frac{1}{1 - \\frac{z}{\\zeta}}\\right)^\\gamma \\left(\\frac{1}{\\frac{z}{\\zeta}}\\log\\left(\\frac{1}{\\frac{z}{\\zeta}}\\log\\frac{1}{1 - \\frac{z}{\\zeta}}\\right)\\right)^\\delta \\quad" }, { "math_id": 18, "text": "z \\to \\zeta" }, { "math_id": 19, "text": "\\alpha \\notin \\{0, 1, 2, \\cdots\\}, \\gamma, \\delta \\notin \\{1, 2, \\cdots\\}" }, { "math_id": 20, "text": "[z^n]f(z) \\sim \\zeta^{-n} \\frac{n^{-\\alpha-1}}{\\Gamma(-\\alpha)} (\\log n)^\\gamma (\\log\\log n)^\\delta \\quad" }, { "math_id": 21, "text": "F(z)" }, { "math_id": 22, "text": "[z^n] F(z) \\sim \\frac{F(\\zeta)}{\\zeta^{n+1} \\sqrt{2 \\pi f^{''}(\\zeta)}} \\quad" }, { "math_id": 23, "text": "F^'(\\zeta) = 0" } ]
https://en.wikipedia.org/wiki?curid=75060288
75060458
Budget-proposal aggregation
Decision rules for participatory budgeting Budget-proposal aggregation (BPA) is a problem in social choice theory. A group has to decide on how to distribute its budget among several issues. Each group-member has a different idea about what the ideal budget-distribution should be. The problem is how to aggregate the different opinions into a single budget-distribution program. BPA is a special case of participatory budgeting, with the following characteristics: It is also a special case of fractional social choice (portioning), in which agents express their preferences by stating their ideal distribution, rather than by a ranking of the issues. Another sense in which aggregation in budgeting has been studied is as follows. Suppose a manager asks his worker to submit a budget-proposal for a project. The worker can over-report the project cost, in order to get the slack to himself. Knowing that, the manager might reject the worker's proposal when it is too high, even though the high cost might be real. To mitigate this effect, it is possible to ask the worker for aggregate budget-proposals (for several projects at once). An experimental study shows that this approach can indeed improve the efficiency of the process. The same problem has been studied in the context of aggregating probability distributions. Suppose each citizen in society has a certain probability-distribution over candidates, representing the probability that the citizen prefers each candidate. The goal is to aggregate all distributions to a single probability-distribution, representing the probability that society should choose each candidate. Rules for the one-dimensional case. The "one-dimensional" case is the special case in which there are only two issues, e.g. defense and education. In this case, distributions can be represented by a single parameter: the allocation to issue #1 (the allocation to issue #2 is simply the total budget minus the allocation to issue #1). It is natural to assume that the agents have single-peaked preferences, that is: between two options that are both larger, or both smaller, than their ideal allocation, they prefer the option that is closer to their ideal allocation. This setting is similar to a one-dimensional facility location problem: a certain facility (e.g. a public school) has to be built on a line; each voter has an ideal spot on the line at which the facility should be built (nearest to his own house); and the problem is to aggregate the voters' preferences and decide where on the line the facility should be built. The average rule. The average voting rule is an aggregation rule that simply returns the arithmetic mean of all individual distributions. It is the unique rule that satisfies the following three axioms: But the average rule is not incentive-compatible, and is very easy to manipulate. For example, suppose there are two issues, the ideal distribution of Alice is (80%, 20%), and the average of the ideal distributions of the other voters is (60%, 40%). Then Alice would be better off by reporting that her ideal distribution is (100%, 0%), since this will pull the average distribution closer to her ideal distribution. The median rule. The median voting rule for the one-dimensional case is an aggregation rule that returns the median of the ideal budgets of all citizens. It has several advantages: But the median rule may be considered unfair, as it ignores the minority opinion. For example, suppose the two issues are "investment in the north" vs. "investment in the south". 
49% of the population live in the north and therefore their ideal distribution is (100%,0%), while 51% of the population live in the south and therefore their ideal distribution is (0%,100%). The median rule selects the distribution (0%,100%), which is unfair for the citizens living in the north. This fairness notion is captured by "proportionality (PROP)," which means that, if all agents are single-minded (want either 0% or 100%), then the allocation equals the fraction of agents who want 100%. The average rule is PROP but not strategyproof; the median rule is strategyproof but not PROP. Median with phantoms. The median rule can be generalized by adding fixed votes that do not depend on the citizen votes. These fixed votes are called "phantoms". For every set of phantoms, the rule that chooses the median of the set of real votes + phantoms is strategyproof; see median voting rule for examples and characterization. The Uniform Phantom median rule (UPM) is a special case of the median rule, with "n"-1 phantoms at 1/"n", ..., ("n"-1)/"n". This rule is strategyproof (like all phantom-median rules), but in addition, it is also proportional. It has several characterizations: Proportional fairness. Aziz, Lam, Lee and Walsh study the special case in which the preferences are single-peaked and "symmetric", that is: each agent compares alternatives only by their distance from his ideal point, regardless of the direction. In particular, they assume that each agent's utility is 1 minus the distance between his ideal point and the chosen allocation. They consider several fairness axioms: The following is known about existing rules: They prove the following characterizations: Border and Jordan prove that the only rule satisfying continuity, anonymity, proportionality and strategyproofness is UPM. Average vs. median. Rosar compares the average rule to the median rule when the voters have diverse private information and interdependent preferences. For uniformly distributed information, the average report dominates the median report from a utilitarian perspective, when the set of admissible reports is designed optimally. For general distributions, the results still hold when there are many agents. Rules for the multi-dimensional case. When there are more than two issues, the space of possible budget-allocations is multi-dimensional. Extending the median rule to the multi-dimensional case is challenging, since the sum of the medians might be different than the median of the sum. In other words, if we pick the median on each issue separately, we might not get a feasible distribution. In the multi-dimensional case, aggregation rules depend on assumptions on the utility functions of the voters. L1 utilities. A common assumption is that the utility of voter "i", with ideal budget (peak) "pi", from a given budget allocation x, is minus the L1-distance between "pi" and "x". Under this assumption, several aggregation rules were studied. Utilitarian rules. Lindner, Nehring and Puppe consider BPA with discrete amounts (e.g. whole dollars). They define the midpoint rule: it chooses a budget-allocation that minimizes the sum of L1-distances to the voters' peaks. In other words, it maximizes the sum of utilities – it is a utilitarian rule. They prove that the set of midpoints is convex, and that it is locally determined (one can check if a point is a midpoint only by looking at its neighbors in the simplex of allocations).
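The midpoint rule can be illustrated with a small brute-force computation. The following Python sketch is illustrative only (the function names and the toy instance are not taken from the cited work): it enumerates every integer allocation of a small budget and returns the allocations that minimize the sum of L1-distances to the voters' peaks, i.e. the midpoints.
```python
from itertools import product

def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def midpoints(peaks, budget):
    """Brute-force midpoint rule for discrete BPA: all integer allocations
    summing to `budget` that minimize the total L1-distance to the peaks."""
    m = len(peaks[0])
    simplex = [x for x in product(range(budget + 1), repeat=m) if sum(x) == budget]
    cost = {x: sum(l1(x, p) for p in peaks) for x in simplex}
    best = min(cost.values())
    return [x for x, c in cost.items() if c == best]

# Toy example: three issues, a budget of 10 units, three voters.
peaks = [(10, 0, 0), (0, 10, 0), (4, 3, 3)]
print(midpoints(peaks, budget=10))
```
Brute-force enumeration is exponential in the number of issues and is used here only to make the definition concrete; the set returned is the set of midpoints whose convexity and local determination are discussed above.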
Moreover, they prove that the possibility of strategic manipulation is limited: a manipulating agent cannot make the closest midpoint closer to his peak, nor make the farthest midpoint closer to his peak. As a consequence, the midpoint rule is strategyproof if all agents have symmetric single-peaked preferences. Goel, Krishnaswamy, Sakshuwong and Aitamurto consider BPA in the context of participatory budgeting with divisible projects: they propose to replace the common voting format of approving "k" projects with "knapsack voting". With discrete projects, this means that each voter has to select a set of projects whose total cost is at most the available budget; with divisible projects, this means that each voter reports his ideal budget-allocation. Now, each project is partitioned into individual "dollars"; for dollar j of project i, the number of votes is the total number of agents whose ideal budget gives at least j to project i. Given the votes, the knapsack-voting rule selects the dollars with the highest amount of support (as in utilitarian approval voting). They prove that, with L1-utilities, knapsack voting is strategyproof and utilitarian (and hence efficient). Neither utilitarian rule is "fair", in the sense that both may ignore minorities. For example, if 60% of the voters vote for the distribution (100%,0%) whereas 40% vote for (0%,100%), then the utilitarian rules would choose (100%,0%) and give nothing to the issue important for the minority. Moving phantoms rules. Freeman, Pennock, Peters and Vaughan suggest a class of rules called "moving-phantoms rules", where there are "n"+1 phantoms that increase continuously until the outcome equals the total budget. They prove that all these rules are strategyproof. The proof proceeds in two steps. (1) If an agent changes his reported peak, but all the phantoms are fixed in place, then we have a median voting rule in each issue, so the outcome in each issue either stays the same or goes farther from the agent's real peak. (2) As the phantoms move, the outcome in some issues may move nearer to the agent's real peak, but the agent's gain from this is at most the agent's loss in step 1. Note that the proof of (2) crucially relies on the assumption of L1 utilities, and does not work with other distance metrics. For example, suppose there are 3 issues and two agents with peaks at (20,60,20) and (0,50,50). One moving-phantoms rule (the "independent markets" rule below) returns (20,50,30), so agent 1's L1 disutility is 0+10+10=20 and L2 disutility is sqrt(0+100+100)=sqrt(200). If agent 1 changes his peak to (30,50,20), then the rule returns (25,50,25). Agent 1's L1 disutility is 5+10+5=20 and L2 disutility is sqrt(25+100+25)=sqrt(150). Agent 1 does not gain in L1 disutility, but does gain in L2 disutility. Independent markets rule. A particular moving-phantoms rule is the "Independent Markets rule". In addition to strategyproofness, it satisfies a property that they call "proportionality": if all agents are single-minded (each agent's ideal budget allocates 100% of the budget to a single issue), then the rule allocates the budget among the issues proportionally to the number of their supporters. However, this rule is not efficient (in fact, the only efficient moving-phantom rule is the utilitarian rule). A demo of the Independent Markets rule, and several other moving-phantoms rules, is available online. Piecewise-uniform rule.
Caragiannis, Christodoulou and Protopapas extended the definition of proportionality from single-minded preference profiles to "any" preference profile. They define the proportional outcome as the arithmetic mean of peaks. The only mechanism which is always proportional is the average rule, which is not strategyproof. Hence, they define the L1 distance between the outcome of a rule and the average as the degree of dis-proportionality. The disproportionality of any budget-allocation is between 0 and 2. They evaluate BPA mechanisms by their worst-case disproportionality. In BPA with two issues, they show that UPM has worst-case disproportionality 1/2. With 3 issues, the independent-markets mechanism may have disproportionality 0.6862; they propose another moving-phantoms rule, called "Piecewise-Uniform rule", which is still proportional, and has disproportionality ~2/3. They prove that the worst-case disproportionality of a moving-phantoms mechanism on "m" issues is at least 1-1/"m", and the worst-case disproportionality of any truthful mechanism is at least 1/2; this implies that their mechanisms attain the optimal disproportionality. Ladder rule. Freeman and Schmidt-Kraepelin study a different measure of dis-proportionality: the L-infinity distance between the outcome and the average (i.e., the maximum difference per project, rather than the sum of differences). They define a new moving-phantoms rule called the "Ladder rule", which underfunds a project by at most 1/2-1/(2"m"), and overfunds a project by at most 1/4; both bounds are tight for moving-phantoms rules. Other rules. It is an open question whether every anonymous, neutral, continuous and strategyproof rule is a moving-phantoms rule. Elkind, Suksompong and Teh define various axioms for BPA with L1-disutilities, analyze the implications between axioms, and determine which axioms are satisfied by common aggregation rules. They study two classes of rules: those based on coordinate-wise aggregation (average, maximum, minimum, median, product), and those based on global optimization (utilitarian, egalitarian). Convex preferences. Nehring and Puppe aim to derive decision rules with as few assumptions as possible on agents' preferences; they call this the "frugal model." They assume that the social planner knows the agents' peaks, but does not know their exact preferences; this leads to uncertainty regarding how many people prefer an alternative "x" to an alternative "y". Given two alternatives x and y, x is a "necessary majority winner" if it wins over y according to all preferences in the domain that are consistent with the agents' peaks; x is "majority admissible" if no other alternative is a necessary-majority-winner over x. Given two alternatives x and y, x is an "ex-ante majority winner" if its smallest possible number of supporters is at least as high as the smallest possible number of supporters of y, which holds iff its largest possible number of supporters is at least as high as the largest possible number of supporters of y. x is an "ex-ante Condorcet winner (EAC)" if it is an ex-ante majority winner over every other alternative. They assume that agents' preferences are convex, which in one dimension is equivalent to single-peakedness. But convexity alone is not enough to attain meaningful results in two or more dimensions (if the peaks are in general position, then all peaks are EAC winners). So they consider two subsets of convex preferences: homogeneous quadratic preferences, and separable convex preferences.
They study BPA allowing lower and upper bounds on the spending on each issue. Fain, Goel and Munagala assume that agents have additive concave utility functions, which represent convex preferences over bundles. In particular, for each agent "i" and issue "j" there is a coefficient "ai,j", and for each issue "j" there is an increasing and strictly concave function "gj"; the total utility of agent "i" from budget allocation x is: formula_0. They study the Lindahl equilibrium of this problem, prove that it is in the core (which is a strong fairness property), and show that it can be computed in polynomial time. Wagner and Meir study a generalization of BPA in which each agent may propose, in addition to a budget-allocation, an amount "t" of tax (positive or negative) that will be taken from all agents and added to the budget. For each agent "i" there is a coefficient "ai,f" that represents the utility of monetary gains and losses, and there is a function f which is strictly convex for negative values and strictly concave for positive values, and formula_1, where "d" is the monetary gain (which can be negative). For this utility model, they present a variant of the Vickrey–Clarke–Groves mechanism that is strategyproof, but requires side-payments (in addition to the tax). Empirical evidence. Puppe and Rollmann present a lab experiment comparing the average voting rule and a normalized median voting rule in a multidimensional budget-aggregation setting. Under the average rule, people act in equilibrium when the equilibrium strategies are easily identifiable. Under the normalized median rule, many people play best responses, but these best responses are usually not exactly truthful. Still, the median rule attains much higher social welfare than the average rule. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
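The numerical example in the moving-phantoms discussion above (peaks (20,60,20) and (0,50,50)) can be checked with a few lines of Python. The outcomes (20,50,30) and (25,50,25) are taken directly from that example rather than recomputed from the independent-markets rule; the sketch only verifies the stated disutilities.
```python
import math

def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def l2(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

true_peak = (20, 60, 20)
outcome_truthful = (20, 50, 30)    # outcome when agent 1 reports truthfully
outcome_misreport = (25, 50, 25)   # outcome when agent 1 reports (30, 50, 20)

print(l1(true_peak, outcome_truthful), l2(true_peak, outcome_truthful))    # 20, ~14.14 (= sqrt(200))
print(l1(true_peak, outcome_misreport), l2(true_peak, outcome_misreport))  # 20, ~12.25 (= sqrt(150))
```
As the example shows, the misreport leaves agent 1's L1 disutility unchanged at 20 but lowers the L2 disutility, which is why the strategyproofness proof for moving-phantoms rules is tied to L1 utilities.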
[ { "math_id": 0, "text": "u_i(x) = \\sum_{j=1}^m a_{i,j}\\cdot g_j(x_j)" }, { "math_id": 1, "text": "u_i(x,d) = \\sum_{j=1}^m a_{i,j}\\cdot g_j(x_j) + a_{i,f}\\cdot f(d)" } ]
https://en.wikipedia.org/wiki?curid=75060458
75072610
Einstein–Weyl geometry
An Einstein–Weyl geometry is a smooth conformal manifold, together with a compatible Weyl connection that satisfies an appropriate version of the Einstein vacuum equations, first considered by and named after Albert Einstein and Hermann Weyl. Specifically, if formula_0 is a manifold with a conformal metric formula_1, then a Weyl connection is by definition a torsion-free affine connection formula_2 such that formula_3 where formula_4 is a one-form. The curvature tensor is defined in the usual manner by formula_5 and the Ricci curvature is formula_6 The Ricci curvature for a Weyl connection may fail to be symmetric (its skew part is essentially the exterior derivative of formula_4.) An Einstein–Weyl geometry is then one for which the symmetric part of the Ricci curvature is a multiple of the metric, by an arbitrary smooth function: formula_7 The global analysis of Einstein–Weyl geometries is generally more subtle than that of conformal geometry. For example, the Einstein cylinder is a global static conformal structure, but only one period of the cylinder (with the conformal structure of the de Sitter metric) is Einstein–Weyl. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
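A standard special case may help orient the reader (this remark is not drawn from the cited sources): when the one-form formula_4 is exact, the Weyl connection is just the Levi-Civita connection of a preferred metric in the conformal class, and the Einstein–Weyl condition reduces to the usual Einstein condition.
```latex
\text{If } \alpha = d\varphi, \text{ set } h = e^{-\varphi} g. \text{ Then }
\nabla g = \nabla\!\left(e^{\varphi} h\right) = d\varphi \otimes e^{\varphi} h = d\varphi \otimes g ,
```
so the Levi-Civita connection of h is a Weyl connection for the conformal class with one-form formula_4, and its Ricci curvature is symmetric. The Einstein–Weyl equation then says that the Ricci curvature of h is pointwise proportional to h, so that in dimension at least three h is an Einstein metric.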
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "[g]" }, { "math_id": 2, "text": "\\nabla" }, { "math_id": 3, "text": "\\nabla g = \\alpha\\otimes g" }, { "math_id": 4, "text": "\\alpha" }, { "math_id": 5, "text": "R(X,Y)Z = (\\nabla_X\\nabla_Y - \\nabla_Y\\nabla_X - \\nabla_{[X,Y]})Z," }, { "math_id": 6, "text": "Rc(Y,Z) = \\operatorname{tr}(X\\mapsto R(X,Y)Z)." }, { "math_id": 7, "text": "Rc(X,Y) + Rc(Y,X) = f\\,g(X,Y)." } ]
https://en.wikipedia.org/wiki?curid=75072610
75073083
Psychological barriers to effective altruism
In the philosophy of effective altruism, an altruistic act such as charitable giving is considered more effective, or cost-effective, if it uses a set of resources to do more good per unit of resource than other options, with the goal of trying to do the most good. Following this definition of effectiveness, researchers in psychology and related fields have identified psychological barriers to effective altruism that can cause people to choose less effective options when they engage in altruistic activities such as charitable giving. These barriers can include evolutionary influences as well as motivational and epistemic obstacles. Overview. In general, humans are motivated to do good things in the world, whether that is through donations to charity, volunteering time for a cause, or just lending a hand to someone who needs help. In 2022, approximately 4.2 billion people donated money, volunteered time, or helped a stranger. Donating money to charity is especially substantial. For instance, 2% of the GDP of the United States goes to charitable organizations—a total of more than $450 billion in annual donations. Despite the human tendency and motivation to give and engage in altruistic behavior, research has shed light on an unequal motivation to give effectively. Humans are motivated to give, but often not motivated to give most effectively. In the title of an article published in "Nature Human Behaviour" in 2020, Bethany Burum, Martin Nowak, and Moshe Hoffman termed this phenomenon ineffective altruism, that is, relatively less sensitivity to cost-effectiveness in altruistic behaviour. In the domain of business decisions, investors look for how much return they will get for each dollar they invest. However, when it comes to the domain of altruistic decision-making, this line of thinking is far less common. Most donors seem to prioritize giving to charitable organizations that spend the least possible amount on running costs in the hopes of having more of their donation reach the destination. Evolutionary explanations. While plenty of studies in the behavioral sciences have demonstrated the cognitive and emotional limitations in charitable giving, some argue that the reasons behind ineffective giving run deeper. A study by Martin Nowak and fellow academics at Harvard University and the Massachusetts Institute of Technology suggested that the human tendency to ineffective altruism can be explained through evolutionary motives and evolutionary game theory. They argue that society rewards the act of giving but generally provides no motivation or incentive to give effectively. Past research suggests that altruistic motives are distorted by, among other things, parochialism, status seeking and conformity. Parochialism. People are sensitive to effectiveness when they or their kin are at stake, but not so much when confronted with a needy stranger. Donors have been shown to respond to impact and efficacy when giving to themselves, but less so when donating to charity. While cost-effectiveness information of charities tends to be hard to evaluate, studies have shown that people are less scope insensitive when the beneficiaries are family members. Throughout human evolutionary history, residing in small, tightly-knit groups has given rise to prosocial emotions and intentions towards kin and ingroup members, rather than universally extending to those outside the group boundaries. Humans tend to exhibit parochial tendencies, showing concern for their in-groups, but not out-groups.
This parochial inclination can hinder effective altruism, especially as a significant portion of human suffering occurs in distant regions. Despite the potential impact of donations in different parts of the world, individuals in rich and developed countries often view assistance to physically distant others as less important than helping those in close proximity. Contrary to maximizing impact and effectiveness with their donations, many individuals commit to donating money to local charities and organizations to which they have a personal connection, thus living by the notion of "charity begins at home." Similarly, people are more inclined to help a needy child from their neighborhood rather than their city or country. Status seeking. Humans assign value to their social status within a group for survival and reproduction. People tend to pursue high-status positions to enjoy benefits, such as desirable mating partners. Therefore, behaviors that can produce reputational benefits are desirable to enhance one's standing in society. Altruistic acts are generally viewed positively, yield social rewards, and are cumulative. However, "effective" altruism, that is, altruistic behavior that focuses on maximizing others' welfare, is often not socially rewarded. Evidence-based reasoning in charitable giving may be perceived negatively, as amoral, and so may reduce a person's likability. Some have even argued that the reputational costs incurred for engaging in effective giving explain people's aversion to prioritizing some causes over more impactful ones. Conformity. Many living organisms have demonstrated conformity, that is, the tendency to use dominant group norms (or descriptive norms) as guiding rules of behavior. Research on humans has also shown that social norms have the power to influence what people do. In judgment and decision-making research, this observation has come to be known as the bandwagon effect. The power of this bias has also been demonstrated in the field of charitable giving. In fact, people have been shown to donate more, or to exhibit an increased likelihood to donate, when they perceived donating to charity as the social norm or the default choice. Therefore, as many people become increasingly in favor of donating to ineffective options, society will see the creation of a norm for people to give ineffectively. As a result, people rely more strongly on their intuitions, which lead them to choose to give ineffectively simply because they know that most others would do the same thing. Motivational obstacles. Subjective preferences. People often prioritize giving to charities that align with their subjectively preferred causes. Commonly, people believe charity to be a subjective decision which should not be motivated by numbers, but by care for the cause given the lack of responsibility attributed to the effects of donations. This aligns with the theory of warm-glow giving originally proposed by the economist James Andreoni. According to Andreoni (1990), individuals gain satisfaction from the act of giving but are not concerned about the benefits generated by their act. Narrow moral circle. Moral circle expansion is the concept of increasing, over time, the number and kinds of subjects one considers deserving of moral concern. The establishment of one's moral circle depends on spatial, biological, and temporal proximity. For instance, many donors in WEIRD countries tend to favor charities that conduct work within their respective geographical boundaries.
In terms of biological distance, people favor donating money to help humans instead of animals, even in cases when animals can have equal cognitive and suffering capacities. The idea of temporal proximity relates to people's tendency to prefer helping current generations over future ones. Scope neglect (insensitivity). Scope neglect (or "scope insensitivity") is the idea that people are numb to the number of victims in large, high-stakes humanitarian situations. Some research has compared this cognitive bias to the economic concept of diminishing marginal utility, wherein people demonstrate a decreasing non-linear concern for individuals as the number of people increases. Epistemic obstacles. Overhead aversion. Donors are averse to giving to charities that devote a lot of their expenses to administration or running costs. Several studies have demonstrated the ubiquitous effect of overhead aversion, which is commonly attributed to people's conflation of overhead spending with charity cost-effectiveness (or impact). Furthermore, some have argued that when donors learn that a charity uses their donation to fund running costs, donors experience a diminished feeling of warm-glow, which is a significant driver of donation behavior. Quantifiability scepticism. Intangible outcomes (such as health interventions, charity effectiveness) are hard to quantify, and many people doubt that they can ever be quantified and compared. However, in disciplines such as health economics, health outcomes and interventions are quantified and evaluated using metrics such as quality-adjusted life years (QALYs). In a similar vein, happiness economists have developed the concept of wellbeing-years (WELLBYs) which evaluates effectiveness in terms of life-years lived up to full life satisfaction. Put simply, a WELLBY is given by formula_0, where formula_1 is the number of life-years remaining, based on the region's life expectancy, and formula_2 is the change in life satisfaction expected to result from a particular action or intervention. Thus, charity cost-effectiveness analyses use a number of different measures grounded in academic research to quantify their impact, allowing direct comparisons of charities that address multiple causes. Limited awareness. The effective altruism movement does substantial work on identifying the world's most effective charities through charity evaluators such as GiveWell, Giving What We Can, and Animal Charity Evaluators. However, many people are unaware of these organizations and the charities they evaluate, and are strongly driven by emotional responses when estimating the effectiveness of a charity, choosing instead to prioritize those causes to which they have a personal connection.
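To make the WELLBY formula above concrete, the following Python sketch evaluates it for purely hypothetical numbers (the function name and the example values are illustrative, not drawn from the cited literature).
```python
def wellbys(life_years_remaining, delta_life_satisfaction):
    """WELLBYs gained = remaining life-years times the expected change in
    life satisfaction, following the formula given above."""
    return life_years_remaining * delta_life_satisfaction

# Hypothetical example: an intervention expected to raise life satisfaction by
# 0.5 points for a person with 40 years of remaining life expectancy.
print(wellbys(40, 0.5))  # 20.0 WELLBYs
```
Charity evaluators that use such metrics then compare interventions by the number of WELLBYs (or QALYs) produced per unit of money spent.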
[ { "math_id": 0, "text": "WELLBY = { L \\times\\Delta W }" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "\\Delta W" } ]
https://en.wikipedia.org/wiki?curid=75073083
750772
Cooling tower
Device which rejects waste heat to the atmosphere through the cooling of a water stream A cooling tower is a device that rejects waste heat to the atmosphere through the cooling of a coolant stream, usually a water stream, to a lower temperature. Cooling towers may either use the evaporation of water to remove heat and cool the working fluid to near the wet-bulb air temperature or, in the case of "dry cooling towers", rely solely on air to cool the working fluid to near the dry-bulb air temperature using radiators. Common applications include cooling the circulating water used in oil refineries, petrochemical and other chemical plants, thermal power stations, nuclear power stations and HVAC systems for cooling buildings. The classification is based on the type of air induction into the tower: the main types of cooling towers are natural draft and induced draft cooling towers. Cooling towers vary in size from small roof-top units to very large hyperboloid structures that can be up to tall and in diameter, or rectangular structures that can be over tall and long. Hyperboloid cooling towers are often associated with nuclear power plants, although they are also used in many coal-fired plants and to some extent in some large chemical and other industrial plants. The steam turbine is what necessitates the cooling tower. Although these large towers are very prominent, the vast majority of cooling towers are much smaller, including many units installed on or near buildings to discharge heat from air conditioning. Cooling towers are also often thought by the general public to emit smoke or harmful fumes, when in reality the emissions from those towers mostly do not contribute to the carbon footprint, and consist solely of water vapor. History. Cooling towers originated in the 19th century through the development of condensers for use with the steam engine. Condensers use relatively cool water, via various means, to condense the steam coming out of the cylinders or turbines. This reduces the back pressure, which in turn reduces the steam consumption, and thus the fuel consumption, while at the same time increasing power and recycling boiler-water. However, the condensers require an ample supply of cooling water, without which they are impractical. While water usage is not an issue with marine engines, it forms a significant limitation for many land-based systems. By the turn of the 20th century, several evaporative methods of recycling cooling water were in use in areas lacking an established water supply, as well as in urban locations where municipal water mains may not be of sufficient supply; reliable in times of demand; or otherwise adequate to meet cooling needs. In areas with available land, the systems took the form of cooling ponds; in areas with limited land, such as in cities, they took the form of cooling towers. These early towers were positioned either on the rooftops of buildings or as free-standing structures, supplied with air by fans or relying on natural airflow. An American engineering textbook from 1911 described one design as "a circular or rectangular shell of light plate—in effect, a chimney stack much shortened vertically (20 to 40 ft. high) and very much enlarged laterally. At the top is a set of distributing troughs, to which the water from the condenser must be pumped; from these it trickles down over "mats" made of wooden slats or woven wire screens, which fill the space within the tower".
A hyperboloid cooling tower was patented by the Dutch engineers Frederik van Iterson and Gerard Kuypers in the Netherlands on August 16, 1916. The first hyperboloid reinforced concrete cooling towers were built by the Dutch State Mine (DSM) Emma in 1918 in Heerlen. The first ones in the United Kingdom were built in 1924 at Lister Drive power station in Liverpool, England. At both locations they were built to cool water used at a coal-fired electrical power station. According to a Gas Technology Institute (GTI) report, the indirect dew point evaporative cooling Maisotsenko Cycle (M-Cycle) is a theoretically sound method of reducing a fluid to its dew point temperature, which is lower than its wet bulb temperature. The M-cycle utilizes the psychrometric energy (or the potential energy) available from the latent heat of water evaporating into the air. While its current manifestation is as the M-Cycle HMX for air conditioning, through engineering design this cycle could be applied as a heat and moisture recovery device for combustion devices, cooling towers, condensers, and other processes involving humid gas streams. The consumption of cooling water by inland processing and power plants is estimated to reduce power availability for the majority of thermal power plants by 2040–2069. In 2021, researchers presented a method for steam recapture. The steam is charged using an ion beam, and then captured in a wire mesh of opposite charge. The water's purity exceeded EPA potability standards. Classification by use. Heating, ventilation and air conditioning (HVAC). An HVAC (heating, ventilating, and air conditioning) cooling tower is used to dispose of ("reject") unwanted heat from a chiller. Liquid-cooled chillers are normally more energy efficient than air-cooled chillers due to heat rejection to tower water at or near wet-bulb temperatures. Air-cooled chillers must reject heat at the higher dry-bulb temperature, and thus have a lower average reverse-Carnot cycle effectiveness. In areas with a hot climate, large office buildings, hospitals, and schools typically use one or more cooling towers as part of their air conditioning systems. Generally, industrial cooling towers are much larger than HVAC towers. HVAC use of a cooling tower pairs the cooling tower with a liquid-cooled chiller or liquid-cooled condenser. A "ton" of air-conditioning is defined as the removal of . The "equivalent ton" on the cooling tower side actually rejects about due to the additional waste heat-equivalent of the energy needed to drive the chiller's compressor. This "equivalent ton" is defined as the heat rejection in cooling or of water by , which amounts to , assuming a chiller coefficient of performance (COP) of 4.0. This COP is equivalent to an energy efficiency ratio (EER) of 14. Cooling towers are also used in HVAC systems that have multiple water source heat pumps that share a common piping "water loop". In this type of system, the water circulating inside the water loop removes heat from the condenser of the heat pumps whenever the heat pumps are working in the cooling mode, then the externally mounted cooling tower is used to remove heat from the water loop and reject it to the atmosphere. By contrast, when the heat pumps are working in heating mode, the condensers draw heat out of the loop water and reject it into the space to be heated.
When the water loop is being used primarily to supply heat to the building, the cooling tower is normally shut down (and may be drained or winterized to prevent freeze damage), and heat is supplied by other means, usually from separate boilers. Industrial cooling towers. Industrial cooling towers can be used to remove heat from various sources such as machinery or heated process material. The primary use of large, industrial cooling towers is to remove the heat absorbed in the circulating cooling water systems used in power plants, petroleum refineries, petrochemical plants, natural gas processing plants, food processing plants, semi-conductor plants, and for other industrial facilities such as in condensers of distillation columns, for cooling liquid in crystallization, etc. The circulation rate of cooling water in a typical 700 MWth coal-fired power plant with a cooling tower amounts to about 71,600 cubic metres an hour (315,000 US gallons per minute) and the circulating water requires a supply water make-up rate of perhaps 5 percent (i.e., 3,600 cubic metres an hour, equivalent to one cubic metre every second). If that same plant had no cooling tower and used once-through cooling water, it would require about 100,000 cubic metres an hour. A large cooling water intake typically kills millions of fish and larvae annually, as the organisms are impinged on the intake screens. A large amount of water would have to be continuously returned to the ocean, lake or river from which it was obtained and continuously re-supplied to the plant. Furthermore, discharging large amounts of hot water may raise the temperature of the receiving river or lake to an unacceptable level for the local ecosystem. Elevated water temperatures can kill fish and other aquatic organisms (see "thermal pollution"), or can also cause an increase in undesirable organisms such as invasive species of zebra mussels or algae. A cooling tower serves to dissipate the heat into the atmosphere instead, so that wind and air diffusion spreads the heat over a much larger area than hot water can distribute heat in a body of water. Evaporative cooling water cannot be used for subsequent purposes (other than rain somewhere), whereas surface-only cooling water can be re-used. Some coal-fired and nuclear power plants located in coastal areas do make use of once-through ocean water. But even there, the offshore discharge water outlet requires very careful design to avoid environmental problems. Petroleum refineries may also have very large cooling tower systems. A typical large refinery processing 40,000 metric tonnes of crude oil per day ( per day) circulates about 80,000 cubic metres of water per hour through its cooling tower system. The world's tallest cooling tower is the tall cooling tower of the Pingshan II Power Station in Huaibei, Anhui Province, China. Classification by build. Package type. These types of cooling towers are factory preassembled, and can be simply transported on trucks, as they are compact machines. The capacity of package type towers is limited and, for that reason, they are usually preferred by facilities with low heat rejection requirements such as food processing plants, textile plants, some chemical processing plants, or buildings like hospitals, hotels, malls, automotive factories, etc. Due to their frequent use in or near residential areas, sound level control is a relatively more important issue for package type cooling towers. Field erected type.
Facilities such as power plants, steel processing plants, petroleum refineries, or petrochemical plants usually install field erected type cooling towers due to their greater capacity for heat rejection. Field erected towers are usually much larger in size compared to the package type cooling towers. A typical field erected cooling tower has a pultruded fiber-reinforced plastic (FRP) structure, FRP cladding, a mechanical unit for air draft, and a drift eliminator. Heat transfer methods. With respect to the heat transfer mechanism employed, the main types are: In a wet cooling tower (or open circuit cooling tower), the warm water can be cooled to a temperature "lower" than the ambient air dry-bulb temperature, if the air is relatively dry (see dew point and psychrometrics). As ambient air is drawn past a flow of water, a small portion of the water evaporates, and the energy required to evaporate that portion of the water is taken from the remaining mass of water, thus reducing its temperature. Approximately of heat energy is absorbed for the evaporated water. Evaporation results in saturated air conditions, lowering the temperature of the water processed by the tower to a value close to wet-bulb temperature, which is lower than the ambient dry-bulb temperature, the difference determined by the initial humidity of the ambient air. To achieve better performance (more cooling), a medium called "fill" is used to increase the surface area and the time of contact between the air and water flows. "Splash fill" consists of material placed to interrupt the water flow causing splashing. "Film fill" is composed of thin sheets of material (usually PVC) upon which the water flows. Both methods create increased surface area and time of contact between the fluid (water) and the gas (air), to improve heat transfer. Air flow generation methods. With respect to drawing air through the tower, there are three types of cooling towers: Hyperboloid cooling tower. On 16 August 1916, Frederik van Iterson took out the UK patent (108,863) for "Improved Construction of Cooling Towers of Reinforced Concrete". The patent was filed on 9 August 1917, and published on 11 April 1918. In 1918, DSM built the first hyperboloid natural-draft cooling tower at the Staatsmijn Emma, to his design. Hyperboloid (sometimes incorrectly known as hyperbolic) cooling towers have become the design standard for all natural-draft cooling towers because of their structural strength and minimum usage of material. The hyperboloid shape also aids in accelerating the upward convective air flow, improving cooling efficiency. These designs are popularly associated with nuclear power plants. However, this association is misleading, as the same kind of cooling towers are often used at large coal-fired power plants and some geothermal plants as well. The steam turbine is what necessitates the cooling tower. Conversely, not all nuclear power plants have cooling towers, and some instead cool their working fluid with lake, river or ocean water. Categorization by air-to-water flow. Crossflow. Typically lower initial and long-term cost, mostly due to pump requirements. Crossflow is a design in which the airflow is directed perpendicular to the water flow (see diagram at left). Airflow enters one or more vertical faces of the cooling tower to meet the fill material. Water flows (perpendicular to the air) through the fill by gravity. The air continues through the fill and thus past the water flow into an open plenum volume. 
Lastly, a fan forces the air out into the atmosphere. A "distribution" or "hot water basin" consisting of a deep pan with holes or "nozzles" in its bottom is located near the top of a crossflow tower. Gravity distributes the water through the nozzles uniformly across the fill material. Advantages of the crossflow design: Disadvantages of the crossflow design: Counterflow. In a counterflow design, the air flow is directly opposite to the water flow (see diagram at left). Air flow first enters an open area beneath the fill media, and is then drawn up vertically. The water is sprayed through pressurized nozzles near the top of the tower, and then flows downward through the fill, opposite to the air flow. Advantages of the counterflow design: Disadvantages of the counterflow design: Common aspects. Common aspects of both designs: Both crossflow and counterflow designs can be used in natural draft and in mechanical draft cooling towers. Wet cooling tower material balance. Quantitatively, the material balance around a wet, evaporative cooling tower system is governed by the operational variables of make-up volumetric flow rate, evaporation and windage losses, draw-off rate, and the concentration cycles. In the adjacent diagram, water pumped from the tower basin is the cooling water routed through the process coolers and condensers in an industrial facility. The cool water absorbs heat from the hot process streams which need to be cooled or condensed, and the absorbed heat warms the circulating water (C). The warm water returns to the top of the cooling tower and trickles downward over the fill material inside the tower. As it trickles down, it contacts ambient air rising up through the tower either by natural draft or by forced draft using large fans in the tower. That contact causes a small amount of the water to be lost as windage or drift (W) and some of the water (E) to evaporate. The heat required to evaporate the water is derived from the water itself, which cools the water back to the original basin water temperature and the water is then ready to recirculate. The evaporated water leaves its dissolved salts behind in the bulk of the water which has not been evaporated, thus raising the salt concentration in the circulating cooling water. To prevent the salt concentration of the water from becoming too high, a portion of the water is drawn off or blown down (D) for disposal. Fresh water make-up (M) is supplied to the tower basin to compensate for the loss of evaporated water, the windage loss water and the draw-off water. Using these flow rates and concentration dimensional units: A water balance around the entire system is then: M = E + D + W. Since the evaporated water (E) has no salts, a chloride balance around the system is: formula_0 and, therefore: formula_1 From a simplified heat balance around the cooling tower: formula_2 Windage (or drift) loss (W) is the amount of total tower water flow that is entrained in the flow of air to the atmosphere.
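The balance equations above can be combined into a short calculation. The following Python sketch is illustrative only: it uses rounded water properties and made-up input values (not taken from any cited source) to estimate the evaporation, draw-off and make-up flows from the circulating flow rate, the cooling range and a chosen number of cycles of concentration.
```python
def cooling_tower_balance(C_m3_per_h, delta_T_K, cycles, windage_fraction=0.001):
    """Sketch of the wet cooling tower water balance described above.
    C: circulating water flow (m3/h), delta_T: cooling range (K),
    cycles: cycles of concentration, windage_fraction: drift as a fraction of C."""
    cp = 4186.0     # J/(kg*K), specific heat of water (rounded)
    Hv = 2.26e6     # J/kg, latent heat of vaporization (rounded)

    # Simplified heat balance: E = C * delta_T * cp / Hv (densities cancel,
    # so E comes out as a volumetric flow in the same units as C).
    E = C_m3_per_h * delta_T_K * cp / Hv
    W = windage_fraction * C_m3_per_h
    # Cycles = M / (D + W) with M = E + D + W  =>  D + W = E / (cycles - 1)
    D = E / (cycles - 1) - W
    M = E + D + W
    return {"evaporation E": E, "windage W": W, "draw-off D": D, "make-up M": M}

# Example with round, made-up numbers: 10,000 m3/h of circulating water,
# a 10 K cooling range, and 5 cycles of concentration.
print(cooling_tower_balance(10_000, 10, 5))
```
The windage fraction is left as a parameter because, as noted next, its typical value depends strongly on the tower type and on whether drift eliminators are fitted.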
For large-scale industrial cooling towers, in the absence of manufacturer's data, the windage loss may be assumed to be: W = 0.3 to 1.0 percent of C for a natural draft cooling tower without windage drift eliminators; W = 0.1 to 0.3 percent of C for an induced draft cooling tower without windage drift eliminators; W = about 0.005 percent of C (or less) if the cooling tower has windage drift eliminators; W = about 0.0005 percent of C (or less) if the cooling tower has windage drift eliminators and uses sea water as make-up water. Cycles of concentration. Cycles of concentration represent the accumulation of dissolved minerals in the recirculating cooling water. Discharge of draw-off (or blowdown) is used principally to control the buildup of these minerals. The chemistry of the make-up water, including the amount of dissolved minerals, can vary widely. Make-up waters low in dissolved minerals such as those from surface water supplies (lakes, rivers etc.) tend to be aggressive to metals (corrosive). Make-up waters from ground water supplies (such as wells) are usually higher in minerals, and tend to be scaling (deposit minerals). Increasing the amount of minerals present in the water by cycling can make water less aggressive to piping; however, excessive levels of minerals can cause scaling problems. As the cycles of concentration increase, the water may not be able to hold the minerals in solution. When the solubility of these minerals has been exceeded they can precipitate out as mineral solids and cause fouling and heat exchange problems in the cooling tower or the heat exchangers. The temperatures of the recirculating water, piping and heat exchange surfaces determine if and where minerals will precipitate from the recirculating water. Often a professional water treatment consultant will evaluate the make-up water and the operating conditions of the cooling tower and recommend an appropriate range for the cycles of concentration. The use of water treatment chemicals, pretreatment such as water softening, pH adjustment, and other techniques can affect the acceptable range of cycles of concentration. Concentration cycles in the majority of cooling towers usually range from 3 to 7. In the United States, many water supplies use well water which has significant levels of dissolved solids. On the other hand, one of the largest water supplies, for New York City, has a surface rainwater source quite low in minerals; thus cooling towers in that city are often allowed to concentrate to 7 or more cycles of concentration. Since higher cycles of concentration represent less make-up water, water conservation efforts may focus on increasing cycles of concentration. Highly treated recycled water may be an effective means of reducing cooling tower consumption of potable water, in regions where potable water is scarce. Maintenance. Clean visible dirt and debris from the cold water basin and surfaces with any visible biofilm (i.e., slime). Disinfectant and other chemical levels in cooling towers and hot tubs should be continuously maintained and regularly monitored. Regular checks of water quality (specifically the aerobic bacteria levels) using dipslides should be taken, as the presence of other organisms can support legionella by producing the organic nutrients that it needs to thrive. Water treatment.
Besides treating the circulating cooling water in large industrial cooling tower systems to minimize scaling and fouling, the water should be filtered to remove particulates, and also be dosed with biocides and algaecides to prevent growths that could interfere with the continuous flow of the water. Under certain conditions, a biofilm of micro-organisms such as bacteria, fungi and algae can grow very rapidly in the cooling water, and can reduce the heat transfer efficiency of the cooling tower. Biofilm can be reduced or prevented by using sodium chlorite or other chlorine-based chemicals. A normal industrial practice is to use two biocides, such as oxidizing and non-oxidizing types, to complement each other's strengths and weaknesses, and to ensure a broader spectrum of attack. In most cases, a continual low level oxidizing biocide is used, then alternating to a periodic shock dose of non-oxidizing biocides. Algaecides and biocides. Algaecides, as their name might suggest, are intended to kill algae and other related plant-like microbes in the water. Biocides can reduce other living matter that remains, improving the system and helping keep the water in a cooling tower clean and its usage efficient. One of the most common biocide options is bromine. Scale inhibitors. Among the issues that cause the most damage and strain to a cooling tower's systems is scaling. When an unwanted material or contaminant in the water builds up in a certain area, it can create deposits that grow over time. This can cause issues ranging from the narrowing of pipes to total blockages and equipment failures. The water consumption of the cooling tower comes from drift, bleed-off, and evaporation loss. The water that is immediately replenished into the cooling tower due to these losses is called make-up water. The function of make-up water is to make machinery and equipment run safely and stably. Legionnaires' disease. Another very important reason for using biocides in cooling towers is to prevent the growth of "Legionella", including species that cause legionellosis or Legionnaires' disease, most notably "L. pneumophila", or "Mycobacterium avium". The various "Legionella" species are the cause of Legionnaires' disease in humans and transmission is via exposure to aerosols—the inhalation of mist droplets containing the bacteria. Common sources of "Legionella" include cooling towers used in open recirculating evaporative cooling water systems, domestic hot water systems, fountains, and similar disseminators that tap into a public water supply. Natural sources include freshwater ponds and creeks. French researchers found that "Legionella" bacteria travelled up to through the air from a large contaminated cooling tower at a petrochemical plant in Pas-de-Calais, France. That outbreak killed 21 of the 86 people who had a laboratory-confirmed infection. Drift (or windage) is the term for water droplets of the process flow allowed to escape in the cooling tower discharge. Drift eliminators are used in order to hold drift rates typically to 0.001–0.005% of the circulating flow rate. A typical drift eliminator provides multiple directional changes of airflow to prevent the escape of water droplets. A well-designed and well-fitted drift eliminator can greatly reduce water loss and potential for "Legionella" or water treatment chemical exposure. Also, about every six months, inspect the condition of the drift eliminators, making sure there are no gaps that allow the free flow of dirt.
The US Centers for Disease Control and Prevention (CDC) does not recommend that health-care facilities regularly test for the "Legionella pneumophila" bacteria. Scheduled microbiologic monitoring for "Legionella" remains controversial because its presence is not necessarily evidence of a potential for causing disease. The CDC recommends aggressive disinfection measures for cleaning and maintaining devices known to transmit "Legionella", but does not recommend regularly-scheduled microbiologic assays for the bacteria. However, scheduled monitoring of potable water within a hospital might be considered in certain settings where persons are highly susceptible to illness and mortality from "Legionella" infection (e.g. hematopoietic stem cell transplantation units, or solid organ transplant units). Also, after an outbreak of legionellosis, health officials agree that monitoring is necessary to identify the source and to evaluate the efficacy of biocides or other prevention measures. Studies have found "Legionella" in 40% to 60% of cooling towers. Fog production. Under certain ambient conditions, plumes of water vapor can be seen rising out of the discharge from a cooling tower, and can be mistaken for smoke from a fire. If the outdoor air is at or near saturation, and the tower adds more water to the air, saturated air with liquid water droplets can be discharged, which is seen as fog. This phenomenon typically occurs on cool, humid days, but is rare in many climates. Fog and clouds associated with cooling towers can be described as homogenitus, as with other clouds of man-made origin, such as contrails and ship tracks. This phenomenon can be prevented by decreasing the relative humidity of the saturated discharge air. For that purpose, in hybrid towers, saturated discharge air is mixed with heated low relative humidity air. Some air enters the tower above drift eliminator level, passing through heat exchangers. The relative humidity of the dry air is decreased even further as it is heated while entering the tower. The discharged mixture has a lower relative humidity and the fog is invisible. Cloud formation. Issues related to applied meteorology of cooling towers, including the assessment of the impact of cooling towers on cloud enhancement, were considered in a series of models and experiments. One of the results by Haman's group indicated significant dynamic influences of the condensation trails on the surrounding atmosphere, manifested in temperature and humidity disturbances. The mechanism of these influences seemed to be associated either with the airflow over the trail as an obstacle or with vertical waves generated by the trail, often at a considerable altitude above it. Salt emission pollution. When wet cooling towers with seawater make-up are installed in various industries located in or near coastal areas, the drift of fine droplets emitted from the cooling towers contains nearly 6% sodium chloride, which deposits on the nearby land areas. This deposition of sodium salts on the nearby agriculture/vegetative lands can convert them into sodic saline or sodic alkaline soils, depending on the nature of the soil, and enhance the sodicity of ground and surface water. The salt deposition problem from such cooling towers is aggravated where national pollution control standards are not imposed or not implemented to minimize the drift emissions from wet cooling towers using seawater make-up.
Respirable suspended particulate matter, of less than 10 micrometers (μm) in size, can be present in the drift from cooling towers. Larger particles above 10 μm in size are generally filtered out in the nose and throat via cilia and mucus, but particulate matter smaller than 10 μm, referred to as PM10, can settle in the bronchi and lungs and cause health problems. Similarly, particles smaller than 2.5 μm (PM2.5) tend to penetrate into the gas exchange regions of the lung, and very small particles (less than 100 nanometers) may pass through the lungs to affect other organs. Though the total particulate emissions from wet cooling towers with fresh water make-up are much lower, they contain more PM10 and PM2.5 than the total emissions from wet cooling towers with sea water make-up. This is due to the lower salt content in fresh water drift (below 2,000 ppm) compared to the salt content of sea water drift (60,000 ppm). Use as a flue-gas stack. At some modern power stations equipped with flue gas purification, such as the Großkrotzenburg Power Station and the Rostock Power Station, the cooling tower is also used as a flue-gas stack (industrial chimney), thus saving the cost of a separate chimney structure. At plants without flue gas purification, problems with corrosion may occur, due to reactions of raw flue gas with water to form acids. Sometimes, natural draft cooling towers are constructed with structural steel in place of concrete (RCC): when the construction time of a natural draft cooling tower would exceed the construction time of the rest of the plant, when the local soil is too weak to bear the heavy weight of RCC cooling towers, or when local cement prices make natural draft cooling towers of structural steel the cheaper option. Operation in freezing weather. Some cooling towers (such as smaller building air conditioning systems) are shut down seasonally, drained, and winterized to prevent freeze damage. During the winter, other sites continuously operate cooling towers with water leaving the tower. Basin heaters, tower draindown, and other freeze protection methods are often employed in cold climates. Operational cooling towers with malfunctions can freeze during very cold weather. Typically, freezing starts at the corners of a cooling tower with a reduced or absent heat load. Severe freezing conditions can create growing volumes of ice, resulting in increased structural loads which can cause structural damage or collapse. To prevent freezing, the following procedures are used: Fire hazard. Cooling towers constructed in whole or in part of combustible materials can support internal fire propagation. Such fires can become very intense, due to the high surface-volume ratio of the towers, and fires can be further intensified by natural convection or fan-assisted draft. The resulting damage can be sufficiently severe to require the replacement of the entire cell or tower structure. For this reason, some codes and standards recommend that combustible cooling towers be provided with an automatic fire sprinkler system. Fires can propagate internally within the tower structure when the cell is not in operation (such as for maintenance or construction), and even while the tower is in operation, especially those of the induced-draft type, because of the existence of relatively dry areas within the towers. Structural stability. Being very large structures, cooling towers are susceptible to wind damage, and several spectacular failures have occurred in the past.
On 1 November 1965, Ferrybridge power station was the site of a major structural failure, when three of the cooling towers collapsed owing to vibrations in high winds. Although the structures had been built to withstand higher wind speeds, the shape of the cooling towers caused westerly winds to be funneled into the towers themselves, creating a vortex. Three out of the original eight cooling towers were destroyed, and the remaining five were severely damaged. The towers were later rebuilt and all eight cooling towers were strengthened to tolerate adverse weather conditions. Building codes were changed to include improved structural support, and wind tunnel tests were introduced to check tower structures and configuration. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M X_M = D X_C + W X_C = X_C (D + W)" }, { "math_id": 1, "text": " {X_C \\over X_M} = \\text{Cycles of concentration} ={ M \\over (D + W)} = {M \\over (M - E)} = 1 + {E \\over (D + W)}" }, { "math_id": 2, "text": " E = {C \\Delta T c_p \\over H_V} " } ]
https://en.wikipedia.org/wiki?curid=750772
7508398
Three-two pull down
Post-production process of transferring film to video Three-two pull down (3:2 pull down) is a term used in filmmaking and television production for the post-production process of transferring film to video. It converts 24 frames per second into 29.97 frames per second, converting approximately every four frames into five frames plus a slight slowdown in speed. Film runs at a standard rate of 24 frames per second, whereas NTSC video has a signal frame rate of 29.97 frames per second. Every interlaced video frame consists of two fields. The three-two pull down is where the telecine adds a third video field (a half frame) to every second video frame, but the untrained eye cannot see the addition of this extra video field. In the figure, the film frames A–D are the true or original images since they have been photographed as a complete frame. The A, B, and D frames on the right in the NTSC footage are original frames. The third and fourth frames have been created by blending fields from different frames. Video. 2:3. In the United States and other countries where television uses the 59.94 Hz vertical scanning frequency, video is broadcast at 29.97 frame/s. For the film's motion to be accurately rendered on the video signal, a telecine must use a technique called the 2:3 pull down (or a variant called 3:2 pull down) to convert from 24 to 29.97 frame/s. The term "pulldown" comes from the mechanical process of "pulling" (physically moving) the film downward within the film portion of the transport mechanism to advance it from one frame to the next at a repetitive rate (nominally 24 frames/s). The conversion is accomplished in two steps. The first step is to slow down the film motion by 1/1000 to 23.976 frames/s (or 24 frames every 1.001 seconds). This difference in speed is imperceptible to the viewer. For a two-hour film, play time is extended by 7.2 seconds. The second step is distributing cinema frames into video fields. At 23.976 frame/s, there are four frames of film for every five frames of 29.97 Hz video: formula_0 These four frames need to be "stretched" into five frames by exploiting the interlaced nature of video. Since an interlaced video "frame" is made up of two incomplete "fields" (one for the odd-numbered lines of the image, and one for the even-numbered lines), conceptually four frames need to be used in ten fields (to produce five frames). The term "2:3" comes from the pattern for producing fields in the new video frames. The pattern of 2-3 is an abbreviation of the actual pattern of 2-3-2-3, which indicates that the first film frame is used in 2 fields, the second film frame is used in 3 fields, the third film frame is used in 2 fields, and the fourth film frame is used in 3 fields, producing a total of 10 fields, or 5 video frames. If the four film frames are called "A", "B", "C" and "D", the five video frames produced are A1-A2, B1-B2, B2-C1, C2-D1 and D1-D2. That is, frame A is used 2 times (in both fields of the first video frame); frame B is used 3 times (in both fields of the second video frame and in one of the fields of the third video frame); frame C is used 2 times (in the other field of the third video frame, and in one of the fields of the fourth video frame); and frame D is used 3 times (in the other field of the fourth video frame, and in both fields of the fifth video frame). The 2-3-2-3 cycle repeats itself completely after four film frames have been exposed. 3:2.
The alternative "3:2" pattern is similar to the one shown above, except it is shifted by one frame. For instance, a cycle that starts with film frame B yields a 3:2 pattern: B1-B2, B2-C1, C2-D1, D1-D2, A1-A2 or 3-2-3-2 or simply 3-2. In other words, there is no difference between the 2-3 and 3-2 patterns. In fact, the "3-2" notation is misleading because, according to SMPTE standards, for every four-frame film sequence the first frame is scanned twice, not three times. Modern alternatives. The above method is a "classic" 2:3, which was used before frame buffers allowed for holding more than one frame. It has the disadvantage of creating two dirty frames (which are a mix from two different film frames) and three clean frames (which matches an unmodified film frame) in every five video frames. The preferred method for doing a 2:3 creates only one dirty frame in every five (i.e. 3:3:2:2 or 2:3:3:2 or 2:2:3:3). The 3-3-2-2 pattern produces A1-A2 A2-B1 B1-B2 C1-C2 D1-D2, where only the second frame is dirty. While this method has slightly more judder, it allows for easier upconversion (the dirty frame can be dropped without losing information) and a better overall compression when encoding. Note that on interlaced displays such as CRTs only fields are displayed—never complete frames—so no dirty frames are visible. Dirty frames may appear in other methods of displaying the interlaced video. Audio. The rate of NTSC video (initially monochrome only, but soon thereafter monochrome "and" color) is 29.97 frames per second, or one-thousandth slower than 30 frame/s, due to the NTSC color encoding process which mandated that the line rate be a sub-multiple of the 3.579545 MHz color "burst" frequency, or 15734.2637 Hz (29.9700 Hz, frame rate), rather than the (60 Hz) ac "line locked" line rate of 15750.0000… Hz (30.0000… Hz, frame rate). This was done to maintain compatibility with black and white televisions. Because of this 0.1% speed difference, when converting film to video, or vice versa, the sync will drift and the audio will end up out of sync by about 3.6 seconds per hour. In order to correct this error, the audio can be either pulled up or pulled down. A pull up will speed up the sound by 0.1%, used for transferring video to film. A pull down will slow the audio speed down by 0.1%, necessary for transferring film to video.
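To make the 2:3 cadence described above concrete, the following sketch (illustrative only; the helper name and field labels such as "B2", meaning the second field scanned from film frame B, are chosen for this example) maps each group of four film frames onto ten fields, reproducing the A1-A2, B1-B2, B2-C1, C2-D1, D1-D2 sequence and flagging the two dirty frames.

```python
# Sketch of the classic 2:3 pulldown cadence: four film frames -> ten fields,
# i.e. five interlaced video frames.
CADENCE = [            # (film-frame offset within the group, field number)
    (0, 1), (0, 2),    # video frame 1: A1-A2  (clean)
    (1, 1), (1, 2),    # video frame 2: B1-B2  (clean)
    (1, 2), (2, 1),    # video frame 3: B2-C1  (dirty, mixes two film frames)
    (2, 2), (3, 1),    # video frame 4: C2-D1  (dirty, mixes two film frames)
    (3, 1), (3, 2),    # video frame 5: D1-D2  (clean)
]

def pulldown_2_3(film_frames):
    """Map whole groups of four film frames onto interlaced video frames."""
    assert len(film_frames) % 4 == 0, "this sketch assumes complete 4-frame groups"
    video_frames = []
    for g in range(0, len(film_frames), 4):
        group = film_frames[g:g + 4]
        fields = [f"{group[offset]}{num}" for offset, num in CADENCE]
        video_frames += [tuple(fields[i:i + 2]) for i in range(0, 10, 2)]
    return video_frames

for top, bottom in pulldown_2_3(list("ABCD")):
    kind = "clean" if top[0] == bottom[0] else "dirty"
    print(f"{top}-{bottom}  ({kind})")
# prints: A1-A2 (clean), B1-B2 (clean), B2-C1 (dirty), C2-D1 (dirty), D1-D2 (clean)
```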
[ { "math_id": 0, "text": " \\frac{23.976}{29.97} = \\frac{4}{5}" } ]
https://en.wikipedia.org/wiki?curid=7508398
7508653
Chvorinov's rule
Concept in applied physics Chvorinov's rule is a physical relationship that relates the solidification time for a simple casting to the volume and surface area of the casting. It was first expressed by Czech engineer Nicolas Chvorinov in 1940. Description. According to the rule, a casting with a larger surface area and smaller volume will cool more quickly than a casting with a smaller surface area and a larger volume under otherwise comparable conditions. The relationship can be mathematically expressed as: formula_0 where t is the solidification time, V is the volume of the casting, A is the surface area of the casting that contacts the mold, n is a constant, and B is the mold constant. This relationship can be expressed more simply as: formula_1 where the modulus M is the ratio of the casting's volume to its surface area: formula_2 The mold constant B depends on the properties of the metal, such as density, heat capacity, heat of fusion and superheat, and the mold, such as initial temperature, density, thermal conductivity, heat capacity and wall thickness. Mold Constant. The S.I. unit for the mold constant B is seconds per metre squared (s/m2) when n = 2. According to Askeland, the constant n is usually 2; Degarmo, however, claims it is between 1.5 and 2. The mold constant B can be calculated using the following formula: formula_3 where "T"m = melting or freezing temperature of the liquid (in kelvins), "T"0 = initial temperature of the mold (in kelvins), Δ"T"s = "T"pour − "T"m = superheat (in kelvins), "L" = latent heat of fusion (in [J·kg−1]), "k" = thermal conductivity of the mold (in [W·m−1·K−1]), "ρ" = density of the mold (in [kg·m−3]), "c" = specific heat of the mold (in [J·kg−1·K−1]), "ρ"m = density of the metal (in [kg·m−3]), "c"m = specific heat of the metal (in [J·kg−1·K−1]). The rule is most useful in determining whether a riser will solidify before the casting, because if the riser solidifies first, defects like shrinkage or porosity can form in the casting. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
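As a numerical illustration of the two formulas above, the following sketch computes the mold constant B and compares solidification times for a casting and a candidate riser. The material and mold values are illustrative placeholders, loosely in the range of aluminium cast in a silica sand mold, and are not taken from this article.

```python
# Worked sketch of Chvorinov's rule, t = B * (V/A)^n, with B from the
# mold-constant expression given above.  All numbers are illustrative.
from math import pi

def mold_constant(T_m, T_0, dT_s, L, k, rho, c, rho_m, c_m):
    """Chvorinov mold constant B (units s/m^2 when n = 2)."""
    return ((rho_m * L / (T_m - T_0)) ** 2
            * (pi / (4.0 * k * rho * c))
            * (1.0 + (c_m * dT_s / L) ** 2))

def solidification_time(volume, area, B, n=2.0):
    """Solidification time t = B * (V/A)^n."""
    return B * (volume / area) ** n

B = mold_constant(T_m=933.0, T_0=298.0, dT_s=80.0, L=3.98e5,
                  k=0.6, rho=1600.0, c=1170.0, rho_m=2700.0, c_m=900.0)
t_casting = solidification_time(volume=1.0e-3, area=0.06, B=B)   # a 10 cm cube
t_riser   = solidification_time(volume=3.0e-4, area=0.012, B=B)  # a candidate riser
print(f"B = {B:.0f} s/m^2, casting t = {t_casting:.0f} s, riser t = {t_riser:.0f} s")
print("Riser solidifies after the casting:", t_riser > t_casting)
```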
[ { "math_id": 0, "text": "t = B \\left( \\frac{V}{A} \\right)^n" }, { "math_id": 1, "text": "t = BM^n" }, { "math_id": 2, "text": " M = \\frac{V}{A}" }, { "math_id": 3, "text": "B = \\left[ \\frac{\\rho_\\mathrm{m} L}{\\left(T _\\mathrm{m} - T_0\\right)} \\right ]^2 \\left[ \\frac{\\pi}{4k\\rho c} \\right] \\left[ 1 + \\left(\\frac{c_\\mathrm{m}\\Delta T_\\mathrm{s}}{L} \\right)^2 \\right]," } ]
https://en.wikipedia.org/wiki?curid=7508653
75091007
Nelly Litvak
Russian and Dutch applied mathematician Nelly Vladimirovna Litvak (born January 27, 1972) is a Russian and Dutch applied mathematician whose research includes the study of complex networks, stochastic processes, and their applications in medical logistics. Formerly a professor at the University of Twente, she moved to the Eindhoven University of Technology in 2023. Education and career. Nelly Litvak was born in Gorky (now Nizhny Novgorod), Russia, to Nina Zvereva and Vladimir Antonets. In 1995 Litvak graduated from the N. I. Lobachevsky State University of Nizhny Novgorod, and in 1998 she earned the candidate's degree in physical and mathematical sciences from the same university with the dissertation "Adaptive Control of Conflicting Flows" supervised by Mikhail Andreevich Fedotkin. In 1999 she moved to the Netherlands. She completed another doctoral degree in 2002, a Ph.D. from the Eindhoven University of Technology, with the dissertation "Collecting formula_0 Items Randomly Located on a Circle", completed at EURANDOM and jointly supervised by Ivo Adan, Jaap Wessels, and Henk Zijm. She became a lecturer at the University of Twente in 2002, and was promoted to associate professor in 2011 and full professor in 2018. In 2017, she took on a second part-time affiliation with the Eindhoven University of Technology, and in 2023 she moved from Twente to a full-time position at the Eindhoven University of Technology, as professor of algorithms for complex networks. Books. Litvak is the author of several popular science books. Recognition. Litvak was awarded the 2002 Stieltjes Prize for her Ph.D. dissertation. She was the 2011 recipient of the Professor De Winter award of the University of Twente, given annually to recognize the research of a female faculty member. She was named the university's teacher of the year in 2022. Personal. While in the Netherlands, Litvak married Pranab Mandal. She has two daughters: Natalia (born in 1993, from her first marriage) and Piyali (born in 2005, to Nelly and Pranab). She has a younger sister, Yekaterina, and a brother, Pyotr. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=75091007
75094
Archibald Hill
British physiologist (1886–1977) Archibald Vivian Hill (26 September 1886 – 3 June 1977), better known to friends and colleagues as A. V. Hill, was a British physiologist, one of the founders of the diverse disciplines of biophysics and operations research. He shared the 1922 Nobel Prize in Physiology or Medicine for his elucidation of the production of heat and mechanical work in muscles. Biography. Born in Bristol, he was educated at Blundell's School and graduated from Trinity College, Cambridge as third wrangler in the mathematics tripos before turning to physiology. While still an undergraduate at Trinity College, he derived in 1909 what came to be known as the Langmuir equation. This is closely related to Michaelis-Menten kinetics. In this paper, Hill's first publication, he derived both the equilibrium form of the Langmuir equation, and also the exponential approach to equilibrium. The paper, written under the supervision of John Newport Langley, is a landmark in the history of receptor theory, because the context for the derivation was the binding of nicotine and curare to the "receptive substance" at the neuromuscular junction. While a student he had enrolled in the Officers Training Corps; he was a crack shot. In 1914, at the outbreak of the First World War, Hill became the musketry officer of the Cambridgeshire Regiment. The British made no effort to make use of their scientists. At the end of 1915, while home on leave he was asked by Horace Darwin from the Ministry of Munitions to come for a day to advise them on how to train anti-aircraft gunners. On site, Hill immediately proposed a simple two mirror method to determine airplanes' heights. Transferred to Munitions, he realized that the mirrors could measure where smoke shells burst and if he fitted this data with the equations describing a shell's flight they could provide accurate range tables for anti-aircraft guns. To measure and compute he assembled the Anti-Aircraft Experimental Section, a team of men too old for conscription, Ralph H. Fowler (a wounded officer), and lads too young for service including Douglas Hartree, Arthur Milne and James Crowther. Someone dubbed his motley group "Hill's Brigands", which they proudly adopted. Later in the war they also worked on locating enemy planes from their sound. He sped between their working sites on his beloved motorcycle. At the end of the war, Major Hill issued certificates to more than one hundred Brigands. He was appointed an Officer of the Order of the British Empire (OBE). In 1923, he succeeded Ernest Starling as professor of physiology at University College London, a few years later becoming a Royal Society Research professor there, where he remained until retirement in 1951. In 1933, he became with William Beveridge and Lord Rutherford a founder member and vice-president of the Academic Assistance Council (which in 1936 became the Society for the Protection of Science and Learning). By the start of the Second World War, the organisation had saved 900 academics (18 of whom went on to win Nobel Prizes) from Nazi persecution. He prominently displayed in his laboratory a toy figure of Adolf Hitler with saluting arm upraised, which he explained was in gratitude for all the scientists Germany had expelled, some of whom were now working with him. Hill believed that "Laughter is the best detergent for nonsense". In 1935, he served with Patrick Blackett and Sir Henry Tizard on the committee that gave birth to radar. 
He was also biological secretary of the Royal Society; William Henry Bragg was president. Both had been frustrated by the delay in putting scientists to work in the previous war. The Royal Society collated a list of scientists and Hill represented the Society at the Ministry of Labour. When the war came, Hill led a campaign to liberate refugee scientists who had been interned. He served as an independent Member of Parliament (MP) for Cambridge University from 1940 to 1945. In 1940, he was posted to the British Embassy in Washington to promote war research in the still neutral United States. He was authorized to swap secrets with the Americans, but this could not work: how do you place a value on another's secret? Hill saw the answer and persuaded the British to show the Americans everything they were working on (except for the atomic bomb). The mobilization of Allied scientists was one of the major successes in the war. He visited India between November 1943 and April 1944 to survey scientific and technological research. His suggestions influenced the establishment of the Indian Institutes of Technology (IITs) in the following decade. After the war he rebuilt his laboratory at University College and vigorously carried on research. In 1951, his advocacy was rewarded by the establishment of a Biophysics Department under his leadership. In 1952, he became head of the British Association for the Advancement of Science and Secretary General of the International Council of Scientific Unions. He was President of the Marine Biological Association from 1955 to 1960. In 1967, he retired to Cambridge where he gradually lost the use of his legs. He died "held in the greatest affection by more than a hundred scientific descendants all over the world". Cooperativity of protein binding and enzyme kinetics. Although Hill's work in muscle physiology is probably the most important, and certainly responsible for his Nobel Prize, he is also very well known in biochemistry for the "Hill equation", which is used to quantify binding of oxygen to haemoglobin, written here as a kinetic equation: formula_0 Here formula_1 is the rate of reaction at concentration formula_2 of substrate, formula_3 is the rate at saturation, formula_4 is the value of formula_2 that gives formula_5, and the exponent formula_6 is a parameter that expresses the degree of departure from Michaelis-Menten kinetics: "positive cooperativity" for formula_7, no cooperativity for formula_8, and "negative cooperativity" for formula_9. Note that there is no implication that formula_6 is an integer, and in most experimental cases, apart from the trivial case of formula_8, it is not. Although many authors use formula_10 (or a plain "n") rather than formula_6, these symbols are misleading if taken to imply that the exponent gives the number of binding sites on the protein. Hill himself avoided any such interpretation. The equation can be rearranged as follows: formula_11 This shows that when the Hill equation is accurately obeyed (which it usually is not), a plot of formula_12 against the logarithm of the substrate concentration gives a straight line of slope formula_6. This is called a "Hill plot". Muscle physiology. Hill made many exacting measurements of the heat released when skeletal muscles contract and relax. A key finding was that heat is produced during contraction, which requires investment of chemical energy, but not during relaxation, which is passive. His earliest measurements used equipment left behind by the Swedish physiologist Magnus Blix; with it, Hill measured a temperature rise of only 0.003 °C.
After publication he learned that German physiologists had already reported on heat and muscle contraction, and he went to Germany to learn more about their work. He continually improved his apparatus to make it more sensitive and to reduce the time lag between the heat released by the preparation and its recording by his thermocouple. Hill is regarded, along with Hermann Helmholtz, as one of the founders of biophysics. Hill returned briefly to Cambridge in 1919 before taking the chair in physiology at the Victoria University of Manchester in 1920 in succession to William Stirling. Using himself as the subject — he ran every morning from 7:15 to 10:30 — he showed that running a dash relies on energy stores which afterwards are replenished by increased oxygen consumption. Paralleling the work of the German Otto Fritz Meyerhof, Hill elucidated the processes whereby mechanical work is produced in muscles. The two shared the 1922 Nobel Prize in Physiology or Medicine for this work. Hill introduced the concepts of maximal oxygen uptake and oxygen debt in 1922. Personal life. In 1913, he married Margaret Neville Keynes (1885-1974), daughter of the economist John Neville Keynes, and sister of the economist John Maynard Keynes and the surgeon Geoffrey Keynes. They had two sons and two daughters. Blue plaque. On 9 September 2015, an English Heritage Blue plaque was erected at Hill's former home, 16 Bishopswood Road, Highgate, where he had lived from 1923 to 1967. Since then the house had been divided into flats and owned by Highgate School, where Hill was a Governor from 1929 to 1960. It has now been sold, redeveloped and renamed as Hurstbourne. In Hill's time, according to his grandson Nicholas Humphrey, regular guests at the house included 18 exiled Nobel laureates, his brother-in-law, the economist John Maynard Keynes, and also friends as diverse and unexpected as Sigmund Freud and Stephen Hawking, whom he met during the course of his long and extraordinary life. After-dinner conversations in the drawing room would inevitably involve passionate debates about science or politics. "Every Sunday we would have to attend a tea party at grandpa’s house and apart from entertaining some extraordinary guests, he would devise some great games for us, such as frog racing in the garden or looking through the lens of a (dissected) sheep’s eye". Sir Ralph Kohn FRS, who proposed the Blue plaque, said: "The Nobel Prize winner A. V. Hill contributed vastly to our understanding of muscle physiology. His work has resulted in wide-ranging application in sports medicine. As an outstanding Humanitarian and Parliamentarian, he was uncompromising in his condemnation of the Nazi regime for its persecution of scientists and others. A. V. Hill played a crucial role in assisting and rescuing many refugees to continue their work in this country". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
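As a numerical illustration of the Hill plot described in the cooperativity section above, the brief sketch below uses synthetic data with arbitrary parameter values (not measurements from Hill's work): rates are generated from the Hill equation, and the exponent h is recovered as the slope of ln[v/(V − v)] against ln a.

```python
# Synthetic illustration of the Hill plot: the log-transformed rates fall on a
# straight line whose slope is the Hill exponent h.
import numpy as np

V, K_half, h = 10.0, 5.0, 2.3          # saturation rate, K_0.5, Hill exponent (arbitrary)
a = np.logspace(-1, 2, 50)             # substrate concentrations
v = V * a**h / (K_half**h + a**h)      # Hill equation

x = np.log(a)
y = np.log(v / (V - v))                # Hill-plot transform
slope, intercept = np.polyfit(x, y, 1) # straight-line fit
print(f"fitted slope (Hill exponent)  ~ {slope:.3f}")            # ~2.3
print(f"recovered K_0.5               ~ {np.exp(-intercept / slope):.3f}")  # ~5
```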
[ { "math_id": 0, "text": "v = V\\frac{a^h}{K_{0.5}^h + a^h}" }, { "math_id": 1, "text": "v" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "V" }, { "math_id": 4, "text": "{K_{0.5}}" }, { "math_id": 5, "text": "v = 0.5 V" }, { "math_id": 6, "text": "h" }, { "math_id": 7, "text": "h>1" }, { "math_id": 8, "text": "h=1" }, { "math_id": 9, "text": "h<1" }, { "math_id": 10, "text": "{n_\\mathrm{H}}" }, { "math_id": 11, "text": " \\ln [v/(V-v )] = h \\ln a - h \\ln K_{0.5} " }, { "math_id": 12, "text": " \\ln [v/(V-v)] " } ]
https://en.wikipedia.org/wiki?curid=75094
751050
Autobianchi Y10
Car manufactured from 1985 to 1995 The Autobianchi Y10 is a supermini and economy car manufactured from 1985 to 1995 and marketed under the Lancia brand in most export markets (as Lancia Y10). The car was manufactured at Fiat's Autobianchi plant in Desio, Milan until 1992 and after that in Arese, near Alfa Romeo's plants. In addition to a relatively high level of trim for its market segment, the Y10 featured a new rear rigid axle suspension design (called Omega axle), subsequently shared with the facelifted Fiat Panda. In spite of its short overall length, the Y10 had a drag coefficient of just 0.31. Production totaled approximately 850,000 in the first seven years, in spite of being a pricier, more niche-oriented product than its Fiat siblings. In addition to its unique style and luxurious trim, the Y10's aerodynamics improved its fuel economy. Sales in the United Kingdom were never strong, and it was withdrawn in late 1991. This was more than two years before Lancia withdrew entirely from Britain and all other RHD markets. Lancia remained in the segment in left-hand drive markets with the similarly marketed Ypsilon. Specification. The Autobianchi Y10 made its official debut at the Geneva Motor Show in March 1985 as a replacement for the fifteen-year-old A112. It lost out for the title of European Car of the Year for 1986 to the Ford Scorpio. Design. The A112 remained on sale, alongside the Y10, almost to the end of 1986. The Y10 was sold under the Autobianchi marque in Italy, Portugal, France, and Japan. In most other markets it was sold under the Lancia marque. In Portugal it remained an Autobianchi until 1989, and was badged Lancia thenceforth. The Y10's most important design innovation was the vertical tail cut-off, characterized by the tailgate painted in black satin, regardless of the colour chosen for the body. The notable wedge shape of the small body is very aerodynamic (formula_0 0.31) thanks to the hood, which is inclined towards the curved windscreen, and side windows that were mounted flush with the bodywork, as well as the absence of drip rails, the recessed door handles and a roof tapering slightly toward the back. All these features were unique for a small car in 1985 and were developed by the Fiat Style Centre, at that time led by Vittorio Ghidella. Defining the Y10 project took more than three years of study. It was necessary that this car have a very specific identity to allow it to be unequivocally placed under the Lancia brand. The design work was originally entrusted to Pininfarina and Giorgetto Giugiaro as well as the Fiat Style Centre. Hundreds of drawings were executed with sketches and scale models starting as early as 1980. Eventually a design by the Fiat Style Centre was selected and developed, one that best responded to the theme of a car destined for a particular, "select and elite" user: a miniature flagship, desired by women as much as a handbag and identified by men with their favorite perfume. The car had rectangular headlights and a simple grille. The windshield, installed with silicone resin instead of a rubber gasket, was large, sloped and characterized by a single wiper. The side was characterized by a belt line from the base of the hood rising gradually as it approached the lower edge of the tailgate, delimited by the wrap-around taillights. The Y10 was offered only with a three-door body with relatively large doors, offering easy access even for rear passengers.
The tailgate was on a nearly vertical plane, hinged to make it more practical to access the luggage compartment by shifting the point of rotation eleven centimeters towards the center of the roof. The rear lights were of a horizontal layout, similar to the A112 with a wrap around rear bumper fascia. The car shares mechanicals with the Fiat Panda and was largely conventional in its layout, with a transversely mounted engine and front- or four-wheel-drive. The fuel capacity, at , was at least fifty percent more than what was usual for the class. Interior. The interior was carpeted and featured cloth upholstery on the seats and optional Alcantara upholstery on the dashboard, seats and door panels, and options such as electric windows, central locking, split rear bench seat, rear window hinged electrically, glass sunroof and climate control system with electronic controls and LED display, similar to that already adopted on Fiat Regata. First series (1985–1988). The attention and interest shown by the international public at the Geneva Motor Show, gave Fiat-Lancia Group hope that the new Y10 met the public's taste, but sales in the first month struggled. There were few people prepared to pay the Y10's asking price. In 1985 Autobianchi-Lancia assembled 63,495 Y10s, to be compared with 88,292 of the aging A112 having been built the year before. As a response, Lancia readjusted the lineup with a lower-priced Y10 Fire entry model while adding equipment to the rest of the range. Technically speaking, according to an official communique from the time of introduction, the model names were to be written in lower-caps ("Y10 fire, touring, turbo"). All versions came with a five speed gearbox, front-wheel drive, a front transverse engine, a MacPherson strut suspension and a front and rear rigid "omega" axle. 1985. Y10 Fire was equipped with the new four-cylinder Fire (Fully Integrated Robotised Engine), with displacement of and maximum power of at 5000 rpm, with low fuel consumption, low noise, low-power, allowing it to exceed and to accelerate from in 16 seconds. Y10 Touring: It was powered by a Brazilian-built motor of , which was also used in the Brazilian Fiat Uno. This has a maximum power of at 5850 rpm and maximum torque of at 3000 rpm, and was developed from Fiat's Lampredi-designed four-cylinder that originally equipped the Fiat 127. The Touring is externally identical to the Y10 Fire, aside from the trunk badging, while on the carrier can be seen the Alcantara in place of the fabric coverings. The Touring had a maximum speed of and acceleration from in 14.5 seconds. Y10 Turbo: The turbo had the same basic Brazilian FIASA engine as the Touring. However, it was equipped with an IHI turbocharger with intercooler, with a maximum power of at 5750 rpm and maximum torque of at 2750 rpm. Compared to the naturally aspirated 1050 in the Touring, it was distinguished by sodium filled exhaust valves, electric fuel pump, and a Magneti Marelli electronic ignition "Digiplex". It boasted a Turbocharging system from Formula 1 in miniature with intercooler, bypass valve, and thermostatic valve. This version had a top speed of and accelerated from in 9.5 seconds. It was externally recognizable by the presence of a red piping on the bumpers, for the adhesive band at the base of the side with the writing Turbo, larger bumpers at the front to accommodate the intercooler, and at the rear to accommodate the transversal exhaust backbox exclusive to this model with a polished tail pipe. 
This model was larger than the other versions and quieter on the inside. It had an unusual wheel design, with some models being equipped with 340 mm metric wheels (between 13 and 14 inches in diameter), sportier seats, different mouldings and trim, plus more analogue instruments including a turbo boost pressure gauge. 1986. In 1986, new versions were added and some of them were less expensive. The range included the Fire, Fire LX, Touring and Turbo models. All had a height-adjustable steering wheel. Sales finally began to grow. At the end of 1986 a 4WD version was introduced; total production of all models in 1986 was 80,403 units. Y10 Fire: The new entry-level version, the Fire, was now offered with less standard equipment and at a price estimated to be one million lire lower. This model was recognizable by its front end, with a matte black frame and a black grille (in contrast to the 1985 Fire, which had a polished stainless steel frame and a silver grille); the taillights were simplified and asymmetrical, with a single reversing light on the right and a single rear fog light on the left. Inside it had cloth upholstery and a new dashboard, whose compartments and drawers had no lids. Y10 Fire LX: Between the Fire and the Touring was a new model, the Fire LX. In essence, it was the 1985 version of the Fire and differed from the new basic version: it looked richer, had self-closing drawers and compartments, and was upholstered in Alcantara. Electric windows, central locking, an overhead digital clock, Borletti Veglia Flash and reading lights were all standard. Y10 Touring: The Touring version remained unchanged in price but offered more equipment that included front electric windows, central locking and a digital clock. The engine remained the same, the naturally aspirated 1049 cc four. Y10 Turbo: The Y10 Turbo (similar to the Touring) offered standard front electric windows, central locking and an internal digital clock. It had an improved finish compared with previous models, with better-assembled plastics. Some models offered rear electric windows, digital climate control and an Alcantara interior. Y10 4WD: At the end of October 1986, the range was extended further with the debut of the Y10 4WD, a four-wheel-drive version (derived from the traction system of the Panda 4x4 and produced in a joint venture with Steyr), equipped with the Fire of , with power increased to at 5500 rpm. It could be called a true SUV ahead of its time in its design, finish and mechanical content. The all-wheel drive could be engaged by a button on the dash, and a complex and modern electro-pneumatic system left the propeller shaft and rear axle shafts stationary when four-wheel drive was switched off; drive was engaged by pressing the button with the engine running and the car stopped, or at least travelling below a certain speed, while above that speed the drive would only engage once the car had slowed back below it. In addition, to prevent an accumulation of ice, mud or snow from clogging the actuators of the transmission control, the system was engaged automatically when the engine was switched off. The 4WD was easy to recognize thanks to its large plastic side cladding, wheel rims of a specific design without any hub caps, and front and rear splash guards; to distinguish it further, unique identification was written on the tailgate, on the side cladding and on the splash guards as standard. Inside there was a new covered steering wheel.
The standard package included: a right external mirror, an optional headlamp cleaning system, an instrumentation and control system with tachometer, sunroof, electric windows, central locking, split rear seat, and a height-adjustable steering wheel. The Y10 4WD had a top speed of and accelerated from 0 to in 17.4 seconds. 1987–1988. In the period 1987–88 the small Autobianchi became an increasingly mature product, with a precise position in the market attracting a varied clientele. It had now achieved its expected success, and drew attention at automotive events with the presentation of "special series": enhanced and exclusive versions, with features and details in their own right that were not available on other cars, once again emphasizing the variety available to buyers of the Y10. Twenty years later, special versions tied to a brand, not necessarily from the automotive field, are familiar to all and are now a permanent part of the vast majority of manufacturers' catalogues. These were trendy vehicles that were good looking, linked to specific lifestyles, and acted as "status symbols" indicative of belonging to a specific group. In 1987, 109,708 Y10s were assembled, bringing the total built since 1985 to 254,000. Alongside the standard versions in the range, special versions were also produced. Y10 Fila: The first special version to debut, in February 1987, was the Y10 Fila, a model aimed primarily at young and dynamic customers and bearing the signature of the Biella sports- and leisurewear brand of the same name. Mechanically derived from the Y10 Fire (1986 range), whose standard accessories it retained, it was very easy to recognize because it was fully painted in white: not only the body but also the tailgate, bumpers, front grille and wheel trims. To break the monotony, adhesive strips in black and blue (or black and red) ran along the beltline, culminating towards the door with the famous Fila mark. The seats and door panels were covered with blue (or red) fabric, and the Fila logo was placed on the backs of the front seats. The success of this first special version led to a second version, called Fila 2, whose body was painted black, this time excluding the hubcaps, bumpers and front grille. The strip that ran along the side was white and red, as was the interior fabric. Y10 Martini: The Y10 Martini, released a few months after the first Fila edition, arrived at dealerships in June 1987. Made to celebrate the sporting association with Martini &amp; Rossi, which for decades was linked to Lancia's successes in the racing world with its incomparable Delta, the Y10 Martini was based on the Turbo and was available only in white. White was also used for the wheel covers (alloy wheels were available as an option), while the bumpers were wrap-around and lowered; on the Turbo these had been unpainted. The side carried a strip in the Martini racing team's winning colours, which were also used for the seat fabric and door panels. Y10 Missoni: In October 1987, the Y10 Missoni, derived from the Fire LX and signed by the famous fashion designer Ottavio Missoni (who appeared in a television spot next to his creation), was released. He chose the exclusive Memphis Blue colour for the body, combined with black doors. The colour was repeated in the interior fabrics, while the Alcantara dashboard and door panels were in a hazelnut shade. The seats had a striped "Missonato" velvet fabric, and the carpet was coordinated with the exterior colour.
To make this version recognizable, the Missoni mark was applied to the rear part of the side, halfway between the swage line and the rear window. Second series (1989–1992). In February 1989, Autobianchi presented the second series of the Y10, characterized by minor revisions to the interior and the engines. All models now had newly designed wheel trims (except the 4WD), white front indicators, and rear light clusters (symmetrical and now the same for all versions) made in two colours: smoked reversing lights, red rear fog lights, red side and stop lights (dual-filament), and smoked indicators. The Fire had a new radiator grille; since 1986 the outer frame had been trimmed with stainless steel, but this version had a single-colour grille devoid of chrome. Inside, the back seat was changed to increase the load capacity of the trunk, its backrest made more vertical and less padded. The door panels changed and now incorporated switches at the bottom of the front doors, electric windows (optional on the Fire), and speakers. The dashboard remained without lidded compartments. A new fabric was used for the upholstery. All new models had instrument panels with new graphics, and, except for the Fire, a height-adjustable steering wheel. Air conditioning was now standard and provided a manual internal recirculation function or, on request, a new digital climate control system, this time with electronic temperature control and a display, including an "auto" setting for maintaining a constant temperature. The previous range had included the Fire, the Fire LX, the Touring, the Turbo and the special series; only the Fire remained, as all the other versions went out of production along with the engine. This engine was replaced by the Fire of , which debuted in 1988 with the launch of the Fiat Tipo and, in the case of the Y10, was equipped with electronic fuel injection and characterized by a maximum power of at 5500 rpm and a maximum torque of at 3000 rpm; the resulting Fire LX i.e. could reach and accelerate from in 13.9 seconds. The Y10 was withdrawn from sale on the British market at the end of 1991, by which time Lancia sales were in steep decline, not helped by the recession. Just over two years later Lancia withdrew from Britain completely. 1989 and introduction of electronic injection. Y10 GT i.e.: The 1050 turbo engine was dropped because of pollution regulations in countries like Switzerland, Austria and Germany. The new engine, made in Brazil, had electronic multi-point fuel injection ("Bosch L3.1 Jetronic") and was derived from the previous 1050. It produced a maximum power of at 5750 rpm and a maximum torque of at 3250 rpm. This gave the new 1300 GT i.e. version the capability of reaching and accelerating from in 11.5 seconds. It also promised more comfort and a smoother drive than the raucous turbo version. With the discontinuation of the Turbo, performance was somewhat adversely affected, but the GT was much more "usable", becoming more of an "all-rounder". The GT was characterized by a red border framing the front grille, an adhesive identification strip running along the lower edge of the side, original hubcaps (with optional alloy wheels), and a chrome tailpipe. The instrumentation, as on the previous Turbo, was more complete and sporty. Y10 Fire LX i.e. (and Selectronic version): The new Y10 Fire LX i.e.
was recognized externally by the lower bumper painted in the body colour, the adhesive identification strip running along the lower edge of the side, and the chrome tailpipe. Internally there were cloth seats or, on request, upholstery in Alcantara. The dashboard and door panels were upholstered in Alcantara as standard. In December 1989, Autobianchi debuted the Y10 Selectronic, an automatic transmission version with ECVT (Electronic Continuously Variable Transmission). Powered by the single-point-injection Fire engine of the LX i.e., the Selectronic's automatic transmission has continuously variable ratios with an electromagnetic clutch – not hydraulic like the Fiat Uno Selecta's – and it reaches a top speed of and accelerates from in 15 seconds. The CVT transmission was produced by Fuji of Japan and was the same as that used in the Subaru Justy. Y10 4WD i.e.: The 4WD, with its selectable all-wheel drive, exchanged its 1000 engine for the single-point-injection Fire 1100 of the LX i.e. With the new engine, the 4WD could reach and accelerate from in 15 seconds. The external appearance remained unchanged, retaining the side trim of the previous version. 1990–1992. A few months later, in March 1990, a range of Y10 catalysed models was announced. For each existing version there was a corresponding model with a catalytic converter, with the exception of the carburetted Fire, which remained the top seller thanks to its lower asking price and overall economy. The engine, called "Europa" and compliant with EU Directives, in this ecological version reached a maximum power of , sufficient to allow the LX to reach , the Selectronic to reach and the 4WD to reach . The catalysed version of the GT i.e. had an engine with a maximum power of , which allowed a top speed of . Y10 Mia: The Y10 Mia introduced the ability to customize the colour scheme for the dashboard, door panels and upholstery with a choice of different shades of Alcantara (ice, camel beige, turquoise and carmine red). The Mia accounted for almost 40% of total production in 1991–92. Y10 Ego: Two months later, in September 1991, the Ego was introduced, based on the Fire LX version. It was only available with a Black Mica body colour (including the tailgate), and the interior was completely covered in "Bulgarian Red" saddle leather ("Poltrona Frau") – including the dashboard, gear lever, door panels and steering wheel. It had no foam, and had upholstered front headrests and improved tires. Y10 Avenue: In early 1992, as a result of the success of the Mia special version, the Y10 Avenue was released. It had the tailgate colour coordinated with the body. The Avenue could also be ordered with the Selectronic automatic transmission. Y10 Marazzi Certa: Derived from the Avenue, the Y10 Certa was prepared by Carrozzeria Marazzi and presented at the 1992 Turin Motor Show. Dubbed by the press "utilitarian abduction prevention", the car had been designed to offer greater resistance to attempted aggression, with a reinforced door structure, locks and bar-proof glass. An available option included a little safe in the cockpit, for the transport of personal objects. Designed for women, at the time of its launch it had a planned annual production limited to 300, priced at 24 million lire. Third series (1992–1995). In 1992 the Y10 underwent a facelift, with changes to the interior and the exterior.
The front end received a new, more compact grille, more in keeping with the style of the latest Lancia vehicles; the headlights were smaller and longer, while the front bumpers had a new design. The distinctive black rear design was now partially body-coloured, and new taillights identified the new model. From 1985 to 1992 Autobianchi-Lancia assembled over 850,000 Y10 models. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\scriptstyle C_\\mathrm d" } ]
https://en.wikipedia.org/wiki?curid=751050
75110
Biot–Savart law
Important law of classical magnetism In physics, specifically electromagnetism, the Biot–Savart law is an equation describing the magnetic field generated by a constant electric current. It relates the magnetic field to the magnitude, direction, length, and proximity of the electric current. The Biot–Savart law is fundamental to magnetostatics. It is valid in the magnetostatic approximation and consistent with both Ampère's circuital law and Gauss's law for magnetism. When magnetostatics does not apply, the Biot–Savart law should be replaced by Jefimenko's equations. The law is named after Jean-Baptiste Biot and Félix Savart, who discovered this relationship in 1820. Equation. In the following equations, it is assumed that the medium is not magnetic (e.g., vacuum). This allows for straightforward derivation of the magnetic field B, while the fundamental vector here is H. Electric currents (along a closed curve/wire). The Biot–Savart law is used for computing the resultant magnetic flux density B at position r in 3D-space generated by a filamentary current "I" (for example due to a wire). A steady (or stationary) current is a continual flow of charges which does not change with time and in which charge neither accumulates nor depletes at any point. The law is a physical example of a line integral, being evaluated over the path "C" in which the electric currents flow (e.g. the wire). The equation, in SI units of teslas (T), is formula_1 where formula_2 is a vector along the path formula_3 whose magnitude is the length of the differential element of the wire in the direction of "conventional current", formula_4 is a point on path formula_3, and formula_5 is the full displacement vector from the wire element (formula_2) at point formula_4 to the point at which the field is being computed (formula_6), and "μ"0 is the magnetic constant. Alternatively: formula_7 where formula_0 is the unit vector of formula_8. The symbols in boldface denote vector quantities. The integral is usually around a closed curve, since stationary electric currents can only flow around closed paths when they are bounded. However, the law also applies to infinitely long wires (this concept was used in the definition of the SI unit of electric current—the Ampere—until 20 May 2019). To apply the equation, the point in space where the magnetic field is to be calculated is arbitrarily chosen (formula_9). Holding that point fixed, the line integral over the path of the electric current is calculated to find the total magnetic field at that point. The application of this law implicitly relies on the superposition principle for magnetic fields, i.e. the fact that the magnetic field is a vector sum of the field created by each infinitesimal section of the wire individually. For example, consider the magnetic field of a loop of radius formula_10 carrying a current formula_11 For a point a distance formula_12 along the center line of the loop, the magnetic field vector at that point is: formula_13 where formula_14 is the unit vector along the center line of the loop (and the loop is taken to be centered at the origin). Loops such as the one described appear in devices like the Helmholtz coil, the solenoid, and the Magsail spacecraft propulsion system. Calculation of the magnetic field at points off the center line requires more complex mathematics involving elliptic integrals that require numerical solution or approximations. Electric current density (throughout conductor volume).
The formulations given above work well when the current can be approximated as running through an infinitely narrow wire. If the conductor has some thickness, the proper formulation of the Biot–Savart law (again in SI units) is: formula_15 where formula_8 is the vector from dV to the observation point formula_6, formula_16 is the volume element, and formula_17 is the current density vector in that volume (in SI units of A/m2). In terms of the unit vector formula_0: formula_18 Constant uniform current. In the special case of a uniform constant current "I", the magnetic field formula_19 is formula_20 i.e., the current can be taken out of the integral. Point charge at constant velocity. In the case of a point charge "q" moving at a constant velocity v, Maxwell's equations give the following expression for the electric field and magnetic field: formula_21 where formula_22 is the unit vector pointing from the (present, non-retarded) position of the particle to the point at which the field is measured, formula_23 with formula_24 the speed of light, formula_25 is the velocity of the particle, formula_26 is the vector from the particle's position to the field point, and "θ" is the angle between formula_25 and formula_26. When "v"2 ≪ "c"2, the electric field and magnetic field can be approximated as formula_27 formula_28 These equations were first derived by Oliver Heaviside in 1888. Some authors call the above equation for formula_19 the "Biot–Savart law for a point charge" due to its close resemblance to the standard Biot–Savart law. However, this language is misleading as the Biot–Savart law applies only to steady currents and a point charge moving in space does not constitute a steady current. Magnetic responses applications. The Biot–Savart law can be used in the calculation of magnetic responses even at the atomic or molecular level, e.g. chemical shieldings or magnetic susceptibilities, provided that the current density can be obtained from a quantum mechanical calculation or theory. Aerodynamics applications. The Biot–Savart law is also used in aerodynamic theory to calculate the velocity induced by vortex lines. In the aerodynamic application, the roles of vorticity and current are reversed in comparison to the magnetic application. In Maxwell's 1861 paper 'On Physical Lines of Force', magnetic field strength H was directly equated with pure vorticity (spin), whereas B was a weighted vorticity that was weighted for the density of the vortex sea. Maxwell considered magnetic permeability μ to be a measure of the density of the vortex sea. Hence the relationship formula_29, in which B was seen as a kind of magnetic current of vortices aligned in their axial planes, with H being the circumferential velocity of the vortices. The electric current equation, formula_30, can be viewed as a convective current of electric charge that involves linear motion. By analogy, the magnetic equation is an inductive current involving spin. There is no linear motion in the inductive current along the direction of the B vector. The magnetic inductive current represents lines of force. In particular, it represents lines of inverse square law force. In aerodynamics the induced air currents form solenoidal rings around a vortex axis. Analogy can be made that the vortex axis is playing the role that electric current plays in magnetism. This puts the air currents of aerodynamics (fluid velocity field) into the equivalent role of the magnetic induction vector B in electromagnetism. In electromagnetism the B lines form solenoidal rings around the source electric current, whereas in aerodynamics, the air currents (velocity) form solenoidal rings around the source vortex axis. Hence in electromagnetism, the vortex plays the role of 'effect' whereas in aerodynamics, the vortex plays the role of 'cause'.
Yet when we look at the B lines in isolation, we see exactly the aerodynamic scenario insomuch as B is the vortex axis and H is the circumferential velocity as in Maxwell's 1861 paper. "In two dimensions", for a vortex line of infinite length, the induced velocity at a point is given by formula_31 where Γ is the strength of the vortex and "r" is the perpendicular distance between the point and the vortex line. This is similar to the magnetic field produced on a plane by an infinitely long straight thin wire normal to the plane. This is a limiting case of the formula for vortex segments of finite length (similar to a finite wire): formula_32 where "A" and "B" are the (signed) angles between the point and the two ends of the segment. The Biot–Savart law, Ampère's circuital law, and Gauss's law for magnetism. In a magnetostatic situation, the magnetic field B as calculated from the Biot–Savart law will always satisfy Gauss's law for magnetism and Ampère's circuital law: &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof. Starting with the Biot–Savart law: formula_33 Substituting the relation formula_34 and using the product rule for curls, as well as the fact that J does not depend on formula_9, this equation can be rewritten as formula_35 Since the divergence of a curl is always zero, this establishes Gauss's law for magnetism. Next, taking the curl of both sides, using the formula for the curl of a curl, and again using the fact that J does not depend on formula_9, we eventually get the result formula_36 Finally, plugging in the relations formula_37 (where "δ" is the Dirac delta function), using the fact that the divergence of J is zero (due to the assumption of magnetostatics), and performing an integration by parts, the result turns out to be formula_38 i.e. Ampère's circuital law. (Due to the assumption of magnetostatics, formula_39, so there is no extra displacement current term in Ampère's law.) In a "non"-magnetostatic situation, the Biot–Savart law ceases to be true (it is superseded by Jefimenko's equations), while Gauss's law for magnetism and the Maxwell–Ampère law are still true. Theoretical background. The Biot–Savart law was initially discovered experimentally; it was later derived theoretically in several different ways. In The Feynman Lectures on Physics, at first, the similarity of expressions for the electric potential outside the static distribution of charges and the magnetic vector potential outside the system of continuously distributed currents is emphasized, and then the magnetic field is calculated as the curl of the vector potential. Another approach involves a general solution of the inhomogeneous wave equation for the vector potential in the case of constant currents. The magnetic field can also be calculated as a consequence of the Lorentz transformations for the electromagnetic force acting from one charged particle on another particle. Two other ways of deriving the Biot–Savart law include: 1) Lorentz transformation of the electromagnetic tensor components from a moving frame of reference, where there is only an electric field of some distribution of charges, into a stationary frame of reference, in which these charges move. 2) the use of the method of retarded potentials.
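A minimal numerical sketch of the filamentary form of the law stated earlier (this code is not part of the original article): the circular current loop discussed in the Equation section is discretized into short segments, each contributing (μ0/4π) I dl × r′ / |r′|³, and the summed field on the axis is checked against the analytic on-axis expression μ0 I R² / (2(x² + R²)^(3/2)).

```python
# Discretized Biot-Savart integration for a circular current loop, compared
# with the analytic on-axis formula.  Geometry: loop of radius R in the y-z
# plane, centred at the origin, so its axis is the x axis.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def loop_field(point, radius=1.0, current=1.0, segments=2000):
    """Magnetic field (T) at `point` due to a filamentary circular loop."""
    phi = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    dphi = 2.0 * np.pi / segments
    # segment positions and tangential length elements dl
    pos = radius * np.stack([np.zeros_like(phi), np.cos(phi), np.sin(phi)], axis=1)
    dl = radius * dphi * np.stack([np.zeros_like(phi), -np.sin(phi), np.cos(phi)], axis=1)
    r = point - pos                                    # vectors from segments to field point
    r_norm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 * current / (4.0 * np.pi) * np.cross(dl, r) / r_norm**3
    return dB.sum(axis=0)

x, R, I = 0.5, 1.0, 2.0
numeric = loop_field(np.array([x, 0.0, 0.0]), radius=R, current=I)
analytic = MU0 * I * R**2 / (2.0 * (x**2 + R**2) ** 1.5)
print("numeric  Bx =", numeric[0])   # matches the analytic value closely
print("analytic Bx =", analytic)
```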
[ { "math_id": 0, "text": "\\mathbf{\\hat r'}" }, { "math_id": 1, "text": " \\mathbf{B}(\\mathbf{r}) = \\frac{\\mu_0}{4\\pi} \\int_C \\frac{I \\, d\\boldsymbol \\ell\\times\\mathbf{r'}}{|\\mathbf{r'}|^3}" }, { "math_id": 2, "text": "d\\boldsymbol \\ell" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "\\boldsymbol \\ell" }, { "math_id": 5, "text": "\\mathbf{r'} = \\mathbf{r} - \\boldsymbol \\ell" }, { "math_id": 6, "text": "\\mathbf{r}" }, { "math_id": 7, "text": " \\mathbf{B}(\\mathbf{r}) = \\frac{\\mu_0}{4\\pi}\\int_C \\frac{I \\, d\\boldsymbol \\ell\\times\\mathbf{\\hat r'}}{|\\mathbf{r'}|^2}" }, { "math_id": 8, "text": "\\mathbf{r'}" }, { "math_id": 9, "text": "\\mathbf r" }, { "math_id": 10, "text": "R" }, { "math_id": 11, "text": "I." }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "\\mathbf B(\\hat{\\mathbf x})=\\frac{\\mu_0IR^2}{2(x^2+R^2)^{3/2}}\\hat{\\mathbf x}," }, { "math_id": 14, "text": "\\hat{\\mathbf x}" }, { "math_id": 15, "text": " \\mathbf B (\\mathbf r) = \\frac{\\mu_0}{4\\pi}\\iiint_V\\ \\frac{(\\mathbf{J}\\,dV)\\times\\mathbf r'}{|\\mathbf r'|^3}" }, { "math_id": 16, "text": "dV" }, { "math_id": 17, "text": "\\mathbf{J}" }, { "math_id": 18, "text": " \\mathbf B (\\mathbf r) = \\frac{\\mu_0}{4\\pi}\\iiint_V\\ dV \\frac{\\mathbf J\\times\\mathbf{\\hat r'}}{|\\mathbf r'|^2} " }, { "math_id": 19, "text": "\\mathbf{B}" }, { "math_id": 20, "text": " \\mathbf{B}(\\mathbf{r}) = \\frac{\\mu_0}{4\\pi} I \\int_C \\frac{d\\boldsymbol \\ell \\times \\mathbf{r'}}{|\\mathbf{r'}|^3}" }, { "math_id": 21, "text": "\\begin{align}\n \\mathbf{E} &= \\frac{q}{4\\pi \\varepsilon_0} \\frac{1 - \\beta^2}{\\left(1 - \\beta^2\\sin^2\\theta\\right)^{3/2}} \\frac{\\mathbf{\\hat r'}}{|\\mathbf r'|^2} \\\\[1ex]\n \\mathbf{H} &= \\mathbf{v} \\times \\mathbf{D} \\\\[1ex]\n \\mathbf{B} &= \\frac{1}{c^2} \\mathbf{v} \\times \\mathbf{E}\n\\end{align}" }, { "math_id": 22, "text": "\\mathbf \\hat r'" }, { "math_id": 23, "text": "\\beta = v/c" }, { "math_id": 24, "text": "c" }, { "math_id": 25, "text": "\\mathbf v" }, { "math_id": 26, "text": "\\mathbf r'" }, { "math_id": 27, "text": " \\mathbf{E} = \\frac{q}{4\\pi\\varepsilon_0}\\ \\frac{\\mathbf{\\hat r'}}{|\\mathbf r'|^2} " }, { "math_id": 28, "text": " \\mathbf{B} = \\frac{\\mu_0 q}{4\\pi} \\mathbf{v} \\times \\frac{\\mathbf{\\hat r'}}{|\\mathbf r'|^2} " }, { "math_id": 29, "text": "\\mathbf{B} = \\mu \\mathbf{H}" }, { "math_id": 30, "text": "\\mathbf{J} = \\rho \\mathbf{v} ," }, { "math_id": 31, "text": "v = \\frac{\\Gamma}{2\\pi r}" }, { "math_id": 32, "text": "v = \\frac{\\Gamma}{4 \\pi r} \\left[\\cos A - \\cos B \\right]" }, { "math_id": 33, "text": "\\mathbf B(\\mathbf r) = \\frac{\\mu_0}{4\\pi} \\iiint_V d^3\\boldsymbol{\\ell} \\, \\mathbf J (\\boldsymbol{\\ell}) \\times \\frac{\\mathbf r - \\boldsymbol{\\ell}}{|\\mathbf r - \\boldsymbol{\\ell}|^3}" }, { "math_id": 34, "text": "\\frac{\\mathbf r - \\boldsymbol{\\ell}}{|\\mathbf r - \\boldsymbol{\\ell}|^3} = -\\nabla\\left(\\frac{1}{|\\mathbf r - \\boldsymbol{\\ell}|}\\right)" }, { "math_id": 35, "text": "\\mathbf B (\\mathbf r) = \\frac{\\mu_0}{4\\pi} \\nabla\\times\\iiint_V d^3\\boldsymbol{\\ell} \\, \\frac{\\mathbf J (\\boldsymbol{\\ell})}{|\\mathbf r - \\boldsymbol{\\ell}|}" }, { "math_id": 36, "text": "\\nabla\\times\\mathbf B = \\frac{\\mu_0}{4\\pi} \\nabla \\iiint_V d^3\\boldsymbol{\\ell} \\, \\mathbf J (\\boldsymbol{\\ell})\\cdot\\nabla\\left(\\frac{1}{|\\mathbf r - \\boldsymbol{\\ell}|}\\right) - \\frac{\\mu_0}{4\\pi} \\iiint_V d^3\\boldsymbol{\\ell} \\, \\mathbf J 
(\\boldsymbol{\\ell})\\nabla^2\\left(\\frac{1}{|\\mathbf r - \\boldsymbol{\\ell}|}\\right)" }, { "math_id": 37, "text": "\\begin{align}\n \\nabla\\left(\\frac{1}{|\\mathbf r - \\boldsymbol{\\ell}|}\\right) &= -\\nabla_{\\boldsymbol{\\ell}} \\left(\\frac{1}{|\\mathbf r - \\boldsymbol{\\ell}|}\\right), \\\\\n \\nabla^2\\left(\\frac{1}{|\\mathbf r - \\boldsymbol{\\ell}|}\\right) &= -4\\pi \\delta(\\mathbf r - \\boldsymbol{\\ell})\n\\end{align}" }, { "math_id": 38, "text": "\\nabla\\times \\mathbf B = \\mu_0 \\mathbf J" }, { "math_id": 39, "text": "\\partial \\mathbf E / \\partial t = \\mathbf 0" } ]
https://en.wikipedia.org/wiki?curid=75110
75111694
Alain Manceau
French environmental mineralogist and biogeochemist Alain Manceau, born September 19, 1955, is a French environmental mineralogist and biogeochemist. He is known for his research on the structure and reactivity of nanoparticulate iron and manganese oxides and clay minerals, on the crystal chemistry of strategic metals and rare-earth elements, and on the structural biogeochemistry of mercury in natural systems, animals, and humans. Biography. Manceau is a former pupil of a school in Pontoise and then of the Lycée Henri IV in Paris, where he completed his preparatory classes before entering, in 1977, the school that is now the École Normale Supérieure de Lyon. He qualified in natural sciences in 1981, then obtained his doctorate in 1984 at the University Paris VII (now Université Paris Cité) under the direction of Georges Calas. He spent his entire academic career at the French National Centre for Scientific Research (CNRS), first as a research fellow from 1984, then as a research director from 1993 to 2022. From 1984 to 1992, he worked at the IMPMC laboratory in Paris, and from 1993 to 2022 at the ISTerre institute of the Grenoble Alpes University. He was appointed emeritus CNRS Researcher at the ENS-Lyon in 2022, and research scientist at the European Synchrotron Radiation Facility (ESRF) in 2023. In 1997, he was a visiting professor at the University of Illinois Urbana-Champaign, then adjunct professor until 2001. He was a visiting professor at the University of California, Berkeley from 2001 to 2002. Scientific works. Environmental mineralogy and geochemistry. Minerals play a key role in the biogeochemical cycling of the elements at the Earth's surface, sequestering and releasing them as they undergo precipitation, crystal growth, and dissolution in response to chemical and biological processes. Manceau's research in this field focuses on the structure of disordered minerals (clays, iron (Fe) and manganese (Mn) oxides, including ferrihydrite and birnessite), on chemical reactions at their surface in contact with aqueous solutions, and on the crystal chemistry of trace metals in these phases. In 1993, he established in collaboration with Victor Drits a structural model for ferrihydrite based on the modeling of the X-ray diffraction pattern. This model was confirmed in 2002 by Rietveld refinement of the neutron diffraction pattern, and in 2014 by simulation of the pair distribution function measured by high-energy X-ray scattering. In 1997, he and Victor Drits led the synthesis and resolution of the structure of hexagonal and monoclinic birnessite, and they showed in 2002 that the monoclinic form possesses a triclinic distortion. The hexagonal form prevails at the Earth's surface and owes its strong chemical reactivity to the existence of heterovalent Mn4+-Mn3+-Mn2+ substitutions and Mn4+ vacancies in the MnO2 layer. The Mn4+-Mn3+ and Mn3+-Mn2+ redox couples confer to this material oxidation-reduction properties used in catalysis, electrochemistry, and in the electron transfer during the photo-dissociation of water by photosystem II, while the vacancies are privileged sites for the adsorption of cations. He has characterized and modeled a number of chemical reactions occurring at the birnessite-water interface, including those of complexation of transition metals (Ni, Cu, Zn, Pb, Cd...), and oxidation of As3+ to As5+, Co2+ to Co3+, and Tl+ to Tl3+. The oxidative uptake of cobalt on birnessite leads to its billion-fold enrichment in marine ferromanganese deposits compared to seawater.
From 2002 to 2012, he applied the knowledge base acquired on the crystal chemistry of trace metals and on biogeochemical processes at mineral surfaces and the root-soil interface (rhizosphere) to the phytoremediation of contaminated soils, sediments, and abandoned mine sites. He contributed to improving the Jardins Filtrants® (Filtering Gardens) process, developed by the Phytorestore company, for treating wastewater and solid matrices by phytolixiviation, phytoextraction, and rhizofiltration. In 2022, he extended his research on the crystal chemistry of trace metals to the processes responsible for the million- to billion-fold enrichment of strategic rare-earth elements (REE) and redox-sensitive elements (cerium, thallium, platinum) in marine deposits relative to seawater. REE are associated with fluorapatite in marine sediments, whereas redox metals are oxidatively scavenged by birnessite in manganese nodules and crusts. Structural biogeochemistry of mercury. Mercury (Hg) is a global pollutant that is generated both by natural sources, such as volcanic eruptions and wildfires, and by human activities, such as coal combustion, gold mining, and the incineration of industrial waste. In aquatic and terrestrial food chains, mercury accumulates as methylmercury (MeHg), a potent toxin that affects the brain and reproductive system of animals and humans. Understanding the internal detoxification processes of MeHg in living organisms is essential for protecting wildlife and humans, and for designing treatments against mercury poisoning. In 2015, Manceau led foundational studies on the structural biogeochemistry of mercury in bacteria, plants, animals, and humans using X-ray emission spectroscopy at the ESRF. In 2021, he found that the Clark's grebe ("Aechmophorus clarkii") and the Forster's tern ("Sterna forsteri") from California, the southern giant petrel ("Macronectes giganteus") and the south polar skua ("Stercorarius maccormicki") from the Southern Ocean, and the Indo-Pacific blue marlin ("Makaira mazera") from French Polynesia detoxify the organic methylmercury-cysteine complex (MeHgCys) into the inorganic mercury-selenocysteine complex (Hg(Sec)4). A few months later, he extended this result to the long-finned pilot whale through the analysis of 89 tissues (liver, kidney, muscle, heart, brain) from 28 individuals stranded on the coasts of Scotland and the Faroe Islands. This body of work shed light on how birds, cetacea, and fishes manage to get rid of methylmercury toxicity. Demethylation of the MeHgCys complex to Hg(Sec)4 and to very poorly soluble inorganic HgSe is catalyzed by selenoprotein P (SelP), within which clusters of Hgx(Sec,Se)y nucleate and grow, likely by self-assembly of mercurial proteins as is common in biomineralization processes, to ultimately form inert, non-toxic mercury selenide (HgSe) crystals. The new Hg(Sec)4 species identified by Manceau and his collaborators was the main “missing intermediate” in the chemical reaction that helps animals survive high levels of mercury. However, because Hg(Sec)4 has a molar ratio of selenium to mercury of 4:1, four selenium atoms are required to detoxify just one mercury atom. Thus, Hg(Sec)4 severely depletes the amount of bioavailable selenium. Selenium deficiency can affect the brain and reproductive system of animals, as selenoproteins serve critical antioxidant functions in the brain and testes. His work on the Hg-Se antagonism won him the ES&amp;T 2021 Best Paper Award. 
The stepwise MeHgCys → Hg(Sec)4 + HgSe demethylation reaction is accompanied by the fractionation of the 202Hg and 198Hg isotopes, denoted δ202Hg. The δ202Hg fractionation measured on whole animal tissues (δ202Hg"t") is the sum of the fractionations of the MeHgCys, Hg(Sec)4, and HgSe species, weighted by their relative abundance: δ202Hg"t" = formula_0 "f"(Sp"i")"t" × δ202Sp"i", where δ202Sp"i" is the fractionation of each chemical species and "f"(Sp"i") is its relative abundance, or mole fraction. Manceau and his co-authors found that δ202Sp"i" can be obtained by mathematical inversion of macroscopic isotopic and microscopic spectroscopic data. The combination of isotopic and spectroscopic data on birds and cetacea revealed that dietary methylmercury and the Hg(Sec)4-SelP complex are distributed to all tissues (liver, kidney, skeletal muscle, brain) via the circulatory system, with, however, a hierarchy in the percentage of each species across tissues. Most of the detoxification process is carried out in the liver, whereas the brain, which is particularly sensitive to the neurotoxic effects of mercury, is distinguished from other tissues by a low mercury concentration and a high proportion of inert HgSe. These results appear to be transposable to humans. Publications. Manceau has published more than 200 scientific papers in Science Citation Index journals, totaling more than 24,000 citations and garnering an h-index over 90. In 2020, he was ranked 111th out of a total of 70,197 researchers in Geochemistry/Geophysics in a bibliometric study by scientists at Stanford University based on the Elsevier Scopus database. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
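The mixing relation above lends itself to a small numerical illustration. The sketch below uses entirely hypothetical tissue compositions and species signatures (not data from Manceau's studies) to show how the per-species fractionations δ202Sp"i" could in principle be recovered by least-squares inversion once the species mole fractions in several tissues are known from spectroscopy.

```python
import numpy as np

# Hypothetical example only: three mercury species and four tissues.
# Rows = tissues, columns = mole fractions of MeHgCys, Hg(Sec)4, HgSe
# (each row sums to 1), as would be estimated spectroscopically.
F = np.array([
    [0.70, 0.20, 0.10],   # muscle
    [0.10, 0.30, 0.60],   # liver
    [0.30, 0.40, 0.30],   # kidney
    [0.20, 0.10, 0.70],   # brain
])

# Assumed "true" per-species fractionations delta202Sp_i (permil), used
# here only to generate synthetic whole-tissue values.
delta_species_true = np.array([1.2, 0.4, -0.3])

# Whole-tissue fractionation is the abundance-weighted sum of the species
# fractionations: delta202Hg_t = sum_i f(Sp_i)_t * delta202Sp_i
delta_tissue = F @ delta_species_true

# Inversion: recover the species fractionations from the tissue-level
# isotopic data and the spectroscopic mole fractions (least squares).
delta_species_est, *_ = np.linalg.lstsq(F, delta_tissue, rcond=None)

print("recovered delta202Sp_i:", np.round(delta_species_est, 3))
```

With more tissues than species and noisy measurements, the same least-squares step gives a best-fit estimate rather than an exact recovery.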
[ { "math_id": 0, "text": "\\sum" } ]
https://en.wikipedia.org/wiki?curid=75111694
751117
Charles Jean de la Vallée Poussin
Belgian mathematician (1866 - 1962) Charles-Jean Étienne Gustave Nicolas, baron de la Vallée Poussin (14 August 1866 – 2 March 1962) was a Belgian mathematician. He is best known for proving the prime number theorem. The King of Belgium ennobled him with the title of baron. Biography. De la Vallée Poussin was born in Leuven, Belgium. He studied mathematics at the Catholic University of Leuven under his uncle Louis-Philippe Gilbert, after he had earned his bachelor's degree in engineering. De la Vallée Poussin was encouraged to study for a doctorate in physics and mathematics, and in 1891, at the age of just 25, he became an assistant professor in mathematical analysis. De la Vallée Poussin became a professor at the same university (as was his father, Charles Louis de la Vallée Poussin, who taught mineralogy and geology) in 1892. De la Vallée Poussin was awarded Gilbert's chair when Gilbert died. While he was a professor there, de la Vallée Poussin carried out research in mathematical analysis and the theory of numbers, and in 1905 was awarded the Decennial Prize for Pure Mathematics 1894–1903. He was awarded this prize a second time in 1924 for his work during 1914–23. In 1898, de la Vallée Poussin was appointed a correspondent of the Royal Belgian Academy of Sciences, and he became a Member of the Academy in 1908. In 1923, he became the President of the Division of Sciences. In August 1914, de la Vallée Poussin escaped from Leuven at the time of its destruction by the invading German Army of World War I, and he was invited to teach at Harvard University in the United States. He accepted this invitation. In 1918, de la Vallée Poussin returned to Europe to accept professorships in Paris at the Collège de France and at the Sorbonne. After the war was over, de la Vallée Poussin returned to Belgium; the International Union of Mathematicians was created, and he was invited to become its President. Between 1918 and 1925, de la Vallée Poussin traveled extensively, lecturing in Geneva, Strasbourg, and Madrid, and then in the United States, where he gave lectures at the Universities of Chicago, California, and Pennsylvania, and at Brown University, Yale University, Princeton University, Columbia University, and the Rice Institute of Houston. He was awarded the "Prix Poncelet" for 1916. De la Vallée Poussin was given the titles of Doctor Honoris Causa of the Universities of Paris, Toronto, Strasbourg, and Oslo; an Associate of the Institute of France; and a Member of the Pontifical Academy of Sciences, the Accademia Nazionale dei Lincei, and the academies of Madrid, Naples, and Boston. He was awarded the title of Baron by King Albert I of the Belgians in 1928. In 1961, de la Vallée Poussin fractured his shoulder, and this accident and its complications led to his death in Watermael-Boitsfort, near Brussels, Belgium, a few months later. A student of his, Georges Lemaître, was the first to propose the Big Bang theory of the formation of the Universe. Work. Although his first mathematical interests were in analysis, he became suddenly famous when he proved the prime number theorem independently of his contemporary Jacques Hadamard in 1896. Afterwards, he turned his interest to approximation theory. 
He defined, for any continuous function "f" on the standard interval formula_0, the sums formula_1, where formula_2 is the Chebyshev partial sum and the coefficients formula_3 are the coordinates of "f" in the dual basis with respect to the basis of Chebyshev polynomials, defined as formula_4 Note that the formula is also valid with formula_5 being the Fourier sum of a formula_6-periodic function formula_7 such that formula_8 Finally, the de la Vallée Poussin sums can be evaluated in terms of the so-called Fejér sums (say formula_9): formula_10 The kernel is bounded (formula_11) and obeys the property formula_12 if formula_13 Later, he worked on potential theory and complex analysis. He also published a counterexample to Alfred Kempe's false proof of the four color theorem. The Poussin graph, the graph he used for this counterexample, is named after him. "Cours d’analyse". The textbooks of his mathematical analysis course have been a reference for a long time and had some international influence. The second edition (1909-1912) is remarkable for its introduction of the Lebesgue integral. It was, in 1912, "the only textbook on analysis containing both the Lebesgue integral and its application to Fourier series, and a general theory of approximation of functions by polynomials". The third edition (1914) introduced the now classical definition of differentiability due to Otto Stolz. The second volume of this third edition was burnt in the fire of Louvain during the German invasion. The later editions were much more conservative, returning essentially to the first edition. Starting from the eighth edition, Fernand Simonart took over the revision and the publication of the "Cours d’analyse." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
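As a rough numerical sketch of these definitions (not taken from de la Vallée Poussin's own writings), the snippet below computes Chebyshev coefficients by Gauss-Chebyshev quadrature, forms the partial sums S_n, and averages S_n, ..., S_{2n-1} into the de la Vallée Poussin sum V_n; the test function |x| and the quadrature resolution are arbitrary illustrative choices.

```python
import numpy as np

def cheb_coeffs(f, num_coeffs, quad_points=200):
    """Chebyshev coefficients c_i(f) via Gauss-Chebyshev quadrature."""
    k = np.arange(quad_points)
    theta = np.pi * (k + 0.5) / quad_points
    fx = f(np.cos(theta))
    # c_i = (2/M) * sum_k f(cos theta_k) * cos(i * theta_k)
    return np.array([2.0 / quad_points * np.sum(fx * np.cos(i * theta))
                     for i in range(num_coeffs)])

def partial_sum(c, n, x):
    """Chebyshev partial sum S_n(x) = c_0/2 + sum_{i=1}^n c_i T_i(x)."""
    theta = np.arccos(x)
    return c[0] / 2 + sum(c[i] * np.cos(i * theta) for i in range(1, n + 1))

def vallee_poussin_sum(c, n, x):
    """V_n(x) = (S_n + S_{n+1} + ... + S_{2n-1}) / n."""
    return sum(partial_sum(c, m, x) for m in range(n, 2 * n)) / n

f = np.abs                      # a continuous test function on [-1, 1]
n = 8
c = cheb_coeffs(f, 2 * n)       # coefficients up to degree 2n - 1 are needed
x = np.linspace(-1, 1, 5)
print("f(x)  ", f(x))
print("S_n(x)", np.round(partial_sum(c, n, x), 4))
print("V_n(x)", np.round(vallee_poussin_sum(c, n, x), 4))
```

The averaging over S_n through S_{2n-1} is exactly what the relation to the Fejér sums expresses, so the same values could be obtained from 2F_{2n-1} - F_{n-1}.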
[ { "math_id": 0, "text": "[-1, 1]" }, { "math_id": 1, "text": " V_n=\\frac{S_n+S_{n+1}+\\cdots+S_{2n-1}}{n} " }, { "math_id": 2, "text": " S_n=\\frac{1}{2}c_0(f)+\\sum_{i=1}^n c_i(f) T_i " }, { "math_id": 3, "text": " c_i(f) \\," }, { "math_id": 4, "text": " (T_0/2,T_1,\\ldots,T_n). " }, { "math_id": 5, "text": " S_n " }, { "math_id": 6, "text": " 2\\pi" }, { "math_id": 7, "text": "F" }, { "math_id": 8, "text": " F(\\theta)=f(\\cos\\theta). \\," }, { "math_id": 9, "text": "F_n" }, { "math_id": 10, "text": "V_n=2F_{2n-1}-F_{n-1}. \\," }, { "math_id": 11, "text": "V_n \\le 3" }, { "math_id": 12, "text": "f*V_n = f \\," }, { "math_id": 13, "text": "f(x)= \\sum_{j=-n}^n a_j e^{i j x}. \\," } ]
https://en.wikipedia.org/wiki?curid=751117
75118858
Graphs with few cliques
In graph theory, a class of graphs is said to have few cliques if every member of the class has a polynomial number of maximal cliques. Certain generally NP-hard computational problems are solvable in polynomial time on such classes of graphs, making graphs with few cliques of interest in computational graph theory, network analysis, and other branches of applied mathematics. Informally, a family of graphs has few cliques if the graphs do not have a large number of large clusters. Definition. A "clique" of a graph is a complete subgraph, while a "maximal clique" is a clique that is not properly contained in another clique. One can regard a clique as a cluster of vertices, since its vertices are by definition pairwise connected by edges. The concept of clusters is ubiquitous in data analysis, such as in the analysis of social networks. For that reason, limiting the number of possible maximal cliques has computational ramifications for algorithms on graphs or networks. Formally, let formula_0 be a class of graphs. If there exists a polynomial formula_3 such that every formula_1-vertex graph formula_2 in formula_0 has formula_5 maximal cliques, then formula_0 is said to be a "class of graphs with few cliques."
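For intuition, the number of maximal cliques of a concrete graph can be checked directly. The sketch below uses a basic Bron-Kerbosch enumeration (a standard algorithm, not something prescribed by the definition above) on the complete multipartite graphs K_{3,3,...,3}, whose maximal clique count grows as 3^(n/3) (the Moon-Moser bound) and which therefore do not form a class with few cliques.

```python
from itertools import combinations

def maximal_cliques(adj):
    """Enumerate maximal cliques with the basic Bron-Kerbosch recursion.
    `adj` maps each vertex to the set of its neighbours."""
    cliques = []

    def expand(r, p, x):
        if not p and not x:
            cliques.append(r)
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}

    expand(set(), set(adj), set())
    return cliques

def moon_moser(parts):
    """Complete multipartite graph K_{3,...,3}: vertices in different
    triples are adjacent, vertices in the same triple are not."""
    n = 3 * parts
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if u // 3 != v // 3:
            adj[u].add(v)
            adj[v].add(u)
    return adj

for parts in (1, 2, 3):
    g = moon_moser(parts)
    print(f"n = {3 * parts:2d}: {len(maximal_cliques(g))} maximal cliques "
          f"(3^(n/3) = {3 ** parts})")
```

By contrast, running the same enumeration on, say, a tree or a disjoint union of triangles yields only linearly many maximal cliques, the kind of behaviour the definition is after.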
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "f(n)" }, { "math_id": 4, "text": "G\n" }, { "math_id": 5, "text": "O(f(n))" }, { "math_id": 6, "text": "T(n, \\lceil n/3 \\rceil)" }, { "math_id": 7, "text": "3^{n/3}" }, { "math_id": 8, "text": "n \\equiv 0 \\mod 3" }, { "math_id": 9, "text": "n\n" }, { "math_id": 10, "text": "T\n" }, { "math_id": 11, "text": "n-1\n" }, { "math_id": 12, "text": "8n-16 \n" }, { "math_id": 13, "text": "b\n" }, { "math_id": 14, "text": "O(n^b)\n" }, { "math_id": 15, "text": "d\n" }, { "math_id": 16, "text": "(n-d)3^{d/3}\n" }, { "math_id": 17, "text": "d \\equiv 0 \\mod 3\n" }, { "math_id": 18, "text": "n \\geq d+3\n" }, { "math_id": 19, "text": "k\n" }, { "math_id": 20, "text": "O\\left(n^{dk^{d+1}} \\right)\n" } ]
https://en.wikipedia.org/wiki?curid=75118858
7512
Concentration
Ratio of part of a mixture to the whole In chemistry, concentration is the abundance of a constituent divided by the total volume of a mixture. Several types of mathematical description can be distinguished: "mass concentration", "molar concentration", "number concentration", and "volume concentration". The concentration can refer to any kind of chemical mixture, but most frequently refers to solutes and solvents in solutions. The molar (amount) concentration has variants, such as normal concentration and osmotic concentration. Dilution is reduction of concentration, e.g. by adding solvent to a solution. The verb "to concentrate" means to increase concentration, the opposite of to dilute. Etymology. "Concentration" comes from the post-classical Latin "concentratio" (the action or act of coming together at a single place, bringing to a common center), used in 1550 or earlier; similar terms are attested in Italian (1589), Spanish (1589), English (1606), and French (1632). Qualitative description. Often in informal, non-technical language, concentration is described in a qualitative way, through the use of adjectives such as "dilute" for solutions of relatively low concentration and "concentrated" for solutions of relatively high concentration. To concentrate a solution, one must add more solute (for example, alcohol), or reduce the amount of solvent (for example, water). By contrast, to dilute a solution, one must add more solvent, or reduce the amount of solute. Unless two substances are miscible, there exists a concentration at which no further solute will dissolve in a solution. At this point, the solution is said to be saturated. If additional solute is added to a saturated solution, it will not dissolve, except in certain circumstances, when supersaturation may occur. Instead, phase separation will occur, leading to coexisting phases, either completely separated or mixed as a suspension. The point of saturation depends on many variables, such as ambient temperature and the precise chemical nature of the solvent and solute. Concentrations are often called levels, reflecting the mental schema of levels on the vertical axis of a graph, which can be high or low (for example, "high serum levels of bilirubin" are concentrations of bilirubin in the blood serum that are greater than normal). Quantitative notation. There are four quantities that describe concentration: Mass concentration. The mass concentration formula_0 is defined as the mass of a constituent formula_1 divided by the volume of the mixture formula_2: formula_3 The SI unit is kg/m³ (equal to g/L). Molar concentration. The molar concentration formula_4 is defined as the amount of a constituent formula_5 (in moles) divided by the volume of the mixture formula_2: formula_6 The SI unit is mol/m³. However, more commonly the unit mol/L (= mol/dm³) is used. Number concentration. The number concentration formula_7 is defined as the number of entities of a constituent formula_8 in a mixture divided by the volume of the mixture formula_2: formula_9 The SI unit is 1/m³. Volume concentration. The volume concentration formula_10 (not to be confused with volume fraction) is defined as the volume of a constituent formula_11 divided by the volume of the mixture formula_2: formula_12 Being dimensionless, it is expressed as a number, e.g., 0.18 or 18%; its unit is 1. There seems to be no standard notation in the English literature; the letter formula_10 used here is normative in the German literature. Related quantities. 
Several other quantities can be used to describe the composition of a mixture. These should not be called concentrations. Normality. Normality is defined as the molar concentration formula_4 divided by an equivalence factor formula_13. Since the definition of the equivalence factor depends on context (which reaction is being studied), the International Union of Pure and Applied Chemistry and National Institute of Standards and Technology discourage the use of normality. Molality. The molality of a solution formula_14 is defined as the amount of a constituent formula_5 (in moles) divided by the mass of the solvent formula_15 (not the mass of the solution): formula_16 The SI unit for molality is mol/kg. Mole fraction. The mole fraction formula_17 is defined as the amount of a constituent formula_5 (in moles) divided by the total amount of all constituents in a mixture formula_18: formula_19 The SI unit is mol/mol. However, the deprecated parts-per notation is often used to describe small mole fractions. Mole ratio. The mole ratio formula_20 is defined as the amount of a constituent formula_5 divided by the total amount of all "other" constituents in a mixture: formula_21 If formula_5 is much smaller than formula_18, the mole ratio is almost identical to the mole fraction. The SI unit is mol/mol. However, the deprecated parts-per notation is often used to describe small mole ratios. Mass fraction. The mass fraction formula_22 is the fraction of one substance with mass formula_1 to the mass of the total mixture formula_23, defined as: formula_24 The SI unit is kg/kg. However, the deprecated parts-per notation is often used to describe small mass fractions. Mass ratio. The mass ratio formula_25 is defined as the mass of a constituent formula_1 divided by the total mass of all "other" constituents in a mixture: formula_26 If formula_1 is much smaller than formula_23, the mass ratio is almost identical to the mass fraction. The SI unit is kg/kg. However, the deprecated parts-per notation is often used to describe small mass ratios. Dependence on volume and temperature. Concentration depends on the variation of the volume of the solution with temperature, due mainly to thermal expansion. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
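A short worked sketch may help connect the quantities defined above. The numbers are arbitrary illustrative values (the assumed solution volume stands in for real density data), and the molar masses are standard textbook figures.

```python
# Illustrative only: 58.44 g of NaCl dissolved in 1.00 kg of water,
# giving an assumed 1.040 L of solution.
AVOGADRO = 6.02214076e23        # 1/mol

m_solute = 58.44e-3             # kg of NaCl
M_solute = 58.44e-3             # kg/mol (molar mass of NaCl)
m_solvent = 1.000               # kg of water
M_solvent = 18.015e-3           # kg/mol (molar mass of water)
V_solution = 1.040e-3           # m^3 (assumed solution volume)

n_solute = m_solute / M_solute                      # mol
n_solvent = m_solvent / M_solvent                   # mol

mass_conc = m_solute / V_solution                   # kg/m^3 (= g/L)
molar_conc = n_solute / V_solution                  # mol/m^3
number_conc = n_solute * AVOGADRO / V_solution      # 1/m^3
molality = n_solute / m_solvent                     # mol/kg of solvent
mole_fraction = n_solute / (n_solute + n_solvent)
mass_fraction = m_solute / (m_solute + m_solvent)

print(f"mass concentration  : {mass_conc:.1f} kg/m^3 (= g/L)")
print(f"molar concentration : {molar_conc:.1f} mol/m^3 "
      f"(= {molar_conc / 1000:.3f} mol/L)")
print(f"number concentration: {number_conc:.3e} 1/m^3")
print(f"molality            : {molality:.3f} mol/kg")
print(f"mole fraction       : {mole_fraction:.4f}")
print(f"mass fraction       : {mass_fraction:.4f}")
```

Note how molality and the fractions are defined from masses and amounts alone, so they are unaffected by the assumed solution volume, whereas the three volume-based concentrations are not.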
[ { "math_id": 0, "text": "\\rho_i" }, { "math_id": 1, "text": "m_i" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "\\rho_i = \\frac {m_i}{V}." }, { "math_id": 4, "text": "c_i" }, { "math_id": 5, "text": "n_i" }, { "math_id": 6, "text": "c_i = \\frac {n_i}{V}." }, { "math_id": 7, "text": "C_i" }, { "math_id": 8, "text": "N_i" }, { "math_id": 9, "text": "C_i = \\frac{N_i}{V}." }, { "math_id": 10, "text": "\\sigma_i" }, { "math_id": 11, "text": "V_i" }, { "math_id": 12, "text": "\\sigma_i = \\frac {V_i}{V}." }, { "math_id": 13, "text": "f_\\mathrm{eq}" }, { "math_id": 14, "text": "b_i" }, { "math_id": 15, "text": "m_\\mathrm{solvent}" }, { "math_id": 16, "text": "b_i = \\frac{n_i}{m_\\mathrm{solvent}}." }, { "math_id": 17, "text": "x_i" }, { "math_id": 18, "text": "n_\\mathrm{tot}" }, { "math_id": 19, "text": "x_i = \\frac {n_i}{n_\\mathrm{tot}}." }, { "math_id": 20, "text": "r_i" }, { "math_id": 21, "text": "r_i = \\frac{n_i}{n_\\mathrm{tot}-n_i}." }, { "math_id": 22, "text": "w_i" }, { "math_id": 23, "text": "m_\\mathrm{tot}" }, { "math_id": 24, "text": "w_i = \\frac {m_i}{m_\\mathrm{tot}}." }, { "math_id": 25, "text": "\\zeta_i" }, { "math_id": 26, "text": "\\zeta_i = \\frac{m_i}{m_\\mathrm{tot}-m_i}." } ]
https://en.wikipedia.org/wiki?curid=7512
7512240
Backspread
The backspread is the converse strategy to the ratio spread and is also known as reverse ratio spread. Using calls, a bullish strategy known as the call backspread can be constructed and with puts, a strategy known as the put backspread can be constructed. Call backspread. The call backspread (reverse call ratio spread) is a bullish strategy in options trading whereby the options trader writes a number of call options and buys more call options of the same underlying stock and expiration date but at a higher strike price. It is an unlimited profit, limited risk strategy that is used when the trader thinks that the price of the underlying stock will rise sharply in the near future. A 2:1 call backspread can be created by selling a number of calls at a lower strike price and buying twice the number of calls at a higher strike. Put backspread. The put backspread is a strategy in options trading whereby the options trader writes a number of put options at a higher strike price (often at-the-money) and buys a greater number (often twice as many) of put options at a lower strike price (often out-of-the-money) of the same underlying stock and expiration date. Typically the strikes are selected such that the cost of the long puts is largely offset by the premium earned in writing the at-the-money puts. This strategy is generally considered very bearish but it can also serve as a neutral/bullish play under the right conditions. The maximum profit for this strategy is achieved when the price of the underlying security moves to zero before expiration of the options. Given these declarations: formula_0 The maximum profit per put backspread combination can be expressed as: formula_1 The maximum upside profit is achieved if the price of the underlying is at or above the upper strike price at expiration and can be expressed simply as: formula_2 The maximum loss for this strategy is taken when the price of the underlying security moves to exactly the lower strike at expiration. The loss taken per put backspread combination can be expressed as: formula_3 As a very bearish strategy. The maximum profit from this strategy is realised if the underlying moves to zero before the options expire. The maximum loss for this strategy is realised when, at expiration, the underlying has moved moderately bearishly to the price of the lower strike price. This strategy might be used when the trader believes that there will be a very sharp, downward move and would like to enter the position without paying a lot of premium, as the written puts will offset the cost of the purchased puts. As a neutral/bullish strategy. The strategy can often be placed for a net credit when the net premium earned for the written puts minus the premium paid for the long puts is positive. In this case, this strategy can be considered a neutral or bullish play, since the net credit may be kept if the underlying remains at or greater than the upper strike price when the options expire. The dynamics of The Greeks. This position has a complex profile in that the Greeks Vega and Theta affect the profitability of the position differently, depending on whether the underlying spot price is above or below the upper strike. When the underlying's price is at or above the upper strike, the position is "short" vega (the value of the position "decreases" as volatility increases) and "long" theta (the value of the position "increases" as time passes). 
When the underlying is below the upper strike price, it is long vega (the value of the position increases as volatility increases) and short theta (the value of the position decreases as time passes). In equity markets. In equity options markets (including equity indexes and derivative equities such as ETFs, but possibly excluding inverse ETFs), it has been observed that there exists an inverse correlation between the price of the underlying and the implied volatility of its options. The implied volatility will often "increase" as the price of the underlying "decreases" and vice versa. This correlation manifests itself in a beneficial way to traders in a put backspread position. Since this position is long vega when the underlying's price falls below the upper strike price, this position may offer some degree of protection to the equity options trader who did not desire a bearish move. As volatility increases so does the current value of the position which may allow the trader time to exit with reduced losses or even a small profit in some conditions. Since this position is short vega when the underlying is above the upper strike price, this dynamic is again helpful to the equity options trader. For equity markets (as described above), the call backspread does not generally offer these helpful dynamics because the generally associated changes in volatility as price moves in the equity markets may exacerbate losses on a bearish move and reduce profits on a bullish move in the underlying. In commodity futures markets. With options on commodity futures (and possibly inverse ETFs), this relationship may be reversed as the observed correlation between price movement and implied volatility is positive meaning that as prices rise, so does volatility. In this case, the call backspread trader might benefit from these effects and the put backspread trader might not.
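As a sanity check on the maximum profit and loss expressions given earlier, the sketch below evaluates the expiration payoff of a hypothetical 1:2 put backspread; the strikes, net credit, and contract size are made-up illustrative values, and transaction costs, early exercise, and margin are ignored.

```python
def put_payoff(strike, spot):
    """Intrinsic value of one long put at expiration (per share)."""
    return max(strike - spot, 0.0)

def put_backspread_pnl(spot, k_lower, k_upper, net_credit, shares=100):
    """P&L at expiration of a 1x2 put backspread:
    short 1 put at k_upper, long 2 puts at k_lower, per combination."""
    pnl_per_share = (-put_payoff(k_upper, spot)       # short put
                     + 2 * put_payoff(k_lower, spot)  # two long puts
                     + net_credit)                    # net premium collected
    return pnl_per_share * shares

K_L, K_U, CREDIT, N = 90.0, 100.0, 1.50, 100

# Closed-form extremes from the formulas in the text.
max_profit = (K_U - 2 * (K_U - K_L) + CREDIT) * N   # underlying goes to zero
max_loss = (CREDIT + K_L - K_U) * N                 # underlying ends at K_L
max_upside = CREDIT * N                             # text gives C_n per share; scaled by N here

print("payoff at S = 0    :", put_backspread_pnl(0.0, K_L, K_U, CREDIT))
print("formula max profit :", max_profit)
print("payoff at S = K_L  :", put_backspread_pnl(K_L, K_L, K_U, CREDIT))
print("formula max loss   :", max_loss)
print("payoff at S >= K_U :", put_backspread_pnl(120.0, K_L, K_U, CREDIT))
print("formula upside     :", max_upside)
```

With these placeholder numbers the worst case sits exactly at the lower strike and the position keeps the small net credit on a rally, matching the qualitative description above.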
[ { "math_id": 0, "text": "\n\\begin{align}\nK_l & = \\text{lower strike price} \\\\\nK_u & = \\text{upper strike price} \\\\\nC_n & = \\text{net credit per share} \\\\\nN & = \\text{number of shares per options contract}\n\\end{align}\n" }, { "math_id": 1, "text": "\n\\text{Maximum Profit} = \\left[ K_u - 2 \\times \\left( K_u - K_l \\right) + C_n \\right] \\times N\n" }, { "math_id": 2, "text": "\n\\text{Maximum Profit (upside)} = C_n\\,\n" }, { "math_id": 3, "text": "\n\\text{Maximum Loss} = \\left( C_n + K_l - K_u \\right) \\times N\n" } ]
https://en.wikipedia.org/wiki?curid=7512240
75122486
Monotone dualization
In theoretical computer science, monotone dualization is a computational problem of constructing the dual of a monotone Boolean function. Equivalent problems can also be formulated as constructing the transversal hypergraph of a given hypergraph, of listing all minimal hitting sets of a family of sets, or of listing all minimal set covers of a family of sets. These problems can be solved in quasi-polynomial time in the combined size of its input and output, but whether they can be solved in polynomial time is an open problem. Definitions. A Boolean function takes as input an assignment of truth values to its arguments, and produces as output another truth value. It is monotone when changing an argument from false to true cannot change the output from true to false. Every monotone Boolean function can be expressed as a Boolean expression using only logical disjunction ("or") and logical conjunction ("and"), without using logical negation ("not"). Such an expression is called a monotone Boolean expression. Every monotone Boolean expression describes a monotone Boolean function. There may be many different expressions for the same function. Among them are two special expressions, the conjunctive normal form and disjunctive normal form. For monotone functions these two special forms can also be restricted to be monotone: The dual of a Boolean function is obtained by negating all of its variables, applying the function, and then negating the result. The dual of the dual of any Boolean function is the original function. The dual of a monotone function is monotone. If one is given a monotone Boolean expression, then replacing all conjunctions by disjunctions produces another monotone Boolean expression for the dual function, following De Morgan's laws. However, this will transform the conjunctive normal form into disjunctive normal form, and vice versa, which may be undesired. Monotone dualization is the problem of finding an expression for the dual function without changing the form of the expression, or equivalently of converting a function in one normal form into the dual form. As a functional problem, monotone dualization can be expressed in the following equivalent ways: Another version of the problem can be formulated as a problem of "exact learning" in computational learning theory: given access to a subroutine for evaluating a monotone Boolean function, reconstruct both the CNF and DNF representations of the function, using a small number of function evaluations. However, it is crucial in analyzing the complexity of this problem that both the CNF and DNF representations are output. If only the CNF representation of an unknown monotone function is output, it follows from information theory that the number of function evaluations must sometimes be exponential in the combined input and output sizes. This is because (to be sure of getting the correct answer) the algorithm must evaluate the function at least once for each prime implicate and at least once for each prime implicant, but this number of evaluations can be exponentially larger than the number of prime implicates alone. It is also possible to express a variant of the monotone dualization problem as a decision problem, with a Boolean answer: &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in computer science: Is it possible to test whether two prime CNF expressions represent dual functions in polynomial time? 
It is an open problem whether monotone dualization has a polynomial time algorithm (in any of these equivalent forms). The fastest algorithms known run in quasi-polynomial time. The size of the output of the dualization and exact learning problems can be exponentially large, as a function of the number of variables or the input size. For instance, an formula_0-vertex graph consisting of formula_1 disjoint triangles has formula_2 hyperedges in its transversal hypergraph. Therefore, what is desired for these problems is an output-sensitive algorithm, one that takes a small amount of time per output clause. The decision, dualization, and exact learning formulations of the problem are all computationally equivalent, in the following sense: any one of them can be solved using a subroutine for any other of these problems, with a number of subroutine calls that is polynomial in the combined input and output sizes of the problems. Therefore, if any one of these problems could be solved in polynomial time, they all could. However, the best time bound that is known for these problems is quasi-polynomial time. It remains an open problem whether they can be solved in polynomial time. Computational complexity. Equivalence of decision, enumeration, and exact learning. The problem of finding the prime CNF expression for the dual function of a monotone function, given as a CNF formula, can be solved by finding the DNF expression for the given function and then dualizing it. Therefore, finding the dual CNF expression, and finding the DNF expression for the (primal) given function, have the same complexity. This problem can also be seen as a special case of the exact learning formulation of the problem. From a given CNF expression, it is straightforward to evaluate the function that it expresses. An exact learning algorithm will return both the starting CNF expression and the desired DNF expression. Therefore, dualization can be no harder than exact learning. It is also straightforward to solve the decision problem given an algorithm for dualization: dualize the given CNF expression and then test whether it is equal to the given DNF expression. Therefore, research in this area has focused on the other direction of this equivalence: solving the exact learning problem (or the dualization problem) given a subroutine for the decision problem. outline the following algorithm for solving exact learning using a decision subroutine: Each iteration through the outer loop of the algorithm uses a linear number of calls to the decision problem to find the unforced truth assignment, uses a linear number of function evaluations to find a minimal true or maximal false function value, and adds one clause to the output. Therefore, the total number of calls to the decision problem and the total number of function evaluations is a polynomial of the total output size. Quasi-polynomial time. A central result in the study of this problem, by Michael Fredman and Leonid Khachiyan, is that monotone dualization (in any of its equivalent forms) can be solved in quasi-polynomial time. Their algorithms directly solve the decision problem, but can be converted to the other forms of the monotone dualization problem as described in . Alternatively, in cases where the answer to the decision problem is no, the algorithms can be modified to return a witness, that is, a truth assignment for which the input formulas fail to determine the function value. 
Its main idea is to first "clean" the decision problem instance, by removing redundant information and directly solving certain easy-to-solve cases of the problem. Then, in remaining cases it branches on a carefully chosen variable. This means recursively calling the same algorithm on two smaller subproblems, one for a restricted monotone function for which the variable has been set to true and the other in which the variable has been set to false. The cleaning step ensures the existence of a variable that belongs to many clauses, causing a significant reduction in the recursive subproblem size. In more detail, the first and slower of the two algorithms of Fredman and Khachiyan performs the following steps: When this algorithm branches on a variable occurring in many clauses, these clauses are eliminated from one of the two recursive calls. Using this fact, the running time of the algorithm can be bounded by an exponential function of formula_9. A second algorithm of Fredman and Khachiyan has a similar overall structure, but in the case where the branch variable occurs in many clauses of one set and few of the other, it chooses the first of the two recursive calls to be the one where setting the branch variable significantly reduces the number of clauses. If that recursive call fails to find an inconsistency, then, instead of performing a single recursive call for the other branch, it performs one call for each clause that contains the branch variable, on a restricted subproblem in which all the other variables of that clause have been assigned in the same way. Its running time is an exponential function of formula_10. Polynomial special cases. Many special cases of the monotone dualization problem have been shown to be solvable in polynomial time through the analysis of their parameterized complexity. These include: Applications. One application of monotone dualization involves group testing for fault detection and isolation in the model-based diagnosis of complex systems. From a collection of observations of faulty behavior of a system, each with some set of active components, one can surmise that the faulty components causing this misbehavior are likely to form a minimal hitting set of this family of sets. In biochemical engineering, the enumeration of hitting sets has been used to identify subsets of metabolic reactions whose removal from a system adjusts the balance of the system in a desired direction. Analogous methods have also been applied to other biological interaction networks, for instance in the design of microarray experiments that can be used to infer protein interactions in biological systems. In recreational mathematics, in the design of sudoku puzzles, the problem of designing a system of clues that has a given grid of numbers as its unique solution can be formulated as a minimal hitting set problem. The 81 candidate clues from the given grid are the elements to be selected in the hitting set, and the sets to be hit are the sets of candidate clues that can eliminate each alternative solution. Thus, the enumeration of minimal hitting sets can be used to find all systems of clues that have a given solution. This approach has been as part of a computational proof that it is not possible to design a valid sudoku puzzle with only 16 clues. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
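To make the objects concrete, here is a small brute-force sketch (exponential time, unrelated to the Fredman-Khachiyan algorithm) that lists the minimal hitting sets of a family of sets, i.e. the hyperedges of its transversal hypergraph; run on the edge set of two disjoint triangles (n = 6) it reproduces the 3^(n/3) = 9 count mentioned above.

```python
from itertools import combinations

def minimal_hitting_sets(sets):
    """Brute-force transversal hypergraph: all inclusion-minimal sets of
    elements that intersect every set in the input family."""
    universe = sorted(set().union(*sets))
    hitting = []
    # Enumerate candidates by increasing size, so any previously found
    # hitting set that is a subset of a candidate proves it non-minimal.
    for size in range(len(universe) + 1):
        for candidate in combinations(universe, size):
            c = set(candidate)
            if all(c & s for s in sets):                 # hits every set
                if not any(h <= c for h in hitting):     # minimality check
                    hitting.append(c)
    return hitting

# Hyperedges of two disjoint triangles on vertices {0,1,2} and {3,4,5}.
triangles = [{0, 1}, {0, 2}, {1, 2}, {3, 4}, {3, 5}, {4, 5}]
transversal = minimal_hitting_sets(triangles)
print(len(transversal), "minimal hitting sets (expected 3^(6/3) = 9)")
for h in transversal:
    print(sorted(h))
```

The quasi-polynomial algorithms discussed above exist precisely because this kind of exhaustive search scales exponentially in the input, even when the output itself is small.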
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "n/3" }, { "math_id": 2, "text": "3^{n/3}" }, { "math_id": 3, "text": "2^{n-k}" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "2^n" }, { "math_id": 6, "text": "m" }, { "math_id": 7, "text": "\\log_2 m" }, { "math_id": 8, "text": "1/\\log_2 m" }, { "math_id": 9, "text": "(\\log n)^3" }, { "math_id": 10, "text": "(\\log n)^2" } ]
https://en.wikipedia.org/wiki?curid=75122486
7512317
Chen model
Economics model In finance, the Chen model is a mathematical model describing the evolution of interest rates. It is a type of "three-factor model" (short-rate model) as it describes interest rate movements as driven by three sources of market risk. It was the first stochastic mean and stochastic volatility model, and it was published in 1994 by Lin Chen, an economist, theoretical physicist, and former lecturer/professor at Beijing Institute of Technology, the American University of Beirut, Yonsei University in Korea, and Sun Yat-sen University. The dynamics of the instantaneous interest rate are specified by the stochastic differential equations: formula_0 formula_1 formula_2 In an authoritative review of modern finance ("Continuous-Time Methods in Finance: A Review and an Assessment"), the Chen model is listed along with the models of Robert C. Merton, Oldrich Vasicek, John C. Cox, Stephen A. Ross, Darrell Duffie, John Hull, Robert A. Jarrow, and Emanuel Derman as a major term structure model. Different variants of the Chen model are still being used in financial institutions worldwide. James and Webber devote a section of their book to discussing the Chen model; Gibson et al. devote a section of their review article to covering it. Andersen et al. devote a paper to studying and extending the Chen model. Gallant et al. devote a paper to testing the Chen model and other models; Wibowo and Cai, among others, devote their PhD dissertations to testing the Chen model and other competing interest rate models. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
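For intuition, the three coupled stochastic differential equations can be simulated with a basic Euler-Maruyama scheme. The parameter values below are arbitrary placeholders rather than calibrated estimates, and negative intermediate values are floored at zero as a crude way of keeping the square-root terms defined; this is only a sketch, not a production discretization.

```python
import numpy as np

def simulate_chen(r0, theta0, sigma0, kappa, nu, zeta, alpha, mu, beta, eta,
                  T=1.0, steps=1000, seed=0):
    """Euler-Maruyama simulation of the Chen short-rate model."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    r = np.empty(steps + 1)
    theta = np.empty(steps + 1)
    sigma = np.empty(steps + 1)
    r[0], theta[0], sigma[0] = r0, theta0, sigma0
    for t in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=3)  # independent Brownian increments
        r[t + 1] = r[t] + kappa * (theta[t] - r[t]) * dt \
                   + np.sqrt(r[t] * sigma[t]) * dW[0]
        theta[t + 1] = theta[t] + nu * (zeta - theta[t]) * dt \
                       + alpha * np.sqrt(theta[t]) * dW[1]
        sigma[t + 1] = sigma[t] + mu * (beta - sigma[t]) * dt \
                       + eta * np.sqrt(sigma[t]) * dW[2]
        # crude fix to keep the square-root terms real near zero
        r[t + 1] = max(r[t + 1], 0.0)
        theta[t + 1] = max(theta[t + 1], 0.0)
        sigma[t + 1] = max(sigma[t + 1], 0.0)
    return r, theta, sigma

r, theta, sigma = simulate_chen(r0=0.03, theta0=0.03, sigma0=0.02,
                                kappa=0.5, nu=0.3, zeta=0.03,
                                alpha=0.05, mu=0.4, beta=0.02, eta=0.05)
print("final short rate:", round(r[-1], 5))
```

Averaging many such paths (or using a more careful positivity-preserving scheme) is what one would actually do to price instruments under the model.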
[ { "math_id": 0, "text": " dr_t = \\kappa(\\theta_t-r_t)\\,dt + \\sqrt{r_t}\\,\\sqrt{\\sigma_t}\\, dW_1," }, { "math_id": 1, "text": " d \\theta_t = \\nu(\\zeta-\\theta_t)\\,dt + \\alpha\\,\\sqrt{\\theta_t}\\, dW_2," }, { "math_id": 2, "text": " d \\sigma_t = \\mu(\\beta-\\sigma_t)\\,dt + \\eta\\,\\sqrt{\\sigma_t}\\, dW_3." } ]
https://en.wikipedia.org/wiki?curid=7512317
75126778
Shneider-Miles scattering
Shneider-Miles scattering (also referred to as collisional scattering or quasi-Rayleigh scattering) is the quasi-elastic scattering of electromagnetic radiation by charged particles in a small-scale medium with frequent particle collisions. Collisional scattering typically occurs in coherent microwave scattering of high neutral density, low ionization degree microplasmas such as atmospheric pressure laser-induced plasmas. Shneider-Miles scattering is characterized by a 90° phase shift between the incident and scattered waves and a scattering cross section proportional to the square of the incident driving frequency (formula_0). Scattered waves are emitted in a short dipole radiation pattern. The variable phase shift present in semi-collisional scattering regimes allows for determination of a plasma's collisional frequency through coherent microwave scattering. History. Mikhail Shneider and Richard Miles first described the phenomenon mathematically in their 2005 work on microwave diagnostics of small plasma objects. The scattering regime was experimentally demonstrated and formally named by Adam R. Patel and Alexey Shashurin and has been applied in the coherent microwave scattering diagnosis of small laser-induced plasma objects. Physical description. A plasma, consisting of neutral particles, ions, and unbound electrons, responds to the oscillating electric field of incident electromagnetic radiation primarily through the motion of electrons (ions and neutral particles can often be regarded as stationary due to their larger mass). If the frequency of the incident radiation is sufficiently low and the plasma frequency is sufficiently high (corresponding to the Rayleigh scattering regime), the electrons will travel until the plasma object becomes polarized, counteracting the incident electric field and preventing further movement until the incident field reverses direction. If the frequency of the incident radiation is sufficiently high and the plasma frequency is sufficiently low (corresponding to the Thomson scattering regime), electrons will only travel a short distance before the electric field reverses direction, making collisions with other particles unlikely during a given oscillation. If the frequency of the incident radiation is intermediate and a high density of neutral particles and ions is present, electrons will travel far enough to collide many times with other particles but not far enough to significantly polarize the plasma object. This characterizes the collisional scattering regime. The linear oscillation of unbound electrons in a relatively small space gives rise to a short-dipole radiation pattern. This is analogous to a spring-mass-damper system, where the polarization of the plasma object creates the restoring force and the drag due to collisions with other particles creates the damping force. The phase shift of the scattered wave is 90° in the Shneider-Miles regime because the drag force is dominant. Note that, in this context, Rayleigh scattering is regarded as formula_1 volumetric small-particle scattering rather than an even broader short-dipole approximation of the radiation. Otherwise, Thomson scattering would fall under the banner of "Rayleigh". Mie scattering experiences a similar ambiguity. Mathematical description. The scattering cross section of an object (formula_2) is defined as the time-averaged power of the scattered wave (formula_3) divided by the intensity of the incident wave (formula_4): formula_5. 
Starting with the assumptions that a plasma object is small relative to the incident wavelength, thin relative to the skin depth, unmagnetized, and homogeneous, the scattering cross-section of the plasma object can be determined by the following equation, where formula_6 is the electron charge, formula_7 is the electron mass, formula_8 is the number of unbound electrons in the plasma object, formula_9 is the geometrically determined depolarization factor, formula_10 is the incident wave circular frequency, formula_11 is the plasma frequency, and formula_12 is the effective momentum-transfer collisional frequency (not to be confused with the collision frequency). formula_13 (The above equation is derived from the Drude-Lorentz-Sommerfeld model. It neglects transient effects of electron motion and is only qualitatively applicable to Rayleigh scattering because it neglects evanescence effects; strict consideration of boundary conditions is often required to capture the case of negative permittivity.) The total cross section can be related to the cross section of an individual electron (formula_14) according to the equation formula_15, since the electron motion will be in phase assuming that the plasma object is small relative to the incident wavelength. The scattering regime is determined by the dominant term in the denominator. Collisional scattering refers to the assumption that formula_16, allowing the total scattering cross section to be expressed as: formula_17 The collisional scattering cross-section can also be expressed in terms of the Thomson scattering cross section (formula_18), which is independent of the plasma geometry and collisional frequency, according to the following equation. formula_19
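For orientation, the collisional cross section can be evaluated numerically. The sketch below uses the per-electron form from the last equation, sigma = sigma_Thomson * (omega / nu_m)**2; the probing frequency and momentum-transfer collision rate are arbitrary illustrative values, and the physical constants are hard-coded SI values.

```python
import math

# Physical constants (SI)
E = 1.602176634e-19        # elementary charge, C
M_E = 9.1093837015e-31     # electron mass, kg
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
C = 2.99792458e8           # speed of light, m/s

# Thomson cross section written as e^4 / (6 pi eps0^2 m^2 c^4)
SIGMA_THOMSON = E**4 / (6 * math.pi * EPS0**2 * M_E**2 * C**4)

def collisional_cross_section_per_electron(omega, nu_m):
    """Per-electron quasi-Rayleigh (collisional) cross section,
    sigma = sigma_Thomson * (omega / nu_m)**2, valid when the
    nu_m * omega term dominates the denominator."""
    return SIGMA_THOMSON * (omega / nu_m) ** 2

# Illustrative values only: a 10 GHz probing microwave and a collision
# rate assumed for an atmospheric-pressure, weakly ionized plasma.
omega = 2 * math.pi * 10e9      # rad/s
nu_m = 1e12                     # 1/s (assumed momentum-transfer rate)

sigma_e = collisional_cross_section_per_electron(omega, nu_m)
print(f"Thomson cross section    : {SIGMA_THOMSON:.3e} m^2")
print(f"collisional, per electron: {sigma_e:.3e} m^2")
# For a small plasma object whose electrons oscillate in phase, the text's
# coherent relation scales the total cross section by the square of the
# number of unbound electrons.
```

The quadratic dependence on omega is the signature mentioned in the lead, in contrast to the omega^4 dependence of ordinary Rayleigh scattering.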
[ { "math_id": 0, "text": "\\omega^2" }, { "math_id": 1, "text": "\\omega^4" }, { "math_id": 2, "text": "\\sigma" }, { "math_id": 3, "text": "\\langle P_s \\rangle" }, { "math_id": 4, "text": "I_I" }, { "math_id": 5, "text": "\\sigma=\\frac{\\langle P_s \\rangle}{I_I}" }, { "math_id": 6, "text": "e" }, { "math_id": 7, "text": "m" }, { "math_id": 8, "text": "N_e" }, { "math_id": 9, "text": "\\xi" }, { "math_id": 10, "text": "\\omega" }, { "math_id": 11, "text": "\\omega_p" }, { "math_id": 12, "text": "\\nu_m" }, { "math_id": 13, "text": "\\sigma=\n\\frac{e^4}{6\\pi m^2\\epsilon_0^2c^4}\n\\frac{N_e^4\\omega^4}\n{(\\xi\\omega_p^2-\\omega^2)^2+(\\nu_m\\omega)^2}" }, { "math_id": 14, "text": "\\sigma_e" }, { "math_id": 15, "text": "\\sigma_e=\n\\sigma/N_e^2" }, { "math_id": 16, "text": "\\nu_m\\omega\\gg|\\xi\\omega_p^2-\\omega|" }, { "math_id": 17, "text": "\\sigma_{collisional}=\n\\frac{e^4}{6\\pi m^2\\epsilon_0^2c^4}\n\\frac{N_e^4\\omega^2}\n{\\nu_m^2}" }, { "math_id": 18, "text": "\\sigma_{Thompson}" }, { "math_id": 19, "text": "\\sigma_{collisional}=\n\\sigma_{Thompson}(\\frac{\\omega}{\\nu_m})^2" } ]
https://en.wikipedia.org/wiki?curid=75126778
75134428
Jankov–von Neumann uniformization theorem
In descriptive set theory the Jankov–von Neumann uniformization theorem is a result saying that every measurable relation on a pair of standard Borel spaces (with respect to the sigma algebra of analytic sets) admits a measurable section. It is named after V. A. Jankov and John von Neumann. While the axiom of choice guarantees that every relation has a section, this is a stronger conclusion in that it asserts that the section is measurable, and thus "definable" in some sense without using the axiom of choice. Statement. Let formula_0 be standard Borel spaces and formula_1 a subset that is measurable with respect to the analytic sets. Then there exists a measurable function formula_2 such that, for all formula_3, formula_4 if and only if formula_5. An application of the theorem is that, given any measurable function formula_6, there exists a universally measurable function formula_7 such that formula_8 for all formula_9.
[ { "math_id": 0, "text": "X,Y" }, { "math_id": 1, "text": "R\\subset X\\times Y" }, { "math_id": 2, "text": "f:X\\to Y" }, { "math_id": 3, "text": "x\\in X" }, { "math_id": 4, "text": "\\exists y, R(x,y)" }, { "math_id": 5, "text": "R(x,f(x))" }, { "math_id": 6, "text": "g:Y\\to X" }, { "math_id": 7, "text": "f:g(Y)\\subset X\\to Y" }, { "math_id": 8, "text": "g(f(x))=x" }, { "math_id": 9, "text": "x\\in g(Y)" } ]
https://en.wikipedia.org/wiki?curid=75134428
751519
Comparative statics
Thought experiments In economics, comparative statics is the comparison of two different economic outcomes, before and after a change in some underlying exogenous parameter. As a type of "static analysis" it compares two different equilibrium states, after the process of adjustment (if any). It does not study the motion towards equilibrium, nor the process of the change itself. Comparative statics is commonly used to study changes in supply and demand when analyzing a single market, and to study changes in monetary or fiscal policy when analyzing the whole economy. Comparative statics is a tool of analysis in microeconomics (including general equilibrium analysis) and macroeconomics. Comparative statics was formalized by John R. Hicks (1939) and Paul A. Samuelson (1947) (Kehoe, 1987, p. 517) but was presented graphically from at least the 1870s. For models of stable equilibrium rates of change, such as the neoclassical growth model, comparative dynamics is the counterpart of comparative statics (Eatwell, 1987). Linear approximation. Comparative statics results are usually derived by using the implicit function theorem to calculate a linear approximation to the system of equations that defines the equilibrium, under the assumption that the equilibrium is stable. That is, if we consider a sufficiently small change in some exogenous parameter, we can calculate how each endogenous variable changes using only the first derivatives of the terms that appear in the equilibrium equations. For example, suppose the equilibrium value of some endogenous variable formula_0 is determined by the following equation: formula_1 where formula_2 is an exogenous parameter. Then, to a first-order approximation, the change in formula_0 caused by a small change in formula_2 must satisfy: formula_3 Here formula_4 and formula_5 represent the changes in formula_0 and formula_2, respectively, while formula_6 and formula_7 are the partial derivatives of formula_8 with respect to formula_0 and formula_2 (evaluated at the initial values of formula_0 and formula_2), respectively. Equivalently, we can write the change in formula_0 as: formula_9 Dividing through the last equation by d"a" gives the comparative static derivative of "x" with respect to "a", also called the multiplier of "a" on "x": formula_10 Many equations and unknowns. All the equations above remain true in the case of a system of formula_11 equations in formula_11 unknowns. In other words, suppose formula_12 represents a system of formula_11 equations involving the vector of formula_11 unknowns formula_0, and the vector of formula_13 given parameters formula_2. If we make a sufficiently small change formula_5 in the parameters, then the resulting changes in the endogenous variables can be approximated arbitrarily well by formula_14. In this case, formula_6 represents the formula_11×formula_11 matrix of partial derivatives of the functions formula_8 with respect to the variables formula_0, and formula_7 represents the formula_11×formula_13 matrix of partial derivatives of the functions formula_8 with respect to the parameters formula_2. (The derivatives in formula_6 and formula_7 are evaluated at the initial values of formula_0 and formula_2.) Note that if one wants just the comparative static effect of one exogenous variable on one endogenous variable, Cramer's Rule can be used on the totally differentiated system of equations formula_15. Stability. The assumption that the equilibrium is stable matters for two reasons. 
First, if the equilibrium were unstable, a small parameter change might cause a large jump in the value of formula_0, invalidating the use of a linear approximation. Moreover, Paul A. Samuelson's correspondence principle states that stability of equilibrium has qualitative implications about the comparative static effects. In other words, knowing that the equilibrium is stable may help us predict whether each of the coefficients in the vector formula_16 is positive or negative. Specifically, one of the "n" necessary and jointly sufficient conditions for stability is that the determinant of the "n"×"n" matrix "B" have a particular sign; since this determinant appears as the denominator in the expression for formula_17, the sign of the determinant influences the signs of all the elements of the vector formula_18 of comparative static effects. An example of the role of the stability assumption. Suppose that the quantities demanded and supplied of a product are determined by the following equations: formula_19 formula_20 where formula_21 is the quantity demanded, formula_22 is the quantity supplied, "P" is the price, "a" and "c" are intercept parameters determined by exogenous influences on demand and supply respectively, "b" &lt; 0 is the reciprocal of the slope of the demand curve, and "g" is the reciprocal of the slope of the supply curve; "g" &gt; 0 if the supply curve is upward sloped, "g" = 0 if the supply curve is vertical, and "g" &lt; 0 if the supply curve is backward-bending. If we equate quantity supplied with quantity demanded to find the equilibrium price formula_23, we find that formula_24 This means that the equilibrium price depends positively on the demand intercept if "g" – "b" &gt; 0, but depends negatively on it if "g" – "b" &lt; 0. Which of these possibilities is relevant? In fact, starting from an initial static equilibrium and then changing "a", the new equilibrium is relevant "only" if the market actually goes to that new equilibrium. Suppose that price adjustments in the market occur according to formula_25 where formula_26 &gt; 0 is the speed of adjustment parameter and formula_27 is the time derivative of the price — that is, it denotes how fast and in what direction the price changes. By stability theory, "P" will converge to its equilibrium value if and only if the derivative formula_28 is negative. This derivative is given by formula_29 This is negative if and only if "g" – "b" &gt; 0, in which case the demand intercept parameter "a" positively influences the price. So we can say that while the direction of effect of the demand intercept on the equilibrium price is ambiguous when all we know is that the reciprocal of the supply curve's slope, "g", is negative, in the only relevant case (in which the price actually goes to its new equilibrium value) an increase in the demand intercept increases the price. Note that this case, with "g" – "b" &gt; 0, is the case in which the supply curve, if negatively sloped, is steeper than the demand curve. Without constraints. Suppose formula_30 is a smooth and strictly concave objective function where "x" is a vector of "n" endogenous variables and "q" is a vector of "m" exogenous parameters. Consider the unconstrained optimization problem formula_31. Let formula_32, the "n" by "n" matrix of first partial derivatives of formula_30 with respect to its first "n" arguments "x"1...,"x""n". The maximizer formula_33 is defined by the "n"×1 first order condition formula_34. 
Comparative statics asks how this maximizer changes in response to changes in the "m" parameters. The aim is to find formula_35. The strict concavity of the objective function implies that the Jacobian of "f", which is exactly the matrix of second partial derivatives of "p" with respect to the endogenous variables, is nonsingular (has an inverse). By the implicit function theorem, then, formula_33 may be viewed locally as a continuously differentiable function, and the local response of formula_33 to small changes in "q" is given by formula_36 Applying the chain rule and first order condition, formula_37 (See Envelope theorem). Application for profit maximization. Suppose a firm produces "n" goods in quantities formula_38. The firm's profit is a function "p" of formula_38 and of "m" exogenous parameters formula_39, which may represent, for instance, various tax rates. Provided the profit function satisfies the smoothness and concavity requirements, the comparative statics method above describes the changes in the firm's profit due to small changes in the tax rates. With constraints. A generalization of the above method allows the optimization problem to include a set of constraints. This leads to the general envelope theorem. Applications include determining changes in Marshallian demand in response to changes in price or wage. Limitations and extensions. One limitation of comparative statics using the implicit function theorem is that results are valid only in a (potentially very small) neighborhood of the optimum—that is, only for very small changes in the exogenous variables. Another limitation is the potentially overly restrictive nature of the assumptions conventionally used to justify comparative statics procedures. For example, John Nachbar discovered in one of his case studies that using comparative statics in general equilibrium analysis works best with very small, individual-level data rather than with aggregate data. Paul Milgrom and Chris Shannon pointed out in 1994 that the assumptions conventionally used to justify the use of comparative statics on optimization problems are not actually necessary—specifically, the assumptions of convexity of preferred sets or constraint sets, smoothness of their boundaries, first and second derivative conditions, and linearity of budget sets or objective functions. In fact, sometimes a problem meeting these conditions can be monotonically transformed to give a problem with identical comparative statics but violating some or all of these conditions; hence these conditions are not necessary to justify the comparative statics. Stemming from the article by Milgrom and Shannon, as well as from results obtained by Veinott and Topkis, an important strand of operational research called monotone comparative statics was developed. In particular, this theory concentrates on comparative statics analysis using only conditions that are independent of order-preserving transformations. The method uses lattice theory and introduces the notions of quasi-supermodularity and the single-crossing condition. The wide application of monotone comparative statics to economics includes production theory, consumer theory, game theory with complete and incomplete information, auction theory, and others. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
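As a minimal numerical sketch (with parameter values invented for illustration), the supply-and-demand example above can be run through the matrix formula for the multiplier and cross-checked against the closed-form value 1/(g - b) and a finite-difference estimate.

```python
import numpy as np

# Illustrative parameters: demand Q = a + b*P, supply Q = c + g*P.
a, b, c, g = 10.0, -2.0, 1.0, 1.0

# Equilibrium condition written as f(P; a) = Q_d - Q_s = 0.
def f(P, a):
    return (a + b * P) - (c + g * P)

P_eq = (a - c) / (g - b)                 # closed-form equilibrium price

# Comparative statics via the implicit function theorem:
# B dP + C da = 0  =>  dP/da = -B^{-1} C, with B = df/dP and C = df/da.
B = np.array([[b - g]])                  # df/dP
C = np.array([[1.0]])                    # df/da
multiplier = (-np.linalg.inv(B) @ C)[0, 0]
print("dP/da from the matrix formula:", multiplier)
print("dP/da from 1/(g - b)         :", 1.0 / (g - b))

# Finite-difference check: recompute the equilibrium after a small change in a.
da = 1e-6
P_eq_new = (a + da - c) / (g - b)
print("finite-difference estimate   :", (P_eq_new - P_eq) / da)

# Stability check for the adjustment process dP/dt = lambda*(Q_d - Q_s):
# the equilibrium is stable iff d(dP/dt)/dP = lambda*(b - g) < 0.
lam = 0.5
print("stable equilibrium?          :", lam * (b - g) < 0)
```

The same pattern extends to systems of many equations: B and C simply become the Jacobians with respect to the endogenous variables and the parameters, and the multiplier becomes a matrix.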
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "f(x,a)=0 \\," }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "B \\text{d}x + C \\text{d}a = 0." }, { "math_id": 4, "text": "\\text{d}x" }, { "math_id": 5, "text": "\\text{d}a" }, { "math_id": 6, "text": "B" }, { "math_id": 7, "text": "C" }, { "math_id": 8, "text": "f" }, { "math_id": 9, "text": "\\text{d}x = -B^{-1}C \\text{d}a ." }, { "math_id": 10, "text": "\\frac{{\\text{d}x}}{{\\text{d}a}} = -B^{-1}C." }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "f(x,a)=0" }, { "math_id": 13, "text": "m" }, { "math_id": 14, "text": "\\text{d}x = -B^{-1}C \\text{d}a" }, { "math_id": 15, "text": "B\\text{d}x + C \\text{d}a \\,=0" }, { "math_id": 16, "text": "B^{-1}C" }, { "math_id": 17, "text": "B^{-1}" }, { "math_id": 18, "text": "B^{-1}C\\text{d} a" }, { "math_id": 19, "text": "Q^{d}(P) = a + bP" }, { "math_id": 20, "text": "Q^{s}(P) = c + gP" }, { "math_id": 21, "text": "Q^{d}" }, { "math_id": 22, "text": "Q^{s}" }, { "math_id": 23, "text": "P^{eqb}" }, { "math_id": 24, "text": "P^{eqb}=\\frac{a-c}{g-b}." }, { "math_id": 25, "text": "\\frac{dP}{dt}=\\lambda (Q^{d}(P) - Q^{s}(P))" }, { "math_id": 26, "text": "\\lambda" }, { "math_id": 27, "text": "\\frac{dP}{dt}" }, { "math_id": 28, "text": "\\frac{d(dP/dt)}{dP}" }, { "math_id": 29, "text": " \\frac{d(dP/dt)}{dP} = - \\lambda(-b+g)." }, { "math_id": 30, "text": "p(x;q)" }, { "math_id": 31, "text": "x^*(q)= \\arg \\max p(x;q) " }, { "math_id": 32, "text": "f(x;q)=D_xp(x;q)" }, { "math_id": 33, "text": "x^*(q)" }, { "math_id": 34, "text": "f(x^*(q);q)=0" }, { "math_id": 35, "text": "\\partial x^*_i/ \\partial q_j, i=1,...,n, j=1,...,m" }, { "math_id": 36, "text": "D_qx^*(q)=-[D_xf(x^*(q);q)]^{-1}D_qf(x^*(q);q)." }, { "math_id": 37, "text": "D_qp(x^*(q),q)=D_qp(x;q)|_{x=x^*(q)}." }, { "math_id": 38, "text": "x_1,...,x_n" }, { "math_id": 39, "text": "q_1,...,q_m" } ]
https://en.wikipedia.org/wiki?curid=751519
75152043
Donor coordination
Donor coordination is a problem in social choice. There are several donors, each of whom wants to donate some money. Each donor supports a different set of targets. The goal is to distribute the total donated amount among the various targets in a way that respects the donors' preferences. As an example, consider a town with three recreational facilities that require funding: theater, chess club, and basketball field. There are two donors: Alice and George, each of whom wants to donate 3000. Alice wants to donate to indoor activities (theater or chess), whereas George prefers to donate to competitive activities (chess or basketball). Suppose further that the donors consider the facilities substitute goods, so that the utility of a donor is the sum of money distributed to the facilities he likes. Consider the following possible distributions: if the donors do not coordinate and each splits his donation equally between his two preferred facilities, the distribution is 1500, 3000, 1500 and each donor's utility is 4500; if they coordinate and give everything to the chess club, the distribution is 0, 6000, 0 and each donor's utility is 6000. Alternatively, one can assume that the donors consider the facilities complementary goods, so that the utility of a donor is the "minimum" amount of money distributed to a facility he likes. In this case, the uncoordinated distribution 1500,3000,1500 gives both donors utility 1500; the distribution 0,6000,0 gives both donors utility 0; but there is an even better distribution: giving 2000 to each facility gives both donors utility 2000. In both cases, coordination can improve the efficiency of the allocation. Donor coordination is a variant of participatory budgeting, in which the budget is donated by the voters themselves, rather than given by the government. Since the donations are voluntary, it is important that the coordination algorithm ensures that each voter weakly gains from participating in the algorithm, i.e., the amount contributed to projects he approves of is weakly higher when he participates than when he does not. Donor coordination has been studied in several settings, which can be broadly categorized into "divisible" and "indivisible": Divisible targets. Donor coordination with divisible targets is similar to the problem of fractional social choice, except that in the latter, the "budget" is fixed in advance (e.g. time, probability, or government funds), and not donated voluntarily by the agents. Additive binary utilities. Brandl, Brandt, Peters and Stricker study donor coordination with additive binary (dichotomous) preferences, represented by approval ballots. Formally, for each donor "i" there is a set of "approved charities" denoted by "Ai", and "i"'s utility from a distribution "d" is the total amount of money distributed to charities in "Ai": formula_0. They analyze several rules, exemplifying them in a setting with 4 targets (a,b,c,d) and 5 donors who contribute 1 each, whose approval sets are ac, ad, bc, bd, a. They also prove a strong impossibility result: there is no PB rule that satisfies the following three properties: strategyproofness, efficiency, and positivity (at least one approved project of each agent receives a positive amount). The proof reasons about 386 preference profiles and was obtained with the help of a SAT solver. Additive general utilities. Brandl, Brandt, Greger, Peters, Stricker, Suksompong study donor coordination assuming donors have additive but non-binary utilities. Formally, for each donor "i" and charity "x", there is a value "vi,x", and "i"'s utility from a distribution "d" is: formula_1. They prove that the Nash product rule incentivizes donors to contribute their entire budget, even when attractive outside options are available, while spending each donor's contribution only on projects the donor finds acceptable. 
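As a rough numerical illustration (a sketch that assumes the Nash product rule maximizes the contribution-weighted sum of log-utilities, and is not the authors' own implementation), the rule can be computed for the Alice and George example from the introduction:

```python
# Numerical sketch of the Nash product rule for the running example:
# Alice approves {theater, chess}, George approves {chess, basketball},
# and each donates 3000. Utilities are additive: u_i(d) = sum of d_x
# over the charities x approved by donor i.
import numpy as np
from scipy.optimize import minimize

targets = ["theater", "chess", "basketball"]
approvals = [np.array([1.0, 1.0, 0.0]),   # Alice
             np.array([0.0, 1.0, 1.0])]   # George
donations = np.array([3000.0, 3000.0])
total = donations.sum()

def neg_nash_welfare(d):
    utils = [a @ d for a in approvals]
    # small epsilon keeps the log finite while the optimizer explores the boundary
    return -sum(w * np.log(u + 1e-9) for w, u in zip(donations, utils))

res = minimize(neg_nash_welfare,
               x0=np.full(3, total / 3),                 # start from an even split
               method="SLSQP",
               bounds=[(0.0, total)] * 3,
               constraints=[{"type": "eq", "fun": lambda d: d.sum() - total}])
print(dict(zip(targets, np.round(res.x))))
```

The optimizer puts essentially the whole 6000 on the chess club, matching the coordinated distribution from the introductory example.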
The Nash rule is also efficient. On the down side, it is not strategyproof, and violates simple monotonicity conditions (even in the binary case). Leontief utilities. Brandt, Greger, Segal-Halevi, Suksompong study donor coordination assuming donors have Leontief utilities. This is motivated by funding charities, where it is reasonable that donors want to maximize the minimum amount given to a charity they approve. More generally, for each donor "i" and charity "j", there is a value "vi,j", and "i"'s utility from a distribution "d" is: formula_2. They define a rule called the Equilibrium Distribution Rule (EDR), which finds a pure-strategy Nash equilibrium in a game in which the donors' strategies are the possible decompositions of their donations. They prove that there always exists a unique pure Nash equilibrium, and that it can be found efficiently using convex programming, by maximizing the Nash social welfare (a sum of logarithms of agents' utilities, weighted by their donations). EDR is Pareto-efficient, group-strategyproof, and satisfies several other monotonicity properties. With binary-Leontief utilities, EDR is also egalitarian for projects and for agents (subject to decomposability), can be found efficiently using linear programming, and is attained in the limit of a best-response sequence. Quasilinear utilities. Buterin, Hitzig and Weyl present a mechanism in which donors invest money to create public goods. They assume that agents have quasilinear utilities, so without coordination, there will be under-provision of public goods due to the free-rider problem. They suggest a mechanism called "Quadratic Finance", inspired by quadratic voting. The amount received by each project "x" is formula_3, where "ci,x" is the contribution of agent "i" to project "x". They show that, in the standard model (selfish, independent, private values, quasilinear utilities), this mechanism yields the utilitarian-optimal provision of public goods. Other ways to encourage public goods provision are also discussed. They present variations and extensions of QF, and explain how it can be applied to campaign finance reform, funding open source software, news media finance, charitable giving, and urban public projects. Indivisible targets. Donor coordination with indivisible targets is similar to combinatorial participatory budgeting, except that in the latter, the budget is fixed in advance and not contributed voluntarily by the agents. Funding by donations only. Aziz and Ganguly study a variant of indivisible participatory budgeting in which there is no exogenous budget. There is a list of potential projects, each with its own cost. Each agent approves a subset of the projects, and provides an upper bound on the amount of money he can donate. The utility of each agent equals the amount of money spent on projects he approves (i.e., cost-satisfaction). The rule should specify (1) which projects are funded, and (2) how much money each donor pays. Note that, because the projects are indivisible, most donors will typically pay less than their upper bound. They study three axioms related to encouraging participation, three axioms related to efficiency, and two axioms related to fairness; finally, they study strategyproofness. They study which axioms are satisfied by three welfare-maximization rules: utilitarian, egalitarian (leximin) and Nash-product; they also study their computational complexity. 
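Returning to Quadratic Finance, the allocation formula quoted above is straightforward to compute; the following small sketch uses made-up contribution amounts purely for illustration:

```python
# Hypothetical sketch of the Quadratic Finance allocation: each project x
# receives (sum_i sqrt(c_{i,x}))^2, where c_{i,x} is agent i's contribution.
import numpy as np

# rows are agents, columns are projects; the numbers are illustrative only
contributions = np.array([
    [100.0,   0.0, 25.0],
    [100.0,  50.0,  0.0],
    [100.0,  50.0,  0.0],
    [100.0, 400.0,  0.0],
])

funding = np.sqrt(contributions).sum(axis=0) ** 2
print(contributions.sum(axis=0))  # raw totals:    [400. 500.  25.]
print(np.round(funding, 1))       # QF allocation: [1600. 1165.7  25.]
```

The first project, backed by many small contributions, ends up with more funding than the second even though its raw total is lower, which is how the mechanism rewards broad support.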
Aziz and Ganguly also conduct experiments studying the price of fairness - how much the fairness properties affect social welfare - in instances that model two real-life donor coordination scenarios: a share-house setting and a crowdfunding setting. Aziz, Gujar, Padala, Suzuki and Vollen extend the above study to agents with cardinal ballots and quasilinear utilities. They show that welfare maximization admits an FPTAS, but welfare maximization subject to a natural and weak participation requirement is strongly inapproximable. Combining donations and government funds: Donation No Harm. Chen, Lackner and Maly study an extension of indivisible participatory budgeting in which there is both an exogenous budget and potential donations. Each voter can, besides voting for projects, also donate to specific projects of his choice. The donations to each project are deducted from its cost before the PB rule is applied. Their aim is to guarantee that rich donors do not use their donations to have an unfairly large influence on the total budget. Formally, they define a condition called "Donation-No-Harm", which requires that the utility of each agent when there are donations is at least as high as his utility without donations. They also study monotonicity properties specific to the setting with donations. They assume cardinal utilities. They also assume that projects belong to possibly-overlapping categories, with upper and lower quotas on each category. They study 8 rules: 4 based on global optimization and 4 based on greedy optimization, and consider three ways to adapt these rules to the setting with donations. Besides Donation No Harm, they also study three monotonicity axioms: Donation-project-monotonicity, Donation-welfare-monotonicity, and Donation-voter-monotonicity. They also study two computational problems related to this setting. Donor coordination in inter-country aid. In the Paris Declaration of 2005, donor countries agreed to coordinate their donations in order to eliminate duplication of efforts and better align foreign aid flows with the priorities of the recipient countries. They acknowledged that aid fragmentation impairs the effectiveness of aid. However, Nunnenkamp, Ohler and Thiele show that these ideas were not implemented in practice, and donor coordination even declined. Leiderer presents specific evidence for this from aid to the health and education sectors in Zambia. 
[ { "math_id": 0, "text": "u_i(d) = \\sum_{x\\in A_i} d_x" }, { "math_id": 1, "text": "u_i(d) = \\sum_{x\\in A_i} v_{i,x} d_x" }, { "math_id": 2, "text": "u_i(d) = \\min_{x\\in A_i} d_x / v_{i,x}" }, { "math_id": 3, "text": "\\left(\\sum_{i} \\sqrt{c_{i,x}}\\right)^2" } ]
https://en.wikipedia.org/wiki?curid=75152043
75155069
Calinski–Harabasz index
Clustering evaluation metric The Calinski–Harabasz index (CHI), also known as the Variance Ratio Criterion (VRC), is a metric for evaluating clustering algorithms, introduced by Tadeusz Caliński and Jerzy Harabasz in 1974. It is an internal evaluation metric, where the assessment of the clustering quality is based solely on the dataset and the clustering results, and not on external, ground-truth labels. Definition. Given a data set of "n" points: {x1, ..., x"n"}, and the assignment of these points to "k" clusters: {"C"1, ..., "Ck"}, the Calinski–Harabasz (CH) Index is defined as the ratio of the between-cluster separation (BCSS) to the within-cluster dispersion (WCSS), normalized by their number of degrees of freedom: formula_0 BCSS (Between-Cluster Sum of Squares) is the weighted sum of squared Euclidean distances between each cluster centroid (mean) and the overall data centroid (mean): formula_1 where "ni" is the number of points in cluster "Ci", c"i" is the centroid of "Ci", and c is the overall centroid of the data. BCSS measures how well the clusters are separated from each other (the higher the better). WCSS (Within-Cluster Sum of Squares) is the sum of squared Euclidean distances between the data points and their respective cluster centroids: formula_2 WCSS measures the compactness or cohesiveness of the clusters (the smaller the better). Minimizing the WCSS is the objective of centroid-based clustering algorithms such as k-means. Explanation. The numerator of the CH index is the between-cluster separation (BCSS) divided by its degrees of freedom. The number of degrees of freedom of BCSS is "k" - 1, since fixing the centroids of "k" - 1 clusters also determines the "k"th centroid, as its value makes the weighted sum of all centroids match the overall data centroid. The denominator of the CH index is the within-cluster dispersion (WCSS) divided by its degrees of freedom. The number of degrees of freedom of WCSS is "n" - "k", since fixing the centroid of each cluster reduces the degrees of freedom by one. This is because given a centroid c"i" of cluster "Ci", the assignment of "ni" - 1 points to that cluster also determines the assignment of the "ni"th point, since the overall mean of the points assigned to the cluster should be equal to c"i". Dividing both the BCSS and WCSS by their degrees of freedom helps to normalize the values, making them comparable across different numbers of clusters. Without this normalization, the CH index could be artificially inflated for higher values of "k", making it hard to determine whether an increase in the index value is due to genuinely better clustering or just due to the increased number of clusters. A higher value of CH indicates a better clustering, because it means that the data points are more spread out between clusters than they are within clusters. Although there is no satisfactory probabilistic foundation to support the use of the CH index, the criterion has some desirable mathematical properties. For example, in the special case of equal distances between all pairs of points, the CH index is equal to 1. In addition, it is analogous to the F-test statistic in univariate analysis. Liu et al. discuss the effectiveness of using the CH index for cluster evaluation relative to other internal clustering evaluation metrics. Maulik and Bandyopadhyay evaluate the performance of three clustering algorithms using four cluster validity indices, including the Davies–Bouldin index, the Dunn index, the Calinski–Harabasz index and a newly developed index. 
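To make the definitions above concrete, here is a minimal NumPy sketch that computes the index directly from BCSS and WCSS; the six-point data set is a made-up example with two well-separated clusters:

```python
# Calinski-Harabasz index computed from its definition:
# CH = (BCSS / (k - 1)) / (WCSS / (n - k)).
import numpy as np

def calinski_harabasz(X, labels):
    n, k = len(X), len(np.unique(labels))
    overall_centroid = X.mean(axis=0)
    bcss, wcss = 0.0, 0.0
    for c in np.unique(labels):
        cluster = X[labels == c]
        centroid = cluster.mean(axis=0)
        bcss += len(cluster) * np.sum((centroid - overall_centroid) ** 2)
        wcss += np.sum((cluster - centroid) ** 2)
    return (bcss / (k - 1)) / (wcss / (n - k))

# Tiny illustrative data set: two compact, well-separated clusters of 3 points each.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 0, 1, 1, 1])
print(calinski_harabasz(X, labels))  # large value: compact, far-apart clusters
```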
Wang et al. have suggested an improved index for clustering validation based on Silhouette indexing and the Calinski–Harabasz index. Finding the optimal number of clusters. Similar to other clustering evaluation metrics such as the Silhouette score, the CH index can be used to find the optimal number of clusters "k" in algorithms like k-means, where the value of "k" is not known a priori. This can be done by running the clustering algorithm for a range of candidate values of "k", computing the CH index for each resulting clustering, and choosing the value of "k" for which the index is highest (or at which it peaks). Implementations. The scikit-learn Python library provides an implementation of this metric in the sklearn.metrics module. R provides a similar implementation in its "fpc" package.
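As a sketch of this procedure with the scikit-learn implementation just mentioned, the following hypothetical example scans several values of "k" with k-means on synthetic data and keeps the value with the highest index; the data set and parameter choices are illustrative only:

```python
# Choosing k for k-means by maximizing the Calinski-Harabasz index.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import calinski_harabasz_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)  # synthetic data

scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = calinski_harabasz_score(X, labels)

best_k = max(scores, key=scores.get)
print(scores)
print("best k by CH index:", best_k)  # expected to be 4 for this synthetic data
```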
[ { "math_id": 0, "text": "CH = \\frac{BCSS / (k - 1)}{WCSS / (n - k)}" }, { "math_id": 1, "text": "BCSS = \\sum_{i=1}^{k} n_i ||\\mathbf{c}_i - \\mathbf{c}||^2" }, { "math_id": 2, "text": "WCSS = \\sum_{i=1}^{k} \\sum_{\\mathbf{x} \\in C_i} ||\\mathbf{x} - \\mathbf{c}_i||^2" } ]
https://en.wikipedia.org/wiki?curid=75155069
7515561
Metacyclic group
Extension of a cyclic group by a cyclic group In group theory, a metacyclic group is an extension of a cyclic group by a cyclic group. That is, it is a group "G" for which there is a short exact sequence formula_0 where "H" and "K" are cyclic. Equivalently, a metacyclic group is a group "G" having a cyclic normal subgroup "N", such that the quotient "G"/"N" is also cyclic. Properties. Metacyclic groups are both supersolvable and metabelian.
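As a concrete illustration (an added example, not taken from the article), the symmetric group S3 is metacyclic by the second characterization: its alternating subgroup is a cyclic normal subgroup of order 3, and the quotient has order 2, hence is cyclic. The following self-contained sketch checks the normality computationally:

```python
# Check that S3 has a cyclic normal subgroup N of order 3 (generated by a
# 3-cycle), so that the quotient S3/N has order 2 and is therefore cyclic.
from itertools import permutations

def compose(p, q):                 # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = set(permutations(range(3)))    # S3, order 6
r = (1, 2, 0)                      # a 3-cycle
N = {(0, 1, 2), r, compose(r, r)}  # cyclic subgroup generated by r

# Normality: g n g^{-1} stays in N for every g in G and n in N.
is_normal = all(compose(compose(g, n), inverse(g)) in N for g in G for n in N)
print(len(G), len(N), is_normal)   # 6 3 True
```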
[ { "math_id": 0, "text": "1 \\rightarrow K \\rightarrow G \\rightarrow H \\rightarrow 1,\\," } ]
https://en.wikipedia.org/wiki?curid=7515561
7515651
Import
Good brought into a jurisdiction An import is a good or service brought into a jurisdiction from another country; it is the receiving side of an export from the sending country. Importation and exportation are the defining financial transactions of international trade. Importing is part of international trade and involves buying and receiving goods or services produced in another country. The seller of such goods and services is called an exporter, while the foreign buyer is known as an importer. In international trade, the importation and exportation of goods are limited by import quotas and mandates from the customs authority. The importing and exporting jurisdictions may impose a tariff (tax) on the goods. In addition, the importation and exportation of goods are subject to trade agreements between the importing and exporting jurisdictions. Definition. Imports consist of transactions in goods and services to a resident of a jurisdiction (such as a nation) from non-residents. The exact definition of imports in national accounts includes and excludes specific "borderline" cases. Importation is the action of buying or acquiring products or services from another country or from a market other than one's own. Imports are important for the economy because they allow a country to supply its market with products from other countries when the corresponding domestic products are nonexistent, scarce, high-cost, or of low quality. A general delimitation of imports in national accounts is given below: Basic trade statistics often differ in terms of definition and coverage from the requirements in the national accounts: Balance of trade. A country has demand for an import when the price of the good (or service) on the world market is less than the price on the domestic market. The balance of trade, usually denoted formula_0, is the difference between the value of all the goods (and services) a country exports and the value of the goods the country imports. A trade deficit occurs when imports are larger than exports. Imports are impacted principally by a country's income and its productive resources. For example, the US imports oil from Canada even though the US has oil and Canada uses oil. However, consumers in the US are willing to pay more for the marginal barrel of oil than Canadian consumers are, because there is more oil demanded in the US than there is oil produced. In 2016, only about 30% of countries had a trade surplus. Most trade experts and economists argue that it's wrong to automatically assume a trade deficit is harmful to a country's economy. In macroeconomic theory, the value of imports can be modeled as a function of domestic absorption (spending on everything, regardless of source) and the real exchange rate. These are the two most important factors affecting imports and they both affect imports positively. Types of import. There are two basic types of import: Companies import goods and services to supply to the domestic market at a cheaper price and better quality than competing goods manufactured in the domestic market. Companies import products that are not available in the local market. There are three broad types of importers: Direct-import refers to a type of business importation involving a major retailer (e.g. Wal-Mart) and an overseas manufacturer. A retailer typically purchases products designed by local companies that can be manufactured overseas. 
In a direct-import program, the retailer bypasses the local supplier (colloquial: "middle-man") and buys the final product directly from the manufacturer, possibly saving in added costs. Data on the value of imports and their quantities, often broken down by detailed lists of products, are available in statistical collections on international trade published by the statistical services of intergovernmental organisations (e.g. UNSD, FAOSTAT, OECD), supranational statistical institutes (e.g. Eurostat) and national statistical institutes. Import of goods. Importation, declaration, and payment of customs duties are done by the importer of record, which may be the owner of the goods, the purchaser, or a licensed customs broker.
[ { "math_id": 0, "text": "NX" } ]
https://en.wikipedia.org/wiki?curid=7515651