Photoluminescence
Light emission from substances after they absorb photons Photoluminescence (abbreviated as PL) is light emission from any form of matter after the absorption of photons (electromagnetic radiation). It is one of many forms of luminescence (light emission) and is initiated by photoexcitation (i.e. photons that excite electrons to a higher energy level in an atom), hence the prefix "photo-". Following excitation, various relaxation processes typically occur in which other photons are re-radiated. Time periods between absorption and emission may vary: ranging from short femtosecond-regime for emission involving free-carrier plasma in inorganic semiconductors up to milliseconds for phosphoresence processes in molecular systems; and under special circumstances delay of emission may even span to minutes or hours. Observation of photoluminescence at a certain energy can be viewed as an indication that an electron populated an excited state associated with this transition energy. While this is generally true in atoms and similar systems, correlations and other more complex phenomena also act as sources for photoluminescence in many-body systems such as semiconductors. A theoretical approach to handle this is given by the semiconductor luminescence equations. Forms. Photoluminescence processes can be classified by various parameters such as the energy of the exciting photon with respect to the emission. Resonant excitation describes a situation in which photons of a particular wavelength are absorbed and equivalent photons are very rapidly re-emitted. This is often referred to as resonance fluorescence. For materials in solution or in the gas phase, this process involves electrons but no significant internal energy transitions involving molecular features of the chemical substance between absorption and emission. In crystalline inorganic semiconductors where an electronic band structure is formed, secondary emission can be more complicated as events may contain both coherent contributions such as resonant Rayleigh scattering where a fixed phase relation with the driving light field is maintained (i.e. energetically elastic processes where no losses are involved), and incoherent contributions (or inelastic modes where some energy channels into an auxiliary loss mode), The latter originate, e.g., from the radiative recombination of excitons, Coulomb-bound electron-hole pair states in solids. Resonance fluorescence may also show significant quantum optical correlations. More processes may occur when a substance undergoes internal energy transitions before re-emitting the energy from the absorption event. Electrons change energy states by either resonantly gaining energy from absorption of a photon or losing energy by emitting photons. In chemistry-related disciplines, one often distinguishes between fluorescence and phosphorescence. The former is typically a fast process, yet some amount of the original energy is dissipated so that re-emitted light photons will have lower energy than did the absorbed excitation photons. The re-emitted photon in this case is said to be red shifted, referring to the reduced energy it carries following this loss (as the Jablonski diagram shows). For phosphorescence, electrons which absorbed photons, undergo intersystem crossing where they enter into a state with altered spin multiplicity (see term symbol), usually a triplet state. 
Once the excited electron is transferred into this triplet state, electron transition (relaxation) back to the lower singlet state energies is quantum mechanically forbidden, meaning that it happens much more slowly than other transitions. The result is a slow process of radiative transition back to the singlet state, sometimes lasting minutes or hours. This is the basis for "glow in the dark" substances. Photoluminescence is an important technique for measuring the purity and crystalline quality of semiconductors such as GaN and InP and for quantification of the amount of disorder present in a system. Time-resolved photoluminescence (TRPL) is a method where the sample is excited with a light pulse and then the decay in photoluminescence with respect to time is measured. This technique is useful for measuring the minority carrier lifetime of III-V semiconductors like gallium arsenide (GaAs). Photoluminescence properties of direct-gap semiconductors. In a typical PL experiment, a semiconductor is excited with a light-source that provides photons with an energy larger than the bandgap energy. The incoming light excites a polarization that can be described with the semiconductor Bloch equations. Once the photons are absorbed, electrons and holes are formed with finite momenta formula_0 in the conduction and valence bands, respectively. The excitations then undergo energy and momentum relaxation towards the band-gap minimum. Typical mechanisms are Coulomb scattering and the interaction with phonons. Finally, the electrons recombine with holes under emission of photons. Ideal, defect-free semiconductors are many-body systems where the interactions of charge-carriers and lattice vibrations have to be considered in addition to the light-matter coupling. In general, the PL properties are also extremely sensitive to internal electric fields and to the dielectric environment (such as in photonic crystals) which impose further degrees of complexity. A precise microscopic description is provided by the semiconductor luminescence equations. Ideal quantum-well structures. An ideal, defect-free semiconductor quantum well structure is a useful model system to illustrate the fundamental processes in typical PL experiments. The discussion is based on results published in Klingshirn (2012) and Balkan (1998). The fictive model structure for this discussion has two confined quantized electronic and two hole subbands, e1, e2 and h1, h2, respectively. The linear absorption spectrum of such a structure shows the exciton resonances of the first (e1h1) and the second quantum well subbands (e2, h2), as well as the absorption from the corresponding continuum states and from the barrier. Photoexcitation. In general, three different excitation conditions are distinguished: resonant, quasi-resonant, and non-resonant. For the resonant excitation, the central energy of the laser corresponds to the lowest exciton resonance of the quantum well. No, or only a negligible amount of the excess, energy is injected to the carrier system. For these conditions, coherent processes contribute significantly to the spontaneous emission. The decay of polarization creates excitons directly. The detection of PL is challenging for resonant excitation as it is difficult to discriminate contributions from the excitation, i.e., stray-light and diffuse scattering from surface roughness. Thus, speckle and resonant Rayleigh-scattering are always superimposed to the incoherent emission. 
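Whether an experiment falls into the resonant, quasi-resonant, or barrier-excitation regime is essentially a comparison of the pump photon energy with the lowest exciton resonance and the barrier absorption edge. A minimal sketch of this bookkeeping is given below; the pump wavelengths and the quantum-well energies (1.545 eV exciton, 1.75 eV barrier edge) are illustrative assumptions, not values taken from the text.

```python
# Classify the excitation condition of a quantum-well PL experiment by comparing
# the pump photon energy with the lowest exciton resonance and the barrier edge.
# All numbers below are illustrative placeholders, not material parameters.

HC_EV_NM = 1239.84  # h*c in eV*nm, so E[eV] ~= 1239.84 / wavelength[nm]

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a given vacuum wavelength in nm."""
    return HC_EV_NM / wavelength_nm

def classify_excitation(e_photon, e_exciton, e_barrier, tol=1e-3):
    """Rough resonant / quasi-resonant / barrier classification (energies in eV)."""
    if abs(e_photon - e_exciton) < tol:
        return "resonant: negligible excess energy, coherent contributions matter"
    if e_photon < e_barrier:
        return "quasi-resonant: excess energy heats the carriers in the well"
    return "barrier excitation: carriers are captured from the barrier into the well"

E_EXCITON = 1.545   # assumed e1h1 exciton resonance (eV)
E_BARRIER = 1.750   # assumed barrier absorption edge (eV)

for wavelength in (802.5, 760.0, 680.0):          # assumed pump wavelengths (nm)
    e = photon_energy_ev(wavelength)
    print(f"{wavelength:6.1f} nm -> {e:.3f} eV: "
          f"{classify_excitation(e, E_EXCITON, E_BARRIER)}")
```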
In case of the non-resonant excitation, the structure is excited with some excess energy. This is the typical situation used in most PL experiments as the excitation energy can be discriminated using a spectrometer or an optical filter. One has to distinguish between quasi-resonant excitation and barrier excitation. For quasi-resonant conditions, the energy of the excitation is tuned above the ground state but still below the barrier absorption edge, for example, into the continuum of the first subband. The polarization decay for these conditions is much faster than for resonant excitation and coherent contributions to the quantum well emission are negligible. The initial temperature of the carrier system is significantly higher than the lattice temperature due to the surplus energy of the injected carriers. Finally, only the electron-hole plasma is initially created. It is then followed by the formation of excitons. In case of barrier excitation, the initial carrier distribution in the quantum well strongly depends on the carrier scattering between barrier and the well. Relaxation. Initially, the laser light induces coherent polarization in the sample, i.e., the transitions between electron and hole states oscillate with the laser frequency and a fixed phase. The polarization dephases typically on a sub-100 fs time-scale in case of nonresonant excitation due to ultra-fast Coulomb- and phonon-scattering. The dephasing of the polarization leads to creation of populations of electrons and holes in the conduction and the valence bands, respectively. The lifetime of the carrier populations is rather long, limited by radiative and non-radiative recombination such as Auger recombination. During this lifetime a fraction of electrons and holes may form excitons, this topic is still controversially discussed in the literature. The formation rate depends on the experimental conditions such as lattice temperature, excitation density, as well as on the general material parameters, e.g., the strength of the Coulomb-interaction or the exciton binding energy. The characteristic time-scales are in the range of hundreds of picoseconds in GaAs; they appear to be much shorter in wide-gap semiconductors. Directly after the excitation with short (femtosecond) pulses and the quasi-instantaneous decay of the polarization, the carrier distribution is mainly determined by the spectral width of the excitation, e.g., a laser pulse. The distribution is thus highly non-thermal and resembles a Gaussian distribution, centered at a finite momentum. In the first hundreds of femtoseconds, the carriers are scattered by phonons, or at elevated carrier densities via Coulomb-interaction. The carrier system successively relaxes to the Fermi–Dirac distribution typically within the first picosecond. Finally, the carrier system cools down under the emission of phonons. This can take up to several nanoseconds, depending on the material system, the lattice temperature, and the excitation conditions such as the surplus energy. Initially, the carrier temperature decreases fast via emission of optical phonons. This is quite efficient due to the comparatively large energy associated with optical phonons, (36meV or 420K in GaAs) and their rather flat dispersion, allowing for a wide range of scattering processes under conservation of energy and momentum. Once the carrier temperature decreases below the value corresponding to the optical phonon energy, acoustic phonons dominate the relaxation. 
Here, cooling is less efficient due their dispersion and small energies and the temperature decreases much slower beyond the first tens of picoseconds. At elevated excitation densities, the carrier cooling is further inhibited by the so-called hot-phonon effect. The relaxation of a large number of hot carriers leads to a high generation rate of optical phonons which exceeds the decay rate into acoustic phonons. This creates a non-equilibrium "over-population" of optical phonons and thus causes their increased reabsorption by the charge-carriers significantly suppressing any cooling. Thus, a system cools slower, the higher the carrier density is. Radiative recombination. The emission directly after the excitation is spectrally very broad, yet still centered in the vicinity of the strongest exciton resonance. As the carrier distribution relaxes and cools, the width of the PL peak decreases and the emission energy shifts to match the ground state of the exciton (such as an electron) for ideal samples without disorder. The PL spectrum approaches its quasi-steady-state shape defined by the distribution of electrons and holes. Increasing the excitation density will change the emission spectra. They are dominated by the excitonic ground state for low densities. Additional peaks from higher subband transitions appear as the carrier density or lattice temperature are increased as these states get more and more populated. Also, the width of the main PL peak increases significantly with rising excitation due to excitation-induced dephasing and the emission peak experiences a small shift in energy due to the Coulomb-renormalization and phase-filling. In general, both exciton populations and plasma, uncorrelated electrons and holes, can act as sources for photoluminescence as described in the semiconductor-luminescence equations. Both yield very similar spectral features which are difficult to distinguish; their emission dynamics, however, vary significantly. The decay of excitons yields a single-exponential decay function since the probability of their radiative recombination does not depend on the carrier density. The probability of spontaneous emission for uncorrelated electrons and holes, is approximately proportional to the product of electron and hole populations eventually leading to a non-single-exponential decay described by a hyperbolic function. Effects of disorder. Real material systems always incorporate disorder. Examples are structural defects in the lattice or disorder due to variations of the chemical composition. Their treatment is extremely challenging for microscopic theories due to the lack of detailed knowledge about perturbations of the ideal structure. Thus, the influence of the extrinsic effects on the PL is usually addressed phenomenologically. In experiments, disorder can lead to localization of carriers and hence drastically increase the photoluminescence life times as localized carriers cannot as easily find nonradiative recombination centers as can free ones. Researchers from the King Abdullah University of Science and Technology (KAUST) have studied the photoinduced entropy (i.e. thermodynamic disorder) of InGaN/GaN p-i-n double-heterostructure and AlGaN nanowires using temperature-dependent photoluminescence. They defined the photoinduced entropy as a thermodynamic quantity that represents the unavailability of a system's energy for conversion into useful work due to carrier recombination and photon emission. 
They have also related the change in entropy generation to the change in photocarrier dynamics in the nanowire active regions using results from time-resolved photoluminescence study. They hypothesized that the amount of generated disorder in the InGaN layers eventually increases as the temperature approaches room temperature because of the thermal activation of surface states, while an insignificant increase was observed in AlGaN nanowires, indicating lower degrees of disorder-induced uncertainty in the wider bandgap semiconductor. To study the photoinduced entropy, the scientists have developed a mathematical model that considers the net energy exchange resulting from photoexcitation and photoluminescence. Photoluminescent materials for temperature detection. In phosphor thermometry, the temperature dependence of the photoluminescence process is exploited to measure temperature. Experimental methods. "Photoluminescence spectroscopy" is a widely used technique for characterisation of the optical and electronic properties of semiconductors and molecules. The technique itself is fast, contactless, and nondestructive. Therefore, it can be used to study the optoelectronic properties of materials of various sizes (from microns to centimeters) during the fabrication process without complex sample preparation. For example, photoluminescence measurements of solar cell absorbers can predict the maximum voltage the material could produce. In chemistry, the method is more often referred to as fluorescence spectroscopy, but the instrumentation is the same. The relaxation processes can be studied using time-resolved fluorescence spectroscopy to find the decay lifetime of the photoluminescence. These techniques can be combined with microscopy, to map the intensity (confocal microscopy) or the lifetime (fluorescence-lifetime imaging microscopy) of the photoluminescence across a sample (e.g. a semiconducting wafer, or a biological sample that has been marked with fluorescent molecules). Modulated photoluminescence is a specific method for measuring the complex frequency response of the photoluminescence signal to a sinusoidal excitation, allowing for the direct extraction of minority carrier lifetime without the need for intensity calibrations. It has been used to study the influence of interface defects on the recombination of excess carriers in crystalline silicon wafers with different passivation schemes. References. <templatestyles src="Reflist/styles.css" />
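The phase-sensitive lifetime extraction behind modulated photoluminescence can be illustrated with a simple rate-equation model. The sketch below assumes a single mono-exponential recombination channel, so the excess-carrier response to a sinusoidal generation term is a first-order low-pass with phase lag φ satisfying tan φ = ωτ; all parameter values are invented for illustration.

```python
import numpy as np

# Modulated-PL sketch: excess carriers obey dn/dt = G(t) - n/tau with a
# sinusoidally modulated generation term G(t). For this single-pole response
# the PL signal (proportional to n for linear recombination) lags the
# excitation by a phase phi with tan(phi) = omega*tau, so the lifetime can be
# read off the phase alone, without any intensity calibration.
tau_true = 5e-6                 # assumed minority-carrier lifetime (s)
f_mod = 20e3                    # assumed modulation frequency (Hz)
omega = 2 * np.pi * f_mod

t = np.linspace(0.0, 50 / f_mod, 200_000)
dt = t[1] - t[0]
G = 1.0 + 0.5 * np.sin(omega * t)      # generation rate (arbitrary units)

n = np.zeros_like(t)                   # explicit Euler integration of the rate equation
for i in range(1, len(t)):
    n[i] = n[i - 1] + dt * (G[i - 1] - n[i - 1] / tau_true)

# Discard the initial transient, then compare phases at the drive frequency.
steady = t > 10 / f_mod
demod = np.exp(-1j * omega * t[steady])
phi = np.angle(np.sum(G[steady] * demod)) - np.angle(np.sum(n[steady] * demod))

tau_est = np.tan(phi) / omega
print(f"recovered lifetime: {tau_est * 1e6:.2f} us (true: {tau_true * 1e6:.2f} us)")
```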
[ { "math_id": 0, "text": "\\mathbf{k}" } ]
https://en.wikipedia.org/wiki?curid=60874
Markov chain
Random process independent of past history A Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs "now"." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). Markov processes are named in honor of the Russian mathematician Andrey Markov. Markov chains have many applications as statistical models of real-world processes. They provide the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in areas including Bayesian statistics, biology, chemistry, economics, finance, information theory, physics, signal processing, and speech processing. The adjectives "Markovian" and "Markov" are used to describe something that is related to a Markov process. <templatestyles src="Template:TOC limit/styles.css" /> Principles. Definition. A Markov process is a stochastic process that satisfies the Markov property (sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history. In other words, conditional on the present state of the system, its future and past states are independent. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space). Types of Markov chains. The system's state space and time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time v. continuous time: Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, a discrete-time Markov chain (DTMC), but a few authors use the term "Markov process" to refer to a continuous-time Markov chain (CTMC) without explicit mention. In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model). Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. 
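A minimal simulation makes the discrete-time case concrete: given a right stochastic transition matrix, the next state is drawn from the row belonging to the current state, and nothing else about the history matters. The two-state "weather" chain below is a hypothetical example; the states and probabilities are invented for illustration.

```python
import numpy as np

# A hypothetical two-state discrete-time Markov chain ("sunny", "rainy").
# Row i of P holds the probabilities of moving from state i to each state,
# so every row sums to one (a right stochastic matrix).
states = ["sunny", "rainy"]
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

rng = np.random.default_rng(0)

def simulate(n_steps, start=0):
    """Draw a trajectory: the next state depends only on the current one."""
    x = start
    path = [states[x]]
    for _ in range(n_steps):
        x = rng.choice(len(states), p=P[x])
        path.append(states[x])
    return path

print(simulate(10))

# The long-run fraction of time spent in each state approaches the
# stationary distribution of the chain.
traj = simulate(100_000)
print({s: traj.count(s) / len(traj) for s in states})
```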
While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space. However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (see Variations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise. Transitions. The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate. A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps. Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important. History. Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906. Markov Processes in continuous time were discovered long before his work in the early 20th century in the form of the Poisson process. Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov who claimed independence was necessary for the weak law of large numbers to hold. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption, which had been commonly regarded as a requirement for such mathematical laws to hold. Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains. In 1912 Henri Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov. 
After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé. Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains. Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes. Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement. He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes. Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. The differential equations are now called the Kolmogorov equations or the Kolmogorov–Chapman equations. Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in 1930s, and then later Eugene Dynkin, starting in the 1950s. Examples. A non-Markov example. Suppose that there is a coin purse containing five quarters (each worth 25¢), five dimes (each worth 10¢), and five nickels (each worth 5¢), and one by one, coins are randomly drawn from the purse and are set on a table. If formula_0 represents the total value of the coins set on the table after n draws, with formula_1, then the sequence formula_2 is "not" a Markov process. To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. Thus formula_3. If we know not just formula_4, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that formula_5 with probability 1. But if we do not know the earlier values, then based only on the value formula_4 we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about formula_6 are impacted by our knowledge of values prior to formula_4. However, it is possible to model this scenario as a Markov process. Instead of defining formula_0 to represent the "total value" of the coins on the table, we could define formula_0 to represent the "count" of the various coin types on the table. For instance, formula_7 could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by formula_8 possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in state formula_9. The probability of achieving formula_10 now depends on formula_11; for example, the state formula_12 is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of the formula_13 state depends exclusively on the outcome of the formula_14 state. Formal definition. 
Discrete-time Markov chain. A discrete-time Markov chain is a sequence of random variables "X"1, "X"2, "X"3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states: formula_15 if both conditional probabilities are well defined, that is, if formula_16 The possible values of "X""i" form a countable set "S" called the state space of the chain. Continuous-time Markov chain. A continuous-time Markov chain ("X""t")"t" ≥ 0 is defined by a finite or countable state space "S", a transition rate matrix "Q" with dimensions equal to that of the state space and initial probability distribution defined on the state space. For "i" ≠ "j", the elements "q""ij" are non-negative and describe the rate of the process transitions from state "i" to state "j". The elements "q""ii" are chosen such that each row of the transition rate matrix sums to zero, while the row-sums of a probability transition matrix in a (discrete) Markov chain are all equal to one. There are three equivalent definitions of the process. Infinitesimal definition. Let formula_24 be the random variable describing the state of the process at time "t", and assume the process is in a state "i" at time "t". Then, knowing formula_25, formula_26 is independent of previous values formula_27, and as "h" → 0 for all "j" and for all "t", formula_28 where formula_29 is the Kronecker delta, using the little-o notation. The formula_30 can be seen as measuring how quickly the transition from "i" to "j" happens. Jump chain/holding time definition. Define a discrete-time Markov chain "Y""n" to describe the "n"th jump of the process and variables "S"1, "S"2, "S"3, ... to describe holding times in each of the states where "S""i" follows the exponential distribution with rate parameter −"q""Y""i""Y""i". Transition probability definition. For any value "n" = 0, 1, 2, 3, ... and times indexed up to this value of "n": "t"0, "t"1, "t"2, ... and all states recorded at these times "i"0, "i"1, "i"2, "i"3, ... it holds that formula_31 where "p""ij" is the solution of the forward equation (a first-order differential equation) formula_32 with initial condition P(0) is the identity matrix. Finite state space. If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the ("i", "j")th element of P equal to formula_33 Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix. Stationary distribution relation to eigenvectors and simplices. A stationary distribution π is a (row) vector, whose entries are non-negative and sum to 1, is unchanged by the operation of transition matrix P on it and so is defined by formula_34 By comparing this definition with that of an eigenvector we see that the two concepts are related and that formula_35 is a normalized (formula_36) multiple of a left eigenvector e of the transition matrix P with an eigenvalue of 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution. The values of a stationary distribution formula_37 are associated with the state space of P and its eigenvectors have their relative proportions preserved. 
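Numerically, a stationary distribution can therefore be obtained by taking a left eigenvector of P for eigenvalue 1 (equivalently, an eigenvector of the transpose of P) and normalizing it so its entries sum to one. The sketch below uses an arbitrary three-state transition matrix chosen purely for illustration.

```python
import numpy as np

# Arbitrary 3x3 right stochastic matrix chosen for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

# Left eigenvectors of P are (right) eigenvectors of P transposed.
eigvals, eigvecs = np.linalg.eig(P.T)
# Pick the eigenvector belonging to the eigenvalue closest to 1 ...
e = eigvecs[:, np.argmin(np.abs(eigvals - 1.0))].real
# ... and normalize it so the entries sum to 1, giving the stationary distribution.
pi = e / e.sum()

print("stationary distribution:", pi)
print("pi P == pi ?", np.allclose(pi @ P, pi))
```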
Since the components of π are positive and the constraint that their sum is unity can be rewritten as formula_38 we see that the dot product of π with a vector whose components are all 1 is unity and that π lies on a simplex. Time-homogeneous Markov chain with a finite state space. If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the "k"-step transition probability can be computed as the "k"-th power of the transition matrix, P"k". If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. Additionally, in this case P"k" converges to a rank-one matrix in which each row is the stationary distribution π: formula_39 where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, formula_40 is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below. For some stochastic matrices P, the limit formula_40 does not exist while the stationary distribution does, as shown by this example: formula_41 formula_42 Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an "n"×"n" matrix, and define formula_43 It is always true that formula_44 Subtracting Q from both sides and factoring then yields formula_45 where I"n" is the identity matrix of size "n", and 0"n","n" is the zero matrix of size "n"×"n". Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each the rows in P is 1, there are "n+1" equations for determining "n" unknowns, so it is computationally easier if on the one hand one selects one row in Q and substitutes each of its elements by one, and on the other one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of transformed former matrix to find Q. Here is one method for doing so: first, define the function "f"(A) to return the matrix A with its right-most column replaced with all 1's. If ["f"(P − I"n")]−1 exists then formula_46 Explain: The original matrix equation is equivalent to a system of n×n linear equations in n×n variables. And there are n more linear equations from the fact that Q is a right stochastic matrix whose each row sums to 1. So it needs any n×n independent linear equations of the (n×n+n) equations to solve for the n×n variables. In this example, the n equations from “Q multiplied by the right-most column of (P-In)” have been replaced by the n stochastic ones. One thing to notice is that if P has an element P"i","i" on its main diagonal that is equal to 1 and the "i"th row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers P"k". Hence, the "i"th row or column of Q will have the 1 and the 0's in the same positions as in P. Convergence speed to the stationary distribution. As stated earlier, from the equation formula_47 (if exists) the stationary (or steady state) distribution π is a left eigenvector of row stochastic matrix P. 
Then assuming that P is diagonalizable or equivalently that P has "n" linearly independent eigenvectors, speed of convergence is elaborated as follows. (For non-diagonalizable, that is, defective matrices, one may start with the Jordan normal form of P and proceed with a bit more involved set of arguments in a similar way. Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1) where each column is a left eigenvector of P and let Σ be the diagonal matrix of left eigenvalues of P, that is, Σ = diag("λ"1,"λ"2,"λ"3...,"λ""n"). Then by eigendecomposition formula_48 Let the eigenvalues be enumerated such that: formula_49 Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector is unique too (because there is no other π which solves the stationary distribution equation above). Let u"i" be the "i"-th column of U matrix, that is, u"i" is the left eigenvector of P corresponding to λ"i". Also let x be a length "n" row vector that represents a valid probability distribution; since the eigenvectors u"i" span formula_50 we can write formula_51 If we multiply x with P from right and continue this operation with the results, in the end we get the stationary distribution π. In other words, π = a1 u1 ← xPP...P = xP"k" as "k" → ∞. That means formula_52 Since π is parallel to u1(normalized by L2 norm) and π("k") is a probability vector, π("k") approaches to a1 u1 = π as "k" → ∞ with a speed in the order of "λ"2/"λ"1 exponentially. This follows because formula_53 hence "λ"2/"λ"1 is the dominant term. The smaller the ratio is, the faster the convergence is. Random noise in the state distribution π can also speed up this convergence to the stationary distribution. General state space. Harris chains. Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains. The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space. Locally interacting Markov chains. Considering a collection of Markov chains whose evolution takes in account the state of other Markov chains, is related to the notion of locally interacting Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form. See interacting particle system and stochastic cellular automata (probabilistic cellular automata). See for instance "Interaction of Markov Processes" or. Properties. Two states are said to "communicate" with each other if both are reachable from one another by a sequence of transitions that have positive probability. This is an equivalence relation which yields a set of communicating classes. A class is "closed" if the probability of leaving the class is zero. A Markov chain is "irreducible" if there is one communicating class, the state space. A state "i" has period "k" if "k" is the greatest common divisor of the number of transitions by which "i" can be reached, starting from "i". That is: formula_54 The state is "periodic" if formula_55; otherwise formula_56 and the state is "aperiodic". A state "i" is said to be "transient" if, starting from "i", there is a non-zero probability that the chain will never return to "i". It is called "recurrent" (or "persistent") otherwise. 
For a recurrent state "i", the mean "hitting time" is defined as: formula_57 State "i" is "positive recurrent" if formula_58 is finite and "null recurrent" otherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties — that is, if one state has the property then all states in its communicating class have the property. A state "i" is called "absorbing" if there are no outgoing transitions from the state. Irreducibility. Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic. If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given by formula_59. Ergodicity. A state "i" is said to be "ergodic" if it is aperiodic and positive recurrent. In other words, a state "i" is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integer formula_60 such that all entries of formula_61 are positive. It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number "N" such that any state can be reached from any other state in any number of steps less or equal to a number "N". In case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with "N" = 1. A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic. Terminology. Some authors call any irreducible, positive recurrent Markov chains ergodic, even periodic ones. In fact, merely irreducible Markov chains correspond to ergodic processes, defined according to ergodic theory. Some authors call a matrix "primitive" iff there exists some integer formula_60 such that all entries of formula_61 are positive. Some authors call it "regular". Index of primitivity. The "index of primitivity", or "exponent", of a regular matrix, is the smallest formula_60 such that all entries of formula_61 are positive. The exponent is purely a graph-theoretic property, since it depends only on whether each entry of formula_62 is zero or positive, and therefore can be found on a directed graph with formula_63 as its adjacency matrix. There are several combinatorial results about the exponent when there are finitely many states. Let formula_64 be the number of states, then Measure-preserving dynamical system. If a Markov chain has a stationary distribution, then it can be converted to a measure-preserving dynamical system: Let the probability space be formula_75, where formula_76 is the set of all states for the Markov chain. Let the sigma-algebra on the probability space be generated by the cylinder sets. Let the probability measure be generated by the stationary distribution, and the Markov chain transition. Let formula_77 be the shift operator: formula_78. Similarly we can construct such a dynamical system with formula_79 instead. Since "irreducible" Markov chains with finite state spaces have a unique stationary distribution, the above construction is unambiguous for irreducible Markov chains. 
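For a finite chain these criteria can be checked directly from powers of the transition matrix: the chain is ergodic exactly when some matrix power of P has all entries positive, and the smallest such power is the index of primitivity discussed above. The sketch below uses two small matrices invented for illustration.

```python
import numpy as np

def index_of_primitivity(P, k_max=1000):
    """Smallest k such that the k-th matrix power of P has all entries positive.

    Returns None if no such k is found up to k_max. For a finite-state chain,
    such a k exists iff the chain is irreducible and aperiodic, i.e. ergodic;
    only the zero/positive pattern of P matters.
    """
    M = np.eye(len(P))
    for k in range(1, k_max + 1):
        M = M @ P
        if np.all(M > 0):
            return k
    return None

# A 3-state chain with one "lazy" state whose self-loop breaks periodicity.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.0, 0.5]])
print(index_of_primitivity(P))   # some finite k: the chain is ergodic

# A pure 2-cycle is irreducible but periodic, hence not ergodic.
P2 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
print(index_of_primitivity(P2))  # None
```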
In ergodic theory, a measure-preserving dynamical system is called "ergodic" iff any measurable subset formula_80 such that formula_81 implies formula_82 or formula_83 (up to a null set). The terminology is inconsistent. Given a Markov chain with a stationary distribution that is strictly positive on all states, the Markov chain is "irreducible" iff its corresponding measure-preserving dynamical system is "ergodic". Markovian representations. In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states. For example, let "X" be a non-Markovian process. Then define a process "Y", such that each state of "Y" represents a time-interval of states of "X". Mathematically, this takes the form: formula_84 If "Y" has the Markov property, then it is a Markovian representation of "X". An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one. Hitting times. The "hitting time" is the time, starting in a given set of states until the chain arrives in a given state or set of states. The distribution of such a time period has a phase type distribution. The simplest such distribution is that of a single exponentially distributed transition. Expected hitting times. For a subset of states "A" ⊆ "S", the vector "k""A" of hitting times (where element formula_85 represents the expected value, starting in state "i" that the chain enters one of the states in the set "A") is the minimal non-negative solution to formula_86 Time reversal. For a CTMC "X""t", the time-reversed process is defined to be formula_87. By Kelly's lemma this process has the same stationary distribution as the forward process. A chain is said to be "reversible" if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions. Embedded Markov chain. One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, "Q", is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, "S", is denoted by "s""ij", and represents the conditional probability of transitioning from state "i" into state "j". These conditional probabilities may be found by formula_88 From this, "S" may be written as formula_89 where "I" is the identity matrix and diag("Q") is the diagonal matrix formed by selecting the main diagonal from the matrix "Q" and setting all other elements to zero. To find the stationary probability distribution vector, we must next find formula_90 such that formula_91 with formula_90 being a row vector, such that all elements in formula_90 are greater than 0 and formula_92 = 1. From this, π may be found as formula_93 Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing "X"("t") at intervals of δ units of time. The random variables "X"(0), "X"(δ), "X"(2δ), ... give the sequence of states visited by the δ-skeleton. Special types of Markov chains. Markov model. Markov models are used to model changing systems. 
There are 4 main types of models, that generalize Markov chains depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made: Bernoulli scheme. A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent of even the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process. Note, however, by the Ornstein isomorphism theorem, that every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme; thus, one might equally claim that Markov chains are a "special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The isomorphism theorem is even a bit stronger: it states that "any" stationary stochastic process is isomorphic to a Bernoulli scheme; the Markov chain is just one such example. Subshift of finite type. When the Markov matrix is replaced by the adjacency matrix of a finite graph, the resulting shift is termed a topological Markov chain or a subshift of finite type. A Markov matrix that is compatible with the adjacency matrix can then provide a measure on the subshift. Many chaotic dynamical systems are isomorphic to topological Markov chains; examples include diffeomorphisms of closed manifolds, the Prouhet–Thue–Morse system, the Chacon system, sofic systems, context-free systems and block-coding systems. Applications. Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications. Physics. Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description. For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, Markov Chain Monte Carlo method can be used to draw samples randomly from a black-box to approximate the probability distribution of attributes over a range of objects. Markov chains are used in lattice QCD simulations. Chemistry. <chem>{E} + \underset{Substrate\atop binding}{S <=> E}\overset{Catalytic\atop step}{S -> E} + P</chem>Michaelis-Menten kinetics. The enzyme (E) binds a substrate (S) and produces a product (P). Each reaction is a state transition in a Markov chain.A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain. Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number "n" of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. 
The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is "n" times the probability a given molecule is in that state. The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis-Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains. An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products. As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (that is, it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds. Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due to steric effects, second-order Markov effects may also play a role in the growth of some polymer chains. Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains. Biology. Markov chains are used in various areas of biology. Notable examples include: Testing. Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing. Solar irradiance variability. Solar irradiance variability assessments are useful for solar power applications. Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains, also including modeling the two states of clear and cloudiness as a two-state Markov chain. Speech recognition. Hidden Markov models have been used in automatic speech recognition systems. Information theory. Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper "A Mathematical Theory of Communication", which in a single step created the field of information theory, opens by introducing the concept of entropy through Markov modeling of the English language. Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning. 
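Shannon's Markov modelling of language can be reproduced in miniature: estimate a first-order character-level transition matrix from a sample text and compute the entropy rate of the fitted chain in bits per character. The toy corpus, the maximum-likelihood estimates, and the use of empirical character frequencies in place of the exact stationary distribution are all illustrative assumptions.

```python
import numpy as np
from collections import Counter

text = "the quick brown fox jumps over the lazy dog " * 50   # toy corpus (assumed)

# Maximum-likelihood fit of a first-order (character-bigram) Markov chain.
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
P = np.zeros((len(chars), len(chars)))
for (a, b), count in Counter(zip(text, text[1:])).items():
    P[idx[a], idx[b]] = count
P /= P.sum(axis=1, keepdims=True)          # rows become transition probabilities

# Empirical character frequencies, used in place of the exact stationary distribution.
pi = np.array([text.count(c) for c in chars], dtype=float)
pi /= pi.sum()

# Entropy rate H = -sum_i pi_i sum_j P_ij log2 P_ij, in bits per character.
logP = np.log2(P, out=np.zeros_like(P), where=P > 0)
H = -np.sum(pi[:, None] * P * logP)
print(f"entropy rate of the fitted chain: {H:.2f} bits per character")
```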
Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection). The LZMA lossless data compression algorithm combines Markov chains with Lempel-Ziv compression to achieve very high compression ratios. Queueing theory. Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917. This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth). Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from "i" to "i" + 1 occur at rate "λ" according to a Poisson process and describe job arrivals, while transitions from "i" to "i" – 1 (for "i" > 1) occur at rate "μ" (job service times are exponentially distributed) and describe completed services (departures) from the queue. Internet applications. The PageRank of a webpage as used by Google is defined by a Markov chain. It is the probability to be at page formula_95 in the stationary distribution on the following Markov chain on all (known) webpages. If formula_96 is the number of known webpages, and a page formula_95 has formula_97 links to it then it has transition probability formula_94 for all pages that are linked to and formula_98 for all pages that are not linked to. The parameter formula_99 is taken to be about 0.15. Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user. Statistics. Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically. Economics and finance. Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes. D. G. Champernowne built a Markov chain model of the distribution of income in 1953. Herbert A. Simon and co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes. Louis Bachelier was the first to observe that stock prices followed a random walk. The random walk was later seen as evidence in favor of the efficient-market hypothesis and random walk models were popular in the literature of the 1960s. Regime-switching models of business cycles were popularized by James D. Hamilton (1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions). A more recent example is the Markov switching multifractal model of Laurent E. Calvet and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models. 
It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns. Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting. Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit ratings. Social sciences. Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's , tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as size of the middle class, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from authoritarian to democratic regime. Games. Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares). Music. Markov chains are employed in algorithmic music composition, particularly in software such as Csound, Max, and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric. A second-order Markov chain can be introduced by considering the current state "and" also the previous state, as indicated in the second table. Higher, "n"th-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system. Markov chains can be used structurally, as in Xenakis's Analogique A and B. Markov chains are also used in systems which use a Markov model to react interactively to music input. Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed. Baseball. Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain state when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players as well as a team. He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing and differences when playing on grass vs. AstroTurf. 
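The board-game examples can be made quantitative with standard absorbing-chain calculations: for a toy linear board (a simplified stand-in for Snakes and Ladders, with invented rules), the expected number of turns to reach the final square follows from the fundamental matrix N = (I − Q)^(-1) applied to a vector of ones, where Q is the transient-to-transient block of the transition matrix.

```python
import numpy as np

# Toy linear board: squares 0..9, a fair six-sided die, square 9 ends the game.
# Rolls that would overshoot the last square leave the player in place -- a
# simplified stand-in for real Snakes-and-Ladders rules.
n_squares = 10
P = np.zeros((n_squares, n_squares))
for s in range(n_squares - 1):
    for roll in range(1, 7):
        target = s + roll
        P[s, target if target < n_squares else s] += 1 / 6
P[n_squares - 1, n_squares - 1] = 1.0      # the final square is absorbing

# Expected number of turns until absorption from each transient square:
# t = N @ 1, with fundamental matrix N = (I - Q)^(-1), Q the transient block.
Q = P[:-1, :-1]
N = np.linalg.inv(np.eye(n_squares - 1) - Q)
expected_turns = N @ np.ones(n_squares - 1)
print(f"expected turns from the starting square: {expected_turns[0]:.2f}")
```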
Markov text generators. Markov processes can also be used to generate superficially real-looking text given a sample document. Markov processes are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison, Mark V. Shaney, and Academias Neutronium). Several open-source text generation libraries using Markov chains exist. Probabilistic forecasting. Markov chains have been used for forecasting in several areas: for example, price trends, wind power, stochastic terrorism, and solar irradiance. Markov chain forecasting models use a variety of settings, from discretizing the time series to hidden Markov models combined with wavelets and the Markov chain mixture distribution model (MCM). See also. <templatestyles src="Div col/styles.css"/> Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Refbegin/styles.css" /> External links. <templatestyles src="Refbegin/styles.css" />
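The PageRank chain described in the Internet applications section above can be illustrated numerically. The sketch below assumes a tiny made-up link graph, builds the transition matrix from the two rules quoted there (formula_94 for linked pages, formula_98 otherwise, with the link-following parameter set to 0.85), and approximates the stationary distribution by power iteration.

```python
import numpy as np

# Hypothetical link graph: page index -> set of pages it links to.
# Every page here has at least one outgoing link, so dangling pages are not handled.
links = {0: {1, 2}, 1: {2}, 2: {0}, 3: {0, 2}}
N = len(links)
alpha = 0.85  # probability of following a link rather than jumping to a random page

# Transition matrix: (1-alpha)/N everywhere, plus alpha/k_i spread over page i's links.
P = np.full((N, N), (1 - alpha) / N)
for i, targets in links.items():
    for j in targets:
        P[i, j] += alpha / len(targets)

# Power iteration: repeatedly push a distribution through the chain until it settles.
pi = np.full(N, 1.0 / N)
for _ in range(100):
    pi = pi @ P

print(pi.round(3), pi.sum())  # stationary distribution (the PageRank values); sums to 1
```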
[ { "math_id": 0, "text": "X_n" }, { "math_id": 1, "text": "X_0 = 0" }, { "math_id": 2, "text": "\\{X_n : n\\in\\mathbb{N}\\}" }, { "math_id": 3, "text": "X_6 = \\$0.50" }, { "math_id": 4, "text": "X_6" }, { "math_id": 5, "text": "X_7 \\geq \\$0.60" }, { "math_id": 6, "text": "X_7" }, { "math_id": 7, "text": "X_6 = 1,0,5" }, { "math_id": 8, "text": "6\\times 6\\times 6=216" }, { "math_id": 9, "text": "X_1 = 0,1,0" }, { "math_id": 10, "text": "X_2" }, { "math_id": 11, "text": "X_1" }, { "math_id": 12, "text": "X_2 = 1,0,1" }, { "math_id": 13, "text": "X_n = i,j,k" }, { "math_id": 14, "text": "X_{n-1}= \\ell,m,p" }, { "math_id": 15, "text": "\\Pr(X_{n+1}=x\\mid X_1=x_1, X_2=x_2, \\ldots, X_n=x_n) = \\Pr(X_{n+1}=x\\mid X_n=x_n)," }, { "math_id": 16, "text": "\\Pr(X_1=x_1,\\ldots,X_n=x_n)>0." }, { "math_id": 17, "text": "\\Pr(X_{n+1}=x\\mid X_n=y) = \\Pr(X_n = x \\mid X_{n-1} = y)" }, { "math_id": 18, "text": "\\Pr(X_{0}=x_0, X_{1} = x_1, \\ldots, X_{k} = x_k) = \\Pr(X_{n}=x_0, X_{n+1} = x_1, \\ldots, X_{n+k} = x_k)" }, { "math_id": 19, "text": "X_0" }, { "math_id": 20, "text": "\n\\begin{align}\n{} &\\Pr(X_n=x_n\\mid X_{n-1}=x_{n-1}, X_{n-2}=x_{n-2}, \\dots , X_1=x_1) \\\\\n= &\\Pr(X_n=x_n\\mid X_{n-1}=x_{n-1}, X_{n-2}=x_{n-2}, \\dots, X_{n-m}=x_{n-m})\n\\text{ for }n > m\n\\end{align}\n" }, { "math_id": 21, "text": "(Y_n)" }, { "math_id": 22, "text": "(X_n)" }, { "math_id": 23, "text": "Y_n= \\left( X_n,X_{n-1},\\ldots,X_{n-m+1} \\right)" }, { "math_id": 24, "text": "X_t" }, { "math_id": 25, "text": "X_t = i" }, { "math_id": 26, "text": "X_{t+h}=j" }, { "math_id": 27, "text": "\\left( X_s : s < t \\right)" }, { "math_id": 28, "text": "\\Pr(X(t+h) = j \\mid X(t) = i) = \\delta_{ij} + q_{ij}h + o(h)," }, { "math_id": 29, "text": "\\delta_{ij}" }, { "math_id": 30, "text": "q_{ij}" }, { "math_id": 31, "text": "\\Pr(X_{t_{n+1}} = i_{n+1} \\mid X_{t_0} = i_0 , X_{t_1} = i_1 , \\ldots, X_{t_n} = i_n ) = p_{i_n i_{n+1}}( t_{n+1} - t_n)" }, { "math_id": 32, "text": "P'(t) = P(t) Q" }, { "math_id": 33, "text": "p_{ij} = \\Pr(X_{n+1}=j\\mid X_n=i). " }, { "math_id": 34, "text": " \\pi\\mathbf{P} = \\pi." }, { "math_id": 35, "text": "\\pi=\\frac{e}{\\sum_i{e_i}}" }, { "math_id": 36, "text": "\\sum_i \\pi_i=1" }, { "math_id": 37, "text": " \\textstyle \\pi_i " }, { "math_id": 38, "text": "\\sum_i 1 \\cdot \\pi_i=1" }, { "math_id": 39, "text": "\\lim_{k\\to\\infty}\\mathbf{P}^k=\\mathbf{1}\\pi" }, { "math_id": 40, "text": "\\lim_{k\\to\\infty}\\mathbf{P}^k" }, { "math_id": 41, "text": "\\mathbf P=\\begin{pmatrix} 0& 1\\\\ 1& 0 \\end{pmatrix} \\qquad \\mathbf P^{2k}=I \\qquad \\mathbf P^{2k+1}=\\mathbf P" }, { "math_id": 42, "text": "\\begin{pmatrix}\\frac{1}{2}&\\frac{1}{2}\\end{pmatrix}\\begin{pmatrix} 0& 1\\\\ 1& 0 \\end{pmatrix}=\\begin{pmatrix}\\frac{1}{2}&\\frac{1}{2}\\end{pmatrix}" }, { "math_id": 43, "text": "\\mathbf{Q} = \\lim_{k\\to\\infty}\\mathbf{P}^k." }, { "math_id": 44, "text": "\\mathbf{QP} = \\mathbf{Q}." }, { "math_id": 45, "text": "\\mathbf{Q}(\\mathbf{P} - \\mathbf{I}_{n}) = \\mathbf{0}_{n,n} ," }, { "math_id": 46, "text": "\\mathbf{Q}=f(\\mathbf{0}_{n,n})[f(\\mathbf{P}-\\mathbf{I}_n)]^{-1}." }, { "math_id": 47, "text": "\\boldsymbol{\\pi} = \\boldsymbol{\\pi} \\mathbf{P}," }, { "math_id": 48, "text": " \\mathbf{P} = \\mathbf{U\\Sigma U}^{-1} ." }, { "math_id": 49, "text": " 1 = |\\lambda_1 |> |\\lambda_2 | \\geq |\\lambda_3 | \\geq \\cdots \\geq |\\lambda_n|." 
}, { "math_id": 50, "text": "\\R^n," }, { "math_id": 51, "text": " \\mathbf{x}^\\mathsf{T} = \\sum_{i=1}^n a_i \\mathbf{u}_i, \\qquad a_i \\in \\R." }, { "math_id": 52, "text": "\\begin{align}\n\\boldsymbol{\\pi}^{(k)} &= \\mathbf{x} \\left (\\mathbf{U\\Sigma U}^{-1} \\right ) \\left (\\mathbf{U\\Sigma U}^{-1} \\right )\\cdots \\left (\\mathbf{U\\Sigma U}^{-1} \\right ) \\\\\n&= \\mathbf{xU\\Sigma}^k \\mathbf{U}^{-1} \\\\\n&= \\left (a_1\\mathbf{u}_1^\\mathsf{T} + a_2\\mathbf{u}_2^\\mathsf{T} + \\cdots + a_n\\mathbf{u}_n^\\mathsf{T} \\right )\\mathbf{U\\Sigma}^k\\mathbf{U}^{-1} \\\\\n&= a_1\\lambda_1^k\\mathbf{u}_1^\\mathsf{T} + a_2\\lambda_2^k\\mathbf{u}_2^\\mathsf{T} + \\cdots + a_n\\lambda_n^k\\mathbf{u}_n^\\mathsf{T} && u_i \\bot u_j \\text{ for } i\\neq j \\\\\n& = \\lambda_1^k\\left\\{a_1\\mathbf{u}_1^\\mathsf{T} + a_2\\left(\\frac{\\lambda_2}{\\lambda_1}\\right)^k\\mathbf{u}_2^\\mathsf{T} + a_3\\left(\\frac{\\lambda_3}{\\lambda_1}\\right)^k\\mathbf{u}_3^\\mathsf{T} + \\cdots + a_n\\left(\\frac{\\lambda_n}{\\lambda_1}\\right)^k\\mathbf{u}_n^\\mathsf{T}\\right\\}\n\\end{align}" }, { "math_id": 53, "text": " |\\lambda_2| \\geq \\cdots \\geq |\\lambda_n|," }, { "math_id": 54, "text": " k = \\gcd\\{ n > 0: \\Pr(X_n = i \\mid X_0 = i) > 0\\}" }, { "math_id": 55, "text": "k > 1" }, { "math_id": 56, "text": "k = 1" }, { "math_id": 57, "text": " M_i = E[T_i]=\\sum_{n=1}^\\infty n\\cdot f_{ii}^{(n)}." }, { "math_id": 58, "text": "M_i" }, { "math_id": 59, "text": "\\pi_i = 1/E[T_i]" }, { "math_id": 60, "text": "k" }, { "math_id": 61, "text": "M^k" }, { "math_id": 62, "text": "M" }, { "math_id": 63, "text": "\\mathrm{sign}(M)" }, { "math_id": 64, "text": "n" }, { "math_id": 65, "text": " \\leq (n-1)^2 + 1 " }, { "math_id": 66, "text": "1 \\to 2 \\to \\dots \\to n \\to 1 \\text{ and } 2" }, { "math_id": 67, "text": "k \\geq 1" }, { "math_id": 68, "text": "\\leq 2n-k-1" }, { "math_id": 69, "text": "M^2" }, { "math_id": 70, "text": "\\leq 2n-2" }, { "math_id": 71, "text": "\\leq n+s(n-2)" }, { "math_id": 72, "text": "s" }, { "math_id": 73, "text": "\\leq (d+1)+s(d+1-2)" }, { "math_id": 74, "text": "d" }, { "math_id": 75, "text": "\\Omega = \\Sigma^\\N" }, { "math_id": 76, "text": "\\Sigma" }, { "math_id": 77, "text": "T: \\Omega \\to \\Omega" }, { "math_id": 78, "text": "T(X_0, X_1, \\dots) = (X_1, \\dots) " }, { "math_id": 79, "text": "\\Omega = \\Sigma^\\Z" }, { "math_id": 80, "text": "S" }, { "math_id": 81, "text": "T^{-1}(S) = S" }, { "math_id": 82, "text": "S = \\emptyset" }, { "math_id": 83, "text": "\\Omega" }, { "math_id": 84, "text": "Y(t) = \\big\\{ X(s): s \\in [a(t), b(t)] \\, \\big\\}." }, { "math_id": 85, "text": " k_i^A " }, { "math_id": 86, "text": "\\begin{align}\nk_i^A = 0 & \\text{ for } i \\in A\\\\\n-\\sum_{j \\in S} q_{ij} k_j^A = 1&\\text{ for } i \\notin A.\n\\end{align}" }, { "math_id": 87, "text": " \\hat X_t = X_{T-t}" }, { "math_id": 88, "text": "\ns_{ij} = \\begin{cases}\n\\frac{q_{ij}}{\\sum_{k \\neq i} q_{ik}} & \\text{if } i \\neq j \\\\\n0 & \\text{otherwise}.\n\\end{cases}\n" }, { "math_id": 89, "text": "S = I - \\left( \\operatorname{diag}(Q) \\right)^{-1} Q" }, { "math_id": 90, "text": "\\varphi" }, { "math_id": 91, "text": "\\varphi S = \\varphi, " }, { "math_id": 92, "text": "\\|\\varphi\\|_1" }, { "math_id": 93, "text": "\\pi = {-\\varphi (\\operatorname{diag}(Q))^{-1} \\over \\left\\| \\varphi (\\operatorname{diag}(Q))^{-1} \\right\\|_1}." 
}, { "math_id": 94, "text": "\\frac{\\alpha}{k_i} + \\frac{1-\\alpha}{N}" }, { "math_id": 95, "text": "i" }, { "math_id": 96, "text": "N" }, { "math_id": 97, "text": "k_i" }, { "math_id": 98, "text": "\\frac{1-\\alpha}{N}" }, { "math_id": 99, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=60876
60876593
Nilsson model
Nuclear shell model The Nilsson model is a nuclear shell model treating the atomic nucleus as a deformed sphere. In 1953, the first experimental examples were found of rotational bands in nuclei, with their energy levels following the same J(J+1) pattern of energies as in rotating molecules. Quantum mechanically, it is impossible to have a collective rotation of a sphere, so this implied that the shape of these nuclei was nonspherical. In principle, these rotational states could have been described as coherent superpositions of particle-hole excitations in the basis consisting of single-particle states of the spherical potential. But in reality, the description of these states in this manner is intractable, due to the large number of valence particles—and this intractability was even greater in the 1950s, when computing power was extremely rudimentary. For these reasons, Aage Bohr, Ben Mottelson, and Sven Gösta Nilsson constructed models in which the potential was deformed into an ellipsoidal shape. The first successful model of this type is the one now known as the Nilsson model. It is essentially a nuclear shell model using a harmonic oscillator potential, but with anisotropy added, so that the oscillator frequencies along the three Cartesian axes are not all the same. Typically the shape is a prolate ellipsoid, with the axis of symmetry taken to be z. Hamiltonian. For an axially symmetric shape with the axis of symmetry being the z axis, the Hamiltonian is formula_0 Here m is the mass of the nucleon, N is the total number of harmonic oscillator quanta in the spherical basis, formula_1 is the orbital angular momentum operator, formula_2 is its square (with eigenvalues formula_3), formula_4 is the average value of formula_2 over the N shell, and s is the intrinsic spin. The anisotropy of the potential is such that the length of an equipotential along the "z" is greater than the length on the transverse axes in the ratio formula_5. This is conventionally expressed in terms of a deformation parameter δ so that the harmonic oscillator part of the potential can be written as the sum of a spherically symmetric harmonic oscillator and a term proportional to δ. Positive values of δ indicate prolate deformations, like an American football. Most nuclei in their ground states have equilibrium shapes such that δ ranges from 0 to 0.2, while superdeformed states have formula_6 (a 2-to-1 axis ratio). The mathematical details of the deformation parameters are as follows. Considering the success of the nuclear liquid drop model, in which the nucleus is taken to be an incompressible fluid, the harmonic oscillator frequencies are constrained so that formula_7 remains constant with deformation, preserving the volume of equipotential surfaces. Reproducing the observed density of nuclear matter requires formula_8, where "A" is the mass number. The relation between δ and the anisotropy is formula_9, while the relation between δ and the axis ratio formula_10 is formula_11. The remaining two terms in the Hamiltonian do not relate to deformation and are present in the spherical shell model as well. The spin-orbit term represents the spin-orbit dependence of the strong nuclear force; it is much larger than, and has the opposite sign compared to, the special-relativistic spin-orbit splitting. The purpose of the formula_2 term is to mock up the flat profile of the nuclear potential as a function of radius. 
For nuclear wavefunctions (unlike atomic wavefunctions) states with high angular momentum have their probability density concentrated at greater radii. The term formula_12 prevents this from shifting a major shell up or down as a whole. The two adjustable constants are conventionally parametrized as formula_13 and formula_14. Typical values of κ and μ for heavy nuclei are 0.06 and 0.5. With this parametrization, formula_15 occurs as a simple scaling factor throughout all the calculations. Choice of basis and quantum numbers. For ease of computation using the computational resources of the 1950s, Nilsson used a basis consisting of eigenstates of the spherical hamiltonian. The Nilsson quantum numbers are formula_16. The difference between the spherical and deformed Hamiltonian is proportional to formula_17, and this has matrix elements that are easy to calculate in this basis. They couple the different N shells. Eigenstates of the deformed Hamiltonian have good parity (corresponding to even or odd N) and Ω, the projection of the total angular momentum along the symmetry axis. In the absence of a cranking term (see below), time-reversal symmetry causes states with opposite signs of Ω to be degenerate, so that in the calculations only positive values of Ω need to be considered. Interpretation. In an odd, well-deformed nucleus, the single-particle levels are filled up to the Fermi level, and the odd particle's Ω and parity give the spin and parity of the ground state. Cranking. Because the potential is not spherically symmetric, the single-particle states are not states of good angular momentum J. However, a Lagrange multiplier formula_18, known as a "cranking" term, can be added to the Hamiltonian. Usually the angular frequency vector ω is taken to be perpendicular to the symmetry axis, although tilted-axis cranking can also be considered. Filling the single-particle states up to the Fermi level then produces states whose expected angular momentum along the cranking axis formula_19 has the desired value set by the Lagrange multiplier. Total energy. Often one wants to calculate a total energy as a function of deformation. Minima of this function are predicted equilibrium shapes. Adding the single-particle energies does not work for this purpose, partly because kinetic and potential terms are out of proportion by a factor of two, and partly because small errors in the energies accumulate in the sum. For this reason, such sums are usually renormalized using a procedure introduced by Strutinsky. Plots of energy levels. Single-particle levels can be shown in a "spaghetti plot," as functions of the deformation. A large gap between energy levels at zero deformation indicates a particle number at which there is a shell closure: the traditional "magic numbers." Any such gap, at a zero or nonzero deformation, indicates that when the Fermi level is at that height, the nucleus will be stable relative to the liquid drop model.
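The deformation parametrization described above lends itself to a small numerical sketch. The Python snippet below uses only the relations quoted in the text (volume conservation, the relation between δ and the frequency ratio, and ħω0 ≈ (42 MeV)A^(-1/3)); the function names and the sample mass number are illustrative.

```python
import math

def hbar_omega0_MeV(A):
    """Spherical oscillator spacing from the mass number, as quoted in the text."""
    return 42.0 * A ** (-1.0 / 3.0)

def axis_ratio(delta):
    """R = omega_perp/omega_z from (omega_perp/omega_z)^2 = (1 + 2*delta/3)/(1 - 4*delta/3)."""
    return math.sqrt((1 + 2 * delta / 3) / (1 - 4 * delta / 3))

def deformed_frequencies(A, delta):
    """hbar*omega_z and hbar*omega_perp (MeV), preserving omega_z * omega_perp**2 = omega_0**3."""
    w0 = hbar_omega0_MeV(A)
    R = axis_ratio(delta)
    return w0 * R ** (-2.0 / 3.0), w0 * R ** (1.0 / 3.0)

# A superdeformed shape, delta = 0.5, should correspond to a 2-to-1 axis ratio:
print(axis_ratio(0.5))                    # -> 2.0
# and the inverse relation delta = (3/2)(R^2 - 1)/(2R^2 + 1) recovers delta:
R = 2.0
print(1.5 * (R**2 - 1) / (2 * R**2 + 1))  # -> 0.5
print(deformed_frequencies(152, 0.3))     # illustrative heavy nucleus, A = 152
```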
[ { "math_id": 0, "text": "H=\\frac{1}{2}m\\omega_z^2 z^2+\\frac{1}{2}m\\omega_\\perp^2 (x^2+y^2)-c_1\\ell\\cdot s-c_2(\\ell^2-\\langle\\ell^2\\rangle_N)." }, { "math_id": 1, "text": "\\ell" }, { "math_id": 2, "text": "\\ell^2" }, { "math_id": 3, "text": "\\ell(\\ell+1)" }, { "math_id": 4, "text": "\\langle\\ell^2\\rangle_N=(1/2)N(N+3)" }, { "math_id": 5, "text": "\\omega_\\perp/\\omega_z" }, { "math_id": 6, "text": "\\delta\\approx 0.5" }, { "math_id": 7, "text": "\\omega_z\\omega_\\perp^2=\\omega_0^3" }, { "math_id": 8, "text": "\\hbar\\omega_0\\approx (42\\ \\text{MeV})A^{-1/3}" }, { "math_id": 9, "text": "(\\omega_\\perp/\\omega_z)^2=(1+\\frac{2}{3}\\delta)/(1-\\frac{4}{3}\\delta)" }, { "math_id": 10, "text": "R=\\omega_\\perp/\\omega_z" }, { "math_id": 11, "text": "\\delta=(3/2)(R^2-1)/(2R^2+1)" }, { "math_id": 12, "text": "-\\langle\\ell^2\\rangle_N" }, { "math_id": 13, "text": "c_1=2\\kappa\\hbar\\omega_0" }, { "math_id": 14, "text": "c_2=\\mu\\kappa\\hbar\\omega_0" }, { "math_id": 15, "text": "\\hbar\\omega_0" }, { "math_id": 16, "text": "\\{N,\\ell,m_\\ell,m_s\\}" }, { "math_id": 17, "text": "r^2Y_{20}\\delta" }, { "math_id": 18, "text": "-\\omega\\cdot J" }, { "math_id": 19, "text": "\\langle J_x\\rangle" } ]
https://en.wikipedia.org/wiki?curid=60876593
60877658
Human contingency learning
Human contingency learning (HCL) is the observation that people tend to acquire knowledge based on whichever outcome has the highest probability of occurring from particular stimuli. In other words, individuals gather associations between a certain behaviour and a specific consequence. It is a form of learning for many organisms. Stimulus pairings can have many impacts on responses such as influencing the speed of responses, accuracies of the responses, affective evaluations and causal attributions. There has been much development about human contingency learning over a span of 20 years. Further development in human contingency learning is required because many models that have been proposed are unable to incorporate all existing data. Description. Human contingency learning focuses on the acquisition and development of explicit or implicit knowledge of the relationships or statistical correlations between stimuli and responses. It is similar to operant conditioning, which is a learning process where a behaviour can be encouraged or discouraged through praise or punishment. However, human contingency learning has been recognised as a cognitive process and may be considered an addition to classical conditioning. Human contingency learning also has its theoretical roots entrenched in classical conditioning, which focuses on the statistical correlations between two stimuli instead of a stimulus and response. The methods for the experimentation or studies on human contingency learning are often found to be quite similar. Participants in many studies of human contingency learning are given information about a number of situations where certain stimuli and certain responses are either absent or present. They are then told to determine the extent to which the stimuli are related to the responses. For example, in a trial, participants are provided a list of foods that a fictitious person has eaten (the stimulus) along with details about whether the patient experienced any allergic reactions after the food (the response). According to the Quarterly Journal of Experimental Psychology, the participants will apply this information to determine the probability of that same patient acquiring an allergic reaction after consuming a different set of foods. Human contingency learning mostly inherits the fundamental concepts from classical conditioning (and some from operant conditioning), which primarily focused on studying animals. It expands upon these studies and provides further application to human behaviour. Human contingency learning is recognised as an important ability to human survival because it allows organisms to predict and control events in the environment based on previous experiences. Theoretical roots. Origins of classical (Pavlovian) conditioning. Human contingency learning has its roots connected to classical conditioning; also referred to as Pavlovian conditioning after the Russian psychologist, Ivan Pavlov. It is a type of learning through association where two stimuli are linked to create a new response in an animal or person. The popular experiment is known as Pavlov's dogs where food was provided to the dogs along with repeated sounds of a bell; the food, which was the initial stimulus, would cause the dog to salivate. The pairing of the bell with the food resulted in the former becoming the new stimulus even after the food was excluded from the pairing. 
This therefore meant that the bell (the new stimulus) would evoke a conditioned response from the dogs even without the presence of the initial stimulus, since the dogs anticipated the arrival of food. At a procedural level, human contingency learning experiments are very similar to classical conditioning testing methods. Stimuli consisting of cues and outcomes are paired, and the decisions of the participants in response to the stimuli (contingency judgements) are assessed. Origins of operant conditioning. Human contingency learning also has strong similarities with operant conditioning. As mentioned, its method of learning involves the use of praise or punishment of a certain behaviour. Once a certain behaviour reliably leads to a certain consequence, the individual being tested will make an association between the behaviour and the consequence. For example, if a certain behaviour has a positive consequence, then the individual or organism will learn from this and continue to do it, as the action is perceived as being rewarded. This theory was developed by B.F. Skinner and explored in his 1938 book "The Behavior of Organisms: An Experimental Analysis". The research built on Thorndike's law of effect, which states that a particular behaviour persists if pleasant consequences are repeated. The contrary is also true: if there are unpleasant consequences to a certain behaviour, it is unlikely for that behaviour to continue. Concepts and theories behind human contingency learning. As with all theories, an introduction to the fundamental concepts and frameworks underlying the overall cognitive process is necessary. The theories, however, are still undergoing testing, as the methods employed to test the hypotheses remain inconclusive and subject to review. Associative theories. Pathway strengthening (Rescorla-Wagner model). One of the main cognitive theories inherent in human contingency learning is pathway strengthening, which is based on the Rescorla-Wagner model. It has been proposed as the mechanism that underlies the gradual learning of tendencies to respond to certain inputs. In pathway strengthening, performance is attributed to the strengthening of pathways linking cue representations with the representation of outcomes. It is a model of classical conditioning where learning is attributed to associations between conditioned and unconditioned stimuli. The main focus of the Rescorla-Wagner model is that conditional stimuli can trigger or signal the unconditional stimuli. Stronger pathways allow for more efficient and automatic responses. When participants are faced with fast-paced sequence-learning tasks, pathway strengthening is reflected in the gradual speeding of their responses. Associative models assume that the knowledge of a cue-outcome relationship is represented by an associative bond and that the bond should function similarly even without precise semantics of a test question. The illustration of such a relationship can be linked back to the experiment of Pavlov's dogs. 
The strength of the conditioned response induced by the conditional stimulus depends on how strong the association is between representations of conditional stimuli and unconditional stimuli. This relationship can be expressed by the following learning rule: formula_0 In this formula, the change in the associative strength of the conditional stimulus on a given trial (formula_1) depends on both the associative strength of the cue acquired previously and the already-present associative strengths of all stimuli present within the trial itself (formula_2). The term formula_3 represents the highest associative strength that a certain unconditional stimulus can provide. When an unconditional stimulus is present in the trial, formula_3 is given a prescribed positive value (often 1); conversely, when the unconditional stimulus is absent from the trial, formula_3 takes the value 0. The formula_4 and formula_5 terms in the formula are constants representing the speed of learning for a given pairing of conditional and unconditional stimulus. Although this model has primarily been applied to classical conditioning, according to Dickinson et al. (1984), the Rescorla-Wagner model has applications to human contingency learning. The problem with this, however, is the inherent assumption of the model; as De Houwer and Beckers state, &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;One only needs to assume that associations are formed between the representations of cues and outcomes, that the strength of these associations is updated according to the Rescorla–Wagner learning rule, and that judgements about the contingency between a target cue and an outcome are a reflection of the strength of the association that links the representations of that cue and outcome. Therefore, the Rescorla-Wagner model's relevance to human contingency learning is considered problematic because the model "underestimate[s] the active role that observers play when encoding and retrieving knowledge about contingencies". Predictiveness. Predictiveness is the theory that learning depends on how engaged the individual is with a stimulus: the more attention that is applied to a certain cue, the greater the speed of conditioning. It is a theory that was developed by Nicholas Mackintosh, a British psychologist. If it is assumed that the effect of predictiveness on associability is positive, then the amount of attention an individual applies to a cue will be a measure of that cue's reliability as a predictor of a result in relation to other cues. It is an assessment of the extent to which an organism is able to reliably predict an outcome based on the predictive power of a cue. According to the Mackintosh model, it has generally been found that when individuals gain knowledge from cues that have acquired a certain predictive force in earlier learning stages, an increase in the learning rate for both animals and humans tends to result. The extent of stimulus processing can also be shaped by the interaction effects of relative and absolute predictiveness. Absolute predictiveness mechanisms are "expected to dominate when the entire compound of cues is informative". On the other hand, a "relative-predictiveness mechanism should dominate when simultaneously presented cues differ in their predictiveness". 
Both of these theories relating to predictiveness have been brought forward by different psychologists, with absolute predictiveness being derived from Pearce and Hall and relative predictiveness being part of Mackintosh's theory. Application. Human contingency learning has been studied under different types of models or paradigms. Some paradigms involve participants being asked to assess the associative relations between stimuli when presented with a combination of stimuli. Particularly in humans, many different studies have been undertaken, such as judgements to determine a relationship between correlated stimuli, and judgements on predictive relationships between stimuli and responses while measuring the response time and accuracy gains that may differ between each stimulus-response pair. Some applications of human contingency learning are summarised in the sub-headings below. Generalisation decrement. Generalisation decrement is a type of learning that falls under the umbrella of associative learning. It is a concept whereby both animals and humans base current learning on past events when the conditions of such an event are similar to the present one. The importance of applying associative theories to the context of human learning is due to the human behaviour of generalisation. Generalisation is when an association between a stimulus and a response is generalised, or applied superficially, to a stimulus that is similar to the initial one. To expand upon this further, when making a learned association concerning a stimulus A, the strength of that association can be distributed across a number of elements that make up A. When introducing a different object B, if it carries some of the same elements that A contained, the degree to which B inherits stimulus A's associative strength will depend upon the amount of similarity that they share. The assumption of elements is made because the stimuli can be seen as "compounds composed of constituent elements (i.e., representational features)". Pearce's Model. Generalisation decrement can be represented by Pearce's configural model. The similarity of two stimuli is given by the following expression: formula_6 In the above formula, formula_7 quantifies the number of elements that both stimuli have in common, with formula_8 and formula_9 representing the total numbers of elements in each stimulus. If a single stimulus is paired with an unconditional stimulus, the strength of the response to the other stimulus is positively related to the number of elements shared with the originally conditioned stimulus. On the other hand, the response is inversely related to the number of elements that are exclusive to each of the stimuli. Expanding on the model further, if common elements are added to the stimuli, the response increases as the quantity of common elements is increased; conversely, if common elements are removed, the response will decrease due to a reduction in the quantity of common elements in the stimuli. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
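The Rescorla-Wagner learning rule quoted earlier (formula_0) is simple enough to simulate directly. The sketch below tracks the associative strength of a single cue across reinforced and non-reinforced trials; the parameter values are arbitrary assumptions chosen only for illustration.

```python
def rescorla_wagner(trials, alpha=0.3, beta=0.8, lam_present=1.0):
    """Associative strength V of a single cue over a list of trials.
    Each trial is True (outcome present, lambda = lam_present) or False (lambda = 0)."""
    V = 0.0
    history = []
    for outcome in trials:
        lam = lam_present if outcome else 0.0
        V += alpha * beta * (lam - V)  # delta V = alpha * beta * (lambda - summed V so far)
        history.append(round(V, 3))
    return history

# Ten reinforced (acquisition) trials followed by ten non-reinforced (extinction) trials.
print(rescorla_wagner([True] * 10 + [False] * 10))
```

With several cues presented in compound, applying the same update to each cue with the summed strength of all present cues in place of V reproduces effects such as blocking, which is part of why the model has been influential.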
[ { "math_id": 0, "text": "\\Delta V_n=\\alpha\\beta(\\lambda-\\Sigma V_{n-1})" }, { "math_id": 1, "text": "\\Delta V_n" }, { "math_id": 2, "text": "\\Sigma V" }, { "math_id": 3, "text": "\\lambda" }, { "math_id": 4, "text": "\\alpha" }, { "math_id": 5, "text": "\\beta" }, { "math_id": 6, "text": "N_c/N_{P1}*N_c/N_{P2}" }, { "math_id": 7, "text": "N_c" }, { "math_id": 8, "text": "N_{P1}" }, { "math_id": 9, "text": "N_{P2}" } ]
https://en.wikipedia.org/wiki?curid=60877658
60880859
Eddington experiment
1919 observational test which confirmed Einstein's theory of general relativity The Eddington experiment was an observational test of general relativity, organised by the British astronomers Frank Watson Dyson and Arthur Stanley Eddington in 1919. The observations were of the total solar eclipse of 29 May 1919 and were carried out by two expeditions, one to the West African island of Príncipe, and the other to the Brazilian town of Sobral. The aim of the expeditions was to measure the gravitational deflection of starlight passing near the Sun. The value of this deflection had been predicted by Albert Einstein in a 1911 paper; however, this initial prediction turned out not to be correct because it was based on an incomplete theory of general relativity. Einstein later improved his prediction after finalizing his theory in 1915 and obtaining the solution to his equations by Karl Schwarzschild. Following the return of the expeditions, the results were presented by Eddington to the Royal Society of London and, after some deliberation, were accepted. Widespread newspaper coverage of the results led to worldwide fame for Einstein and his theories. Background. One of the first considerations of gravitational deflection of light was published in 1801, when Johann Georg von Soldner pointed out that Newtonian gravity predicts that starlight will be deflected when it passes near a massive object. Initially, in a paper published in 1911, Einstein had incorrectly calculated that the amount of light deflection was the same as the Newtonian value, that is 0.83 seconds of arc for a star that would be just on the limb of the Sun in the absence of gravity. In October 1911, responding to Einstein's encouragement, German astronomer Erwin Freundlich contacted solar eclipse expert Charles D. Perrine in Berlin to inquire as to the suitability of existing solar eclipse photographs to prove Einstein's prediction of light deflection. Perrine, the director of the Argentine National Observatory at Cordoba, had participated in four solar eclipse expeditions while at the Lick Observatory in 1900, 1901, 1905, and 1908. He did not believe existing eclipse photos would be useful. In 1912 Freundlich asked if Perrine would include observation of light deflection as part of the Argentine Observatory's program for the solar eclipse of October 10, 1912, in Brazil. W. W. Campbell, director of the Lick Observatory, loaned Perrine its intramercurial camera lenses. Perrine and the Cordoba team were the only eclipse expedition to construct specialized equipment dedicated to observe light deflection. Unfortunately all the expeditions suffered from torrential rains which prevented any observations. Nevertheless Perrine was the first astronomer to make a dedicated attempt to observe light deflection to test Einstein's prediction. Eddington had taken part in a British expedition to Brazil to observe the 1912 eclipse but was interested in different measurements. Eddington and Perrine spent several days together in Brazil and may have discussed their observation programs including Einstein's prediction of light deflection. In 1914 three eclipse expeditions, from Argentina, Germany, and the US, were committed to testing Einstein's theory by observing for light deflection. The three directors were Erwin Finlay-Freundlich, from the Berlin Observatory, the US astronomer William Wallace Campbell, director of the Lick Observatory, and Charles D. Perrine, director of the Argentine National Observatory at Cordoba. 
The three expeditions travelled to the Crimea in the Russian Empire to observe the eclipse of 21 August. However, the First World War started in July of that year, and Germany declared war on Russia on 1 August. The German astronomers were either forced to return home or were taken prisoner by the Russians. Although the US and Argentine astronomers were not detained, clouds prevented clear observations being made during the eclipse. Perrine's photographs, although not clear enough to prove Einstein's prediction, were the first obtained in an attempt to test Einstein's prediction of light deflection. A second attempt by American astronomers to measure the effect during the 1918 eclipse was foiled by clouds in one location and by ambiguous results due to the lack of the correct equipment in another. Einstein's 1911 paper predicted deflection of star light on the limb of the Sun to be 0.83 seconds of arc and encouraged astronomers to test this prediction by observing stars near the Sun during a solar eclipse. It is fortunate for Einstein that the weather precluded results by Perrine in 1912 and Perrine, Freundlich, and Campbell in 1914. If results had been obtained they may have disproved the 1911 prediction setting back Einstein's reputation. In any case, Einstein corrected his prediction in his 1915 paper on General Relativity to 1.75 seconds of arc for a star on the limb. Einstein and subsequent astronomers both benefitted from this correction. Eddington's interest in general relativity began in 1916, during World War I, when he read papers by Einstein (presented in Berlin, Germany, in 1915), which had been sent by the neutral Dutch physicist Willem de Sitter to the Royal Astronomical Society in Britain. Eddington, later said to be one of the few people at the time to understand the theory, realised its significance and lectured on relativity at a meeting at the British Association in 1916. He emphasised the importance of testing the theory by methods such as eclipse observations of light deflection, and the Astronomer Royal, Frank Watson Dyson began to make plans for the eclipse of May 1919, which would be particularly suitable for such a test. Eddington also produced a major report on general relativity for the Physical Society, published as "Report on the Relativity Theory of Gravitation" (1918). Eddington also lectured on relativity at Cambridge University, where he had been professor of astronomy since 1913. Wartime conscription in Britain was introduced in 1917. At the age of 34, Eddington was eligible to be drafted into the military, but his exemption from this was obtained by his university on the grounds of national interest. This exemption was later appealed by the War Ministry, and at a series of hearings in June and July 1918, Eddington, who was a Quaker, stated that he was a conscientious objector, based on religious grounds. At the final hearing, the Astronomer Royal, Frank Watson Dyson, supported the exemption by proposing that Eddington undertake an expedition to observe the total eclipse in May the following year to test Einstein's General Theory of Relativity. The appeal board granted a twelve-month extension for Eddington to do so. Although this extension was rendered moot by the signing of the Armistice in November, ending the war, the expedition went ahead as planned. Theory. The theory behind the experiment concerns the predicted deflection of light by the Sun. 
The first observation of light deflection was performed by noting the change in position of stars as they passed near the Sun on the celestial sphere. The approximate angular deflection δ"φ" for a massless particle coming in from infinity and going back out to infinity is given by the following formula: formula_0 Here, "b" can be interpreted as the distance of closest approach, "M" is the mass of the deflecting body (here the Sun), and "r"s = 2"GM"/"c"² is its Schwarzschild radius (about 3 km for the Sun). Although this formula is approximate, it is accurate for most measurements of gravitational lensing, due to the smallness of the ratio "rs/b". For light grazing the surface of the Sun, the approximate angular deflection is roughly 1.75 arcseconds. This is twice the value predicted by calculations using the Newtonian theory of gravity. It was this difference in the deflection between the two theories that Eddington's expedition and other later eclipse observers would attempt to observe. Expeditions and observations. The aim of the expeditions was to take advantage of the shielding effect of the Moon during a total solar eclipse, and to use astrometry to measure the positions of the stars in the sky around the Sun during the eclipse. These stars, not normally visible in the daytime due to the brightness of the Sun, would become visible during the moment of totality when the Moon covered the solar disc. A difference in the observed position of the stars during the eclipse, compared to their normal position (measured some months earlier at night, when the Sun is not in the field of view), would indicate that the light from these stars had bent as it passed close to the Sun. Dyson, when planning the expedition in 1916, had chosen the 1919 eclipse because it would take place with the Sun in front of a bright group of stars called the Hyades. The brightness of these stars would make it easier to measure any changes in position. Two teams of two people were to be sent to make observations of the eclipse at two locations: the West African island of Príncipe and the Brazilian town of Sobral. The Príncipe expedition members were Eddington and Edwin Turner Cottingham, from the Cambridge Observatory, while the Sobral expedition members were Andrew Crommelin and Charles Rundle Davidson, from the Greenwich Observatory in London. Eddington was Director of the Cambridge Observatory, and Cottingham was a clockmaker who worked on the observatory's instruments. Similarly, Crommelin was an assistant at the Greenwich Observatory, while Davidson was one of the observatory's computers. The expeditions were organised by the Joint Permanent Eclipse Committee, a joint committee between the Royal Society and the Royal Astronomical Society, chaired by Dyson, the Astronomer Royal. The funding application for the expedition was made to the Government Grant Committee, asking for £100 for instruments and £1000 for travel and other costs. Sobral. In mid-1918, researchers from the Brazilian National Observatory determined that the city of Sobral, Ceará, was the best geographical position from which to observe the solar eclipse. Its director, Henrique Charles Morize, sent a report to worldwide scientific institutions on the subject, including the Royal Astronomical Society, London. The Greenwich Observatory team sent to Brazil consisted of Charles Davidson and Andrew Crommelin, with Frank Dyson coordinating everything from Europe and, later, being responsible for analyzing the team's data. The team arrived in Brazil on March 23, 1919, and customs inspection of its equipment was waived as a courtesy by the Brazilian government. 
While Eddington took part in the Príncipe expedition, it is unknown why Dyson did not travel to Brazil. The equipment consisted of two astrographic telescopes coupled to mirror systems known as coelostats: a main telescope from the Royal Greenwich Observatory with a 13-inch aperture, mounted to a 16-inch coelostat, and a small backup telescope with a 4-inch aperture borrowed from Aloysius Cortie. On April 30 the team arrived at Sobral. The eclipse day started cloudy, but the sky cleared and the Moon's disk began to obscure the Sun shortly before 8:56 am; the eclipse lasted 5 minutes 13 seconds. The team remained at Sobral until July to photograph the same star field at night. The main telescope recorded twelve stars, while the backup one recorded seven. The main telescope produced blurred images, which were discarded from the final conclusion, while the smaller one produced the clearest images and was the most trustworthy. Daniel Kennefick argues that without the Sobral photographs the results of the 1919 eclipse would have been inconclusive, and that the expeditions during later eclipses failed to improve on the data. The British team was joined by a Brazilian team led by Henrique Charles Morize and the astronomers Lélio Gama, Domingos Fernandes da Costa, Allyrio Hugueney de Mattos and Teófilo Lee, with the objective of producing spectroscopic observations of the Sun's corona. The team set up its equipment at a plaza in front of the church of Patrocínio, where the Eclipse Museum is today. The team took several 24-by-18 and 9-by-12 cm plates capturing the Sun and the stars' positions near its edge, but no meaningful conclusions were drawn from the data produced by the Brazilian team, and its contribution ended up being mainly logistical support for the British team, together with meteorological observations. Its plates were restored by the National Observatory in 2015, while the British team's plates were lost after 1979. A third expedition present that day was formed by Daniel Maynard Wise and Andrew Thomson, from the Carnegie Institution. Their goal was to study the eclipse's effects on the Earth's magnetic field and atmospheric electricity. In 1925, Einstein said of the results to the Brazilian press: "The problem conceived by my brain was solved by the bright Brazilian sky". Príncipe. The equipment used for the expedition to Príncipe, an island in the Gulf of Guinea off the coast of West Africa, was an astrographic lens borrowed from the Radcliffe Observatory in Oxford. Eddington sailed from England in March 1919. By mid-May he had his equipment set up on Príncipe, near what was then Spanish Guinea. The eclipse was due to take place in the early afternoon of 29 May, at 2 pm, but that morning there was a storm with heavy rain. Eddington developed the photographs on Príncipe, and attempted to measure the change in the stellar positions during the eclipse. On 3 June, despite the clouds that had reduced the quality of the plates, Eddington recorded in his notebook: "... one plate I measured gave a result agreeing with Einstein." The future astronomer and astrophysicist Cecilia Payne-Gaposchkin attended Eddington's lecture on the expedition and later related how strongly it had affected her. Results and publication. The results were announced at a meeting of the Royal Society in November 1919, and published in the "Philosophical Transactions of the Royal Society" in 1920. 
Following the return of the expedition, Eddington was addressing a dinner held by the Royal Astronomical Society and, showing his more light-hearted side, recited a verse that he had composed in a style parodying the "Rubaiyat" of Omar Khayyam. Later replications. The light deflection measurements were repeated by expeditions that observed the total solar eclipse of 21 September 1922 in Australia. An important role in this was played by the Lick Observatory and the Mount Wilson Observatory, both in California, USA. On April 12, 1923, William Wallace Campbell announced that the preliminary new results confirmed Einstein's theory of relativity and its prediction of the amount of light deflection, with measurements from over 200 stars. Final results published in 1928 used measurements of over 3,000 star images. Reception. The presentation of the results at the joint November 6, 1919 session of the Royal Society and the Royal Astronomical Society led to intensive press coverage, first in Great Britain and a few days later in the US press, notably in "The New York Times", and some days later still in the German press. 
It is notable that while the Eddington results were seen as a confirmation of Einstein's prediction, and in that capacity soon found their way into general relativity text books, among other astronomers there followed a decade-long discussion of the quantitative values of light deflection, with the precise results in contention even after several expeditions had repeated Eddington's observations on the occasion of subsequent eclipses. The discussion concerned both the data analysis – such as the different weight assigned to different stars in the 1922 and 1929 eclipse expeditions – and the question of specific systematic effects that could skew the results. All in all, eclipse measurements of this kind, using visible light, retained considerable uncertainty, and it was only radio-astronomical measurements in the late 1960s that definitively showed that the amount of deflection was the full value predicted by general relativity, and not half that number as predicted by a "Newtonian" calculation. Those measurements and their successors are nowadays an important part of the so-called post-Newtonian tests of gravity, the systematic way of parametrizing the predictions of general relativity and other theories in terms of ten adjustable parameters in the context of the parameterized post-Newtonian formalism, where each parameter represents a possible departure from Newton's law of universal gravitation. The earliest parameterizations of the post-Newtonian approximation were performed by Eddington (1922). The parameter concerned with the amount of deflection of light by a gravitational source is the so-called Eddington parameter (γ), and it is currently the best-constrained of the ten post-Newtonian parameters. At about the time of the last serious photo-plate eclipse measurements, by a University of Texas expedition observing in Mauritania in 1973, doubts began to surface about whether or not the original Eddington measurements were sufficient to vindicate Einstein's prediction, or whether biased analysis by Eddington and his colleagues had skewed the results. Similar concerns about systematic errors and possibly confirmation bias were raised in the science history community and gained more prominence as part of the popular book "The Golem" by Trevor Pinch and Harry Collins. A modern reanalysis of the dataset, though, suggests that Eddington's analysis was accurate, and in fact less afflicted by bias than some of the analyses of solar eclipse data that followed. Part of the vindication comes from a 1979 reanalysis of the plates from the two Sobral instruments, using a much more modern plate-measuring machine than was available in 1919, which supports Eddington's results. In popular culture. The experiment was central to the plot of the 2008 BBC television film "Einstein and Eddington", with David Tennant in the role of Eddington. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
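As a numerical check on the figures quoted in the Theory section, the deflection formula can be evaluated for light grazing the solar limb. The constants below are standard values, and the halved figure is the "Newtonian-style" prediction discussed above.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m, used as the distance of closest approach b

deflection_rad = 4 * G * M_sun / (c**2 * R_sun)  # general-relativistic value
arcsec = math.degrees(deflection_rad) * 3600

print(round(arcsec, 2))      # ~1.75 arcseconds, as quoted in the Theory section
print(round(arcsec / 2, 2))  # ~0.87 arcseconds, the halved "Newtonian" figure
```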
[ { "math_id": 0, "text": "\n\\delta \\varphi \\approx \\frac{2r_{s}}{b} = \\frac{4GM}{c^{2}b}.\n" } ]
https://en.wikipedia.org/wiki?curid=60880859
60883993
Applications of dual quaternions to 2D geometry
Four-dimensional algebra over the real numbers In this article, we discuss certain applications of the dual quaternion algebra to 2D geometry. At present, the article focuses on a 4-dimensional subalgebra of the dual quaternions which we will call the "planar quaternions". The planar quaternions make up a four-dimensional algebra over the real numbers. Their primary application is in representing rigid body motions in 2D space. Unlike multiplication of dual numbers or of complex numbers, that of planar quaternions is non-commutative. Definition. In this article, the set of planar quaternions is denoted formula_1. A general element formula_2 of formula_1 has the form formula_3 where formula_4, formula_5, formula_6 and formula_7 are real numbers; formula_8 is a dual number that squares to zero; and formula_0, formula_9, and formula_10 are the standard basis elements of the quaternions. Multiplication is done in the same way as with the quaternions, but with the additional rule that formula_11 is nilpotent of index formula_12, i.e., formula_13, which in some circumstances makes formula_14 comparable to an infinitesimal number. It follows that a planar quaternion with nonzero magnitude has a multiplicative inverse, given by formula_15 The set formula_16 forms a basis of the vector space of planar quaternions, where the scalars are real numbers. The magnitude of a planar quaternion formula_2 is defined to be formula_17 For applications in computer graphics, the number formula_18 is commonly represented as the 4-tuple formula_19. Matrix representation. A planar quaternion formula_20 has the following representation as a 2×2 complex matrix: formula_21 It can also be represented as a 2×2 dual number matrix: formula_22 The above two matrix representations are related to the Möbius transformations and Laguerre transformations respectively. Terminology. The algebra discussed in this article is sometimes called the "dual complex numbers". This may be a misleading name, because it suggests an algebra obtained by simply combining the complex numbers with the dual numbers (in either order). An algebra meeting that description does exist, and the two orders of combination give equivalent results (because the tensor product of algebras is commutative up to isomorphism); it can be denoted as formula_23 using ring quotienting. The resulting algebra has a commutative product, however, and is not discussed any further. Representing rigid body motions. Let formula_24 be a unit-length planar quaternion, i.e. we must have that formula_25 The Euclidean plane can be represented by the set formula_26. An element formula_27 of formula_28 represents the point on the Euclidean plane with Cartesian coordinates formula_29. formula_2 can be made to act on formula_30 by formula_31 which maps formula_30 onto some other point on formula_28. formula_2 has a polar form which depends on whether formula_32 or formula_35: in the first case it can be written as formula_33, and in the second case it is, up to sign, of the form formula_36. Geometric construction. A principled construction of the planar quaternions can be found by first noticing that they are a subset of the dual-quaternions. There are two geometric interpretations of the "dual-quaternions", both of which can be used to derive the action of the planar quaternions on the plane. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
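The multiplication rule, the inverse formula, and the action on points described above can be checked with a short sketch. Planar quaternions are stored here as 4-tuples (A, B, C, D) in the basis {1, i, εj, εk}; the helper names are illustrative, and only the rotation-about-the-origin case of the action is demonstrated.

```python
import math

def mul(p, q):
    """Product in the basis {1, i, eps*j, eps*k}, using i*i = -1, eps*eps = 0
    and the usual quaternion products of i, j, k."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1 * a2 - b1 * b2,
            a1 * b2 + b1 * a2,
            a1 * c2 + c1 * a2 - b1 * d2 + d1 * b2,
            a1 * d2 + d1 * a2 + b1 * c2 - c1 * b2)

def inverse(q):
    """Inverse from the formula in the Definition section: the conjugate over A^2 + B^2."""
    a, b, c, d = q
    n = a * a + b * b
    return (a / n, -b / n, -c / n, -d / n)

def act(q, point):
    """Apply q to the point (x, y), encoded as v = i + x*eps*j + y*eps*k, via q v q^-1."""
    x, y = point
    v = (0.0, 1.0, x, y)
    _, _, new_x, new_y = mul(mul(q, v), inverse(q))
    return (new_x, new_y)

theta = math.pi / 3
q_rot = (math.cos(theta / 2), math.sin(theta / 2), 0.0, 0.0)  # unit planar quaternion
print(act(q_rot, (1.0, 0.0)))  # approximately (0.5, 0.866): a rotation by 60 degrees
```

Because the sandwich product q v q^-1 is unchanged when q is multiplied by a nonzero real scale factor, the same function also works for non-unit q with nonzero magnitude.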
[ { "math_id": 0, "text": "i" }, { "math_id": 1, "text": "\\mathbb {DC}" }, { "math_id": 2, "text": "q" }, { "math_id": 3, "text": "A + Bi + C\\varepsilon j + D\\varepsilon k" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "B" }, { "math_id": 6, "text": "C" }, { "math_id": 7, "text": "D" }, { "math_id": 8, "text": "\\varepsilon" }, { "math_id": 9, "text": "j" }, { "math_id": 10, "text": "k" }, { "math_id": 11, "text": " \\varepsilon " }, { "math_id": 12, "text": "2" }, { "math_id": 13, "text": " \\varepsilon^2 = 0 " }, { "math_id": 14, "text": "\\varepsilon" }, { "math_id": 15, "text": " (A + Bi + C\\varepsilon j + D\\varepsilon k)^{-1} = \\frac{A - Bi - C\\varepsilon j - D\\varepsilon k}{A^2+B^2}" }, { "math_id": 16, "text": "\\{1, i, \\varepsilon j, \\varepsilon k\\}" }, { "math_id": 17, "text": "|q| = \\sqrt{A^2 + B^2}." }, { "math_id": 18, "text": "A + Bi + C\\varepsilon j + D\\varepsilon k" }, { "math_id": 19, "text": "(A,B,C,D)" }, { "math_id": 20, "text": "q = A + Bi + C\\varepsilon j + D\\varepsilon k" }, { "math_id": 21, "text": "\\begin{pmatrix}A + Bi & C + Di \\\\ 0 & A - Bi \\end{pmatrix}." }, { "math_id": 22, "text": "\\begin{pmatrix}A + C\\varepsilon & -B + D\\varepsilon \\\\ B + D\\varepsilon & A - C\\varepsilon\\end{pmatrix}." }, { "math_id": 23, "text": "\\mathbb C[x]/(x^2)" }, { "math_id": 24, "text": "q = A + Bi + C\\varepsilon j + D\\varepsilon k" }, { "math_id": 25, "text": "|q| = \\sqrt{A^2 + B^2} = 1." }, { "math_id": 26, "text": "\\Pi = \\{i + x \\varepsilon j + y \\varepsilon k \\mid x \\in \\Reals, y \\in \\Reals\\}" }, { "math_id": 27, "text": "v = i + x \\varepsilon j + y \\varepsilon k" }, { "math_id": 28, "text": "\\Pi" }, { "math_id": 29, "text": "(x,y)" }, { "math_id": 30, "text": "v" }, { "math_id": 31, "text": "qvq^{-1}," }, { "math_id": 32, "text": "B \\neq 0" }, { "math_id": 33, "text": "\\cos(\\theta/2) + \\sin(\\theta/2)(i + x\\varepsilon j + y\\varepsilon k)," }, { "math_id": 34, "text": "\\theta" }, { "math_id": 35, "text": "B = 0" }, { "math_id": 36, "text": "\\begin{aligned}&1 + i(x\\varepsilon j + y\\varepsilon k)\\\\ = {} & 1 - y\\varepsilon j + x\\varepsilon k,\\end{aligned}" }, { "math_id": 37, "text": "\\begin{pmatrix}x \\\\ y\\end{pmatrix}." }, { "math_id": 38, "text": "\\{i + x \\varepsilon j + y \\varepsilon k \\mid x \\in \\mathbb R, y \\in \\mathbb R\\}" }, { "math_id": 39, "text": "v \\in \\Pi" }, { "math_id": 40, "text": "qvq^{-1}" }, { "math_id": 41, "text": "B\\neq 0" }, { "math_id": 42, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=60883993
6088729
Jacket matrix
In mathematics, a jacket matrix is a square symmetric matrix formula_0 of order "n" whose entries are non-zero and real, complex, or from a finite field, and which satisfies formula_1 where "I""n" is the identity matrix, and formula_2 where "T" denotes the transpose of the matrix. In other words, the inverse of a jacket matrix is determined by its element-wise or block-wise inverse. The definition above may also be expressed as: formula_3 The jacket matrix is a generalization of the Hadamard matrix; it is a diagonal block-wise inverse matrix. Motivation. Consider the following series: for example, with "n"=2, forward: formula_4, inverse: formula_5, and then formula_6. That is, there exists an element-wise inverse. The same idea carries over to matrices: formula_7 formula_8 Example 1.. or, more generally, formula_9 formula_10 Example 2.. For "m" x "m" matrices formula_11 formula_12 denotes an "mn" x "mn" block diagonal jacket matrix. formula_13 formula_14 Example 3.. Euler's formula: formula_15, formula_16 and formula_17. Therefore, formula_18. Also, formula_19 formula_20, formula_21. Finally, A·B = B·A = I. Example 4.. Let formula_22 be a 2×2 block matrix of order formula_23: formula_24. If formula_25 and formula_26 are "p" x "p" jacket matrices, then formula_27 is a block circulant matrix if and only if formula_28, where rt denotes the reciprocal transpose. Example 5.. Let formula_29 and formula_30; then the matrix formula_22 is given by formula_31, formula_32⇒formula_33 where "U", "C", "A", "G" denote the amounts of the DNA nucleobases, and the matrix formula_32 is the block circulant jacket matrix which leads to the principle of the antagonism with the Nirenberg genetic code matrix. References. [1] Moon Ho Lee, "The Center Weighted Hadamard Transform", "IEEE Transactions on Circuits and Systems", vol. 36, no. 9, pp. 1247–1249, Sept. 1989. [2] Kathy Horadam, "Hadamard Matrices and Their Applications", Princeton University Press, UK, Chapter 4.5.1: The jacket matrix construction, pp. 85–91, 2007. [3] Moon Ho Lee, "Jacket Matrices: Constructions and Its Applications for Fast Cooperative Wireless Signal Processing", LAP LAMBERT Publishing, Germany, Nov. 2012. [4] Moon Ho Lee, et al., "MIMO Communication Method and System using the Block Circulant Jacket Matrix," US patent no. US 009356671B1, May 2016. [5] S. K. Lee and M. H. Lee, "The COVID-19 DNA-RNA Genetic Code Analysis Using Information Theory of Double Stochastic Matrix," IntechOpen, book chapter, April 17, 2022. [Available online: https://www.intechopen.com/chapters/81329].
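The defining property can be checked numerically for the 4×4 pair formula_7 and formula_8 shown above: the inverse is the transposed element-wise reciprocal divided by the order "n". The sketch below assumes real entries and standard floating-point arithmetic.

```python
import numpy as np

A = np.array([[1,  1,  1,  1],
              [1, -2,  2, -1],
              [1,  2, -2, -1],
              [1, -1, -1,  1]], dtype=float)

n = A.shape[0]
B = (1.0 / n) * (1.0 / A).T  # element-wise reciprocal, transposed, scaled by 1/n

print(np.allclose(A @ B, np.eye(n)))  # True: A B = I_n
print(np.allclose(B @ A, np.eye(n)))  # True: B A = I_n
```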
[ { "math_id": 0, "text": "A= (a_{ij})" }, { "math_id": 1, "text": "\\ AB=BA=I_n " }, { "math_id": 2, "text": "\\ B ={1 \\over n}(a_{ij}^{-1})^T." }, { "math_id": 3, "text": "\\forall u,v \\in \\{1,2,\\dots,n\\}:~a_{iu},a_{iv} \\neq 0, ~~~~ \\sum_{i=1}^n a_{iu}^{-1}\\,a_{iv} = \n \\begin{cases}\n n, & u = v\\\\\n 0, & u \\neq v\n \n \\end{cases}\n" }, { "math_id": 4, "text": "2^2 = 4 " }, { "math_id": 5, "text": "(2^2)^{-1}={1 \\over 4} " }, { "math_id": 6, "text": " 4*{1\\over 4}=1" }, { "math_id": 7, "text": "\n A = \\left[ \\begin{array}{rrrr} 1 & 1 & 1 & 1 \\\\ 1 & -2 & 2 & -1 \\\\ 1 & 2 & -2 & -1 \\\\ 1 & -1 & -1 & 1 \\\\ \\end{array} \\right]," }, { "math_id": 8, "text": "B ={1 \\over 4} \\left[ \n \\begin{array}{rrrr} 1 & 1 & 1 & 1 \\\\[6pt] 1 & -{1 \\over 2} & {1 \\over 2} & -1 \\\\[6pt]\n 1 & {1 \\over 2} & -{1 \\over 2} & -1 \\\\[6pt] 1 & -1 & -1 & 1\\\\[6pt] \\end{array}\n \\right]." }, { "math_id": 9, "text": "\n A = \\left[ \\begin{array}{rrrr} a & b & b & a \\\\ b & -c & c & -b \\\\ b & c & -c & -b \\\\\n a & -b & -b & a \\end{array} \\right], " }, { "math_id": 10, "text": " B = {1 \\over 4} \\left[ \\begin{array}{rrrr} {1 \\over a} & {1 \\over b} & {1 \\over b} & {1 \\over a} \\\\[6pt] {1 \\over b} & -{1 \\over c} & {1 \\over c} & -{1 \\over b} \\\\[6pt] {1 \\over b} & {1 \\over c} & -{1 \\over c} & -{1 \\over b} \\\\[6pt] {1 \\over a} & -{1 \\over b} & -{1 \\over b} & {1 \\over a} \\end{array} \\right]," }, { "math_id": 11, "text": "\n\\mathbf {A_j}," }, { "math_id": 12, "text": "\\mathbf {A_j}=\\mathrm{diag}(A_1, A_2,.. A_n )" }, { "math_id": 13, "text": "\n J_4 = \\left[ \\begin{array}{rrrr} I_2 & 0 & 0 & 0 \\\\ 0 & \\cos\\theta & -\\sin\\theta & 0 \\\\ 0 & \\sin\\theta & \\cos\\theta & 0 \\\\\n 0 & 0 & 0 & I_2 \\end{array} \\right], " }, { "math_id": 14, "text": "\\ J^T_4 J_4 =J_4 J^T_4=I_4." 
}, { "math_id": 15, "text": "e^{i \\pi} + 1 = 0" }, { "math_id": 16, "text": "e^{i \\pi} =\\cos{ \\pi} +i\\sin{\\pi}=-1" }, { "math_id": 17, "text": "e^{-i \\pi} =\\cos{ \\pi} - i\\sin{\\pi}=-1" }, { "math_id": 18, "text": "e^{i \\pi}e^{-i \\pi}=(-1)(\\frac{1}{-1})=1" }, { "math_id": 19, "text": "y=e^{x}" }, { "math_id": 20, "text": "\\frac{dy}{dx}=e^{x}" }, { "math_id": 21, "text": "\\frac{dy}{dx}\\frac{dx}{dy}=e^{x}\\frac{1}{e^{x}}=1" }, { "math_id": 22, "text": "[\\mathbf {A}]_N" }, { "math_id": 23, "text": "N=2p" }, { "math_id": 24, "text": "\n[\\mathbf {A}]_N= \\left[ \\begin{array}{rrrr} \\mathbf {A}_0 & \\mathbf {A}_1 \\\\ \\mathbf {A}_1 & \\mathbf {A}_0 \\\\ \\end{array} \\right]," }, { "math_id": 25, "text": "[\\mathbf {A}_0]_p" }, { "math_id": 26, "text": "[\\mathbf {A}_1]_p" }, { "math_id": 27, "text": "[A]_N" }, { "math_id": 28, "text": "\\mathbf {A}_0 \\mathbf {A}_1^{rt}+\\mathbf {A}_1^{rt}\\mathbf {A}_0" }, { "math_id": 29, "text": "\\mathbf {A}_0= \\left[ \\begin{array}{rrrr} -1 & 1 \\\\ 1 & 1\\\\ \\end{array} \\right]," }, { "math_id": 30, "text": "\\mathbf {A}_1= \\left[ \\begin{array}{rrrr} -1 & -1 \\\\ -1 & 1\\\\ \\end{array} \\right]," }, { "math_id": 31, "text": "\n [\\mathbf {A}]_4= \\left[ \\begin{array}{rrrr} \\mathbf {A}_0 & \\mathbf {A}_1 \\\\ \\mathbf {A}_0 & \\mathbf {A}_1 \\\\ \\end{array} \\right]\n=\\left[ \\begin{array}{rrrr} -1 & 1 & -1 & -1\\\\ 1 & 1 & -1 & 1 \\\\ -1 & 1 & -1 & -1 \\\\ 1 & 1 & -1 & 1 \\\\ \\end{array} \\right]," }, { "math_id": 32, "text": "[\\mathbf {A}]_4 " }, { "math_id": 33, "text": "\n\\left[ \\begin{array}{rrrr} U & C & A & G\\\\ \\end{array} \\right]^T\\otimes\\left[ \\begin{array}{rrrr} U & C & A & G\\\\ \\end{array} \\right]\\otimes\\left[ \\begin{array}{rrrr} U & C & A & G\\\\ \\end{array} \\right]^T, " } ]
https://en.wikipedia.org/wiki?curid=6088729
60892603
Hard hadronic reaction
Hadron processes that can be calculated perturbatively Hard hadronic reactions are hadron reactions in which the main role is played by quarks and gluons and which are well described by perturbation theory in QCD. All hadrons discovered so far fit into the standard picture, in which they are colorless composite particles built from quarks and antiquarks. The characteristic energies associated with this internal quark structure (that is, the characteristic binding energies in potential models) are of the order of formula_0 GeV. This scale gives a natural classification of hadron collision processes: soft processes, in which the momentum transfers are comparable to or smaller than formula_1, and hard processes, in which the momentum transfers are much larger than formula_1. In the latter case, to a good accuracy, the hadrons can be considered weakly coupled, and scattering occurs between the individual constituents of rapidly moving hadrons, the partons. This behavior is called asymptotic freedom and is primarily associated with the decrease of the strong coupling constant with increasing momentum transfer (a discovery recognized by the 2004 Nobel Prize in Physics).
[ { "math_id": 0, "text": "Q_0 = 1" }, { "math_id": 1, "text": "Q_0" } ]
https://en.wikipedia.org/wiki?curid=60892603
60901165
Q-Gaussian process
q-Gaussian processes are deformations of the usual Gaussian distribution. There are several different versions of this; here we treat a multivariate deformation, also referred to as the q-Gaussian process, arising from free probability theory and corresponding to deformations of the canonical commutation relations. For other deformations of Gaussian distributions, see q-Gaussian distribution and Gaussian q-distribution. History. The q-Gaussian process was formally introduced in a paper by Frisch and Bourret under the name of "parastochastics", and also later by Greenberg as an example of "infinite statistics". It was mathematically established and investigated in papers by Bozejko and Speicher and by Bozejko, Kümmerer, and Speicher in the context of non-commutative probability. It is given as the distribution of sums of creation and annihilation operators in a q-deformed Fock space. The calculation of moments of those operators is given by a q-deformed version of a Wick formula or Isserlis formula. The specification of a special covariance in the underlying Hilbert space leads to the q-Brownian motion, a special non-commutative version of classical Brownian motion. q-Fock space. In the following, formula_0 is fixed. Consider a Hilbert space formula_1. On the algebraic full Fock space formula_2 where formula_3 with a norm one vector formula_4, called "vacuum", we define a q-deformed inner product as follows: formula_5 where formula_6 is the number of inversions of formula_7. The "q-Fock space" is then defined as the completion of the algebraic full Fock space with respect to this inner product formula_8 For formula_9 the q-inner product is strictly positive. For formula_10 and formula_11 it is positive, but has a kernel, which leads in these cases to the symmetric and anti-symmetric Fock spaces, respectively. For formula_12 we define the "q-creation operator" formula_13, given by formula_14 Its adjoint (with respect to the q-inner product), the "q-annihilation operator" formula_15, is given by formula_16 q-commutation relations. Those operators satisfy the q-commutation relations formula_17 For formula_18, formula_19, and formula_11 this reduces to the CCR-relations, the Cuntz relations, and the CAR-relations, respectively. With the exception of the case formula_20, the operators formula_21 are bounded. q-Gaussian elements and definition of multivariate q-Gaussian distribution (q-Gaussian process). Operators of the form formula_22 for formula_12 are called "q-Gaussian" (or "q-semicircular") elements. On formula_23 we consider the "vacuum expectation state" formula_24, for formula_25. The "(multivariate) q-Gaussian distribution" or "q-Gaussian process" is defined as the non-commutative distribution of a collection of q-Gaussians with respect to the vacuum expectation state. For formula_26 the joint distribution of formula_27 with respect to formula_28 can be described in the following way: for any formula_29 we have formula_30 where formula_31 denotes the number of crossings of the pair-partition formula_32. This is a q-deformed version of the Wick/Isserlis formula. q-Gaussian distribution in the one-dimensional case. For "p" = 1, the q-Gaussian distribution is a probability measure on the interval formula_33, with analytic formulas for its density. For the special cases formula_18, formula_19, and formula_11, this reduces to the classical Gaussian distribution, the Wigner semicircle distribution, and the symmetric Bernoulli distribution on formula_34.
The determination of the density follows from old results on corresponding orthogonal polynomials. Operator algebraic questions. The von Neumann algebra generated by formula_35, for formula_36 running through an orthonormal system formula_37 of vectors in formula_1, reduces for formula_19 to the famous free group factors formula_38. Understanding the structure of those von Neumann algebras for general q has been a source of many investigations. It is now known, by work of Guionnet and Shlyakhtenko, that at least for finite I and for small values of q, the von Neumann algebra is isomorphic to the corresponding free group factor.
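The combinatorics in the q-deformed Wick/Isserlis formula above can be made concrete with a short program. The following Python sketch (an illustration added here, not taken from the cited literature) enumerates pair partitions, counts their crossings, and computes mixed moments of q-Gaussian elements for a given list of real vectors.

```python
import itertools
import numpy as np

def pair_partitions(elements):
    """Yield all pair partitions of a list with an even number of elements."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for idx, partner in enumerate(rest):
        remaining = rest[:idx] + rest[idx + 1:]
        for sub in pair_partitions(remaining):
            yield [(first, partner)] + sub

def crossings(partition):
    """Count crossing pairs {r, s}, {r', s'} with r < r' < s < s'."""
    count = 0
    for pair_a, pair_b in itertools.combinations(partition, 2):
        a, b = sorted(pair_a)
        c, d = sorted(pair_b)
        if a < c < b < d or c < a < d < b:
            count += 1
    return count

def q_wick_moment(vectors, q):
    """Mixed moment tau(s_q(h_1) ... s_q(h_k)) via the q-deformed Wick formula."""
    k = len(vectors)
    if k % 2:
        return 0.0  # odd moments vanish
    total = 0.0
    for partition in pair_partitions(list(range(k))):
        weight = q ** crossings(partition)
        product = 1.0
        for r, s in partition:
            product *= float(np.dot(vectors[r], vectors[s]))
        total += weight * product
    return total

# Fourth moment of a single standardized q-Gaussian element: 2 + q.
h = np.array([1.0])
print(q_wick_moment([h, h, h, h], q=0.5))  # 2.5
```

For a single unit vector the fourth moment comes out as 2 + q, interpolating between the semicircle value 2 at q = 0 and the Gaussian value 3 at q = 1.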
[ { "math_id": 0, "text": " q\\in[-1,1] " }, { "math_id": 1, "text": " \\mathcal{H} " }, { "math_id": 2, "text": "\n \\mathcal{F}_\\text{alg}(\\mathcal{H})=\\bigoplus_{n\\geq 0}\\mathcal{H}^{\\otimes n},\n" }, { "math_id": 3, "text": "\\mathcal{H}^0=\\mathbb{C}\\Omega" }, { "math_id": 4, "text": "\\Omega" }, { "math_id": 5, "text": "\n \\langle h_1\\otimes\\cdots\\otimes h_n,g_1\\otimes\\cdots\\otimes g_m\\rangle_q = \\delta_{nm}\\sum_{\\sigma\\in S_n}\\prod^n_{r=1}\\langle h_r,g_{\\sigma(r)}\\rangle q^{i(\\sigma)},\n" }, { "math_id": 6, "text": "i(\\sigma)=\\#\\{(k,\\ell)\\mid 1\\leq k<\\ell\\leq n; \\sigma(k)>\\sigma(\\ell)\\}" }, { "math_id": 7, "text": "\\sigma\\in S_n" }, { "math_id": 8, "text": "\n \\mathcal{F}_q(\\mathcal{H})=\\overline{\\bigoplus_{n\\geq 0}\\mathcal{H}^{\\otimes n}}^{\\langle\\cdot,\\cdot\\rangle_q}.\n" }, { "math_id": 9, "text": " -1 < q < 1 " }, { "math_id": 10, "text": " q=1" }, { "math_id": 11, "text": " q=-1 " }, { "math_id": 12, "text": "h\\in\\mathcal{H}" }, { "math_id": 13, "text": "a^*(h)" }, { "math_id": 14, "text": "\n a^*(h)\\Omega=h,\\qquad\n a^*(h)h_1\\otimes\\cdots\\otimes h_n=h\\otimes h_1\\otimes\\cdots\\otimes h_n.\n" }, { "math_id": 15, "text": "a(h)" }, { "math_id": 16, "text": "\n a(h)\\Omega=0,\\qquad\n a(h)h_1\\otimes\\cdots\\otimes h_n=\\sum_{r=1}^n q^{r-1} \\langle h,h_r\\rangle h_1\\otimes \\cdots \\otimes h_{r-1}\\otimes h_{r+1}\\otimes\\cdots \\otimes h_n.\n" }, { "math_id": 17, "text": "a(f)a^*(g)-q a^*(g)a(f)=\\langle f,g\\rangle \\cdot 1\\qquad (f,g\\in \\mathcal{H})." }, { "math_id": 18, "text": " q=1 " }, { "math_id": 19, "text": " q=0 " }, { "math_id": 20, "text": "q=1," }, { "math_id": 21, "text": "a^*(f)" }, { "math_id": 22, "text": " s_q(h)={a(h)+a^*(h)}\n" }, { "math_id": 23, "text": "\\mathcal{F}_q(\\mathcal{H})" }, { "math_id": 24, "text": " \\tau(T)=\\langle \\Omega,T\\Omega \\rangle" }, { "math_id": 25, "text": "T\\in\\mathcal{B}(\\mathcal{F}(\\mathcal{H}))" }, { "math_id": 26, "text": "h_1,\\dots,h_p\\in\\mathcal{H}" }, { "math_id": 27, "text": "s_q(h_1),\\dots,s_q(h_p)" }, { "math_id": 28, "text": "\\tau" }, { "math_id": 29, "text": "i\\{1,\\dots,k\\}\\rightarrow\\{1,\\dots,p\\}" }, { "math_id": 30, "text": "\n \\tau\\left(s_q(h_{i(1)})\\cdots s_q(h_{i(k)})\\right)=\\sum_{\\pi\\in\\mathcal{P}_2(k)} q^{cr(\\pi)} \\prod_{(r,s)\\in\\pi} \\langle h_{i(r)}, h_{i(s)} \\rangle,\n" }, { "math_id": 31, "text": "cr(\\pi)" }, { "math_id": 32, "text": "\\pi" }, { "math_id": 33, "text": "[-2/\\sqrt{1-q}, 2/\\sqrt{1-q}]" }, { "math_id": 34, "text": "\\pm 1" }, { "math_id": 35, "text": " s_q(h_i) " }, { "math_id": 36, "text": " h_i " }, { "math_id": 37, "text": " (h_i)_{i\\in I} " }, { "math_id": 38, "text": " L(F_{\\vert I\\vert}) " } ]
https://en.wikipedia.org/wiki?curid=60901165
60912275
Combat effectiveness
Combat effectiveness is the capacity or performance of a military force to succeed in undertaking an operation, mission or objective. Determining optimal combat effectiveness is crucial in the armed forces, whether they are deployed on land, in the air or at sea. Combat effectiveness is an aspect of military effectiveness and can be attributed to the strength of combat support, including the quality and quantity of logistics, weapons and equipment. Military tactics, the psychological states of soldiers, the influence of leaders, skill, and motivation, which can arise from anything from nationalism to survival, are all capable of contributing to success on the battlefield. Quantitative measures. Philip Hayward proposes a measure for combat effectiveness, concentrating on the "probability of success" in a combat environment in relation to factors such as manpower and military stratagem. Combat effectiveness can be represented as a real and continuous function, formula_0 where formula_1 and formula_2 are two distinct military units. He analyses the measure against three main factors: capabilities—the quality and quantity of human and material resources of both friendly and enemy forces; environment—weather and terrain; and missions—the region to be held in the specified objective and the latest time by which to do it, while minimising the costs of achieving the objective. Hayward defines formula_3 to be the average probability of success in combat, summarised as formula_4 where formula_1 represents the capabilities of the friendly forces, formula_5 represents the other factors, and formula_6 is the probability of situation formula_5 occurring in combat. Another measure of combat effectiveness is developed by Youngwoo Lee and Taesik Lee, who use a "meta-network representation" approach with regard to the opportunities available for military units to make an attack. The number of enemy casualties is one of the main indicators of success in combat and was used by the United States Army in the Vietnam and Korean wars. According to Lee and Lee, there are two types of direct engagements with enemy forces in the network model: isolated attacks and coordinated attacks. Let formula_7 and formula_8 be two friendly force units and formula_9 be an enemy unit. In an isolated attack between formula_7 and formula_9, formula_7 must take responsibility for both detecting and advancing on formula_9. On the other hand, a coordinated attack allows formula_8 to communicate the detection of formula_9 to formula_7 if formula_7 does not have the capability or is not in a position to detect formula_9. If formula_7 is in a position to engage formula_9, formula_7 may carry out the attack through the organisation between the two friendly forces. Lee and Lee say that more complex combat situations can see these networks expand to include more combat units, locations, capabilities and actions, but the base structure remains that of an isolated or coordinated attack. The larger the network, the greater the chance that opportunities for offensive action become available. Psychological factors. The cohesiveness of the relationships formed between soldiers can affect their performance in combat and help them realise common goals. Cohesion relates to motivation, and a group becomes stronger as its members become more motivated.
The organisation or structure of a military unit can contribute to cohesion, as William Henderson wrote in his work, "Cohesion: The Human Element in Combat" : a small unit creates stronger bonds between its members than a larger one and the higher the frequency of their interactions with one another, the stronger the bond. Soldiers become aware of the distinction between their groups through the structured associations between them. During wartime, resources and supplies including food, medical aid and technical equipment may be limited which can affect a military unit's resilience. As well as access to a sufficient level of resources, the adequate fulfilment of social needs aids survival in periods of hardship. Henderson states that soldiers turn to their peers for mental support in the absence of family or other influences from home and as the unit becomes more cohesive, its members devote greater effort into maintaining and improving their goals. Johan M.G van der Dennen says they are more readily able to endure combat through the camaraderie formed from the need for comfort from peers and understanding of their shared suffering. Soldiers may endure combat for personal reasons including survival which, in most circumstances, is obtained from the survival of their group and fear of social exclusion from it can spur their motivation for group cooperation. Henderson states that some soldiers may experience the urge to desert their duties or responsibilities for the return to civilian life before the duration of their service ends—if there are ways of attaining escape from service with little consequence or light punishment, a soldier's devotion to their unit may decrease. Soldiers who are unwilling to fight may face consequences of sanctions and in rare circumstances they are prosecuted for the refusal of deployment such as the case with British military members, Lance Corporal Glenton and Flight Lieutenant Kendall-Smith, who were charged and faced imprisonment for refusing to return to their deployments in Afghanistan and Iraq. The level of a unit's morale and motivation can give them needed leverage in combat situations. This leverage is also advantageous if their fighting force is not strong in numbers. Sergio Catignani comments that the system of values an army upholds can boost morale and improve motivation. As an example, the Israeli Defence Force aims to uphold the values of "responsibility," "credibility," "professionalism" and "sense of mission." They place emphasis on strengthening the cohesion and spirit of their units through the oath a soldier takes at the beginning of their military service. Oaths for some brigades are taken at historically significant locations such as the Western Wall in Jerusalem where the 1948 Arab–Israeli War occurred, to reinforce accomplishments of past comrades. Leonard Wainstein says morale may be threatened by sudden or traumatic losses. These losses may involve individuals of a military force turned into casualties by their own weapons such as artillery and mines. The death of a commander could have a large effect on their unit as they are relied upon to lead. Morale can also be undermined by individual level factors such as fatigue resulting from a lack of sleep, fear, and stress. Technical expertise. The abilities of a soldier such as their skill in utilising firearms, tactics and communications can affect their success in accomplishing a mission, and is described by Kirstin J. H. 
Brathwaite in "Effective in Battle: Conceptualizing Soldiers' Combat Effectiveness:" the quality of communication between combat units are a determinant of how organised the mission will be while weapons handling and tactics employed determine the execution of the mission itself. The Australian and New Zealand Army Corps (ANZAC) in World War I comprised a mix of soldiers with different levels of training. In the 1st Australian Division, 15% of the division was made of nineteen to twenty-year-old military men, 27% had served previously and 41% had no prior military experience. In February 1915, the 1st Australian Division were in Egypt, engaging in brigade exercises and moving into the battalion. At the same time, the 2nd Australian and New Zealand Division practised division-level exercises including marching and entrenching. No corps-level exercises nor combined arms training was undertaken by either division. The ANZACs did not use naval gunfire proficiently or communicate messages proficiently. In Malaya during World War II, the 8th Australian Division fought alongside India and Britain against Japanese forces. The headquarters of the 8th Australian Division issued instructions for training their soldiers to prepare for harsh jungle conditions and soldiers were able to coordinate their attacks, took initiative in patrolling and utilising guerrilla tactics and were able to adapt to their opponent's infiltration tactics. Tactics. Effective military tactics involve the consideration of different forms of terrain, enemy, surrounding dangers and the physical states of soldiers. Effective tactics are adaptable and flexible in the sense that the commander in charge of executing a military plan can adapt it to conform with changing situations such as an enemy's reactions. Ancient tactics. The Romans used conquered land to expand their military force up to 40,000 men by the First Punic War and distinct military formation to gain an advantage over opposing armies. This formation consisted of heavy and light infantry situated in the front and rear lines, each unit separated by a gap that was covered by the line before them, and the front lines took the brunt of the attack. The rear line was only called to assist if their preceding ones failed. The front lines could use the open order formation to retreat behind the rear and allow them to take charge. In ancient China, the country's geographical landscape influenced the ways armies fought battles. Rivers and mountain ranges that divided the land could be used to defend cities or towns. The terrain of valleys allowed ambushing soldiers at great heights to roll enormous stones onto armies passing below. During the Western Zhou dynasty, war chariots were used in conflicts with up to 500 allocated to an army. Chariot warfare was superseded by a focus on military strategy in the Warring States era. Such strategies involved deception and trickery in the midst or confusion of battle. Modern tactics. During World War I, British tactics consisted of concise objectives, large quantities of artillery and tools which involved the introduction of gas, trench mortars and wireless signals for communication. In the 9th (Scottish) Division during the Battle of Loos, there were four battalions that were separated into three sections, one in front of the other, and each battalion had another behind it in the same formation. 
The standard layout of the lines consisted of at least six men and the formation was adopted in most battles of the war such as the Battle of the Somme. Trench raiding was developed in World War I where surprise attacks were made on the enemy, usually at night for the purposes of stealth. Soldiers were equipped for light and stealthy manoeuvre through the trenches and were commonly equipped with bayonets, trench knives, homemade clubs and brass knuckles. The main intention of the raids was to eliminate enemies as silently as possible until the enemy trench was secured. Stephen Biddle argues that states that master and use the "modern system" of force employment have greater battlefield performance. The modern system of force employment entails interrelated techniques at the tactical level (cover and concealment, dispersion and small-unit independent maneuvers, suppression, and integration of weapon types), and at the operational level (depth, reserves and differential concentration). Biddle rejects that merely possessing superior military capabilities confers a battlefield advantage, arguing that combat effectiveness is in large part due to non-material variables, such as the tactics deployed. Leadership. Effective leaders reinforce the chain of command and are required to possess the fast decision-making skills necessary especially in high pressure environments both on the battlefield and in training. Effective military leadership requires leaders to maintain considerations such as the material welfare of their troops and they are expected to overcome obstacles and utilise their strengths. A member of the Holy Roman Imperial Forces, Gerat Barry, states in a 1634 military manual that some attributes of a commander include extensive military experience, courage, skill, authority and empathy. During World War I, German intelligence noted the difference between two British platoons in a German raid on the Royal Irish Rifles in 1916 wherein one platoon performed better than the other. The better platoon was led by British officer Lieutenant Hill who inspired his troops to continue fighting while the other platoon which had less leadership surrendered. In 1914, a medical officer named William Tyrrell noted that many soldiers experienced mental breakdowns after their officer experienced one. In World War II, General George C. Marshall prepared the U.S. Army for modern war by managing the organisational aspect of the Army including improving the Army's relations with their affiliates and improving its organisational efficiency. During the English Civil War in 1642, military leaders such as Cromwell, Fairfax and Lambert held moral authority over their troops who had confidence in their leadership and were highly motivated and prepared for battle. By 1648, the New Model Army had high combat effectiveness and military expertise. Logistics and firepower. Ancient. In ancient times, adequate supplies of consumables including food and water for both men and animals was considered a basic aspect of military success, strategically and tactically. Donald Engels comments that an adequate supply could support the morale and combat potential of an army. There were limitations to the load carried by pack animals in past militaries. The limitations varied depending on the type of animal and its travelling speed, the amount of time needed for the animal to travel throughout the campaign, the weight to be carried, weather and terrain. 
Weight needed to be distributed evenly on either side of the animal to avoid losing them to injury and overworking them. Additionally, food needed to be transported for both men and animals, and if they were not sufficiently fed, they would struggle to perform tasks effectively and efficiently. In terms of artillery and weapons used in Greece and Rome, a wide range of war machines, such as siege towers, siege engines, rams and throwing machines, were advantageous to armies during the period between 70 BC and 15 AD when fighting against those who were less technologically advanced. Throwing machines proved to be useful and versatile tools that were used not only in sieges but also on open battlefields and as infantry support weapons. Throwing machines included the ballista, a torsion machine operated by a crew of two people, as well as the catapult and the carroballista. Modern. 19th and 20th centuries. In the early 19th century, weapons in general use included large-calibre ordnance, breech-loading artillery, muskets and armoured warships that were powered by steam. Western countries such as the United States and France could produce transport, ammunition, food supplies and other resources with more ease than in the period before the Industrial Revolution. In the 19th and the 20th centuries, improved means of communication were introduced in the forms of radio, television, high-performance computer systems and telephones. Radio communication was one of the main forms of communication used on the battlefield. It became a factor for the Allied victory in the First World War when codebreakers were able to decode the radio communications of German, Japanese and Italian forces. Firepower grew during the 20th century as the number of soldiers and the amount of battle equipment increased rapidly. During the American Civil War, an infantry division had around 5,000 soldiers with up to 24 pieces of artillery, and the numbers grew by World War II, when an American division had up to 15,000 soldiers with 328 pieces of artillery. In 1918, on the Western Front, the artillery of the Allied Forces became a major weapon for suppressing enemy defences. Aerial photographic reconnaissance, flash-spotting and sound-ranging improved target acquisition and enabled predicted map-shooting. The maintenance of operating histories for each gun improved accuracy, since each weapon could be individually calibrated with reference to weather factors such as wind speed, wind direction, humidity, and temperature. 21st century. In the United States, improvements in support vehicles and packaging for shipping allowed for greater mobility of U.S. forces. During the Iraq War, radio frequency identification (RFID) tags, small radio transponders attached to packages and systems, provided unique codes for those items. The tags allowed for the speedy updates of online databases worldwide. Armoured uniforms have been developed to protect soldiers from incoming bullets and damage from explosions, and military robots have been developed to aid in reconnaissance and bomb disposal. Domestic political factors. A number of scholars have posited that domestic political factors strongly affect the skills, cohesion, will, and organizational structures of military organizations, with implications for their combat effectiveness. The implication of this scholarship is that military capabilities are not necessarily the key determinant of combat effectiveness.
In a 2010 study, Michael Beckley pushed back against this scholarship, finding that economic development (an indicator of power) was a strong predictor of victory in war, and that factors such as "democracy, Western culture, high levels of human capital, and amicable civil-military relations" were not consequential. Civil-military relations. Civil-military relations may shape combat effectiveness, as adverse civil-military relations can lead to poor strategic assessments, and undermine battlefield flexibility and survivability. When regimes are concerned about the prospects of coups, military organizations may be constructed so as to reduce the risk of a coup (through "coup-proofing"), but this limits the conventional military capabilities of those military organizations. Military inequality. According to research by Jason Lyall, state discrimination against the ethnic groups that comprise the state's military adversely affects the military's battlefield performance. In societies where ethnic groups are marginalized or repressed, militaries struggle to simultaneously obtain cohesion and combat power, as the soldiers will lack belief in a shared common purpose and will have lower trust. Lyall shows that the higher a state's pre-war ethnic repression and marginalization, the greater the military's casualties, mass defections, and mass desertions. Such militaries will also be more likely to use barrier troops. It is not ethnic diversity that undermines battlefield performance, but whether ethnic groups are discriminated against. Elizabeth Kier similarly argued in 1995 that the U.S. military's discrimination against LGBT service members undermined U.S. military readiness. Regime type. According to Allan C. Stam and Dan Reiter, liberal democracies have an advantage in battlefield performance over non-democracies and illiberal democracies. They argue that this democratic advantage derives from the facts that democratic soldiers fight harder, that democratic states tend to ally together in war, and that democracies can employ more economic resources towards combat. However, critics argue that democracy itself makes little difference in war and that other factors, such as overall power, determine whether a country will achieve victory or face defeat. In some cases, such as the Vietnam War, democracy may even have contributed to defeat. Jasen Castillo argues that autocratic states may in certain circumstances have an advantage over democracies; for example, authoritarian regimes may have ideologies that require unconditional loyalty, which may contribute to military cohesion. Organizational culture. Scholars such as Elizabeth Kier and Jeffrey Legro argue that organizational cultures in the military shape military doctrines.
[ { "math_id": 0, "text": "F(x) \\geqq F(y)" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "\\bar{P(S)}" }, { "math_id": 4, "text": "\\bar{P(S)} = \\int f(x,z)p(z) dz=G(x) " }, { "math_id": 5, "text": "z" }, { "math_id": 6, "text": "p(z)" }, { "math_id": 7, "text": "A_1" }, { "math_id": 8, "text": "A_2" }, { "math_id": 9, "text": "B" } ]
https://en.wikipedia.org/wiki?curid=60912275
609125
Expression (mathematics)
A finite combination of symbols that describes a mathematical object In mathematics, an expression is a written arrangement of symbols following the context-dependent, syntactic conventions of mathematical notation. Symbols can denote numbers (constants), variables, operations, and functions. Other symbols include punctuation signs and brackets (often used for grouping, that is, for considering a part of the expression as a single symbol). Many authors distinguish an expression from a "formula", the former denoting a mathematical object, and the latter denoting a statement about mathematical objects. This is analogous to natural language, where a noun phrase refers to an object, and a whole sentence refers to a fact. For example, formula_0 is an expression, while formula_1 is a formula. Expressions can be "evaluated" or "partially evaluated" by replacing operations that appear in them with their result. For example, the expression formula_2 evaluates partially to formula_3 and totally to formula_4 An expression is often used to define a function, by taking the variables to be arguments, or inputs, of the function, and assigning the output to be the total evaluation of the resulting expression. For example, formula_5 and formula_6 define the function that associates to each number its square plus one. An expression with no variables would define a constant function. Examples. The use of expressions ranges from the simple: formula_7 formula_0 (linear polynomial) formula_8 (quadratic polynomial) formula_9 (rational fraction) to the complex: formula_10formula_11 Variables and evaluation. Many expressions include variables. Any variable can be classified as being either a free variable or a bound variable. For a given combination of values for the free variables, an expression may be evaluated, although for some combinations of values of the free variables, the value of the expression may be undefined. Thus an expression represents an operation over constants and free variables, whose output is the resulting value of the expression. For example, if the expression formula_12 is evaluated with "x" = 10, "y" = 5, it evaluates to 2; this is denoted formula_13 The evaluation is undefined for "y" = 0. Two expressions are said to be equivalent if, for each combination of values for the free variables, they have the same output, i.e., they represent the same function. The equivalence between two expressions is called an identity and is often denoted with formula_14 For example, in the expression formula_15 the variable "n" is bound, and the variable "x" is free. This expression is equivalent to the simpler expression 12 "x"; that is, formula_16 The value for "x" = 3 is 36, which can be denoted formula_17 Syntax versus semantics. Syntax. An expression is a syntactic construct. It must be well-formed. It can be described somewhat informally as follows: the allowed operators must have the correct number of inputs in the correct places, the characters that make up these inputs must be valid, there must be a clear order of operations, etc. Strings of symbols that violate the rules of syntax are not well-formed and are not valid mathematical expressions. For example, in arithmetic, the expression "1 + 2 × 3" is well-formed, but formula_18 is not. Semantics. Semantics is the study of meaning. Formal semantics is about attaching meaning to expressions. In algebra, an expression may be used to designate a value, which might depend on values assigned to variables occurring in the expression.
The determination of this value depends on the semantics attached to the symbols of the expression. The choice of semantics depends on the context of the expression. The same syntactic expression "1 + 2 × 3" can have different values (mathematically 7, but also 9), depending on the order of operations implied by the context (See also Operations § Calculators). The semantic rules may declare that certain expressions do not designate any value (for instance when they involve division by 0); such expressions are said to have an undefined value, but they are well-formed expressions nonetheless. In general the meaning of expressions is not limited to designating values; for instance, an expression might designate a condition, or an equation that is to be solved, or it can be viewed as an object in its own right that can be manipulated according to certain rules. Certain expressions that designate a value simultaneously express a condition that is assumed to hold, for instance those involving the operator formula_19 to designate an internal direct sum. Formal definition. A well-formed expression in mathematics can be described as part of a formal language, and defined recursively as follows: The alphabet consists of: a set of constants (such as the number formula_21 or the sets formula_20), a set of variables (such as formula_22), a set of operations (such as formula_23), and brackets and commas. With this alphabet, the recursive rules for forming a well-formed expression (WFE) are as follows: any constant or variable is a WFE (an atomic expression), and if formula_23 is an operation of arity "n" and formula_24 are WFEs, then formula_25 is also a WFE. For instance, if the domain of discourse is the real numbers, formula_23 can denote the binary operation +, then formula_26 is a WFE. Or formula_23 can be the unary operation formula_27, then formula_28 is as well. Brackets are initially around each non-atomic expression, but they can be deleted in cases where there is a defined order of operations, or where order doesn't matter (i.e. where operations are associative). A well-formed expression can be thought of as a syntax tree. The leaf nodes are always atomic expressions. Operations formula_29 and formula_30 have exactly two child nodes, while operations formula_31, formula_32 and formula_33 have exactly one. There are countably infinitely many WFEs; however, each WFE has a finite number of nodes. Lambda calculus. Formal languages allow formalizing the concept of well-formed expressions. In the 1930s, a new type of expression, called lambda expressions, was introduced by Alonzo Church and Stephen Kleene for formalizing functions and their evaluation. They form the basis for lambda calculus, a formal system used in mathematical logic and the theory of programming languages. The equivalence of two lambda expressions is undecidable. This is also the case for the expressions representing real numbers, which are built from the integers by using the arithmetical operations, the logarithm and the exponential (Richardson's theorem). Types of expressions. Algebraic expression. An "algebraic expression" is an expression built up from algebraic constants, variables, and the algebraic operations (addition, subtraction, multiplication, division and exponentiation by a rational number). For example, 3"x"2 − 2"xy" + "c" is an algebraic expression. Since taking the square root is the same as raising to the power 1/2, the following is also an algebraic expression: formula_34 See also: Algebraic equation and Algebraic closure Polynomial expression.
A polynomial expression is an expression built with scalars (numbers or elements of some field), indeterminates, and the operators of addition, multiplication, and exponentiation to nonnegative integer powers; for example formula_35 Using associativity, commutativity and distributivity, every polynomial expression is equivalent to a polynomial, that is, an expression that is a linear combination of products of integer powers of the indeterminates. For example, the above polynomial expression is equivalent to (denotes the same polynomial as) formula_36 Many authors do not distinguish polynomials and polynomial expressions. In this case, the expression of a polynomial expression as such a linear combination is called the "canonical form", "normal form", or "expanded form" of the polynomial. Computational expression. In computer science, an "expression" is a syntactic entity in a programming language that may be evaluated to determine its value, or that may fail to terminate, in which case the expression is undefined. It is a combination of one or more constants, variables, functions, and operators that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value. This process, for mathematical expressions, is called "evaluation". In simple settings, the resulting value is usually one of various primitive types, such as string, Boolean, or numerical (such as integer, floating-point, or complex). In computer algebra, formulas are viewed as expressions that can be evaluated as a Boolean, depending on the values that are given to the variables occurring in the expressions. For example, formula_37 takes the value "false" if x is given a value less than 1, and the value "true" otherwise. Expressions are often contrasted with statements—syntactic entities that have no value (an instruction). Except for numbers and variables, every mathematical expression may be viewed as the symbol of an operator followed by a sequence of operands. In computer algebra software, the expressions are usually represented in this way. This representation is very flexible, and many things that seem not to be mathematical expressions at first glance may be represented and manipulated as such. For example, an equation is an expression with "=" as an operator, and a matrix may be represented as an expression with "matrix" as an operator and its rows as operands. See: Computer algebra expression Logical expression. In mathematical logic, a "logical expression" can refer to either terms or formulas. A term denotes a mathematical object while a formula denotes a mathematical fact. In particular, terms appear as components of a formula. A first-order term is recursively constructed from constant symbols, variables, and function symbols. An expression formed by applying a predicate symbol to an appropriate number of terms is called an atomic formula, which evaluates to true or false in bivalent logics, given an interpretation. For example, a term can be built from the constant 1, the variable "x", and binary function symbols for addition and multiplication; applying a predicate symbol such as "≥" to terms of this kind yields an atomic formula, which, given an interpretation of the symbols, evaluates to true or false for each real-numbered value of "x".
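To make the recursive definition and the evaluation semantics above concrete, here is a small self-contained Python sketch (illustrative only; the particular operator set and names are assumptions of this example) that represents well-formed expressions as syntax trees, with constants and variables as atomic leaves, and evaluates them relative to an assignment of values to the free variables.

```python
import math
import operator

# A well-formed expression is either atomic (a number or a variable name)
# or a tuple (op, operand_1, ..., operand_n): an operator applied to WFEs.
BINARY = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}
UNARY = {"sqrt": math.sqrt, "ln": math.log, "neg": operator.neg}

def evaluate(expr, env):
    """Evaluate a syntax tree; free variables take their values from `env`."""
    if isinstance(expr, (int, float)):      # atomic expression: a constant
        return expr
    if isinstance(expr, str):               # atomic expression: a free variable
        return env[expr]
    op, *operands = expr
    values = [evaluate(operand, env) for operand in operands]
    if op in BINARY:
        return BINARY[op](*values)
    if op in UNARY:
        return UNARY[op](*values)
    raise ValueError(f"unknown operator: {op}")

# The expression x/y evaluated at x = 10, y = 5 gives 2.0; at y = 0 the value
# is undefined (here a ZeroDivisionError is raised).
print(evaluate(("/", "x", "y"), {"x": 10, "y": 5}))
```

This mirrors the discussion above: the leaves of the tree are atomic expressions, each operator node has as many children as its arity, and evaluation is only defined for those assignments of the free variables for which every operation in the tree is defined.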
[ { "math_id": 0, "text": "8x-5" }, { "math_id": 1, "text": "8x-5 \\geq 3 " }, { "math_id": 2, "text": "8\\times 2-5" }, { "math_id": 3, "text": "16-5" }, { "math_id": 4, "text": "11." }, { "math_id": 5, "text": "x\\mapsto x^2+1" }, { "math_id": 6, "text": "f(x) = x^2 + 1" }, { "math_id": 7, "text": "3+8" }, { "math_id": 8, "text": "7{{x}^{2}}+4x-10" }, { "math_id": 9, "text": "\\frac{x-1}{{{x}^{2}}+12}" }, { "math_id": 10, "text": "f(a)+\\sum_{k=1}^n\\left.\\frac{1}{k!}\\frac{d^k}{dt^k}\\right|_{t=0}f(u(t)) " }, { "math_id": 11, "text": "+ \\int_0^1 \\frac{(1-t)^n }{n!} \\frac{d^{n+1}}{dt^{n+1}} f(u(t))\\, dt." }, { "math_id": 12, "text": " x/y " }, { "math_id": 13, "text": " x/y \\text{ }|_{x=10,\\, y=5}=2." }, { "math_id": 14, "text": "\\equiv." }, { "math_id": 15, "text": "\\sum_{n=1}^{3} (2nx)," }, { "math_id": 16, "text": "\\sum_{n=1}^{3} (2nx)\\equiv 12x." }, { "math_id": 17, "text": "\\sum_{n=1}^{3} (2nx)\\Big|_{x=3}= 36." }, { "math_id": 18, "text": "\\times4)x+,/y" }, { "math_id": 19, "text": "\\oplus" }, { "math_id": 20, "text": "\\varnothing, \\{1,2,3\\}" }, { "math_id": 21, "text": "2" }, { "math_id": 22, "text": "x" }, { "math_id": 23, "text": "F" }, { "math_id": 24, "text": "\\phi_1, \\phi_2, ... \\phi_n" }, { "math_id": 25, "text": "F(\\phi_1, \\phi_2, ... \\phi_n)" }, { "math_id": 26, "text": "\\phi_1 + \\phi_2" }, { "math_id": 27, "text": "\\surd" }, { "math_id": 28, "text": "\\sqrt{\\phi_1}" }, { "math_id": 29, "text": " + " }, { "math_id": 30, "text": " \\cup " }, { "math_id": 31, "text": "\\sqrt{x} " }, { "math_id": 32, "text": "\\text{ln}(x)" }, { "math_id": 33, "text": " \\frac{d}{dx} " }, { "math_id": 34, "text": "\\sqrt{\\frac{1-x^2}{1+x^2}}" }, { "math_id": 35, "text": "3(x+1)^2 - xy." }, { "math_id": 36, "text": "3x^2-xy+6x+3." }, { "math_id": 37, "text": "8x-5 \\geq 3" } ]
https://en.wikipedia.org/wiki?curid=609125
6091657
E8 manifold
Topological manifold in mathematics In low-dimensional topology, a branch of mathematics, the "E"8 manifold is the unique compact, simply connected topological 4-manifold with intersection form the "E"8 lattice. History. The formula_0 manifold was discovered by Michael Freedman in 1982. Rokhlin's theorem shows that it has no smooth structure (as does Donaldson's theorem), and in fact, combined with the work of Andrew Casson on the Casson invariant, this shows that the formula_0 manifold is not even triangulable as a simplicial complex. Construction. The manifold can be constructed by first plumbing together disc bundles of Euler number 2 over the sphere, according to the Dynkin diagram for formula_0. This results in formula_1, a 4-manifold whose boundary is homeomorphic to the Poincaré homology sphere. Freedman's theorem on fake 4-balls then says we can cap off this homology sphere with a fake 4-ball to obtain the formula_0 manifold.
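The role of the formula_0 lattice in this construction can be checked numerically. The sketch below (an illustration added here, using one common labelling of the Dynkin diagram) builds the 8×8 intersection matrix obtained by plumbing disc bundles of Euler number 2 along the formula_0 tree, and verifies that it is unimodular (so the boundary of the plumbing is a homology sphere), positive definite, and even.

```python
import numpy as np

# Vertices: 0 is the branch node of the E8 Dynkin diagram; the three arms
# hanging off it have lengths 1, 2 and 4 (the T(2,3,5) plumbing tree).
edges = [(0, 1), (0, 2), (2, 3), (0, 4), (4, 5), (5, 6), (6, 7)]

# Plumbing disc bundles of Euler number 2 over the sphere along this tree
# gives an intersection form with 2's on the diagonal and 1's for plumbed pairs.
M = 2 * np.eye(8, dtype=int)
for i, j in edges:
    M[i, j] = M[j, i] = 1

print(round(np.linalg.det(M)))                   # 1: the form is unimodular
print(bool(np.all(np.linalg.eigvalsh(M) > 0)))   # True: positive definite, signature 8
print(bool(np.all(np.diag(M) % 2 == 0)))         # True: the form is even
```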
[ { "math_id": 0, "text": "E_8" }, { "math_id": 1, "text": "P_{E_8}" } ]
https://en.wikipedia.org/wiki?curid=6091657
609197
Studentized residual
In statistics, a studentized residual is the dimensionless ratio resulting from the division of a residual by an estimate of its standard deviation, both expressed in the same units. It is a form of a Student's "t"-statistic, with the estimate of error varying between points. This is an important technique in the detection of outliers. It is among several named in honor of William Sealey Gosset, who wrote under the pseudonym "Student" (e.g., Student's distribution). Dividing a statistic by a sample standard deviation is called studentizing, in analogy with "standardizing" and "normalizing". Motivation. The key reason for studentizing is that, in regression analysis of a multivariate distribution, the variances of the "residuals" at different input variable values may differ, even if the variances of the "errors" at these different input variable values are equal. The issue is the difference between errors and residuals in statistics, particularly the behavior of residuals in regressions. Consider the simple linear regression model formula_0 Given a random sample ("X""i", "Y""i"), "i" = 1, ..., "n", each pair ("X""i", "Y""i") satisfies formula_1 where the "errors" formula_2, are independent and all have the same variance formula_3. The residuals are not the true errors, but "estimates", based on the observable data. When the method of least squares is used to estimate formula_4 and formula_5, then the residuals formula_6, unlike the errors formula_7, cannot be independent since they satisfy the two constraints formula_8 and formula_9 The residuals, unlike the errors, "do not all have the same variance:" the variance decreases as the corresponding "x"-value gets farther from the average "x"-value. This is not a feature of the data itself, but of the regression better fitting values at the ends of the domain. It is also reflected in the influence functions of various data points on the regression coefficients: endpoints have more influence. This can also be seen because the residuals at endpoints depend greatly on the slope of a fitted line, while the residuals at the middle are relatively insensitive to the slope. The fact that "the variances of the residuals differ," even though "the variances of the true errors are all equal" to each other, is the "principal reason" for the need for studentization. It is not simply a matter of the population parameters (mean and standard deviation) being unknown – it is that "regressions" yield "different residual distributions" at "different data points," unlike "point estimators" of univariate distributions, which share a "common distribution" for residuals. Background. For this simple model, the design matrix is formula_11 and the hat matrix "H" is the matrix of the orthogonal projection onto the column space of the design matrix: formula_12 The leverage "h""ii" is the "i"th diagonal entry in the hat matrix. The variance of the "i"th residual is formula_13 In case the design matrix "X" has only two columns (as in the example above), this is equal to formula_14 In the case of an arithmetic mean, the design matrix "X" has only one column (a vector of ones), and this is simply: formula_15 Calculation. Given the definitions above, the Studentized residual is then formula_16 where "h""ii" is the leverage, where formula_17 is an appropriate estimate of "σ" (see below). In the case of a mean, this is equal to: formula_18 Internal and external studentization. 
The usual estimate of "σ"2, used for the "internally studentized" residual, is formula_19 where "m" is the number of parameters in the model (2 in our example). But if the "i" th case is suspected of being improbably large, then it would also not be normally distributed. Hence it is prudent to exclude the "i" th observation from the process of estimating the variance when one is considering whether the "i" th case may be an outlier, and to instead use the "external" estimate formula_20 which is based on all the residuals "except" the suspect "i" th residual. Here it is emphasized that formula_21 for the suspect "i" are computed with the "i" th case excluded. If the estimate of "σ"2 "includes" the "i" th case, then the result is called the internally studentized residual, formula_22 (also known as the "standardized residual"). If the estimate formula_23 is used instead, "excluding" the "i" th case, then it is called the externally studentized residual, formula_24. Distribution. If the errors are independent and normally distributed with expected value 0 and variance "σ"2, then the probability distribution of the "i"th externally studentized residual formula_24 is a Student's t-distribution with "n" − "m" − 1 degrees of freedom, and can range from formula_25 to formula_26. On the other hand, the internally studentized residuals are in the range formula_27, where "ν" = "n" − "m" is the number of residual degrees of freedom. If "t""i" represents the internally studentized residual, and again assuming that the errors are independent identically distributed Gaussian variables, then: formula_28 where "t" is a random variable distributed as Student's t-distribution with "ν" − 1 degrees of freedom. In fact, this implies that "t""i"2 /"ν" follows the beta distribution "B"(1/2,("ν" − 1)/2). The distribution above is sometimes referred to as the tau distribution; it was first derived by Thompson in 1935. When "ν" = 3, the internally studentized residuals are uniformly distributed between formula_29 and formula_30. If there is only one residual degree of freedom, the above formula for the distribution of internally studentized residuals does not apply. In this case, the "t""i" are all either +1 or −1, with 50% chance for each. The standard deviation of the distribution of internally studentized residuals is always 1, but this does not imply that the standard deviation of all the "t""i" of a particular experiment is 1. For instance, the internally studentized residuals when fitting a straight line going through (0, 0) to the points (1, 4), (2, −1), (2, −1) are formula_31, and the standard deviation of these is not 1. Note that any pair of studentized residuals "t""i" and "t""j" (where formula_32) are not i.i.d. They have the same distribution, but are not independent, due to the constraints that the residuals must sum to 0 and must be orthogonal to the design matrix. Software implementations. Many programs and statistics packages, such as R, Python, etc., include implementations of studentized residuals.
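The quantities above are straightforward to compute directly. The following NumPy sketch (an illustrative implementation with made-up data, not taken from any particular package) computes the leverages from the hat matrix and then both the internally and the externally studentized residuals for a simple linear fit, using the internal and external variance estimates formula_19 and formula_20 defined above.

```python
import numpy as np

def studentized_residuals(x, y):
    """Internally and externally studentized residuals for a straight-line fit."""
    n = len(x)
    X = np.column_stack([np.ones(n), x])     # design matrix with intercept
    H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix
    h = np.diag(H)                           # leverages h_ii
    resid = y - H @ y                        # residuals
    m = X.shape[1]                           # number of parameters (2 here)

    # Internal: sigma^2 estimated from all n residuals.
    s2 = np.sum(resid**2) / (n - m)
    internal = resid / np.sqrt(s2 * (1 - h))

    # External: for each i, sigma^2 re-estimated with the i-th residual left out.
    s2_i = (np.sum(resid**2) - resid**2) / (n - m - 1)
    external = resid / np.sqrt(s2_i * (1 - h))
    return internal, external

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=30)
y[5] += 6.0                                  # plant an outlier at index 5
t_int, t_ext = studentized_residuals(x, y)
print(np.argmax(np.abs(t_ext)))              # the planted outlier should stand out
```

Because the external estimate leaves the suspect residual out of the variance, the externally studentized residual of the planted point flags it more strongly than the internally studentized residual does, which is exactly the motivation given above for the external estimate.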
[ { "math_id": 0, "text": " Y = \\alpha_0 + \\alpha_1 X + \\varepsilon. \\, " }, { "math_id": 1, "text": " Y_i = \\alpha_0 + \\alpha_1 X_i + \\varepsilon_i,\\," }, { "math_id": 2, "text": "\\varepsilon_i" }, { "math_id": 3, "text": "\\sigma^2" }, { "math_id": 4, "text": "\\alpha_0" }, { "math_id": 5, "text": "\\alpha_1" }, { "math_id": 6, "text": "\\widehat{\\varepsilon\\,}" }, { "math_id": 7, "text": "\\varepsilon" }, { "math_id": 8, "text": "\\sum_{i=1}^n \\widehat{\\varepsilon\\,}_i=0" }, { "math_id": 9, "text": "\\sum_{i=1}^n \\widehat{\\varepsilon\\,}_i x_i=0." }, { "math_id": 10, "text": "\\widehat{\\varepsilon\\,}_i" }, { "math_id": 11, "text": "X=\\left[\\begin{matrix}1 & x_1 \\\\ \\vdots & \\vdots \\\\ 1 & x_n \\end{matrix}\\right]" }, { "math_id": 12, "text": "H=X(X^T X)^{-1}X^T.\\," }, { "math_id": 13, "text": "\\operatorname{var}(\\widehat{\\varepsilon\\,}_i)=\\sigma^2(1-h_{ii})." }, { "math_id": 14, "text": "\\operatorname{var}(\\widehat{\\varepsilon\\,}_i)=\\sigma^2\\left( 1 - \\frac1n -\\frac{(x_i-\\bar x)^2}{\\sum_{j=1}^n (x_j - \\bar x)^2 } \\right). " }, { "math_id": 15, "text": "\\operatorname{var}(\\widehat{\\varepsilon\\,}_i)=\\sigma^2\\left( 1 - \\frac1n \\right). " }, { "math_id": 16, "text": "t_i = {\\widehat{\\varepsilon\\,}_i\\over \\widehat{\\sigma} \\sqrt{1-h_{ii}\\ }}" }, { "math_id": 17, "text": "\\widehat{\\sigma}" }, { "math_id": 18, "text": "t_i = {\\widehat{\\varepsilon\\,}_i\\over \\widehat{\\sigma} \\sqrt{(n-1)/n}}" }, { "math_id": 19, "text": "\\widehat{\\sigma}^2={1 \\over n-m}\\sum_{j=1}^n \\widehat{\\varepsilon\\,}_j^{\\,2}." }, { "math_id": 20, "text": "\\widehat{\\sigma}_{(i)}^2={1 \\over n-m-1}\\sum_{\\begin{smallmatrix}j = 1\\\\j \\ne i\\end{smallmatrix}}^n \\widehat{\\varepsilon\\,}_j^{\\,2}," }, { "math_id": 21, "text": "\\widehat{\\varepsilon\\,}_j^{\\,2} (j \\ne i)" }, { "math_id": 22, "text": "t_i" }, { "math_id": 23, "text": "\\widehat{\\sigma}_{(i)}^2" }, { "math_id": 24, "text": "t_{i(i)}" }, { "math_id": 25, "text": "\\scriptstyle-\\infty" }, { "math_id": 26, "text": "\\scriptstyle+\\infty" }, { "math_id": 27, "text": "\\scriptstyle 0 \\,\\pm\\, \\sqrt{\\nu}" }, { "math_id": 28, "text": "t_i \\sim \\sqrt{\\nu} {t \\over \\sqrt{t^2+\\nu-1}}" }, { "math_id": 29, "text": "\\scriptstyle-\\sqrt{3}" }, { "math_id": 30, "text": "\\scriptstyle+\\sqrt{3}" }, { "math_id": 31, "text": "\\sqrt{2},\\ -\\sqrt{5}/5,\\ -\\sqrt{5}/5" }, { "math_id": 32, "text": "i \\neq j" } ]
https://en.wikipedia.org/wiki?curid=609197
60921299
Hasse–Schmidt derivation
In mathematics, a Hasse–Schmidt derivation is an extension of the notion of a derivation. The concept was introduced by Helmut Hasse and Friedrich Karl Schmidt. Definition. For a (not necessarily commutative nor associative) ring "B" and a "B"-algebra "A", a Hasse–Schmidt derivation is a map of "B"-algebras formula_0 taking values in the ring of formal power series with coefficients in "A". This definition is found in several places in the literature, one of which also contains the following example: for "A" being the ring of infinitely differentiable functions (defined on, say, R"n") and "B"=R, the map formula_1 is a Hasse–Schmidt derivation, as follows from applying the Leibniz rule iteratedly. Equivalent characterizations. A Hasse–Schmidt derivation is equivalent to an action of the bialgebra formula_2 of noncommutative symmetric functions in countably many variables "Z"1, "Z"2, ...: the part formula_3 of "D", which picks out the coefficient of formula_4, is the action of the indeterminate "Z""i". Applications. Hasse–Schmidt derivations on the exterior algebra formula_5 of some "B"-module "M" have also been studied. Basic properties of derivations in this context lead to a conceptual proof of the Cayley–Hamilton theorem.
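The example above can be reproduced symbolically. The following SymPy sketch (an illustration added here, not from the cited sources) truncates the formal power series formula_1 at a finite order and checks that the map is multiplicative up to that order, i.e. that each coefficient obeys the iterated Leibniz rule, which is what makes it a map of "B"-algebras.

```python
import sympy as sp

x, t = sp.symbols("x t")

def hasse_schmidt(f, order):
    """Truncation of the Hasse-Schmidt derivation f -> exp(t d/dx) f,
    i.e. the formal power series sum_k (t**k / k!) * d^k f / dx^k."""
    return sum(t**k / sp.factorial(k) * sp.diff(f, x, k) for k in range(order + 1))

# Multiplicativity D(f*g) = D(f) * D(g), checked up to order t**3: each
# coefficient D_k satisfies D_k(f*g) = sum_{i+j=k} D_i(f) * D_j(g).
f, g = sp.sin(x), x**3
lhs = hasse_schmidt(f * g, order=3)
rhs = sp.expand(hasse_schmidt(f, order=3) * hasse_schmidt(g, order=3))
difference = sp.expand(lhs - rhs)
print(all(sp.simplify(difference.coeff(t, k)) == 0 for k in range(4)))  # True
```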
[ { "math_id": 0, "text": "D: A \\to A[\\![t]\\!]" }, { "math_id": 1, "text": "f \\mapsto \\exp\\left(t \\frac d {dx}\\right) f(x) = f + t \\frac {df}{dx} + \\frac {t^2}2 \\frac {d^2 f}{dx^2} + \\cdots" }, { "math_id": 2, "text": "\\operatorname{NSymm} = \\mathbf Z \\langle Z_1, Z_2, \\ldots \\rangle" }, { "math_id": 3, "text": "D_i : A \\to A" }, { "math_id": 4, "text": "t^i" }, { "math_id": 5, "text": "A = \\bigwedge M" } ]
https://en.wikipedia.org/wiki?curid=60921299
60923459
Cube root law
Concept in political science The cube root law is an observation in political science that the number of members of a unicameral legislature, or of the lower house of a bicameral legislature, is about the cube root of the population being represented. The rule was devised by Estonian political scientist Rein Taagepera in his 1972 paper "The size of national assemblies". The law has led to a proposal to increase the size of the United States House of Representatives so that the number of representatives would be the cube root of the US population as calculated in the most recent census. The House of Representatives has had 435 members since the Reapportionment Act of 1929 was passed; if the US followed the cube root rule, there would be 693 members of the House of Representatives based on the population at the 2020 Census. This proposal was endorsed by the "New York Times" editorial board in 2018. Subsequent analysis. It has been claimed by Giorgio Margaritondo that the experimental data, including the dataset originally used by Taagepera in 1972, actually fit better to a function with a higher exponent, and that there is sufficient deviation from the cube root rule to question its usefulness. In this regard, analysis by Margaritondo gives an optimal formula of formula_0, where A is the size of the assembly, P is the population, and E = 0.45±0.03. Applying this formula to the U.S. House of Representatives as of the 2020 Census would give a House of between 379 and 1231 members, while using an exponent of 0.4507 gives 693 (the same result as the cube root rule). Table comparing OECD nations in 2019 with EIU Democracy Index ranking. Out of the countries listed, Lithuania is the only one to exactly match the cube root rule. Moreover, Denmark, Canada, and Mexico come close to matching the rule. Some of these countries (e.g. Germany) have overhang seats in a mixed-member proportional system; as a result, the size of their parliaments can vary significantly between elections. Historical US House sizes. The following table describes how the US House of Representatives would have looked historically under the cube root rule according to the Huntington–Hill method.
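The arithmetic behind these figures is simple enough to check directly. The short sketch below (illustrative; the 2020 resident-population figure used here is an assumption, and the exact seat count depends on which census figure and rounding convention are chosen) computes the assembly size implied by the cube root rule and by the two end points of Margaritondo's fitted exponent.

```python
def cube_root_seats(population):
    """Assembly size suggested by the cube root rule."""
    return round(population ** (1 / 3))

def margaritondo_seats(population, exponent):
    """Assembly size from the fitted relation A = 0.1 * P**E."""
    return round(0.1 * population ** exponent)

P = 331_449_281                            # 2020 US Census resident population
print(cube_root_seats(P))                  # about 692-693 for the 2020 US population
print(margaritondo_seats(P, 0.42),         # lower end, E = 0.45 - 0.03
      margaritondo_seats(P, 0.48))         # upper end, E = 0.45 + 0.03
```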
[ { "math_id": 0, "text": "A = 0.1P^{E}" } ]
https://en.wikipedia.org/wiki?curid=60923459
60927729
Marine food web
Marine consumer-resource system A marine food web is a food web of marine life. At the base of the ocean food web are single-celled algae and other plant-like organisms known as phytoplankton. The second trophic level (primary consumers) is occupied by zooplankton which feed off the phytoplankton. Higher order consumers complete the web. There has been increasing recognition in recent years that marine microorganisms play much bigger roles in marine ecosystems than was previously thought. Habitats lead to variations in food webs. Networks of trophic interactions can also provide a lot of information about the functioning of marine ecosystems. Compared to terrestrial environments, marine environments have biomass pyramids which are inverted at the base. In particular, the biomass of consumers (copepods, krill, shrimp, forage fish) is larger than the biomass of primary producers. This happens because the ocean's primary producers are tiny phytoplankton which grow and reproduce rapidly, so a small mass can have a fast rate of primary production. In contrast, many significant terrestrial primary producers, such as mature forests, grow and reproduce slowly, so a much larger mass is needed to achieve the same rate of primary production. Because of this inversion, it is the zooplankton that make up most of the marine animal biomass. Food chains and trophic levels. Food webs are built from food chains. All forms of life in the sea have the potential to become food for another life form. In the ocean, a food chain typically starts with energy from the sun powering phytoplankton, and follows a course such as: phytoplankton → herbivorous zooplankton → carnivorous zooplankton → filter feeder → predatory vertebrate. Phytoplankton don't need other organisms for food, because they have the ability to manufacture their own food directly from inorganic carbon, using sunlight as their energy source. This process is called photosynthesis, and results in the phytoplankton converting naturally occurring carbon into protoplasm. For this reason, phytoplankton are said to be the primary producers at the bottom or the first level of the marine food chain. Since they are at the first level they are said to have a trophic level of 1 (from the Greek "trophē" meaning food). Phytoplankton are then consumed at the next trophic level in the food chain by microscopic animals called zooplankton. Zooplankton constitute the second trophic level in the food chain, and include microscopic one-celled organisms called protozoa as well as small crustaceans, such as copepods and krill, and the larvae of fish, squid, lobsters and crabs. Organisms at this level can be thought of as primary consumers. In turn, the smaller herbivorous zooplankton are consumed by larger carnivorous zooplankters, such as larger predatory protozoa and krill, and by forage fish, which are small, schooling, filter-feeding fish. This makes up the third trophic level in the food chain. The fourth trophic level consists of predatory fish, marine mammals and seabirds that consume forage fish. Examples are swordfish, seals and gannets. Apex predators, such as orcas, which can consume seals, and shortfin mako sharks, which can consume swordfish, make up a fifth trophic level. Baleen whales can consume zooplankton and krill directly, leading to a food chain with only three or four trophic levels. In practice, trophic levels are not usually simple integers because the same consumer species often feeds across more than one trophic level.
For example, a large marine vertebrate may eat smaller predatory fish but may also eat filter feeders; the stingray eats crustaceans, but the hammerhead eats both crustaceans and stingrays. Animals can also eat each other; the cod eats smaller cod as well as crayfish, and crayfish eat cod larvae. The feeding habits of a juvenile animal, and, as a consequence, its trophic level, can change as it grows up. The fisheries scientist Daniel Pauly sets the values of trophic levels to one in primary producers and detritus, two in herbivores and detritivores (primary consumers), three in secondary consumers, and so on. The definition of the trophic level, TL, for any consumer species is formula_0 where formula_1 is the fractional trophic level of the prey "j", and formula_2 represents the fraction of "j" in the diet of "i". In the case of marine ecosystems, the trophic level of most fish and other marine consumers takes a value between 2.0 and 5.0. The upper value, 5.0, is unusual, even for large fish, though it occurs in apex predators of marine mammals, such as polar bears and killer whales. As a point of contrast, humans have a mean trophic level of about 2.21, about the same as a pig or an anchovy. By taxon. Primary producers. At the base of the ocean food web are single-celled algae and other plant-like organisms known as phytoplankton. Phytoplankton are a group of microscopic autotrophs divided into a diverse assemblage of taxonomic groups based on morphology, size, and pigment type. Marine phytoplankton mostly inhabit sunlit surface waters as photoautotrophs, and require nutrients such as nitrogen and phosphorus, as well as sunlight to fix carbon and produce oxygen. However, some marine phytoplankton inhabit the deep sea, often near deep sea vents, as chemoautotrophs which use inorganic electron sources such as hydrogen sulfide, ferrous iron and ammonia. An ecosystem cannot be understood without knowledge of how its food web determines the flow of materials and energy. Phytoplankton autotrophically produce biomass by converting inorganic compounds into organic ones. In this way, phytoplankton function as the foundation of the marine food web by supporting all other life in the ocean. The second central process in the marine food web is the microbial loop. This loop degrades marine bacteria and archaea, remineralises organic and inorganic matter, and then recycles the products either within the pelagic food web or by depositing them as marine sediment on the seafloor. Marine phytoplankton form the basis of the marine food web, account for approximately half of global carbon fixation and oxygen production by photosynthesis and are a key link in the global carbon cycle. Like plants on land, phytoplankton use chlorophyll and other light-harvesting pigments to carry out photosynthesis, absorbing atmospheric carbon dioxide to produce sugars for fuel. Chlorophyll in the water changes the way the water reflects and absorbs sunlight, allowing scientists to map the amount and location of phytoplankton. These measurements give scientists valuable insights into the health of the ocean environment, and help scientists study the ocean carbon cycle. If phytoplankton dies before it is eaten, it descends through the euphotic zone as part of the marine snow and settles into the depths of the sea. In this way, phytoplankton sequester about 2 billion tons of carbon dioxide into the ocean each year, causing the ocean to become a sink of carbon dioxide holding about 90% of all sequestered carbon.
The ocean produces about half of the world's oxygen and stores 50 times more carbon dioxide than the atmosphere. Among the phytoplankton are members from a phylum of bacteria called cyanobacteria. Marine cyanobacteria include the smallest known photosynthetic organisms. The smallest of all, "Prochlorococcus", is just 0.5 to 0.8 micrometres across. In terms of individual numbers, "Prochlorococcus" is possibly the most plentiful species on Earth: a single millilitre of surface seawater can contain 100,000 cells or more. Worldwide there are estimated to be several octillion (10^27) individuals. "Prochlorococcus" is ubiquitous between 40°N and 40°S and dominates in the oligotrophic (nutrient poor) regions of the oceans. The bacterium accounts for about 20% of the oxygen in the Earth's atmosphere. In oceans, most primary production is performed by algae. This is in contrast to land, where most primary production is performed by vascular plants. Algae range from single floating cells to attached seaweeds, while vascular plants are represented in the ocean by groups such as the seagrasses and the mangroves. Larger producers, such as seagrasses and seaweeds, are mostly confined to the littoral zone and shallow waters, where they attach to the underlying substrate and are still within the photic zone. But most of the primary production by algae is performed by the phytoplankton. Thus, in ocean environments, the bottom trophic level is occupied principally by phytoplankton, microscopic drifting organisms, mostly one-celled algae, that float in the sea. Most phytoplankton are too small to be seen individually with the unaided eye. They can appear as a (often green) discoloration of the water when they are present in high enough numbers. Since they increase their biomass mostly through photosynthesis they live in the sun-lit surface layer (euphotic zone) of the sea. The most important groups of phytoplankton include the diatoms and dinoflagellates. Diatoms are especially important in oceans, where according to some estimates they contribute up to 45% of the total ocean's primary production. Diatoms are usually microscopic, although some species can reach up to 2 millimetres in length. Primary consumers. The second trophic level (primary consumers) is occupied by zooplankton which feed off the phytoplankton. Together with the phytoplankton, they form the base of the food pyramid that supports most of the world's great fishing grounds. Many zooplankton are tiny animals found with the phytoplankton in oceanic surface waters, and include tiny crustaceans, and fish larvae and fry (recently hatched fish). Most zooplankton are filter feeders, and they use appendages to strain the phytoplankton in the water. Some larger zooplankton also feed on smaller zooplankton. Some zooplankton can jump about a bit to avoid predators, but they can't really swim. Like phytoplankton, they float with the currents, tides and winds instead. Zooplankton can reproduce rapidly; their populations can increase up to thirty per cent a day under favourable conditions. Many live short and productive lives and reach maturity quickly. The oligotrichs are a group of ciliates which have prominent oral cilia arranged like a collar and lapel. They are very common in marine plankton communities, usually found in concentrations of about one per millilitre. They are the most important herbivores in the sea, the first link in the food chain. Other particularly important groups of zooplankton are the copepods and krill.
Copepods are a group of small crustaceans found in ocean and freshwater habitats. They are the biggest source of protein in the sea, and are important prey for forage fish. Krill constitute the next biggest source of protein. Krill are particularly large predator zooplankton which feed on smaller zooplankton. This means they really belong to the third trophic level, secondary consumers, along with the forage fish. Together, phytoplankton and zooplankton make up most of the plankton in the sea. Plankton is the term applied to any small drifting organisms that float in the sea (Greek "planktos" = wanderer or drifter). By definition, organisms classified as plankton are unable to swim against ocean currents; they cannot resist the ambient current and control their position. In ocean environments, the first two trophic levels are occupied mainly by plankton. Plankton can be divided into producers and consumers. The producers are the phytoplankton (Greek "phyton" = plant) and the consumers, who eat the phytoplankton, are the zooplankton (Greek "zoon" = animal). Jellyfish are slow swimmers, and most species form part of the plankton. Traditionally, jellyfish have been viewed as trophic dead ends. With body plans largely based on water, they were typically considered to have a limited impact on marine ecosystems, attracting the attention of specialized predators such as the ocean sunfish and the leatherback sea turtle. That view has recently been challenged. Jellyfish, and more generally gelatinous zooplankton which include salps and ctenophores, are very diverse, fragile with no hard parts, difficult to see and monitor, subject to rapid population swings and often live inconveniently far from shore or deep in the ocean. It is difficult for scientists to detect and analyse jellyfish in the guts of predators, since they turn to mush when eaten and are rapidly digested. But jellyfish bloom in vast numbers, and it has been shown they form major components in the diets of tuna, spearfish and swordfish as well as various birds and invertebrates such as octopus, sea cucumbers, crabs and amphipods. "Despite their low energy density, the contribution of jellyfish to the energy budgets of predators may be much greater than assumed because of rapid digestion, low capture costs, availability, and selective feeding on the more energy-rich components. Feeding on jellyfish may make marine predators susceptible to ingestion of plastics." Higher order consumers. In 2010, researchers found whales carry nutrients from the depths of the ocean back to the surface using a process they called the whale pump. Whales feed at deeper levels in the ocean where krill is found, but return regularly to the surface to breathe. There whales defecate a liquid rich in nitrogen and iron. Instead of sinking, the liquid stays at the surface where phytoplankton consume it. In the Gulf of Maine, the whale pump provides more nitrogen than the rivers. Microorganisms. There has been increasing recognition in recent years that marine microorganisms play much bigger roles in marine ecosystems than was previously thought. Developments in metagenomics gives researchers an ability to reveal previously hidden diversities of microscopic life, offering a powerful lens for viewing the microbial world and the potential to revolutionise understanding of the living world. 
Metabarcoding dietary analysis techniques are being used to reconstruct food webs at higher levels of taxonomic resolution and are revealing deeper complexities in the web of interactions. Microorganisms play key roles in marine food webs. The viral shunt pathway is a mechanism that prevents marine microbial particulate organic matter (POM) from migrating up trophic levels by recycling it into dissolved organic matter (DOM), which can be readily taken up by microorganisms. Viral shunting helps maintain diversity within the microbial ecosystem by preventing a single species of marine microbe from dominating the micro-environment. The DOM recycled by the viral shunt pathway is comparable to the amount generated by the other main sources of marine DOM. In general, dissolved organic carbon (DOC) is introduced into the ocean environment from bacterial lysis, the leakage or exudation of fixed carbon from phytoplankton (e.g., mucilaginous exopolymer from diatoms), sudden cell senescence, sloppy feeding by zooplankton, the excretion of waste products by aquatic animals, or the breakdown or dissolution of organic particles from terrestrial plants and soils. Bacteria in the microbial loop decompose this particulate detritus to utilize this energy-rich matter for growth. Since more than 95% of organic matter in marine ecosystems consists of polymeric, high molecular weight (HMW) compounds (e.g., protein, polysaccharides, lipids), only a small portion of total dissolved organic matter (DOM) is readily utilizable to most marine organisms at higher trophic levels. This means that dissolved organic carbon is not available directly to most marine organisms; marine bacteria introduce this organic carbon into the food web, resulting in additional energy becoming available to higher trophic levels. Viruses are the "most abundant biological entities on the planet", particularly in the oceans which occupy over 70% of the Earth's surface. The realisation in 1989 that every millilitre of seawater typically contains millions of marine viruses gave impetus to understand their diversity and role in the marine environment. Viruses are now considered to play key roles in marine ecosystems by controlling microbial community dynamics, host metabolic status, and biogeochemical cycling via lysis of hosts. A giant marine virus CroV infects and causes the death by lysis of the marine zooflagellate "Cafeteria roenbergensis". This impacts coastal ecology because "Cafeteria roenbergensis" feeds on bacteria found in the water. When there are low numbers of "Cafeteria roenbergensis" due to extensive CroV infections, the bacterial populations rise exponentially. The impact of CroV on natural populations of "C. roenbergensis" remains unknown; however, the virus has been found to be very host specific, and does not infect other closely related organisms. Cafeteria roenbergensis is also infected by a second virus, the Mavirus virophage, which is a satellite virus, meaning it is able to replicate only in the presence of another specific virus, in this case in the presence of CroV. This virus interferes with the replication of CroV, which leads to the survival of "C. roenbergensis" cells. Mavirus is able to integrate into the genome of cells of "C. roenbergensis", and thereby confer immunity to the population. Fungi. Parasitic chytrids can transfer material from large inedible phytoplankton to zooplankton.
Chytrid zoospores are excellent food for zooplankton in terms of size (2–5 μm in diameter), shape and nutritional quality (rich in polyunsaturated fatty acids and cholesterols). Large colonies of host phytoplankton may also be fragmented by chytrid infections and become edible to zooplankton. Parasitic fungi, as well as saprotrophic fungi, directly assimilate phytoplankton organic carbon. By releasing zoospores, the fungi bridge the trophic linkage to zooplankton, known as the mycoloop. By modifying the particulate and dissolved organic carbon, they can affect bacteria and the microbial loop. These processes may modify marine snow chemical composition and the subsequent functioning of the biological carbon pump. By habitat. Pelagic webs. For pelagic ecosystems, Legendre and Rassoulzadagan proposed in 1995 a continuum of trophic pathways with the herbivorous food-chain and microbial loop as food-web end members. The classical linear food-chain end-member involves grazing by zooplankton on larger phytoplankton and subsequent predation on zooplankton by either larger zooplankton or another predator. In such a linear food-chain a predator can either lead to high phytoplankton biomass (in a system with phytoplankton, herbivore and a predator) or reduced phytoplankton biomass (in a system with four levels). Changes in predator abundance can, thus, lead to trophic cascades. The microbial loop end-member involves not only phytoplankton, as basal resource, but also dissolved organic carbon. Dissolved organic carbon is used by heterotrophic bacteria for growth; these bacteria are predated upon by larger zooplankton. Consequently, dissolved organic carbon is transformed, via a bacterial-microzooplankton loop, to zooplankton. These two end-member carbon processing pathways are connected at multiple levels. Small phytoplankton can be consumed directly by microzooplankton. As illustrated in the diagram on the right, dissolved organic carbon is produced in multiple ways and by various organisms, both by primary producers and consumers of organic carbon. DOC release by primary producers occurs passively by leakage and actively during unbalanced growth during nutrient limitation. Another direct pathway from phytoplankton to the dissolved organic pool involves viral lysis. Marine viruses are a major cause of phytoplankton mortality in the ocean, particularly in warmer, low-latitude waters. Sloppy feeding by herbivores and incomplete digestion of prey by consumers are other sources of dissolved organic carbon. Heterotrophic microbes use extracellular enzymes to solubilize particulate organic carbon and use this and other dissolved organic carbon resources for growth and maintenance. Part of the microbial heterotrophic production is used by microzooplankton; another part of the heterotrophic community is subject to intense viral lysis and this causes release of dissolved organic carbon again. The efficiency of the microbial loop depends on multiple factors but in particular on the relative importance of predation and viral lysis to the mortality of heterotrophic microbes. Scientists are starting to explore in more detail the largely unknown twilight zone of the mesopelagic, 200 to 1,000 metres deep. This layer is responsible for removing about 4 billion tonnes of carbon dioxide from the atmosphere each year. The mesopelagic layer is inhabited by most of the marine fish biomass.
According to a 2017 study, narcomedusae consume the greatest diversity of mesopelagic prey, followed by physonect siphonophores, ctenophores and cephalopods. The importance of the so-called "jelly web" is only beginning to be understood, but it seems medusae, ctenophores and siphonophores can be key predators in deep pelagic food webs with ecological impacts similar to predator fish and squid. Traditionally gelatinous predators were thought ineffectual providers of marine trophic pathways, but they appear to have substantial and integral roles in deep pelagic food webs. Diel vertical migration, an important active transport mechanism, allows mesozooplankton to sequester carbon dioxide from the atmosphere as well as supply carbon needs for other mesopelagic organisms. A 2020 study reported that by 2050 global warming could be spreading in the deep ocean seven times faster than it is now, even if emissions of greenhouse gases are cut. Warming in mesopelagic and deeper layers could have major consequences for the deep ocean food web, since ocean species will need to move to stay at survival temperatures. At the ocean surface. Ocean surface habitats sit at the interface between the ocean and the atmosphere. The biofilm-like habitat at the surface of the ocean harbours surface-dwelling microorganisms, commonly referred to as neuston. This vast air–water interface sits at the intersection of major air–water exchange processes spanning more than 70% of the global surface area . Bacteria in the surface microlayer of the ocean, the so-called bacterioneuston, are of interest due to practical applications such as air-sea gas exchange of greenhouse gases, production of climate-active marine aerosols, and remote sensing of the ocean. Of specific interest is the production and degradation of surfactants (surface active materials) via microbial biochemical processes. Major sources of surfactants in the open ocean include phytoplankton, terrestrial runoff, and deposition from the atmosphere. Unlike coloured algal blooms, surfactant-associated bacteria may not be visible in ocean colour imagery. Having the ability to detect these "invisible" surfactant-associated bacteria using synthetic aperture radar has immense benefits in all-weather conditions, regardless of cloud, fog, or daylight. This is particularly important in very high winds, because these are the conditions when the most intense air-sea gas exchanges and marine aerosol production take place. Therefore, in addition to colour satellite imagery, SAR satellite imagery may provide additional insights into a global picture of biophysical processes at the boundary between the ocean and atmosphere, air-sea greenhouse gas exchanges and production of climate-active marine aerosols. At the ocean floor. Ocean floor (benthic) habitats sit at the interface between the ocean and the interior of the Earth. Coastal webs. Coastal waters include the waters in estuaries and over continental shelves. They occupy about 8 per cent of the total ocean area and account for about half of all the ocean productivity. The key nutrients determining eutrophication are nitrogen in coastal waters and phosphorus in lakes. Both are found in high concentrations in guano (seabird feces), which acts as a fertilizer for the surrounding ocean or an adjacent lake. Uric acid is the dominant nitrogen compound, and during its mineralization different nitrogen forms are produced. 
Ecosystems, even those with seemingly distinct borders, rarely function independently of other adjacent systems. Ecologists are increasingly recognizing the important effects that cross-ecosystem transport of energy and nutrients has on plant and animal populations and communities. A well known example of this is how seabirds concentrate marine-derived nutrients on breeding islands in the form of feces (guano) which contains ≈15–20% nitrogen (N), as well as 10% phosphorus. These nutrients dramatically alter terrestrial ecosystem functioning and dynamics and can support increased primary and secondary productivity. However, although many studies have demonstrated nitrogen enrichment of terrestrial components due to guano deposition across various taxonomic groups, only a few have studied its retroaction on marine ecosystems and most of these studies were restricted to temperate regions and high nutrient waters. In the tropics, coral reefs can be found adjacent to islands with large populations of breeding seabirds, and could be potentially affected by local nutrient enrichment due to the transport of seabird-derived nutrients in surrounding waters. Studies on the influence of guano on tropical marine ecosystems suggest nitrogen from guano enriches seawater and reef primary producers. Reef building corals have essential nitrogen needs and, thriving in nutrient-poor tropical waters where nitrogen is a major limiting nutrient for primary productivity, they have developed specific adaptations for conserving this element. Their establishment and maintenance are partly due to their symbiosis with unicellular dinoflagellates, Symbiodinium spp. (zooxanthellae), that can take up and retain dissolved inorganic nitrogen (ammonium and nitrate) from the surrounding waters. These zooxanthellae can also recycle the animal wastes and subsequently transfer them back to the coral host as amino acids, ammonium or urea. Corals are also able to ingest nitrogen-rich sediment particles and plankton. Coastal eutrophication and excess nutrient supply can have strong impacts on corals, leading to a decrease in skeletal growth. In the diagram above on the right: (1) ammonification produces NH3 and NH4+ and (2) nitrification produces NO3− by NH4+ oxidation. (3) Under the alkaline conditions typical of the seabird feces, the NH3 is rapidly volatilised and transformed to NH4+, (4) which is transported out of the colony, and through wet-deposition exported to distant ecosystems, which are eutrophised. The phosphorus cycle is simpler and has reduced mobility. This element is found in a number of chemical forms in the seabird fecal material, but the most mobile and bioavailable is orthophosphate, (5) which can be leached by subterranean or superficial waters. DNA barcoding can be used to construct food web structures with better taxonomic resolution at the web nodes. This provides more specific species identification and greater clarity about exactly who eats whom. "DNA barcodes and DNA information may allow new approaches to the construction of larger interaction webs, and overcome some hurdles to achieving adequate sample size". A newly applied method for species identification is DNA metabarcoding. Species identification via morphology is relatively difficult and requires a lot of time and expertise. High throughput sequencing DNA metabarcoding enables taxonomic assignment and therefore identification for the complete sample, based on the group-specific primers chosen for the preceding DNA amplification. Polar webs.
Arctic and Antarctic marine systems have very different topographical structures and as a consequence have very different food web structures. Both Arctic and Antarctic pelagic food webs have characteristic energy flows controlled largely by a few key species. But there is no single generic web for either. Alternative pathways are important for resilience and maintaining energy flows. However, these more complicated alternatives provide less energy flow to upper trophic-level species. "Food-web structure may be similar in different regions, but the individual species that dominate mid-trophic levels vary across polar regions". The Arctic food web is complex. The loss of sea ice can ultimately affect the entire food web, from algae and plankton to fish to mammals. The impact of climate change on a particular species can ripple through a food web and affect a wide range of other organisms... Not only is the decline of sea ice impairing polar bear populations by reducing the extent of their primary habitat, it is also negatively impacting them via food web effects. Declines in the duration and extent of sea ice in the Arctic leads to declines in the abundance of ice algae, which thrive in nutrient-rich pockets in the ice. These algae are eaten by zooplankton, which are in turn eaten by Arctic cod, an important food source for many marine mammals, including seals. Seals are eaten by polar bears. Hence, declines in ice algae can contribute to declines in polar bear populations. In 2020 researchers reported that measurements over the last two decades on primary production in the Arctic Ocean show an increase of nearly 60% due to higher concentrations of phytoplankton. They hypothesize that new nutrients are flowing in from other oceans and suggest this means the Arctic Ocean may be able to support higher trophic level production and additional carbon fixation in the future. Polar microorganisms. In addition to the varied topographies and in spite of an extremely cold climate, polar aquatic regions are teeming with microbial life. Even in sub-glacial regions, cellular life has adapted to these extreme environments where perhaps there are traces of early microbes on Earth. As grazing by macrofauna is limited in most of these polar regions, viruses are being recognised for their role as important agents of mortality, thereby influencing the biogeochemical cycling of nutrients that, in turn, impact community dynamics at seasonal and spatial scales. Microorganisms are at the heart of Arctic and Antarctic food webs. These polar environments contain a diverse range of bacterial, archaeal, and eukaryotic microbial communities that, along with viruses, are important components of the polar ecosystems. They are found in a range of habitats, including subglacial lakes and cryoconite holes, making the cold biomes of these polar regions replete with metabolically diverse microorganisms and sites of active biogeochemical cycling. These environments, that cover approximately one-fifth of the surface of the Earth and that are inhospitable to human life, are home to unique microbial communities. The resident microbiota of the two regions has a similarity of only about 30%—not necessarily surprising given the limited connectivity of the polar oceans and the difference in freshwater supply, coming from glacial melts and rivers that drain into the Southern Ocean and the Arctic Ocean, respectively. 
The separation is not just by distance: Antarctica is surrounded by the Southern Ocean that is driven by the strong Antarctic Circumpolar Current, whereas the Arctic is ringed by landmasses. Such different topographies resulted as the two continents moved to the opposite polar regions of the planet ≈40–25 million years ago. Magnetic and gravity data point to the evolution of the Arctic, driven by the Amerasian and Eurasian basins, from 145 to 161 million years ago to a cold polar region of water and ice surrounded by land. Antarctica was formed from the breakup of the super-continent, Gondwana, a landmass surrounded by the Southern Ocean. The Antarctic continent is permanently covered with glacial ice, with only 0.4% of its area comprising exposed land dotted with lakes and ponds. Microbes, both prokaryotic and eukaryotic that are present in these environments, are largely different between the two poles. For example, 78% of bacterial operational taxonomic units (OTUs) of surface water communities of the Southern Ocean and 70% of the Arctic Ocean are unique to each pole. Polar regions are variable in time and space—analysis of the V6 region of the small subunit (SSU) rRNA gene has resulted in about 400,000 gene sequences and over 11,000 OTUs from 44 polar samples of the Arctic and the Southern Ocean. These OTUs cluster separately for the two polar regions and, additionally, exhibit significant differences in just the polar bacterioplankton communities from different environments (coastal and open ocean) and different seasons. The polar regions are characterised by truncated food webs, and the role of viruses in ecosystem function is likely to be even greater than elsewhere in the marine food web. Their diversity is still relatively under-explored, and the way in which they affect polar communities is not well understood, particularly in nutrient cycling. Foundation and keystone species. The concept of a foundation species was introduced in 1972 by Paul K. Dayton, who applied it to certain members of marine invertebrate and algae communities. It was clear from studies in several locations that there were a small handful of species whose activities had a disproportionate effect on the rest of the marine community and they were therefore key to the resilience of the community. Dayton's view was that focusing on foundation species would allow for a simplified approach to more rapidly understand how a community as a whole would react to disturbances, such as pollution, instead of attempting the extremely difficult task of tracking the responses of all community members simultaneously. Foundation species are species that have a dominant role structuring an ecological community, shaping its environment and defining its ecosystem. Such ecosystems are often named after the foundation species, such as seagrass meadows, oyster beds, coral reefs, kelp forests and mangrove forests. For example, the red mangrove is a common foundation species in mangrove forests. The mangrove's root provides nursery grounds for young fish, such as snapper. A foundation species can occupy any trophic level in a food web but tend to be a producer. The concept of the keystone species was introduced in 1969 by the zoologist Robert T. Paine. Paine developed the concept to explain his observations and experiments on the relationships between marine invertebrates of the intertidal zone (between the high and low tide lines), including starfish and mussels. 
Some sea stars prey on sea urchins, mussels, and other shellfish that have no other natural predators. If the sea star is removed from the ecosystem, the mussel population explodes uncontrollably, driving out most other species. Keystone species are species that have large effects, disproportionate to their numbers, within ecosystem food webs. An ecosystem may experience a dramatic shift if a keystone species is removed, even though that species was a small part of the ecosystem by measures of biomass or productivity. Sea otters limit the damage sea urchins inflict on kelp forests. When the sea otters of the North American west coast were hunted commercially for their fur, their numbers fell to such low levels that they were unable to control the sea urchin population. The urchins in turn grazed the holdfasts of kelp so heavily that the kelp forests largely disappeared, along with all the species that depended on them. Reintroducing the sea otters has enabled the kelp ecosystem to be restored. Topological position. Networks of trophic interactions can provide a lot of information about the functioning of marine ecosystems. Beyond feeding habits, three additional traits (mobility, size, and habitat) of various organisms can complement this trophic view. In order to sustain the proper functioning of ecosystems, there is a need to better understand the simple question asked by Lawton in 1994: What do species do in ecosystems? Since ecological roles and food web positions are not independent, the question of what kinds of species occupy various network positions needs to be asked. Since the very first attempts to identify keystone species, there has been an interest in their place in food webs. First they were suggested to have been top predators, then also plants, herbivores, and parasites. For both community ecology and conservation biology, it would be useful to know where they are in complex trophic networks. An example of this kind of network analysis is shown in the diagram, based on data from a marine food web. It shows relationships between the topological positions of web nodes and the mobility values of the organisms involved. The web nodes are shape-coded according to their mobility, and colour-coded using indices which emphasise (A) bottom-up groups (sessile and drifters), and (B) groups at the top of the food web. The relative importance of organisms varies with time and space, and looking at large databases may provide general insights into the problem. If different kinds of organisms occupy different types of network positions, then adjusting for this in food web modelling will result in more reliable predictions. Comparisons of centrality indices with each other (the similarity of degree centrality and closeness centrality, or of keystone and keystoneness indexes), and of centrality indices versus trophic level (most high-centrality species occur at medium trophic levels), were made to better understand the critically important positions of organisms in food webs. Extending this interest by adding trait data to trophic groups helps the biological interpretation of the results. Relationships between centrality indices have been studied for other network types as well, including habitat networks. With large databases and new statistical analyses, questions like these can be re-investigated and knowledge can be updated. Cryptic interactions.
Cryptic interactions, interactions which are "hidden in plain sight", occur throughout the marine planktonic foodweb but are currently largely overlooked by established methods, which mean large‐scale data collection for these interactions is limited. Despite this, current evidence suggests some of these interactions may have perceptible impacts on foodweb dynamics and model results. Incorporation of cryptic interactions into models is especially important for those interactions involving the transport of nutrients or energy. The diagram illustrates the material fluxes, populations, and molecular pools that are impacted by five cryptic interactions: mixotrophy, ontogenetic and species differences, microbial cross‐feeding, auxotrophy and cellular carbon partitioning. These interactions may have synergistic effects as the regions of the food web that they impact overlap. For example, cellular carbon partition in phytoplankton can affect both downstream pools of organic matter utilised in microbial cross‐feeding and exchanged in cases of auxotrophy, as well as prey selection based on ontogenetic and species differences. Simplifications such as "zooplankton consume phytoplankton", "phytoplankton take up inorganic nutrients", "gross primary production determines the amount of carbon available to the food web", etc. have helped scientists explain and model general interactions in the aquatic environment. Traditional methods have focused on quantifying and qualifying these generalisations, but rapid advancements in genomics, sensor detection limits, experimental methods, and other technologies in recent years have shown that generalisation of interactions within the plankton community may be too simple. These enhancements in technology have exposed a number of interactions which appear as cryptic because bulk sampling efforts and experimental methods are biased against them. Complexity and stability. Food webs provide a framework within which a complex network of predator–prey interactions can be organised. A food web model is a network of food chains. Each food chain starts with a primary producer or autotroph, an organism, such as an alga or a plant, which is able to manufacture its own food. Next in the chain is an organism that feeds on the primary producer, and the chain continues in this way as a string of successive predators. The organisms in each chain are grouped into trophic levels, based on how many links they are removed from the primary producers. The length of the chain, or trophic level, is a measure of the number of species encountered as energy or nutrients move from plants to top predators. Food energy flows from one organism to the next and to the next and so on, with some energy being lost at each level. At a given trophic level there may be one species or a group of species with the same predators and prey. In 1927, Charles Elton published an influential synthesis on the use of food webs, which resulted in them becoming a central concept in ecology. In 1966, interest in food webs increased after Robert Paine's experimental and descriptive study of intertidal shores, suggesting that food web complexity was key to maintaining species diversity and ecological stability. Many theoretical ecologists, including Robert May and Stuart Pimm, were prompted by this discovery and others to examine the mathematical properties of food webs. According to their analyses, complex food webs should be less stable than simple food webs. 
The apparent paradox between the complexity of food webs observed in nature and the mathematical fragility of food web models is currently an area of intensive study and debate. The paradox may be due partially to conceptual differences between persistence of a food web and equilibrial stability of a food web. A trophic cascade can occur in a food web if a trophic level in the web is suppressed. For example, a top-down cascade can occur if predators are effective enough in predation to reduce the abundance, or alter the behavior, of their prey, thereby releasing the next lower trophic level from predation. A top-down cascade is a trophic cascade where the top consumer/predator controls the primary consumer population. In turn, the primary producer population thrives. The removal of the top predator can alter the food web dynamics. In this case, the primary consumers would overpopulate and exploit the primary producers. Eventually there would not be enough primary producers to sustain the consumer population. Top-down food web stability depends on competition and predation in the higher trophic levels. Invasive species can also alter this cascade by removing or becoming a top predator. This interaction may not always be negative. Studies have shown that certain invasive species have begun to shift cascades and, as a consequence, ecosystem degradation has been repaired. An example of a cascade in a complex, open-ocean ecosystem occurred in the northwest Atlantic during the 1980s and 1990s. The removal of Atlantic cod ("Gadus morhua") and other ground fishes by sustained overfishing resulted in increases in the abundance of the prey species for these ground fishes, particularly smaller forage fishes and invertebrates such as the northern snow crab ("Chionoecetes opilio") and northern shrimp ("Pandalus borealis"). The increased abundance of these prey species altered the community of zooplankton that serve as food for smaller fishes and invertebrates as an indirect effect. Top-down cascades can be important for understanding the knock-on effects of removing top predators from food webs, as humans have done in many places through hunting and fishing. In a bottom-up cascade, the population of primary producers will always control the increase/decrease of the energy in the higher trophic levels. Primary producers are organisms, such as plants and phytoplankton, that carry out photosynthesis. Although light is important, primary producer populations are altered by the amount of nutrients in the system. This food web relies on the availability and limitation of resources. All populations will experience growth if there is initially a large amount of nutrients. Terrestrial comparisons. Marine environments can have inversions in their biomass pyramids. In particular, the biomass of consumers (copepods, krill, shrimp, forage fish) is generally larger than the biomass of primary producers. Because of this inversion, it is the zooplankton that make up most of the marine animal biomass. As primary consumers, zooplankton are the crucial link between the primary producers (mainly phytoplankton) and the rest of the marine food web (secondary consumers); the ocean's primary producers are mostly tiny phytoplankton which have r-strategist traits of growing and reproducing rapidly, so a small mass can have a fast rate of primary production.
In contrast, many terrestrial primary producers, such as mature forests, have K-strategist traits of growing and reproducing slowly, so a much larger mass is needed to achieve the same rate of primary production. The rate of production divided by the average amount of biomass that achieves it is known as an organism's Production/Biomass (P/B) ratio. Production is measured in terms of the amount of movement of mass or energy per area per unit of time. In contrast, the biomass measurement is in units of mass per unit area or volume. The P/B ratio utilizes inverse time units (example: 1/month). This ratio allows for an estimate of the amount of energy flow compared to the amount of biomass at a given trophic level, allowing for demarcations to be made between trophic levels. The P/B ratio most commonly decreases as trophic level and organismal size increase, with small, ephemeral organisms having a higher P/B ratio than large, long-lasting ones. Examples: The bristlecone pine can live for thousands of years, and has a very low production/biomass ratio. The cyanobacterium "Prochlorococcus" lives for about 24 hours, and has a very high production/biomass ratio. In oceans, most primary production is performed by algae. This is in contrast to land, where most primary production is performed by vascular plants. Aquatic producers, such as planktonic algae or aquatic plants, lack the large accumulation of secondary growth that exists in the woody trees of terrestrial ecosystems. However, they are able to reproduce quickly enough to support a larger biomass of grazers. This inverts the pyramid. Primary consumers have longer lifespans and slower growth rates, so they accumulate more biomass than the producers they consume. Phytoplankton live just a few days, whereas the zooplankton eating the phytoplankton live for several weeks and the fish eating the zooplankton live for several consecutive years. Aquatic predators also tend to have a lower death rate than the smaller consumers, which contributes to the inverted pyramidal pattern. Population structure, migration rates, and environmental refuge for prey are other possible causes for pyramids with biomass inverted. Energy pyramids, however, will always have an upright pyramid shape if all sources of food energy are included, since this is dictated by the second law of thermodynamics. Most organic matter produced is eventually consumed and respired to inorganic carbon. The rate at which organic matter is preserved via burial by accumulating sediments is only between 0.2 and 0.4 billion tonnes per year, representing a very small fraction of the total production. Global phytoplankton production is about 50 billion tonnes per year and phytoplankton biomass is about one billion tonnes, implying a turnover time of one week. Marine macrophytes have a similar global biomass but a production of only one billion tonnes per year, implying a turnover time of one year. These high turnover rates (compared with global terrestrial vegetation turnover of one to two decades) imply not only steady production, but also efficient consumption of organic matter. There are multiple organic matter loss pathways (respiration by autotrophs and heterotrophs, grazing, viral lysis, detrital route), but all eventually result in respiration and release of inorganic carbon. Anthropogenic effects. Pteropods and brittle stars together form the base of the Arctic food webs and both are seriously damaged by acidification.
Pteropod shells dissolve with increasing acidification and brittle stars lose muscle mass when re-growing appendages. Additionally the brittle star's eggs die within a few days when exposed to expected conditions resulting from Arctic acidification. Acidification threatens to destroy Arctic food webs from the base up. Arctic waters are changing rapidly and are advanced in the process of becoming undersaturated with aragonite. Arctic food webs are considered simple, meaning there are few steps in the food chain from small organisms to larger predators. For example, pteropods are "a key prey item of a number of higher predators – larger plankton, fish, seabirds, whales". Ecosystems in the ocean are more sensitive to climate change than anywhere else on Earth. This is due to warmer temperatures and ocean acidification. With the ocean temperatures increasing, it is predicted that fish species will move from their known ranges and locate new areas. During this change, the numbers within each species will drop significantly. Currently there are many relationships between predators and prey, where they rely on one another to survive. With a shift in where species will be located, the predator-prey relationships and interactions will be greatly impacted. Studies are still being done to understand how these changes will affect the food-web dynamics. Using modeling, scientists are able to analyze the trophic interactions in which certain species thrive, and how these depend on the other species found in the same areas. Recent models suggest that many of the larger marine species will end up shifting their ranges at a slower pace than climate change suggests. This would impact the predator-prey relationship even more, as the smaller species and organisms are more likely to be influenced by ocean warming and to move sooner than the larger mammals. These predators are expected to stay longer in their historical ranges before moving, following the movement of the smaller species. With "new" species entering the space of the larger mammals, the ecology changes and there is more prey for them to feed upon. The smaller species would end up having a smaller range, whereas the larger mammals would have extended their range. The shifting dynamics will have great effects on all species within the ocean and will result in many more changes impacting our entire ecosystem. The movement in where predators can find prey within the ocean will also impact the fisheries industry. Fishermen currently know the areas certain fish species occupy; as the shift occurs, it will be more difficult to figure out where these fish are spending their time, costing the fishermen more money as they may have to travel further. As a result, this could impact the current fishing regulations set up for certain areas as these fish populations move. Through a survey conducted at Princeton University, researchers found that marine species are consistently keeping pace with "climate velocity", the speed and direction in which climate change is moving. Looking at data from 1968 to 2011, it was found that 70 per cent of the shifts in animals' depths and 74 per cent of changes in latitude correlated with regional-scale fluctuations in ocean temperature. These movements are causing species to move between 4.5 and 40 miles per decade further away from the equator. With the help of models, regions can predict where the species may end up. Models will have to adapt to the changes as more is learned about how climate is affecting species.
"Our results show how future climate change can potentially weaken marine food webs through reduced energy flow to higher trophic levels and a shift towards a more detritus-based system, leading to food web simplification and altered producer–consumer dynamics, both of which have important implications for the structuring of benthic communities." "...increased temperatures reduce the vital flow of energy from the primary food producers at the bottom (e.g. algae), to intermediate consumers (herbivores), to predators at the top of marine food webs. Such disturbances in energy transfer can potentially lead to a decrease in food availability for top predators, which in turn, can lead to negative impacts for many marine species within these food webs... "Whilst climate change increased the productivity of plants, this was mainly due to an expansion of cyanobacteria (small blue-green algae)," said Mr Ullah. "This increased primary productivity does not support food webs, however, because these cyanobacteria are largely unpalatable and they are not consumed by herbivores. Understanding how ecosystems function under the effects of global warming is a challenge in ecological research. Most research on ocean warming involves simplified, short-term experiments based on only one or a few species." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " TL_i=1 + \\sum_j (TL_j \\cdot DC_{ij})," }, { "math_id": 1, "text": "TL_j" }, { "math_id": 2, "text": "DC_{ij}" } ]
https://en.wikipedia.org/wiki?curid=60927729
60933
Triboelectric effect
Charge transfer due to contact or sliding The triboelectric effect (also known as triboelectricity, triboelectric charging, triboelectrification, or tribocharging) describes electric charge transfer between two objects when they contact or slide against each other. It can occur with different materials, such as the sole of a shoe on a carpet, or between two pieces of the same material. It is ubiquitous, and occurs with differing amounts of charge transfer (tribocharge) for all solid materials. There is evidence that tribocharging can occur between combinations of solids, liquids and gases, for instance liquid flowing in a solid tube or an aircraft flying through air. Often static electricity is a consequence of the triboelectric effect when the charge stays on one or both of the objects and is not conducted away. The term triboelectricity has been used to refer to the field of study or the general phenomenon of the triboelectric effect, or to the static electricity that results from it. When there is no sliding, tribocharging is sometimes called contact electrification, and any static electricity generated is sometimes called contact electricity. The terms are often used interchangeably, and may be confused. Triboelectric charge plays a major role in industries such as packaging of pharmaceutical powders, and in many processes such as dust storms and planetary formation. It can also increase friction and adhesion. While many aspects of the triboelectric effect are now understood and extensively documented, significant disagreements remain in the current literature about the underlying details. History. The historical development of triboelectricity is interwoven with work on static electricity and electrons themselves. Experiments involving triboelectricity and static electricity occurred before the discovery of the electron. The name ēlektron (ἤλεκτρον) is Greek for amber, which is connected to the recording of electrostatic charging by Thales of Miletus around 585 BCE, and possibly others even earlier. The prefix "tribo-" (Greek for 'rub') refers to sliding, friction and related processes, as in tribology. From the axial age (8th to 3rd century BC) the attraction of materials due to static electricity by rubbing amber and the attraction of magnetic materials were considered to be similar or the same. There are indications that it was known both in Europe and outside, for instance China and other places. Syrian women used amber whorls in weaving and exploited the triboelectric properties, as noted by Pliny the Elder. The effect was mentioned in records from the medieval period. Archbishop Eustathius of Thessalonica, Greek scholar and writer of the 12th century, records that Woliver, king of the Goths, could draw sparks from his body. He also states that a philosopher was able, while dressing, to draw sparks from his clothes, similar to the report by Robert Symmer of his silk stocking experiments, which may be found in the 1759 Philosophical Transactions. It is generally considered that the first major scientific analysis was by William Gilbert in his publication De Magnete in 1600. He discovered that many more materials than amber, such as sulphur, wax and glass, could produce static electricity when rubbed, and that moisture prevented electrification. Others such as Sir Thomas Browne made important contributions slightly later, both in terms of materials and the first use of the word electricity in Pseudodoxia Epidemica.
He noted that metals did not show triboelectric charging, perhaps because the charge was conducted away. An important step was around 1663 when Otto von Guericke invented a machine that could automate triboelectric charge generation, making it much easier to produce more tribocharge; other electrostatic generators followed. For instance, shown in the Figure is an electrostatic generator built by Francis Hauksbee the Younger. Another key development was in the 1730s when C.F. du Fay pointed out that there were two types of charge which he named "vitreous" and "resinous". These names corresponded to the glass (vitreous) rods and bituminous coal, amber, or sealing wax (resinous) used in du Fay's experiments. These names were used throughout the 19th century. The use of the terms "positive" and "negative" for types of electricity grew out of the independent work of Benjamin Franklin around 1747 where he ascribed electricity to an over- or under- abundance of an electrical fluid. At about the same time Johan Carl Wilcke published in his 1757 PhD thesis a triboelectric series. In this work materials were listed in order of the polarity of charge separation when they are touched or slide against another. A material towards the bottom of the series, when touched to a material near the top of the series, will acquire a more negative charge. The first systematic analysis of triboelectricity is considered to be the work of Jean Claude Eugène Péclet in 1834. He studied triboelectric charging for a range of conditions such as the material, pressure and rubbing of surfaces. It was some time before there were further quantitative works by Owen in 1909 and Jones in 1915. The most extensive early set of experimental analyses was from 1914–1930 by the group of Professor Shaw, who laid much of the foundation of experimental knowledge. In a series of papers he: was one of the first to mention some of the failings of the triboelectric series, also showing that heat had a major effect on tribocharging; analyzed in detail where different materials would fall in a triboelectric series, at the same time pointing out anomalies; separately analyzed glass and solid elements and solid elements and textiles, carefully measuring both tribocharging and friction; analyzed charging due to air-blown particles; demonstrated that surface strain and relaxation played a critical role for a range of materials, and examined the tribocharging of many different elements with silica. Much of this work predates an understanding of solid state variations of energies levels with position, and also band bending. It was in the early 1950s in the work of authors such as Vick that these were taken into account along with concepts such as quantum tunnelling and behavior such as Schottky barrier effects, as well as including models such as asperities for contacts based upon the work of Frank Philip Bowden and David Tabor. Basic characteristics. Triboelectric charging occurs when two materials are brought into contact then separated, or slide against each other. An example is rubbing a plastic pen on a shirt sleeve made of cotton, wool, polyester, or the blended fabrics used in modern clothing. An electrified pen will attract and pick up pieces of paper less than a square centimeter, and will repel a similarly electrified pen. This repulsion is detectable by hanging both pens on threads and setting them near one another. 
Such experiments led to the theory of two types of electric charge, one being the negative of the other, with a simple sum respecting signs giving the total charge. The electrostatic attraction of the charged plastic pen to neutral uncharged pieces of paper (for example) is due to induced dipoles. The triboelectric effect can be unpredictable because many details are often not controlled. Phenomena which do not have a simple explanation have been known for many years. For instance, as early as 1910, Jaimeson observed that for a piece of cellulose, the sign of the charge was dependent upon whether it was bent concave or convex during rubbing. The same behavior with curvature was reported in 1917 by Shaw, who noted that the effect of curvature with different materials made them either more positive or negative. In 1920, Richards pointed out that for colliding particles the velocity and mass played a role, not just what the materials were. In 1926, Shaw pointed out that with two pieces of identical material, the sign of the charge transfer from "rubber" to "rubbed" could change with time. There are other more recent experimental results which also do not have a simple explanation. For instance: the work of Burgo and Erdemir, which showed that the sign of charge transfer reverses between when a tip is pushing into a substrate versus when it pulls out; the detailed work of Lee et al. and of Forward, Lacks and Sankaran and others measuring the charge transfer during collisions between particles of zirconia of different size but the same composition, with one size charging positive, the other negative; and the observations, using sliding or Kelvin probe force microscopy, of inhomogeneous charge variations between nominally identical materials. The details of how and why tribocharging occurs are not established science as of 2023. One component is the difference in the work function (also called the electron affinity) between the two materials. This can lead to charge transfer as, for instance, analyzed by Harper. As has been known since at least 1953, the contact potential is part of the process but does not explain many results, such as the ones mentioned in the last two paragraphs. Many studies have pointed out issues with the work function difference (Volta potential) as a complete explanation. There is also the question of why sliding is often important. Surfaces have many nanoscale asperities where the contact is taking place, which has been taken into account in many approaches to triboelectrification. Volta and Helmholtz suggested that the role of sliding was to produce more contacts per second. In modern terms, the idea is that electrons move many times faster than atoms, so the electrons are always in equilibrium when atoms move (the Born–Oppenheimer approximation). With this approximation, each asperity contact during sliding is equivalent to a stationary one; there is no direct coupling between the sliding velocity and electron motion. An alternative view (beyond the Born–Oppenheimer approximation) is that sliding acts as a quantum mechanical pump which can excite electrons to go from one material to another. A different suggestion is that local heating during sliding matters, an idea first suggested by Frenkel in 1941. Other papers have considered that local bending at the nanoscale produces voltages which help drive charge transfer via the flexoelectric effect. There are also suggestions that surface or trapped charges are important. 
More recently there have been attempts to include a full solid state description. Explanations and mechanisms. From early work starting around the end of the 19th century a large amount of information is available about what, empirically, causes triboelectricity. While there is extensive experimental data on triboelectricity there is not as yet full scientific consensus on the source, or perhaps more probably the sources. Some aspects are established, and will be part of the full picture: Triboelectric series. An empirical approach to triboelectricity is a triboelectric series"." This is a list of materials ordered by how they develop a charge relative to other materials on the list. Johan Carl Wilcke published the first one in a 1757 paper. The series was expanded by Shaw and Henniker by including natural and synthetic polymers, and included alterations in the sequence depending on surface and environmental conditions. Lists vary somewhat as to the order of some materials. Another triboelectric series based on measuring the triboelectric charge density of materials was proposed by the group of Zhong Lin Wang. The triboelectric charge density of the tested materials was measured with respect to liquid mercury in a glove box under well-defined conditions, with fixed temperature, pressure and humidity. It is known that this approach is too simple and unreliable. There are many cases where there are triangles: material A is positive when rubbed against B, B is positive when rubbed against C, and C is positive when rubbed against A, an issue mentioned by Shaw in 1914. This cannot be explained by a linear series; cyclic series are inconsistent with the empirical triboelectric series. Furthermore, there are many cases where charging occurs with contacts between two pieces of the same material. This has been modelled as a consequence of the electric fields from local bending (flexoelectricity). Work function differences. In all materials there is a positive electrostatic potential from the positive atomic nuclei, partially balanced by a negative electrostatic potential of what can be described as a sea of electrons. The average potential is positive, what is called the "mean inner potential" (MIP). Different materials have different MIPs, depending upon the types of atoms and how close they are. At a surface the electrons also spill out a little into the vacuum, as analyzed in detail by Kohn and Liang. This leads to a dipole at the surface. Combined, the dipole and the MIP lead to a potential barrier for electrons to leave a material which is called the work function. A rationalization of the triboelectric series is that different members have different work functions, so electrons can go from the material with a small work function to one with a large. The potential difference between the two materials is called the Volta potential, also called the "contact potential". Experiments have validated the importance of this for metals and other materials. However, because the surface dipoles vary for different surfaces of any solid the contact potential is not a universal parameter. By itself it cannot explain many of the results which were established in the early 20th century. Electromechanical contributions. Whenever a solid is strained, electric fields can be generated. One process is due to linear strains, and is called piezoelectricity, the second depends upon how rapidly strains are changing with distance (derivative) and is called flexoelectricity. 
Both are established science, and can be both measured and calculated using density functional theory methods. Because flexoelectricity depends upon a gradient it can be much larger at the nanoscale during sliding or contact of asperity between two objects. There has been considerable work on the connection between piezoelectricity and triboelectricity. While it can be important, piezoelectricity only occurs in the small number of materials which do not have inversion symmetry, so it is not a general explanation. It has recently been suggested that flexoelectricity may be very important in triboelectricity as it occurs in all insulators and semiconductors. Quite a few of the experimental results such as the effect of curvature can be explained by this approach, although full details have not as yet been determined. There is also early work from Shaw and Hanstock, and from the group of Daniel Lacks demonstrating that strain matters. Capacitor charge compensation model. An explanation that has appeared in different forms is analogous to charge on a capacitor. If there is a potential difference between two materials due to the difference in their work functions (contact potential), this can be thought of as equivalent to the potential difference across a capacitor. The charge to compensate this is that which cancels the electric field. If an insulating dielectric is in between the two materials, then this will lead to a polarization density formula_0 and a bound surface charge of formula_1, where formula_2 is the surface normal. The total charge in the capacitor is then the combination of the bound surface charge from the polarization and that from the potential. The triboelectric charge from this compensation model has been frequently considered as a key component. If the additional polarization due to strain (piezoelectricity) or bending of samples (flexoelectricity) is included this can explain observations such as the effect of curvature or inhomogeneous charging. Electron and/or ion transfer. There is debate about whether electrons or ions are transferred in triboelectricity. For instance, Harper discusses both possibilities, whereas Vick was more in favor of electron transfer. The debate remains to this day with, for instance, George M. Whitesides advocating for ions, while Diaz and Fenzel-Alexander as well as Laurence D. Marks support both, and others just electrons. Thermodynamic irreversibility. In the latter half of the 20th century the Soviet school led by chemist Boris Derjaguin argued that triboelectricity and the associated phenomenon of triboluminescence are fundamentally irreversible. A similar point of view to Derjaguin's has been more recently advocated by Seth Putterman and his collaborators at the University of California, Los Angeles (UCLA). A proposed theory of triboelectricity as a fundamentally irreversible process was published in 2020 by theoretical physicists Robert Alicki and Alejandro Jenkins. They argued that the electrons in the two materials that slide against each other have different velocities, giving a non-equilibrium state. Quantum effects cause this imbalance to pump electrons from one material to the other. This is a fermionic analog of the mechanism of rotational superradiance originally described by Yakov Zeldovich for bosons. Electrons are pumped in both directions, but small differences in the electronic potential landscapes for the two surfaces can cause net charging. 
Alicki and Jenkins argue that such an irreversible pumping is needed to understand how the triboelectric effect can generate an electromotive force. Humidity. Generally, increased humidity (water in the air) leads to a decrease in the magnitude of triboelectric charging. The size of this effect varies greatly depending on the contacting materials; the decrease in charging ranges from up to a factor of 10 or more to very little humidity dependence. Some experiments find increased charging at moderate humidity compared to extremely dry conditions before a subsequent decrease at higher humidity. The most widespread explanation is that higher humidity leads to more water adsorbed at the surface of contacting materials, leading to a higher surface conductivity. The higher conductivity allows for greater charge recombination as contacts separate, resulting in a smaller transfer of charge. Another proposed explanation for humidity effects considers the case when charge transfer is observed to increase with humidity in dry conditions. Increasing humidity may lead to the formation of water bridges between contacting materials that promote the transfer of ions. Examples. Friction and adhesion from tribocharging. Friction is a retarding force due to different energy dissipation process such as elastic and plastic deformation, phonon and electron excitation, and also adhesion. As an example, in a car or any other vehicle the wheels elastically deform as they roll. Part of the energy needed for this deformation is recovered (elastic deformation), some is not and goes into heating the tires. The energy which is not recovered contributes to the back force, a process called rolling friction. Similar to rolling friction there are energy terms in charge transfer, which contribute to friction. In static friction there is coupling between elastic strains, polarization and surface charge which contributes to the frictional force. In sliding friction, when asperities contact and there is charge transfer, some of the charge returns as the contacts are released, some does not and will contribute to the macroscopically observed friction. There is evidence for a retarding Coulomb force between asperities of different charges, and an increase in the adhesion from contact electrification when geckos walk on water. There is also evidence of connections between jerky (stick–slip) processes during sliding with charge transfer, electrical discharge and x-ray emission. How large the triboelectric contribution is to friction has been debated. It has been suggested by some that it may dominate for polymers, whereas Harper has argued that it is small. Liquids and gases. The generation of static electricity from the relative motion of liquids or gases is well established, with one of the first analyses in 1886 by Lord Kelvin in his water dropper which used falling drops to create an electric generator. Liquid mercury is a special case as it typically acts as a simple metal, so has been used as a reference electrode. More common is water, and electricity due to water droplets hitting surfaces has been documented since the discovery by Philipp Lenard in 1892 of the "spray electrification" or "waterfall effect". This is when falling water generates static electricity either by collisions between water drops or with the ground, leading to the finer mist in updrafts being mainly negatively charged, with positive near the lower surface. It can also occur for sliding drops. 
Another type of charge can be produced during rapid solidification of water containing ions, which is called the "Workman–Reynolds effect". During the solidification the positive and negative ions may not be equally distributed between the liquid and solid. For instance, in thunderstorms this can contribute (together with the waterfall effect) to separation of positive hydrogen ions and negative hydroxide ions, leading to static charge and lightning. A third class is associated with contact potential differences between liquids or gases and other materials, similar to the work function differences for solids. It has been suggested that a triboelectric series for liquids is useful. One difference from solids is that liquids often have charged double layers, and most of the work to date supports the view that ion transfer (rather than electron transfer) dominates for liquids, as first suggested by Irving Langmuir in 1938. Finally, with liquids there can be flow-rate gradients at interfaces, and also viscosity gradients. These can produce electric fields and also polarization of the liquid, a field called electrohydrodynamics. These are analogous to the electromechanical terms for solids, where electric fields can occur due to elastic strains as described earlier. Powders. During commercial powder processing or in natural processes such as dust storms, triboelectric charge transfer can occur. There can be electric fields of up to 160 kV/m with moderate wind conditions, which lead to Coulomb forces of about the same magnitude as gravity. Air does not need to be present: significant charging can occur, for instance, on airless planetary bodies. With pharmaceutical powders and other commercial powders the tribocharging needs to be controlled for quality control of the materials and doses. Static discharge is also a particular hazard in grain elevators owing to the danger of a dust explosion, in places that store explosive powders, and in many other cases. Triboelectric powder separation has been discussed as a method of separating powders, for instance different biopolymers. The principle here is that different degrees of charging can be exploited for electrostatic separation, a general concept for powders. In industry. There are many areas in industry where triboelectricity is known to be an issue. Some examples are: Other examples. While the simple case of stroking a cat is familiar to many, there are other areas in modern technological civilization where triboelectricity is exploited or is a concern: 
[ { "math_id": 0, "text": "\\mathbf P" }, { "math_id": 1, "text": "\\mathbf P \\cdot \\mathbf n" }, { "math_id": 2, "text": "\\mathbf n" } ]
https://en.wikipedia.org/wiki?curid=60933
60933516
Restricted random waypoint model
Simulation of wireless network users In mobility management, the restricted random waypoint model is a random model for the movement of mobile users, similar to the random waypoint model, but where the waypoints are restricted to fall within one of a finite set of sub-domains. It was originally introduced by Blaževic et al. in order to model intercity examples and later defined in a more general setting by Le Boudec et al. Definition. The restricted random waypoint model describes the trajectory of a mobile user in a connected domain formula_0. Given a sequence of locations formula_1 in formula_0, called waypoints, the trajectory of the mobile is defined by traveling from one waypoint formula_2 to the next formula_3 along the shortest path in formula_0 between them. In the restricted setting, the waypoints are restricted to fall within one of a finite set of subdomains formula_4. On the trip between formula_2 and formula_3, the mobile moves at constant speed formula_5, which is sampled from some distribution, usually a uniform distribution. The duration of the formula_6-th trip is thus: formula_7 where formula_8 is the length of the shortest path in formula_0 between formula_9 and formula_10. The mobile may also pause at a waypoint, in which case the formula_6-th trip is a pause at the location of the formula_6-th waypoint, i.e. formula_11. A duration formula_12 is drawn from some distribution formula_13 to indicate the end of the pause. The transition instants formula_14 are the times at which the mobile reaches the formula_6-th waypoint. They are defined as follows: formula_15 The sampling algorithm for the waypoints depends on the phase of the simulation. An initial phase formula_16 is chosen according to some initialization rule, where formula_17 indexes the current sub-domain formula_18, formula_20 indexes the next sub-domain to be visited, formula_19 is the number of trips remaining before the move to it, and formula_21 indicates whether the next trip is a pause or a move. Given phase formula_22, the next phase formula_23 is chosen as follows. If formula_24 then formula_25 is sampled from some distribution and formula_26. Otherwise, a new sub-domain formula_27 is sampled and a number formula_28 of trips to undergo in sub-domain formula_20 is sampled. The new phase is: formula_29. Given a phase formula_22 the waypoint formula_3 is set to formula_2 if formula_30. Otherwise, it is sampled from sub-domain formula_18 if formula_24 and from sub-domain formula_31 if formula_32. Transient and stationary period. In typical simulation models, when the condition for stability is satisfied, simulation runs go through a transient period and converge to the stationary regime. It is important to remove the transients for performing meaningful comparisons of, for example, different mobility regimes. A standard method for avoiding such a bias is to (i) make sure the model used has a stationary regime and (ii) remove the beginning of all simulation runs in the hope that long runs converge to the stationary regime. However, the length of transients may be prohibitively long for even simple mobility models and a major difficulty is to know when the transient ends. An alternative, called "perfect simulation", is to sample the initial simulation state from the stationary regime. There exist algorithms for perfect simulation of the general restricted random waypoint. They are described in "Perfect simulation and stationarity of a class of mobility models" (2005) and a Python implementation is available on GitHub.
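To make the sampling procedure above concrete, here is a minimal Python sketch of the model (it is not the GitHub implementation referenced above). It assumes a convex domain so that shortest paths are straight lines, two hypothetical rectangular sub-domains, uniform waypoint sampling, a uniform speed distribution and an exponential pause distribution; all of these distributional choices are illustrative rather than taken from the original papers.

```python
import random

SUBDOMAINS = [(0, 0, 10, 10), (40, 40, 50, 50)]   # (xmin, ymin, xmax, ymax), hypothetical

def sample_point(box):
    xmin, ymin, xmax, ymax = box
    return (random.uniform(xmin, xmax), random.uniform(ymin, ymax))

def distance(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def simulate(n_trips=20):
    # phase = (i, j, r, p): current sub-domain i, next sub-domain j,
    # trips remaining before moving to j, and pause/move flag for the next trip
    i = random.randrange(len(SUBDOMAINS))
    j = random.randrange(len(SUBDOMAINS))
    r = random.randint(1, 5)
    p = "move"
    M = sample_point(SUBDOMAINS[i])           # initial waypoint M_0
    T = 0.0                                   # transition instant T_0
    for _ in range(n_trips):
        if p == "pause":
            M_next = M                        # pause: stay at the current waypoint
            S = random.expovariate(1.0)       # pause duration drawn from F_pause (illustrative)
        else:
            box = SUBDOMAINS[i] if r > 0 else SUBDOMAINS[j]
            M_next = sample_point(box)        # waypoint from A_i while r > 0, else from A_j
            V = random.uniform(1.0, 2.0)      # constant speed for this trip
            S = distance(M, M_next) / V       # trip duration S_n
        T += S                                # T_{n+1} = T_n + S_n
        if r > 0:
            p = random.choice(["move", "pause"])
            r -= 1                            # phase becomes (i, j, r-1, p')
        else:
            k = random.randrange(len(SUBDOMAINS))
            i, j, r, p = j, k, random.randint(1, 5), "move"   # phase becomes (j, k, r', move)
        M = M_next
        print(f"t = {T:7.2f}  waypoint = ({M[0]:.1f}, {M[1]:.1f})  phase = ({i}, {j}, {r}, {p})")

if __name__ == "__main__":
    simulate()
```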
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "M_0, M_1, ..." }, { "math_id": 2, "text": "M_n" }, { "math_id": 3, "text": "M_{n+1}" }, { "math_id": 4, "text": "A_i \\subset A" }, { "math_id": 5, "text": "V_n" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "S_n = \\frac{d(M_n, M_{n+1})}{V_n}" }, { "math_id": 8, "text": "d(x, y)" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "y" }, { "math_id": 11, "text": "M_{n+1} = M_n" }, { "math_id": 12, "text": "S_n" }, { "math_id": 13, "text": "F_{\\text{pause}}" }, { "math_id": 14, "text": "T_n" }, { "math_id": 15, "text": "\\begin{cases}T_0 \\text{ is chosen by some initialization rule } \\\\ T_{n+1} = T_n + S_n \\end{cases}" }, { "math_id": 16, "text": "I_0 = (i, j, r, p)" }, { "math_id": 17, "text": "i" }, { "math_id": 18, "text": "A_i" }, { "math_id": 19, "text": "r" }, { "math_id": 20, "text": "j" }, { "math_id": 21, "text": "p \\in \\{\\text{pause}, \\text{move}\\}" }, { "math_id": 22, "text": "I_n = (i, j, r, p)" }, { "math_id": 23, "text": "I_{n+1}" }, { "math_id": 24, "text": "r > 0" }, { "math_id": 25, "text": "p'" }, { "math_id": 26, "text": "I_{n+1} = (i, j, r-1, p')" }, { "math_id": 27, "text": "k" }, { "math_id": 28, "text": "r'" }, { "math_id": 29, "text": "I_{n+1} = (j, k, r', \\text{move})" }, { "math_id": 30, "text": "p=\\text{pause}" }, { "math_id": 31, "text": "A_j" }, { "math_id": 32, "text": "r = 0" } ]
https://en.wikipedia.org/wiki?curid=60933516
6093954
Bernays–Schönfinkel class
The Bernays–Schönfinkel class (also known as the Bernays–Schönfinkel–Ramsey class) of formulas, named after Paul Bernays, Moses Schönfinkel and Frank P. Ramsey, is a fragment of first-order logic formulas where satisfiability is decidable. It is the set of sentences that, when written in prenex normal form, have an formula_0 quantifier prefix and do not contain any function symbols. Ramsey proved that, if formula_1 is a formula in the Bernays–Schönfinkel class with one free variable, then either formula_2 is finite, or formula_3 is finite. This class of logic formulas is also sometimes referred to as effectively propositional (EPR), since it can be effectively translated into propositional logic formulas by a process of grounding or instantiation. The satisfiability problem for this class is NEXPTIME-complete. Applications. Efficient algorithms for deciding satisfiability of EPR have been integrated into SMT solvers.
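As a small illustration of the grounding idea mentioned above, the following Python sketch decides a hypothetical EPR sentence by Skolemising its existential prefix into fresh constants, instantiating the universal quantifier over those constants, and brute-forcing the resulting propositional formula. The sentence, the predicate name and the constants are invented for the example; a real SMT solver would of course handle this far more efficiently.

```python
from itertools import product

# Example EPR sentence (no function symbols, exists* forall* prefix):
#   exists x exists y . forall z .  R(x, y) and ( R(z, x) -> R(z, y) )

constants = ["c1", "c2"]          # Skolem constants introduced for the two existentials

def matrix(x, y, z, val):
    """Quantifier-free matrix, evaluated under a truth assignment `val`
    mapping ground atoms such as ('R', 'c1', 'c2') to booleans."""
    return val[("R", x, y)] and ((not val[("R", z, x)]) or val[("R", z, y)])

# The Herbrand universe is just the constants, so the ground atoms are all R(a, b).
atoms = [("R", a, b) for a in constants for b in constants]

def satisfiable():
    x, y = constants                                # witnesses for the existentials
    for bits in product([False, True], repeat=len(atoms)):
        val = dict(zip(atoms, bits))
        if all(matrix(x, y, z, val) for z in constants):   # ground every "forall z"
            return val
    return None

model = satisfiable()
print("satisfiable:", model is not None)
```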
[ { "math_id": 0, "text": "\\exists^*\\forall^*" }, { "math_id": 1, "text": "\\phi" }, { "math_id": 2, "text": "\\{x \\in \\N : \\phi(x)\\} " }, { "math_id": 3, "text": "\\{x \\in \\N : \\neg \\phi(x)\\} " } ]
https://en.wikipedia.org/wiki?curid=6093954
6093957
Neutron reflectometry
Neutron reflectometry is a neutron diffraction technique for measuring the structure of thin films, similar to the often complementary techniques of X-ray reflectivity and ellipsometry. The technique provides valuable information over a wide variety of scientific and technological applications including chemical aggregation, polymer and surfactant adsorption, structure of thin film magnetic systems, biological membranes, etc. History. Neutron reflectometery emerged as a new field in the 1980s, after the discovery of giant magnetoresistance in antiferromagnetically-coupled multilayered films. Technique. The technique involves shining a highly collimated beam of neutrons onto an extremely flat surface and measuring the intensity of reflected radiation as a function of angle or neutron wavelength. The exact shape of the reflectivity profile provides detailed information about the structure of the surface, including the thickness, density, and roughness of any thin films layered on the substrate. Neutron reflectometry is most often made in specular reflection mode, where the angle of the incident beam is equal to the angle of the reflected beam. The reflection is usually described in terms of a momentum transfer vector, denoted formula_0, which describes the change in momentum of a neutron after reflecting from the material. Conventionally the formula_1 direction is defined to be the direction normal to the surface, and for specular reflection, the scattering vector has only a formula_1-component. A typical neutron reflectometry plot displays the reflected intensity (relative to the incident beam) as a function of the scattering vector: formula_2 where formula_3 is the neutron wavelength, and formula_4 is the angle of incidence. The Abeles matrix formalism or the Parratt recursion can be used to calculate the specular signal arising from the interface. Off-specular reflectometry gives rise to diffuse scattering and involves momentum transfer within the layer, and is used to determine lateral correlations within the layers, such as those arising from magnetic domains or in-plane correlated roughness. The wavelength of the neutrons used for reflectivity are typically on the order of 0.2 to 1 nm (2 to 10 Å). This technique requires a neutron source, which may be either a research reactor or a spallation source (based on a particle accelerator). Like all neutron scattering techniques, neutron reflectometry is sensitive to contrast arising from different nuclei (as compared to electron density, which is measured in x-ray scattering). This allows the technique to differentiate between various isotopes of elements. Neutron reflectometry measures the neutron scattering length density (SLD) and can be used to accurately calculate material density if the atomic composition is known. Comparison to other reflectometry techniques. Although other reflectivity techniques (in particular optical reflectivity, x-ray reflectometry) operate using the same general principles, neutron measurements are advantageous in a few significant ways. Most notably, since the technique probes nuclear contrast, rather than electron density, it is more sensitive for measuring some elements, especially lighter elements (hydrogen, carbon, nitrogen, oxygen, etc.). 
Sensitivity to isotopes also allows contrast to be greatly (and selectively) enhanced for some systems of interest using isotopic substitution, and multiple experiments that differ only by isotopic substitution can be used to resolve the phase problem that is general to scattering techniques. Finally, neutrons are highly penetrating and typically non-perturbing: which allows for great flexibility in sample environments, and the use of delicate sample materials (e.g., biological specimens). By contrast x-ray exposure may damage some materials, and laser light can modify some materials (e.g. photoresists). Also, optical techniques may include ambiguity due to optical anisotropy (birefringence), which complementary neutron measurements can resolve. Dual polarisation interferometry is one optical method which provides analogous results to neutron reflectometry at comparable resolution although the underpinning mathematical model is somewhat simpler, i.e. it can only derive a thickness (or birefringence) for a uniform layer density. Disadvantages of neutron reflectometry include the higher cost of the required infrastructure, the fact that some materials may become radioactive upon exposure to the beam, and insensitivity to the chemical state of constituent atoms. Moreover, the relatively lower flux and higher background of the technique (when compared to x-ray reflectivity) limit the maximum value of formula_0 that can be probed (and hence the measurement resolution).
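As an illustration of the specular formalism mentioned in the Technique section, the sketch below implements the Parratt recursion for a layered sample and returns the reflectivity as a function of the momentum transfer q_z. The layer parameters in the usage example (a nickel-like film on a silicon-like substrate) and the sign and branch conventions are illustrative assumptions, not a reference implementation of any particular analysis package.

```python
import numpy as np

def parratt_reflectivity(qz, sld, thickness):
    """Specular reflectivity |R|^2 via the Parratt recursion.

    qz        -- momentum-transfer values (1/Angstrom)
    sld       -- scattering length densities from the incident medium (index 0)
                 down to the semi-infinite substrate (last), in 1/Angstrom^2
    thickness -- thicknesses of the internal layers only (Angstrom),
                 so len(thickness) == len(sld) - 2
    """
    qz = np.asarray(qz, dtype=complex)
    # z-component of the wavevector in each layer, relative to the incident medium
    kz = [np.sqrt((qz / 2) ** 2 - 4 * np.pi * (rho - sld[0])) for rho in sld]
    R = np.zeros_like(qz)                       # no wave returns from the substrate
    for n in range(len(sld) - 2, -1, -1):       # walk the interfaces from the bottom up
        r = (kz[n] - kz[n + 1]) / (kz[n] + kz[n + 1])      # Fresnel coefficient
        phase = 1.0 if n == len(sld) - 2 else np.exp(2j * kz[n + 1] * thickness[n])
        R = (r + R * phase) / (1 + r * R * phase)
    return np.abs(R) ** 2

# Illustrative example: a 100 Angstrom nickel-like film on a silicon-like substrate.
q = np.linspace(0.005, 0.2, 400)
refl = parratt_reflectivity(q, sld=[0.0, 9.4e-6, 2.07e-6], thickness=[100.0])
print(refl[:3])   # close to 1 below the critical edge, with Kiessig fringes above it
```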
[ { "math_id": 0, "text": "q_z" }, { "math_id": 1, "text": "z" }, { "math_id": 2, "text": " q_z = \\frac{4\\pi}{\\lambda}\\sin ( \\theta )" }, { "math_id": 3, "text": "\\lambda" }, { "math_id": 4, "text": "\\theta " } ]
https://en.wikipedia.org/wiki?curid=6093957
60944280
Scanning helium microscopy
The scanning helium microscope (SHeM) is a novel form of microscopy that uses low-energy (5–100 meV) neutral helium atoms to image the surface of a sample without any damage to the sample caused by the imaging process. Since helium is inert and neutral, it can be used to study delicate and insulating surfaces. Images are formed by rastering a sample underneath an atom beam and monitoring the flux of atoms that are scattered into a detector at each point. The technique is different from a scanning helium ion microscope, which uses charged helium ions that can cause damage to a surface. Motivation. Microscopes can be divided into two general classes: those that illuminate the sample with a beam, and those that use a physical scanning probe. Scanning probe microscopies raster a small probe across the surface of a sample and monitor the interaction of the probe with the sample. The resolution of scanning probe microscopies is set by the size of the interaction region between the probe and the sample, which can be sufficiently small to allow atomic resolution. Using a physical tip (e.g. AFM or STM) does have some disadvantages though including a reasonably small imaging area and difficulty in observing structures with a large height variation over a small lateral distance. Microscopes that use a beam have a fundamental limit on the minimum resolvable feature size, formula_0,which is given by the Abbe diffraction limit, formula_1 where formula_2 is the wavelength of the probing wave, formula_3 is the refractive index of the medium the wave is travelling in and the wave is converging to a spot with a half-angle of formula_4. While it is possible to overcome the diffraction limit on resolution by using a near-field technique, it is usually quite difficult. Since the denominator of the above equation for the Abbe diffraction limit will be approximately two at best, the wavelength of the probe is the main factor in determining the minimum resolvable feature, which is typically about 1 μm for optical microscopy. To overcome the diffraction limit, a probe that has a smaller wavelength is needed, which can be achieved using either light with a higher energy, or through using a matter wave. X-rays have a much smaller wavelength than visible light, and therefore can achieve superior resolutions when compared to optical techniques. Projection X-ray imaging is conventionally used in medical applications, but high resolution imaging is achieved through scanning transmission X-ray microscopy (STXM). By focussing the X-rays to a small point and rastering across a sample, a very high resolution can be obtained with light. The small wavelength of X-rays comes at the expense of a high photon energy, meaning that X-rays can cause radiation damage. Additionally, X-rays are weakly interacting, so they will primarily interact with the bulk of the sample, making investigations of a surface difficult. Matter waves have a much shorter wavelength than visible light and therefore can be used to study features below about 1 μm. The advent of electron microscopy opened up a variety of new materials that could be studied due to the enormous improvement in the resolution when compared to optical microscopy. 
The de Broglie wavelength, formula_2, of a matter wave in terms of its kinetic energy, formula_5, and particle mass, formula_6, is given by formula_7 Hence, for an electron beam to resolve atomic structure, the wavelength of the matter wave would need be at least formula_2 = 1 Å, and therefore the beam energy would need to be given by formula_5 &gt; 100 eV. Since electrons are charged, they can be manipulated using electromagnetic optics to form extremely small spot sizes on a surface. Due to the wavelength of an electron beam being low, the Abbe diffraction limit can be pushed below atomic resolution and electromagnetic lenses can be used to form very intense spots on the surface of a material. The optics in a scanning electron microscope usually require the beam energy to be in excess of 1 keV to produce the best-quality electron beam. The high energy of the electrons leads to the electron beam interacting not only with the surface of a material, but forming a tear-drop interaction volume underneath the surface. While the spot size on the surface can be extremely low, the electrons will travel into the bulk and continue interacting with the sample. Transmission electron microscopy avoids the bulk interaction by only using thin samples, however usually the electron beam interacting with the bulk will limit the resolution of a scanning electron microscope. The electron beam can also damage the material, destroying the structure that is to be studied due to the high beam energy. Electron beam damage can occur through a variety of different processes that are specimen-specific. Examples of beam damage include the breaking of bonds in a polymer, which changes the structure, and knock-on damage in metals that creates a vacancy in the lattice, which changes to the surface chemistry. Additionally, the electron beam is charged, which means that the surface of the sample needs to be conducting to avoid artefacts of charge accumulation in images. One method to mitigate the issue when imaging insulating surfaces is to use an environmental scanning electron microscope (ESEM). Therefore, in general, electrons are often not particularly suited to studying delicate surfaces due to the high beam energy and lack of exclusive surface sensitivity. Instead, an alternative beam is required for the study of surfaces at low energy without disturbing the structure. Given the equation for the de Broglie wavelength above, the same wavelength of a beam can be achieved at lower energies by using a beam of particles that have a higher mass. Thus, if the objective were to study the surface of a material at a resolution that is below that which can be achieved with optical microscopy, it may be appropriate to use atoms as a probe instead. While neutrons can be used as a probe, they are weakly interacting with matter and can only study the bulk structure of a material. Neutron imaging also requires a high flux of neutrons, which usually can only be provided by a nuclear reactor or particle accelerator. A beam of helium atoms with a wavelength formula_2 = 1 Å has an energy of formula_8 20 meV, which is about the same as the thermal energy. Using particles of a higher mass than that of an electron means that it is possible to obtain a beam with a wavelength suitable to probe length scales down to the atomic level with a much lower energy. Thermal energy helium atom beams are exclusively surface sensitive, giving helium scattering an advantage over other techniques such as electron and x-ray scattering for surface studies. 
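A quick numerical check of the trade-off described above, using the de Broglie relation with rounded CODATA constants typed in directly; the 100 eV and 20 meV energies are the values quoted in the text.

```python
h    = 6.62607015e-34     # Planck constant, J s
eV   = 1.602176634e-19    # joules per electronvolt
m_e  = 9.1093837015e-31   # electron mass, kg
m_He = 6.6464731e-27      # helium-4 atomic mass, kg

def de_broglie(m, E_joule):
    """lambda = h / sqrt(2 m E)"""
    return h / (2 * m * E_joule) ** 0.5

# An electron needs roughly 100 eV to reach a ~1 Angstrom wavelength ...
print(de_broglie(m_e, 100 * eV) * 1e10, "Angstrom")     # about 1.2 A
# ... whereas a helium atom gets there at thermal energies of ~20 meV.
print(de_broglie(m_He, 0.020 * eV) * 1e10, "Angstrom")  # about 1.0 A
```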
For the beam energies that are used, the helium atoms will have classical turning points 2–3 Å away from the surface atom cores. The turning point is well above the surface atom cores, meaning that the beam will only interact with the outermost electrons. History. The first discussion of obtaining an image of a surface using atoms was by King and Bigas, who showed that an image of a surface can be obtained by heating a sample and monitoring the atoms that evaporate from the surface. King and Bigas suggested that it could be possible to form an image by scattering atoms from the surface, though it was some time before this was demonstrated. The idea of imaging with atoms instead of light was subsequently widely discussed in the literature. The initial approach to producing a helium microscope assumed that a focussing element is required to produce a high intensity beam of atoms. An early approach was to develop an atomic mirror, which is appealing since the focussing is independent of the velocity distribution of the incoming atoms. However, the material challenges of producing an appropriate surface that is macroscopically curved and defect-free on an atomic length scale have so far proved too great. Metastable atoms are atoms that have been excited out of the ground state, but remain in an excited state for a significant period of time. Microscopy using metastable atoms has been shown to be possible, where the metastable atoms release stored internal energy into the surface, releasing electrons that provide information on the electronic structure. The kinetic energy of the metastable atoms means that only the surface electronic structure is probed, but the large energy exchange when the metastable atom de-excites will still perturb delicate sample surfaces. The first two-dimensional neutral helium images were obtained using a conventional Fresnel zone plate by Koch et al. in a transmission setup. Helium will not pass through a solid material, so a large change in the measured signal is obtained when a sample is placed between the source and the detector. By maximising the contrast and using transmission mode, it was much easier to verify the feasibility of the technique. However, the setup used by Koch et al. with a zone plate did not produce a high enough signal to observe the reflected signal from the surface at the time. Nevertheless, the focussing obtained with a zone plate offers the potential for improved resolution due to the small beam spot size in the future. Research into neutral helium microscopes that use a Fresnel zone plate is an active area in Holst’s group at the University of Bergen. Since using a zone plate proved difficult due to the low focussing efficiency, alternative methods for forming a helium beam to produce images with atoms were explored. Recent efforts have avoided focussing elements and instead directly collimate a beam with a pinhole. The lack of atom optics means that the beam width will be significantly larger than in an electron microscope. The first published demonstration of a two-dimensional image formed by helium reflecting from the surface was by Witham and Sánchez, who used a pinhole to form the helium beam. 
A small pinhole is placed very close to a sample and the helium scattered into a large solid angle is fed to a detector. Images are collected by moving the sample around underneath the beam and monitoring how the scattered helium flux changes. In parallel to the work by Witham and Sánchez, a proof of concept machine named the scanning helium microscope (SHeM) was being developed in Cambridge in collaboration with Dastoor's group from the University of Newcastle. The approach that was adopted was to simplify previous attempts that involved an atom mirror by using a pinhole, but to still use a conventional helium source to produce a high quality beam. Other differences from the Witham and Sánchez design include using a larger sample to pinhole distance, so that a larger variety of samples can be used, and using a smaller collection solid angle, so that it may be possible to observe more subtle contrast. These changes also reduced the total flux in the detector, meaning that higher efficiency detectors are required (which is in itself an active area of research). Image formation process. The atomic beam is formed through a supersonic expansion, which is a standard technique used in helium atom scattering. The centreline of the gas is selected by a skimmer to form an atom beam with a narrow velocity distribution. The gas is then further collimated by a pinhole to form a narrow beam, which is typically between 1 and 10 μm. The use of a focusing element (such as a zone plate) allows beam spot sizes below 1 μm to be achieved, but currently still comes with low signal intensity. The gas then scatters from the surface and is collected into a detector. In order to measure the flux of the neutral helium atoms, they must first be ionised. The inertness of helium that makes it a gentle probe means that it is difficult to ionise and therefore reasonably aggressive electron bombardment is typically used to create the ions. A mass spectrometer setup is then used to select only the helium ions for detection. Once the flux from a specific part of the surface is collected, the sample is moved underneath the beam to generate an image. By obtaining the value of the scattered flux across a grid of positions, the values can then be converted to an image. The observed contrast in helium images has typically been dominated by the variation in topography of the sample. Since the wavelength of the atom beam is small, surfaces appear extremely rough to the incoming atom beam. Therefore, the atoms are diffusely scattered and roughly follow Knudsen's law (the atom equivalent of Lambert's cosine law in optics). However, more recent work has begun to see divergence from diffuse scattering due to effects such as diffraction and chemical contrast effects. The exact mechanisms for forming contrast in a helium microscope remain an active field of research. Most cases have some complex combination of several contrast mechanisms, making it difficult to disentangle the different contributions. Combinations of images from multiple perspectives allow stereophotogrammetry to produce partial three dimensional images, especially valuable for biological samples subject to degradation in electron microscopes. Optimal configurations. The optimal configurations of scanning helium microscopes are geometrical configurations that maximise the intensity of the imaging beam within a given lateral resolution and under certain technological constraints. 
When designing a scanning helium microscope, scientists strive to maximise the intensity of the imaging beam while minimising its width. The reason behind this is that the beam's width gives the resolution of the microscope while its intensity is proportional to its signal to noise ratio. Due to their neutrality and high ionisation energy, neutral helium atoms are hard to detect. This makes high-intensity beams a crucial requirement for a viable scanning helium microscope. In order to generate a high-intensity beam, scanning helium microscopes are designed to generate a supersonic expansion of the gas into vacuum, that accelerates neutral helium atoms to high velocities. Scanning helium microscopes exist in two different configurations: the pinhole configuration and the zone plate configuration. In the pinhole configuration, a small opening (the pinhole) selects a section of the supersonic expansion far away from its origin, which has previously been collimated by a skimmer (essentially, another small pinhole). This section then becomes the imaging beam. In the zone plate configuration a Fresnel zone plate focuses the atoms coming from a skimmer into a small focal spot. Each of these configurations have different optimal designs, as they are defined by different optics equations. Pinhole configuration. For the pinhole configuration the width of the beam (which we aim to minimise) is largely given by geometrical optics. The size of the beam at the sample plane is given by the lines connecting the skimmer edges with the pinhole edges. When the Fresnel number is very small (formula_9), the beam width is also affected by Fraunhofer diffraction (see equation below). formula_10 In this equation formula_11 is the Full Width at Half Maximum of the beam, formula_12 is the geometrical projection of the beam and formula_13 is the Airy diffraction term. formula_4 is the Heaviside step function used here to indicate that the presence of the diffraction term depends on the value of the Fresnel number. Note that there are variations of this equation depending on what is defined as the "beam width" (for details compare and ). Due to the small wavelength of the helium beam, the Fraunhofer diffraction term can usually be omitted. The intensity of the beam (which we aim to maximise) is given by the following equation (according to the Sikora and Andersen model): formula_14 Where formula_15 is the total intensity stemming from the supersonic expansion nozzle (taken as a constant in the optimisation problem), formula_16 is the radius of the pinhole, "S" is the speed ratio of the beam, formula_17 is the radius of the skimmer, formula_18 is the radius of the supersonic expansion quitting surface (the point in the expansion from which atoms can be considered to travel in a straight line), formula_19is the distance between the nozzle and the skimmer and formula_20 is the distance between the skimmer and the pinhole. There are several other versions of this equation that depend on the intensity model, but they all show a quadratic dependency on the pinhole radius (the bigger the pinhole, the more intensity) and an inverse quadratic dependency with the distance between the skimmer and the pinhole (the more the atoms spread, the less intensity). 
By combining the two equations shown above, one can obtain that, for a given beam width formula_11 in the geometrical optics regime, the following values correspond to intensity maxima: formula_21 Here, formula_22 represents the working distance of the microscope and formula_23 is a constant that stems from the definition of the beam width. Note that both equations are given with respect to the distance between the skimmer and the pinhole, "a". The global maximum of intensity can then be obtained numerically by replacing these values in the intensity equation above. In general, smaller skimmer radii coupled with smaller distances between the skimmer and the pinhole are preferred, leading in practice to the design of increasingly smaller pinhole microscopes. Zone plate configuration. The zone plate microscope uses a zone plate (which acts roughly like a classical lens) instead of a pinhole to focus the atom beam into a small focal spot. This means that the beam width equation changes significantly (see below). formula_24 Here, formula_25 is the zone plate magnification and formula_26 is the width of the smallest zone. Note the presence of chromatic aberrations (formula_27). The approximation sign indicates the regime in which the distance between the zone plate and the skimmer is much bigger than its focal length. The first term in this equation is similar to the geometric contribution formula_12 in the pinhole case: a bigger zone plate (taking all other parameters constant) corresponds to a bigger focal spot size. The third term differs from the pinhole configuration optics as it includes a quadratic relation with the skimmer size (which is imaged through the zone plate) and a linear relation with the zone plate magnification, which will at the same time depend on its radius. The equation to maximise, the intensity, is the same as the pinhole case with the substitution formula_28. By substitution of the magnification equation: formula_29 where formula_2 is the average de-Broglie wavelength of the beam. Taking a constant formula_26, which should be made equal to the smallest achievable value, the maxima of the intensity equation with respect to the zone plate radius and the skimmer-zone plate distance formula_20 can be obtained analytically. The derivative of the intensity with respect to the zone plate radius can be reduced to the following cubic equation (once it has been set equal to zero): formula_30 Here some groupings are used: formula_31 is a constant that gives the relative size of the smallest aperture of the zone plate compared with the average wavelength of the beam and formula_32 is the modified beam width, used throughout the derivation to avoid explicitly operating with the constant Airy term: formula_33. This cubic equation is obtained under a series of geometrical assumptions and has a closed-form analytical solution that can be consulted in the original paper or obtained through any modern-day algebra software. The practical consequence of this equation is that zone plate microscopes are optimally designed when the distances between the components are small, and the radius of the zone plate is also small. This goes in line with the results obtained for the pinhole configuration, and has as its practical consequence the design of smaller scanning helium microscopes.
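To make the pinhole-configuration optimisation concrete, the sketch below evaluates the closed-form optimal skimmer and pinhole radii quoted above and then scans the skimmer–pinhole distance numerically for the intensity maximum, following the Sikora and Andersen intensity expression as written in the text. All source parameters (quitting-surface radius, nozzle–skimmer distance, speed ratio, working distance, target beam width) are invented for illustration, so the numbers produced are not predictions for any real instrument.

```python
import math

K   = 2 * math.sqrt(2 * math.log(2) / 3)   # constant from the beam-width definition
Phi = 5e-6        # target beam width (FWHM), m           (hypothetical)
W_D = 3e-3        # working distance, m                   (hypothetical)
R_F = 10e-3       # radius of the quitting surface, m     (hypothetical)
x_S = 5e-3        # nozzle-skimmer distance, m            (hypothetical)
S   = 100.0       # speed ratio of the supersonic beam    (hypothetical)
I_0 = 1.0         # total source intensity, arbitrary units

def intensity(a):
    """Intensity for the optimal radii at skimmer-pinhole distance a."""
    r_S  = Phi * a / (2 * W_D * K)             # optimal skimmer radius
    r_ph = Phi * a / (2 * K * (a + W_D))       # optimal pinhole radius
    geom = r_ph ** 2 / (R_F + a) ** 2
    arg  = S * r_S * (R_F + a) / (R_F * (R_F - x_S + a))
    return I_0 * geom * (1 - math.exp(-arg ** 2))

# Crude grid search over the skimmer-pinhole distance (1 mm to 0.5 m).
best = max((intensity(a), a) for a in [i * 1e-3 for i in range(1, 500)])
print("best intensity %.3e at a = %.3f m" % best)
```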
[ { "math_id": 0, "text": "d_\\text{A}" }, { "math_id": 1, "text": "d_\\text{A}=\\frac{\\lambda}{2n\\sin{\\theta}}," }, { "math_id": 2, "text": "\\lambda" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\theta" }, { "math_id": 5, "text": "E" }, { "math_id": 6, "text": "m" }, { "math_id": 7, "text": "\\lambda = \\frac{h}{\\sqrt{2mE}}." }, { "math_id": 8, "text": "E\\approx" }, { "math_id": 9, "text": "F\\ll1" }, { "math_id": 10, "text": "\\Phi = 2\\sqrt{2\\ln 2/3}\\sqrt{\\delta^2+3\\sigma_A^2(1-\\theta(F))}." }, { "math_id": 11, "text": "\\Phi" }, { "math_id": 12, "text": "\\delta" }, { "math_id": 13, "text": "\\sigma_A" }, { "math_id": 14, "text": "I = I_0 \\frac{r_{ph}^2}{(R_{F}+a)^2}\\left(1-\\exp\\left[-S^2\\left(\\frac{r_S(R_F+a)}{R_F(R_F-x_S+a)}\\right) ^2\\right]\\right)." }, { "math_id": 15, "text": "I_0" }, { "math_id": 16, "text": "r_{ph}" }, { "math_id": 17, "text": "r_S" }, { "math_id": 18, "text": "R_F" }, { "math_id": 19, "text": "x_S" }, { "math_id": 20, "text": "a" }, { "math_id": 21, "text": "r_S^{max}=\\frac{\\Phi a}{2W_D K}, \\qquad r_{ph}^{max}=\\frac{\\Phi a}{2K(a+W_D)}." }, { "math_id": 22, "text": "W_D" }, { "math_id": 23, "text": "K=2\\sqrt{2\\ln 2/3}" }, { "math_id": 24, "text": "\\Phi = K\\sqrt{\\sigma_{cm}^2+\\sigma_A^2+\\left(\\frac{M r_S}{\\sqrt{3}}\\right)^2} \\sim K\\sqrt{\\left(\\frac{r_{ZP}}{S\\sqrt{2}}\\right)^2+0.42\\Delta r+\\left(\\frac{M r_S}{\\sqrt{3}}\\right)^2 }." }, { "math_id": 25, "text": "M" }, { "math_id": 26, "text": "\\Delta r" }, { "math_id": 27, "text": "\\sigma_{cm}" }, { "math_id": 28, "text": "r_{ph}\\leftrightarrow r_{ZP}" }, { "math_id": 29, "text": "M=\\frac{f}{a-f}=\\frac{2r_{ZP}\\Delta r}{\\lambda(a-2r_{ZP}\\Delta r/\\lambda)}," }, { "math_id": 30, "text": "a^3+2a^2\\left(R_F-\\sqrt{3\\Gamma}r_{ZP}\\right)+a R_F(R_F-4r_{ZP}\\sqrt{3\\Gamma}) = r_{ZP}\\sqrt{3\\Gamma})R_F^2\\left[\\frac{2S^2\\Phi'^2+r_{ZP}^2(\\Gamma-1)}{S^2\\Phi'^2-0.5r_{ZP}^2}\\right]." }, { "math_id": 31, "text": "\\Gamma" }, { "math_id": 32, "text": "\\Phi'" }, { "math_id": 33, "text": "\\Phi'^2=\\sigma_{cm}^2+\\left( \\frac{M r_S}{\\sqrt{3}}\\right)^2" } ]
https://en.wikipedia.org/wiki?curid=60944280
609487
Gelfond–Schneider theorem
On the transcendence of a large class of numbers In mathematics, the Gelfond–Schneider theorem establishes the transcendence of a large class of numbers. History. It was originally proved independently in 1934 by Aleksandr Gelfond and Theodor Schneider. Statement. If "a" and "b" are complex algebraic numbers with "a" formula_0 and "b" not rational, then any value of "a^b" is a transcendental number. The requirement that "a" be algebraic cannot be dropped; for example: formula_1 Here, the base "a" is √2^√2, which (as proven by the theorem itself) is transcendental rather than algebraic, so the theorem does not apply. Similarly, if "a" = 3 and "b" = (log 2)/(log 3), which is transcendental, then "a^b" = 2 is algebraic. A characterization of the values for "a" and "b" which yield a transcendental "a^b" is not known. Corollaries. The transcendence of the following numbers follows immediately from the theorem: Applications. The Gelfond–Schneider theorem answers Hilbert's seventh problem in the affirmative.
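The numbers discussed above are easy to evaluate numerically (which of course only illustrates their decimal values, not their transcendence). A standard-library Python check of the quoted expansions:

```python
import math

sqrt2 = math.sqrt(2)

print((sqrt2 ** sqrt2) ** sqrt2)          # (sqrt(2)^sqrt(2))^sqrt(2) = 2, up to rounding
print(3 ** (math.log(2) / math.log(3)))   # a = 3, b = log2/log3 gives a^b = 2
print(math.exp(math.pi))                  # e^pi = 23.14069... (Gelfond's constant)
print((1j ** 1j).real)                    # i^i  = e^(-pi/2) = 0.20787957...
```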
[ { "math_id": 0, "text": "\\not\\in \\{0,1\\}" }, { "math_id": 1, "text": "{\\left(\\sqrt{2}^{\\sqrt{2}}\\right)}^{\\sqrt{2}} = \\sqrt{2}^{\\sqrt{2} \\cdot \\sqrt{2}} = \\sqrt{2}^2 = 2." }, { "math_id": 2, "text": "|a-1|_p<1" }, { "math_id": 3, "text": "|b-1|_p<1," }, { "math_id": 4, "text": "(\\log_p a)/(\\log_p b)" }, { "math_id": 5, "text": "2^{\\sqrt{2}}" }, { "math_id": 6, "text": "\\sqrt{2}^{\\sqrt{2}}." }, { "math_id": 7, "text": "e^{\\pi} = \\left( e^{i \\pi} \\right)^{-i} = (-1)^{-i} = 23.14069263 \\ldots" }, { "math_id": 8, "text": " i^i = \\left( e^{\\frac{i \\pi}{2}} \\right)^i = e^{-\\frac{\\pi}{2}} = 0.207879576 \\ldots" } ]
https://en.wikipedia.org/wiki?curid=609487
6095100
FIFA World Ranking system (1999–2006)
The 1999–2006 FIFA men's ranking system was a calculation technique previously used by FIFA for ranking men's national teams in association football. The ranking system was introduced by FIFA in 1999, as an update to an earlier system, and was replaced after the 2006 World Cup with a simplified system. The rankings were fundamentally the same as a league system, though on a much larger, and more complex scale. Each team could potentially win a certain number of points in each match, though the number of points awarded in a league system depends solely on the result of the match, in the FIFA rankings far more had to be taken into account, as every team does not play all of the other teams home and away every season, as in most league systems. After the awarding of points, the teams were then organized into descending order by number of points, with the team with the most being the highest ranked. The points allocated did not depend solely on whether a team wins, loses or draws their match, but also on the importance of the match and the strength of the opponent. A win over a weak opponent resulted in fewer points being awarded than a win over a much stronger one. This meant that a match would not result in the two or three points for a win and one for a draw, as is standard in most national league competitions. The calculation was more complex since it had to incorporate the other aforementioned factors. One of the changes that was introduced when the ranking system was revised in 1999 was dubbed the "scaling up", where the points on offer for a match were roughly multiplied by ten, with the addition of more factors. In the 1999–2006 system teams could receive between zero and thirty points for a single match, and the leaders of the rankings had over eight hundred points. Overview. The rankings were intended by FIFA to give a fair ranking of all FIFA member associations' senior men's national teams. For the ranking all matches, their scores and importance were all recorded, and were used in the calculation procedure. Only matches for the senior men's national team were included – separate ranking systems were used for other representative national sides, such as the women's and junior teams, for example the FIFA Women's World Rankings. FIFA did not use the same formula to determine its rankings for women's football. The women's rankings were based on a procedure which was a simplified version of the Football Elo Ratings. For the purposes of calculating the importance of matches, each match was divided into one of six categories. Competitions that were not endorsed by the appropriate continental association of FIFA were counted as friendlies. Each category was given appropriate weighting in the calculation in order to correctly include the importance. The six categories were: A computer program was used to calculate the rankings. Points were awarded according to the following criteria: In order to try to remove the obvious advantage of having more matches, only the best seven matches each year were taken into account, as seven was the average number of matches a team played per year. Older matches were given diminishing importance within the calculation, in order to reward teams' most recent form, so the calculations only took into account teams' performances over the previous eight years. At the end of each season two prizes were awarded by FIFA; Team of the Year and Best Mover of the year. Winning, drawing or losing. 
In any football ranking system, a win will bring more points than a draw or a loss. Until July 2006, however, FIFA believed that awarding points simply on the basis of win, draw or loss, would not meet the requirements of a reliable and accurate world ranking system. The calculations took into account the relative strengths of the two teams. This resulted in more points being awarded for beating a stronger opponent than for beating a relatively weak one. It also enabled weak teams to earn points despite a defeat if they managed to play well (i.e. they scored goals, or there was low margin of defeat), though this was a small number and did not secure as many points as the winning team. In the event of a match being decided on penalties, the winners received the correct points for the victory. The losers received points for the draw which they earned in normal time. Number of goals. When calculating the points, the number of goals was taken into consideration, and the distribution of the points between the two teams was also affected by their relative strengths (i.e. the lower ranked a team was in comparison to its opponent, the more points it received for a goal scored), and as well as points being given for goals scored, they were deducted from the total for conceding goals. In order to encourage more attacking football, points given for goals scored were weighted far more heavily than the deduction as a result of conceding, though most teams were more concerned with the tournament or match at hand than their position in the world rankings. When a match was decided on penalties, only those goals scored in playing time were included in the total. To prevent "overweighing" goals, and huge numbers of points being given in runaway victories, far more weighting was attached to the initial goal by each team, and progressively fewer points for any subsequent ones. This was done on the principle that the goals scored were important but the most important factor was the win or loss, as in normal championship games. Home and away games. To allow for the extra handicap incurred by playing away from home, a small bonus of three points per match was awarded to the away team. For tournaments played on neutral territory, but with a home team, such a World Cup Finals, there were no bonus points given. Status of a match. The relative game importance was also considered when calculating the points. The method for incorporating this into the totals points allocation was by multiplying the match points by a predetermined weighting. These factors were: Regional strength factors. Due to significant differences in national team strengths between continents, weighting factors were worked out each year for each confederation, based the member teams of the confederation's performances in intercontinental encounters and competitions. Not all intercontinental matches were taken into account, only matches between the strongest 25 percent of teams from each continent, with a minimum of five teams from each continent considered. This averted errors that could be caused by considering matches where relatively strong teams from one confederation defeated weak teams from another. The weightings were applied in the form of multiplication factors for teams from the same continent. If teams from two different confederations were involved in one match then the factor applied was the average of the two continental weightings. For 2005, weighting factors ranged from 1.00 for UEFA teams to 0.93 for OFC teams. Summary. 
Based on the above considerations, the total number of points credited to a team after a match depended on the following criteria: Where: formula_0 The number of points for a win, draw or loss, as well as for the number of goals scored or conceded, was dependent on the strength of the opponent. In order not to punish a lack of success too severely, a negative points total was rounded up to zero. These examples have also been used to demonstrate the Elo football ratings system for a fair comparison. Here are some calculation examples to show the formula being used. For simplicity in this instance it is assumed that three teams of different strengths are involved in a small friendly tournament on neutral territory. "Note: no away team bonus, nor continental or status multiplication factors are applied." Before the tournament the three teams have the following point totals: As shown, team A is by some distance the highest ranked of the three: The following table shows the division of point allocations based on three possible outcomes of the match between the far stronger team A and the somewhat weaker team B: Example 1. Team A versus Team B (Team A stronger than Team B) As shown in the table, in the case of a 3:1 win, team A receives an allocation of 21.0 points; however, since team A is a much higher ranked team, the win alone earns only 17.4 of the total points, and the much lower ranked team B still manages to earn 1.7 points. Had the match been won 3:1 by the far weaker team B, they would have received 27.2 points, whilst team A would have received a negative total of points, which would then have been rounded up to 0.0. If the result had been a 2:2 draw, then, being the lower-rated team, B would have earned a few points more than team A. Example 2. Team B versus Team C (both teams approximately the same strength) When the difference in strength between the two teams is smaller, the difference in points allocation is also smaller. The following table shows how the points would be divided following the same results as above, but with two roughly equally ranked teams, B and C, being involved: Further criteria. To increase the level of accuracy and objectivity of the rankings, further criteria were introduced after the 1999 revision. Firstly, the number of matches a team plays within a given period of time was taken into account. Secondly, the importance attached to previous results was interpreted differently. The number of matches played. In order to ensure that an increased number of fixtures in a given season did not give a team more potential points, the rankings only considered a limited number of results. This number was determined by deciding how many fixtures in a season an "averagely active team" would participate in. This was agreed to be between seven and ten matches a year. In order to prevent teams with more fixtures than this being given an advantage, the calculation initially considered only the best seven results of a team. To include further results, an average of them had to be calculated. For example, if a team played twelve matches: Previous results. In order to ensure that the rankings best reflected a team's current form, the most recent results were of greatest importance; however, attention was also paid to the results of previous years.
The results from the preceding year were given full weighting, with the results from one to two years before given seven-eighths of their value, those from two to three years before given six-eighths, and so on, until after eight years the results were dropped from the calculation completely. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
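As an illustration of the calculation described above, the following Python sketch implements the formula formula_0 together with the best-seven rule and the year-by-year decay. The roles assigned to the variables (w for the win/draw/loss component, g for goal points, a for the away bonus, c for the goals-conceded deduction, s for the status multiplier and r for the regional factor) are an assumption inferred from the surrounding sections, and the numerical values used in the example call are hypothetical.

# Sketch of the 1999-2006 points formula m = (w + g + a - c) * s * r.
# Variable roles are assumptions inferred from the article's sections; the
# exact point scales FIFA used are not reproduced here.
def match_points(w, g, a, c, s, r):
    m = (w + g + a - c) * s * r
    return max(m, 0.0)                 # a negative total was rounded up to zero

def year_weight(years_ago):
    # Full weight for the most recent year, 7/8 for the year before, and so on;
    # results older than eight years are dropped.
    return max(8 - years_ago, 0) / 8.0

def best_seven_total(match_totals):
    # Only the best seven results of a year counted in full; further results
    # were folded in via an average (exact rule not reproduced here).
    return sum(sorted(match_totals, reverse=True)[:7])

print(match_points(w=20.0, g=3.0, a=0.0, c=1.0, s=1.0, r=1.0))   # hypothetical values
print(year_weight(3))                                            # 5/8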
[ { "math_id": 0, "text": "(w+g+a-c)\\ s\\ r=m\\," } ]
https://en.wikipedia.org/wiki?curid=6095100
6095269
G-factor (physics)
Relation between observed magnetic moment of a particle and the related unit of magnetic moment A g"-factor (also called g" value) is a dimensionless quantity that characterizes the magnetic moment and angular momentum of an atom, a particle or the nucleus. It is the ratio of the magnetic moment (or, equivalently, the gyromagnetic ratio) of a particle to that expected of a classical particle of the same charge and angular momentum. In nuclear physics, the nuclear magneton replaces the classically expected magnetic moment (or gyromagnetic ratio) in the definition. The two definitions coincide for the proton. Definition. Dirac particle. The spin magnetic moment of a charged, spin-1/2 particle that does not possess any internal structure (a Dirac particle) is given by formula_0 where μ is the spin magnetic moment of the particle, "g" is the "g"-factor of the particle, "e" is the elementary charge, "m" is the mass of the particle, and S is the spin angular momentum of the particle (with magnitude "ħ"/2 for Dirac particles). Baryon or nucleus. Protons, neutrons, nuclei, and other composite baryonic particles have magnetic moments arising from their spin (both the spin and magnetic moment may be zero, in which case the "g"-factor is undefined). Conventionally, the associated "g"-factors are defined using the nuclear magneton, and thus implicitly using the proton's mass rather than the particle's mass as for a Dirac particle. The formula used under this convention is formula_1 where μ is the magnetic moment of the nucleon or nucleus resulting from its spin, "g" is the effective "g"-factor, I is its spin angular momentum, "μ"N is the nuclear magneton, "e" is the elementary charge, and "m"p is the proton rest mass. Calculation. Electron "g"-factors. There are three magnetic moments associated with an electron: one from its spin angular momentum, one from its orbital angular momentum, and one from its total angular momentum (the quantum-mechanical sum of those two components). Corresponding to these three moments are three different "g"-factors: Electron spin "g"-factor. The most known of these is the "electron spin g-factor" (more often called simply the "electron g-factor"), "g"e, defined by formula_2 where μs is the magnetic moment resulting from the spin of an electron, S is its spin angular momentum, and "μ"B = "eħ"/2"m"e is the Bohr magneton. In atomic physics, the electron spin "g"-factor is often defined as the "absolute value" of "g"e: formula_3 The "z"-component of the magnetic moment then becomes formula_4 The value "g"s is roughly equal to 2.002319 and is known to extraordinary precision – one part in 1013. The reason it is not "precisely" two is explained by quantum electrodynamics calculation of the anomalous magnetic dipole moment. The spin "g"-factor is related to spin frequency for a free electron in a magnetic field of a cyclotron: formula_5 Electron orbital "g"-factor. Secondly, the "electron orbital g-factor", "g""L", is defined by formula_6 where μ"L" is the magnetic moment resulting from the orbital angular momentum of an electron, L is its orbital angular momentum, and "μ"B is the Bohr magneton. For an infinite-mass nucleus, the value of "g""L" is exactly equal to one, by a quantum-mechanical argument analogous to the derivation of the classical magnetogyric ratio. 
For an electron in an orbital with a magnetic quantum number "m"l, the "z"-component of the orbital magnetic moment is formula_7 which, since "g""L" = 1, is −"μ"B"m"l For a finite-mass nucleus, there is an effective "g" value formula_8 where "M" is the ratio of the nuclear mass to the electron mass. Total angular momentum (Landé) "g"-factor. Thirdly, the "Landé g-factor", "g""J", is defined by formula_9 where μ"J" is the total magnetic moment resulting from both spin and orbital angular momentum of an electron, J = L + S is its total angular momentum, and "μ"B is the Bohr magneton. The value of "g""J" is related to "g""L" and "g"s by a quantum-mechanical argument; see the article Landé "g"-factor. μ"J" and J vectors are not collinear, so only their magnitudes can be compared. Muon "g"-factor. The muon, like the electron, has a "g"-factor associated with its spin, given by the equation formula_10 where μ is the magnetic moment resulting from the muon's spin, S is the spin angular momentum, and "m"μ is the muon mass. That the muon "g"-factor is not quite the same as the electron "g"-factor is mostly explained by quantum electrodynamics and its calculation of the anomalous magnetic dipole moment. Almost all of the small difference between the two values (99.96% of it) is due to a well-understood lack of heavy-particle diagrams contributing to the probability for emission of a photon representing the magnetic dipole field, which are present for muons, but not electrons, in QED theory. These are entirely a result of the mass difference between the particles. However, not all of the difference between the "g"-factors for electrons and muons is exactly explained by the Standard Model. The muon "g"-factor can, in theory, be affected by physics beyond the Standard Model, so it has been measured very precisely, in particular at the Brookhaven National Laboratory. In the E821 collaboration final report in November 2006, the experimental measured value is , compared to the theoretical prediction of . This is a difference of 3.4 standard deviations, suggesting that beyond-the-Standard-Model physics may be a contributory factor. The Brookhaven muon storage ring was transported to Fermilab where the Muon "g"–2 experiment used it to make more precise measurements of muon "g"-factor. On April 7, 2021, the Fermilab Muon "g"−2 collaboration presented and published a new measurement of the muon magnetic anomaly. When the Brookhaven and Fermilab measurements are combined, the new world average differs from the theory prediction by 4.2 standard deviations. Measured "g"-factor values. The electron "g"-factor is one of the most precisely measured values in physics. Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
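The relations above lend themselves to a short numerical sketch. The following Python fragment is offered as an illustration rather than as part of the article: it evaluates the z-component of the spin magnetic moment and a Landé g-factor. The Landé expression is the standard one from the linked Landé g-factor article, the Bohr magneton value is the CODATA figure, and g_s is the approximate value quoted in the text.

# Sketch: z-component of the electron spin magnetic moment and the Landé g_J.
MU_B = 9.2740100783e-24   # Bohr magneton in J/T (CODATA value, assumed here)
G_S = 2.002319            # electron spin g-factor, approximate value from the text
G_L = 1.0                 # orbital g-factor for an infinite-mass nucleus

def mu_z_spin(m_s):
    """z-component of the spin magnetic moment, mu_z = -g_s * mu_B * m_s."""
    return -G_S * MU_B * m_s

def lande_g(J, L, S, g_L=G_L, g_s=G_S):
    """Standard Landé g_J combining the orbital and spin contributions."""
    jj, ll, ss = J * (J + 1), L * (L + 1), S * (S + 1)
    return g_L * (jj + ll - ss) / (2 * jj) + g_s * (jj + ss - ll) / (2 * jj)

def g_L_finite_mass(M):
    """Effective orbital g-factor for a nucleus of mass M electron masses."""
    return 1.0 - 1.0 / M

print(mu_z_spin(0.5))               # electron with m_s = +1/2
print(lande_g(J=1.5, L=1, S=0.5))   # e.g. a 2P_{3/2} term, roughly 4/3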
[ { "math_id": 0, "text": "\\boldsymbol \\mu = g {e \\over 2m} \\mathbf S ," }, { "math_id": 1, "text": " \\boldsymbol{\\mu} = g {\\mu_\\text{N} \\over \\hbar} {\\mathbf{I}} = g {e \\over 2 m_\\text{p}} \\mathbf{I} ," }, { "math_id": 2, "text": " \\boldsymbol{\\mu}_{\\text{s}} = g_\\text{e} {\\mu_\\text{B} \\over \\hbar} \\mathbf{S}" }, { "math_id": 3, "text": "g_\\text{s} = |g_\\text{e}| = -g_\\text{e} ." }, { "math_id": 4, "text": " \\mu_\\text{z} = -g_\\text{s} \\mu_\\text{B} m_\\text{s}" }, { "math_id": 5, "text": "\\nu_\\text{s} = \\frac{g}{2} \\nu_\\text{c}" }, { "math_id": 6, "text": " \\boldsymbol{\\mu}_L = -g_L {\\mu_\\mathrm{B} \\over \\hbar} \\mathbf{L} ," }, { "math_id": 7, "text": "\\mu_\\text{z} = -g_L \\mu_\\text{B} m_\\text{l}" }, { "math_id": 8, "text": "g_L = 1 - \\frac{1}{M}" }, { "math_id": 9, "text": " |\\boldsymbol{\\mu_\\text{J}}| = g_J {\\mu_\\text{B} \\over \\hbar} |\\mathbf{J}|" }, { "math_id": 10, "text": "\\boldsymbol \\mu = g {e \\over 2m_\\mu} \\mathbf{S} ," } ]
https://en.wikipedia.org/wiki?curid=6095269
6095467
Erland Samuel Bring
Swedish mathematician Erland Samuel Bring (19 August 1736 – 20 May 1798) was a Swedish mathematician. Bring studied at Lund University between 1750 and 1757. In 1762 he obtained a position as a reader in history and was promoted to professor in 1779. At Lund he wrote eight volumes of mathematical work in the fields of algebra, geometry, analysis and astronomy, including "Meletemata quaedam mathematica circa transformationem aequationum algebraicarum" (1786). This work describes Bring's contribution to the algebraic solution of equations. Bring had developed an important transformation to simplify a quintic equation to the form formula_0 (see Bring radical). In 1832–35 the same transformation was independently derived by George Jerrard. However, whereas Jerrard knew from the earlier work of Paolo Ruffini and Niels Henrik Abel that a general quintic equation cannot be solved in radicals, this fact was not known to Bring, which put him at a disadvantage. Bring's curve is named after him. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
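Since the Abel–Ruffini theorem rules out a general solution in radicals, a quintic already reduced to Bring–Jerrard form is typically handled numerically or via the Bring radical. The short Python sketch below is an illustration, not part of Bring's work; it finds the roots of formula_0 numerically for arbitrary p and q, assuming NumPy is available.

# Numerical illustration: roots of a quintic in Bring-Jerrard form x^5 + p*x + q = 0.
import numpy as np

def bring_roots(p, q):
    # Coefficients of x^5 + 0*x^4 + 0*x^3 + 0*x^2 + p*x + q
    return np.roots([1, 0, 0, 0, p, q])

roots = bring_roots(p=-1.0, q=-1.0)   # x^5 - x - 1 = 0
for r in roots:
    print(r, "residual:", r**5 - r - 1)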
[ { "math_id": 0, "text": "x^5 + px + q = 0" } ]
https://en.wikipedia.org/wiki?curid=6095467
60956303
K-U ratio
Ratio of a slightly volatile element, potassium (K), to a highly refractory element, uranium (U). The K/U Ratio is the ratio of a slightly volatile element, potassium (K), to a highly refractory element, uranium (U). It is a useful way to measure the presence of volatile elements on planetary surfaces. The K/U ratio helps explain the evolution of the planetary system and the origin of Earth's moon. Volatile and refractory elements. In planetary science, volatiles are the group of chemical elements and chemical compounds with low boiling points that are associated with a planet's or a moon's crust or atmosphere. Very low boiling temperature examples include nitrogen, water, carbon dioxide, ammonia, hydrogen, methane and sulfur dioxide. In contrast with volatiles, elements and compounds with high boiling points are known as "refractory substances". On the basis of available data, which is sparse for Mars and very uncertain for Venus, the three inner planets then become progressively more depleted in K passing from Mars to Earth to Venus. Planetary gamma-ray spectrometers. Some elements like potassium, uranium, and thorium are naturally radioactive and give off gamma rays as they decay. Electromagnetic radiation from these isotopes can be detected by a Gamma-Ray Spectrometer (GRS) dropped toward the planetary surface or observed from orbit. An orbiting instrument can map the surface distribution of many elements for an entire planet. Uncrewed spacecraft programs such as Venera and the Vega program have flown to Venus and sent back data allowing estimates of the K/U ratio of the surface rocks. The Lunar Prospector mission used a GRS to map the Earth's Moon. To determine the elemental makeup of the Martian surface, the Mars Odyssey used a GRS and two neutron detectors. These GRS readings can be compared to direct elemental measurements of chondrites meteorites, Earth, and Moon samples brought back from Apollo program missions, as well as to meteorites that are believed to have come from Mars. Ratios of solar system bodies. K and U move together during geochemical processes and have long-lived radioisotopes that emit gamma rays. It is calculated as a ratio of one to the other on an equal mass basis which is often formula_0. This creates a compelling explanation for the evolution of the solar system. This result is consistent with an increasing temperature toward the sun during its early protoplanetary nebula phase. The temperature at the early stage of solar system formation was in excess of 1,000K at the distance of Earth from the sun, and as low as 200–100K at the distances of Jupiter and Saturn. Earth. At the high temperatures for Earth, no volatiles would be in the solid state, and the dust would be made up of silicate and metal. The continental crust and lower mantle have average K/U values of about 12,000. mid-ocean ridge basalt (MORB) or upper mantle have more volatiles and have a K/U ratio of about 19,000. Volatile depletion explains why Earth's sodium (volatile) content is about 10% of its calcium (refractory) content, despite the similar abundance in chondrites. Earth's Moon's origin. The Moon stands out as being very depleted in volatiles. The Moon not only lacks water and atmospheric gases, but also lacks moderately volatile elements such as K, Na, and Cl. The Earth's K/U ratio is 12,000, while the Moon has a K/U ratio of only 2,000. This difference suggests that the material that formed the Moon was subjected to temperatures considerably higher than the Earth. 
The prevailing theory is that the Moon formed out of the debris left over from a collision between Earth and an astronomical body the size of Mars, approximately 4.5 billion years ago, about 20 to 100 million years after the Solar System coalesced. This is called the Giant-impact hypothesis. It is hypothesized that most of the outer silicates of the colliding body would be vaporized, whereas a metallic core would not. Hence, most of the collisional material sent into orbit would consist of silicates, leaving the coalescing Moon deficient in iron. The more volatile materials that were emitted during the collision would probably have escaped the Solar System, whereas silicates would tend to coalesce. The ratios of the Moon's volatile elements are not explained by the giant-impact hypothesis; if the giant-impact hypothesis is correct, they must be due to some other cause. Meteorites. Farther from the sun, the temperature was low enough that volatile elements could precipitate as ices. The inner, volatile-depleted region and the outer, volatile-rich region are separated by a snow line whose position was controlled by the temperature distribution around the Sun. Formed farthest from the sun, the carbonaceous chondrites have the highest K/U ratios. Ordinary chondrites, which formed closer in, are only about 10% depleted in K relative to U. The fine-grained matrix which fills spaces between the chondrules, however, appears to have formed at rather different temperatures in the various classes of chondrites. For this reason the volatile abundances of different classes of chondrites can vary. One particularly important class is the carbonaceous chondrites, because of their high carbon content. In these meteorites, chondrules coexist with minerals that are only stable below 100 °C, so they contain materials that formed in both high- and low-temperature environments and were only later collected together. Further evidence for the primordial attributes of carbonaceous chondrites comes from the fact that they have compositions very similar to the nonvolatile element composition of the sun. Controversy of Mercury. Mercury was surveyed by the MESSENGER mission with its Gamma-Ray Spectrometer. The K/U ratios for Mercury could range between 8,000 and 17,000, which would imply a volatile-rich planet. However, metal/silicate partitioning data for K and U still require additional experiments at the conditions of Mercury's core formation in order to understand this unusually high ratio. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
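As a small illustration of how the ratio in this article is formed, the Python sketch below divides a potassium concentration by a uranium concentration on an equal-mass basis. The concentrations are placeholders chosen only to reproduce the order of magnitude of the ratios quoted above (about 12,000 for Earth's crust and about 2,000 for the Moon); they are not measured values.

# Sketch: a K/U ratio on an equal-mass basis, i.e. micrograms of K per gram of
# rock divided by micrograms of U per gram of rock. Concentrations below are
# illustrative placeholders, not measurements.
def k_u_ratio(k_ppm, u_ppm):
    """Both concentrations in ppm by mass (equivalently micrograms per gram)."""
    return k_ppm / u_ppm

earth_crust = k_u_ratio(k_ppm=21000.0, u_ppm=1.75)   # about 12,000
moon        = k_u_ratio(k_ppm=600.0,   u_ppm=0.30)   # about 2,000

print(f"Earth-like K/U: {earth_crust:.0f}")
print(f"Moon-like  K/U: {moon:.0f}")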
[ { "math_id": 0, "text": "\\mu gram [element]/gram" } ]
https://en.wikipedia.org/wiki?curid=60956303
60958267
Reshetikhin–Turaev invariant
Family of quantum invariants In the mathematical field of quantum topology, the Reshetikhin–Turaev invariants (RT-invariants) are a family of quantum invariants of framed links. Such invariants of framed links also give rise to invariants of 3-manifolds via the Dehn surgery construction. These invariants were discovered by Nicolai Reshetikhin and Vladimir Turaev in 1991, and were meant to be a mathematical realization of Witten's proposed invariants of links and 3-manifolds using quantum field theory. Overview. To obtain an RT-invariant, one must first have a formula_0-linear ribbon category at hand. Each formula_0-linear ribbon category comes equipped with a diagrammatic calculus in which morphisms are represented by certain decorated framed tangle diagrams, where the initial and terminal objects are represented by the boundary components of the tangle. In this calculus, a (decorated framed) link diagram formula_1, being a (decorated framed) tangle without boundary, represents an endomorphism of the monoidal identity (the empty set in this calculus), or in other words, an element of formula_0. This element of formula_0 is the RT-invariant associated to formula_1. Given any closed oriented 3-manifold formula_2, there exists a framed link formula_1 in the 3-sphere formula_3 so that formula_2 is homeomorphic to the manifold formula_4 obtained by surgering formula_3 along formula_1. Two such manifolds formula_4 and formula_5 are homeomorphic if and only if formula_1 and formula_6 are related by a sequence of Kirby moves. Reshetikhin and Turaev used this idea to construct invariants of 3-manifolds by combining certain RT-invariants into an expression which is invariant under Kirby moves. Such invariants of 3-manifolds are known as Witten–Reshetikhin–Turaev invariants (WRT-invariants). Examples. Let formula_7 be a ribbon Hopf algebra over a field formula_0 (one can take, for example, any quantum group over formula_8). Consider the category formula_9, of finite dimensional representations of formula_7. There is a diagrammatic calculus in which morphisms in formula_9 are represented by framed tangle diagrams with each connected component decorated by a finite dimensional representation of formula_7. That is, formula_9 is a formula_0-linear ribbon category. In this way, each ribbon Hopf algebra formula_7 gives rise to an invariant of framed links colored by representations of formula_7 (an RT-invariant). For the quantum group formula_10 over the field formula_11, the corresponding RT-invariant for links and 3-manifolds gives rise to the following family of link invariants, appearing in skein theory. Let formula_1 be a framed link in formula_3 with formula_12 components. For each formula_13, let formula_14 denote the RT-invariant obtained by decorating each component of formula_1 by the unique formula_15-dimensional representation of formula_7. Then formula_16 where the formula_12-tuple, formula_17 denotes the Kauffman polynomial of the link formula_1, where each of the formula_12 components is cabled by the Jones–Wenzl idempotent formula_18, a special element of the Temperley–Lieb algebra. To define the corresponding WRT-invariant for 3-manifolds, first of all we choose formula_19 to be either a formula_20-th root of unity or an formula_21-th root of unity with odd formula_21. Assume that formula_4 is obtained by doing Dehn surgery on a framed link formula_1. 
Then the RT-invariant for the 3-manifold formula_2 is defined to be formula_22 where formula_23 is the Kirby coloring, formula_24 are the unknot with formula_25 framing, and formula_26 are the numbers of positive and negative eigenvalues for the linking matrix of formula_1 respectively. Roughly speaking, the first and second bracket ensure that formula_27 is invariant under blowing up/down (first Kirby move) and the third bracket ensures that formula_27 is invariant under handle sliding (second Kirby move). Properties. The Witten–Reshetikhin–Turaev invariants for 3-manifolds satisfy the following properties: These three properties coincide with the properties satisfied by the 3-manifold invariants defined by Witten using Chern–Simons theory (under certain normalization) Open problems. Witten's asymptotic expansion conjecture. Pick formula_36. Witten's asymptotic expansion conjecture suggests that for every 3-manifold formula_2, the large formula_21-th asymptotics of formula_37 is governed by the contributions of flat connections. Conjecture: There exists constants formula_38 and formula_39 (depending on formula_2) for formula_40 and formula_41 for formula_42 such that the asymptotic expansion of formula_37 in the limit formula_43 is given by formula_44 where formula_45 are the finitely many different values of the Chern–Simons functional on the space of flat formula_46-connections on formula_2. Volume conjecture for the Reshetikhin–Turaev invariant. The Witten's asymptotic expansion conjecture suggests that at formula_47, the RT-invariants grow polynomially in formula_21. On the contrary, at formula_48 with odd formula_21, in 2018 Q. Chen and T. Yang suggested the volume conjecture for the RT-invariants, which essentially says that the RT-invariants for hyperbolic 3-manifolds grow exponentially in formula_21 and the growth rate gives the hyperbolic volume and Chern–Simons invariants for the 3-manifold. Conjecture: Let formula_2 be a closed oriented hyperbolic 3-manifold. Then for a suitable choice of arguments, formula_49 where formula_21 is odd positive integer. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
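A minimal numerical sketch of some of the ingredients above is given below in Python. It evaluates the quantum integers and the unknot values that enter the Kirby coloring formula_23; the normalization used here, in which the unknot colored by the n-th Jones–Wenzl idempotent evaluates to (-1)^n times the quantum integer [n+1], is one common skein-theoretic convention and is an assumption, since the article does not fix a normalization.

# Sketch (conventions assumed): quantum integers [n]_q at q = exp(i*pi/r) and
# the unknot values <e_n>_O = (-1)^n [n+1]_q that enter the Kirby coloring.
import cmath

def quantum_integer(n, q):
    return (q**n - q**(-n)) / (q - q**(-1))

def unknot_value(n, q):
    return (-1)**n * quantum_integer(n + 1, q)

r = 5
q = cmath.exp(1j * cmath.pi / r)
for n in range(r - 1):               # colors n = 0, ..., r-2 appear in omega_r
    print(n, unknot_value(n, q))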
[ { "math_id": 0, "text": "\\Bbbk" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "S^3" }, { "math_id": 4, "text": "M_L" }, { "math_id": 5, "text": "M_{L^\\prime}" }, { "math_id": 6, "text": "L^\\prime" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "\\mathbb{C}" }, { "math_id": 9, "text": "\\textbf{Rep}^{\\text{f.d.}}(A)" }, { "math_id": 10, "text": "A=U_q(\\mathfrak{sl}_2(\\mathbb{C}))" }, { "math_id": 11, "text": "\\mathbb{C}(q)" }, { "math_id": 12, "text": "m" }, { "math_id": 13, "text": "r\\in\\mathbb{N}" }, { "math_id": 14, "text": "\\text{RT}_r(S^3, L)" }, { "math_id": 15, "text": "N+1" }, { "math_id": 16, "text": "\\operatorname{RT}_r(S^3,L) = \\langle e_n, e_n, \\dots, e_n \\rangle_L \\in\\mathbb{C}(q)" }, { "math_id": 17, "text": "\\langle e_n, e_n, \\dots, e_n \\rangle_L" }, { "math_id": 18, "text": "e_n" }, { "math_id": 19, "text": "t" }, { "math_id": 20, "text": "2r" }, { "math_id": 21, "text": "r" }, { "math_id": 22, "text": "\\operatorname{RT}_r(M_L) = \\langle \\omega_r \\rangle_{O^+}^{b_-} \\langle \\omega_r \\rangle_{O^-}^{b_+} \\langle \\omega_r, \\omega_r, \\dots, \\omega_r \\rangle_L (t)\\in \\mathbb{C}," }, { "math_id": 23, "text": "\\omega_r = \\sum_{n=0}^{r-2} \\langle e_n \\rangle_{O} e_n" }, { "math_id": 24, "text": "O^\\pm" }, { "math_id": 25, "text": "\\pm 1" }, { "math_id": 26, "text": "b_\\pm" }, { "math_id": 27, "text": "\\text{RT}_r(M_L)" }, { "math_id": 28, "text": "\\text{RT}_r(M\\#N) = \\text{RT}_r(M)\\text{RT}_r(N)," }, { "math_id": 29, "text": "M\\# N" }, { "math_id": 30, "text": "N" }, { "math_id": 31, "text": "\\operatorname{RT}_r(-M)=\\overline{\\text{RT}_r(M)}," }, { "math_id": 32, "text": "-M" }, { "math_id": 33, "text": "\\overline{\\text{RT}_r(M)}" }, { "math_id": 34, "text": "\\operatorname{RT}_r(M)" }, { "math_id": 35, "text": "\\operatorname{RT}_r(S^3)=1" }, { "math_id": 36, "text": "t = e^{\\frac{\\pi i}{r}}" }, { "math_id": 37, "text": "\\text{RT}_r(M)" }, { "math_id": 38, "text": "d_j \\in \\mathbb{Q}" }, { "math_id": 39, "text": "b_j \\in \\mathbb{C}" }, { "math_id": 40, "text": "j = 0,1, \\dots, n" }, { "math_id": 41, "text": "a^l_j \\in \\mathbb{C}" }, { "math_id": 42, "text": "j=0,1,\\dots, n, l=1,2,\\dots" }, { "math_id": 43, "text": "r \\to \\infty" }, { "math_id": 44, "text": " \\operatorname{RT}_r(M) \\sim \\sum_{j=0}^n e^{2\\pi i r q_j} r^{d_j} b_j \\left( 1 + \\sum_{\\ell=1}^\\infty a^\\ell_j r^{-\\ell} \\right)" }, { "math_id": 45, "text": "q_0 = 0, q_1,\\dots q_n" }, { "math_id": 46, "text": "\\text{SU}(2)" }, { "math_id": 47, "text": "t =e^{{\\pi i}/{r}} " }, { "math_id": 48, "text": "t=e^{{2\\pi i}/{r}}" }, { "math_id": 49, "text": "\\lim_{r\\to \\infty} \\frac{4\\pi}{r} \\log \\left(\\operatorname{RT}_r \\big(M,e^{{2\\pi i}/{r}}\\big)\\right) = \\operatorname{Vol}(M) - i \\operatorname{CS}(M) \\mod \\pi^2 i\\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=60958267
6095845
Digital scan back
A digital scan back or scanning back is a type of digital camera back. Digital imaging devices typically use a matrix of light-sensitive photosensors, such as CCD or CMOS technologies. These sensors can be arranged in different ways, like a Bayer filter, where each row captures RGB components, or using one full-sized layer for each color, such as the Foveon X3 sensor. A digital scan back takes a similar approach to the second type of photosensor, but instead of using one matrix for each component, it uses one array per component. This translates to a 3×"N" sensor matrix, where "N" is typically a large number (between 5,000 for earlier models and 15,000 for newer models), which is then placed vertically in a holder. To take an image, the sensor travels the "x"-axis, taking one exposure per point. Advantages. The main advantages of this technology are the extremely high image quality and the huge resulting files. This translates to very accurate color reproduction, because every pixel is measured individually, allowing printing in very large sizes without loss of detail. Previously only large format film cameras could print to similar sizes. Scan backs also have the advantage of not being subject to light fall-off due to off-axis lens positions, so wide angle lenses and perspective shifts on the camera can be used without issue. A somewhat less obvious advantage lies in that scanning backs are typically created using trilinear CCDs. This means that for every pixel position a separate measurement is taken for red then green then blue. This results in a much higher effective resolution than a similar resolution image created by a mosaic sensor such as those on most typical digital cameras. (With the notable exception of Foveon) Disadvantages. The downside of capturing images this way is the amount of time it takes. Even at the fastest speed, the time taken to make a complete exposure is measured in seconds or minutes, because even though the shutter speed could be 1/1000 s, it has to be taken literally thousands of times. This makes it very inappropriate for moving subjects, such as sports, nature, or city life, and is practically restricted to still life, art reproduction, and landscapes. Another downside is that most of these backs have to be used tethered to a computer. One reason is that there would be no other way to know when critical focus has been achieved, and the second reason is the huge file sizes, measured in hundreds of megabytes or even gigabytes. A Compact Flash card can only store a handful of images. Example. Let's say we have a 10,000 pixel array, and we want to take an image with a shutter speed of 1/50 s, and 48 bits per pixel (16 bits per component) to achieve maximum quality. Assume each individual pixel has a width and height of in each dimension. The array will be wide and high. Now we place this array in a holder high and wide. That means we can evenly divide the x axis in 10,000 points, so the array has to take 10,000 exposures. To capture an image the sensor would start at "x" 0, take one exposure at the selected shutter speed, move to "x" 1, take one exposure at the selected shutter speed, and so on until "x" 9,999. Now instead of doing it with one array, we do it with three arrays at the same time (one for each component in RGB), where the first array would be making an exposure at x, the second at "x" − 1, and the third at "x" − 2. The resulting image would be 10,000 × 10,000, or 100 million pixels, with full color information for each pixel. 
The total exposure time will be at least: formula_0 or just over three minutes. A real sensor would also have to move accurately to the next point, stop and wait for vibration to settle, take the exposure, and so on. This overhead is unavoidable, and can double or triple this time. The size of the resulting raw file would be: formula_1 The newest sensors can achieve even bigger file sizes, up to several gigabytes. History. The first commercial digital scan back was introduced by "Leaf" (now "Phase One") in 1991. For a further outline of the history of scanning backs see Digital camera back. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
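The arithmetic of this example can be checked with a few lines of Python; the sketch below simply reproduces the exposure-time and file-size calculations given above and does not model the mechanical overhead.

# Reproducing the worked example: a 10,000-pixel trilinear array scanned across
# 10,000 positions at 1/50 s per exposure, with 48 bits per output pixel.
positions       = 10_000
shutter_time_s  = 1 / 50
bits_per_pixel  = 48

exposure_time_s = positions * shutter_time_s                   # 200 s
file_size_bytes = positions * positions * bits_per_pixel / 8   # 600,000,000

print(f"Minimum exposure time: {exposure_time_s:.0f} s "
      f"({exposure_time_s / 60:.1f} min)")
print(f"Raw file size: {file_size_bytes / 1e6:.0f} MB")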
[ { "math_id": 0, "text": "\\frac{1}{50}\\text{ s} \\times 10,000\\text{ pixels} = 200\\text{ s}" }, { "math_id": 1, "text": "10,000\\text{ pixels} \\times 10,000\\text{ pixels} \\times 48\\text{ bpp} \\times \\frac{1\\text{ byte}}{8\\text{ bits}} = 600\\text{ megabytes}" } ]
https://en.wikipedia.org/wiki?curid=6095845
60958781
Normal fan
Structure in convex geometry In mathematics, specifically convex geometry, the normal fan of a convex polytope "P" is a polyhedral fan that is dual to "P". Normal fans have applications to polyhedral combinatorics, linear programming, tropical geometry and other areas of mathematics. Definition. Given a convex polytope "P" in R"n", the normal fan "N""P" of "P" is a polyhedral fan in the dual space (R"n")* whose cones consist of the normal cone "C""F" to each face "F" of "P", formula_0 Each normal cone "C""F" is defined as the set of linear functionals "w" such that the set of points "x" in "P" that maximize "w"("x") contains "F", formula_1 Properties. The normal cones reverse inclusions of faces, formula_2 and the intersection of two normal cones is again a normal cone, formula_3 where "H" is the smallest face of "P" that contains both "F" and "G". References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
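As a concrete illustration of the definition, the Python sketch below finds, for a given linear functional w, the face of a small polytope on which w is maximized, which is exactly the face F whose normal cone C_F contains w. The unit square and the sample functionals are illustrative choices, not taken from the article.

# Sketch: deciding which normal cone C_F a linear functional w belongs to, by
# computing the face of P on which w attains its maximum. P is the unit square.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # vertices of P

def maximizing_face(w, vertices, tol=1e-9):
    # Evaluate the linear functional w on every vertex and keep the maximizers.
    values = [w[0] * x + w[1] * y for (x, y) in vertices]
    m = max(values)
    return [v for v, val in zip(vertices, values) if val > m - tol]

# w = (1, 0) lies in the normal cone of the right edge {(1, 0), (1, 1)};
# w = (1, 1) lies in the smaller normal cone of the single vertex (1, 1).
print(maximizing_face((1, 0), square))
print(maximizing_face((1, 1), square))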
[ { "math_id": 0, "text": "N_P = \\{C_F\\}_{F \\in \\operatorname{face}(P)}." }, { "math_id": 1, "text": "C_F = \\{w \\in (\\mathbb{R}^n)^* \\mid F \\subseteq \\operatorname{argmax}_{x \\in P} w(x) \\}." }, { "math_id": 2, "text": "F \\subseteq G \\quad \\Leftrightarrow \\quad C_F \\supseteq C_G." }, { "math_id": 3, "text": "C_F \\cap C_G = C_H" } ]
https://en.wikipedia.org/wiki?curid=60958781
60965378
Maxwell–Lodge effect
The Maxwell-Lodge effect is a phenomenon of electromagnetic induction in which an electric charge near a solenoid, in which the current changes slowly, experiences an electromotive force (e.m.f.) even though the magnetic field is practically static inside the solenoid and null outside. It can be considered a classical analogue of the quantum mechanical Aharonov–Bohm effect, where instead the field is exactly static inside and null outside. The term appeared in the scientific literature in a 2008 article, referring to an 1889 article by the physicist Oliver Lodge. Description. Consider an infinite (ideal) solenoid with "n" turns per unit length, through which a current formula_0 flows. The magnetic field inside the solenoid is formula_1      (1) while the field outside the solenoid is null. From the second and third Maxwell's equations, formula_2 and from the definitions of the magnetic vector potential and the electric potential, it follows that formula_3 which, in the absence of electric charges, reduces to formula_4       (2) Returning to Maxwell's original definition of the vector potential, according to which it is a vector whose circulation along a closed curve equals the flux of formula_5 through a surface bounded by that curve, i.e. formula_6, we can calculate the induced e.m.f., as Lodge did in his 1889 article, by taking formula_7 to be a closed line around the solenoid (for convenience a circle) and formula_8 the surface having formula_7 as its border. Assuming formula_9 to be the radius of the solenoid and formula_10 the radius of formula_7, the surface is crossed by a magnetic flux formula_11, which is equal to the circulation of the vector potential along formula_7: formula_12. From this it follows that formula_13. From (2), the e.m.f. is null when formula_5 is constant, which by (1) means at constant current. On the other hand, if the current changes, formula_5 must also change, producing electromagnetic waves in the surrounding space that can induce an e.m.f. outside the solenoid. But if the current changes very slowly, the situation is almost stationary, the radiative effects are negligible and therefore, with formula_5 excluded, the only possible cause of the e.m.f. is formula_14. It is possible to make the calculation without referring to the field formula_14. Indeed, in the framework of the Maxwell equations as written above, formula_15, since formula_16 is negligible outside the solenoid. Thus formula_17. This, however, does not remove the puzzle that formula_5 is practically null in the places where the e.m.f. manifests itself. Interpretation. Bearing in mind that the concept of field was introduced into physics to ensure that actions on objects are always local, i.e. by contact (direct, or mediated by a field) and not by action at a distance, as Albert Einstein feared in the EPR paradox, the result of the Maxwell-Lodge effect, like the Aharonov–Bohm effect, seems contradictory. In fact, even though the magnetic field is zero outside the solenoid and the electromagnetic radiation is negligible, a test charge experiences the presence of an electric field. The question arises as to how the information on the presence of the magnetic field inside the solenoid reaches the electric charge. In terms of the fields formula_16 and formula_18 the explanation is very simple: the variation of formula_16 inside the solenoid produces an electric field both inside and outside the solenoid, in the same way in which a charge distribution produces an electric field both inside and outside the distribution.
In this sense the information from inside to outside is mediated by the electric field, which must be continuous over all space due to the Maxwell equations and their boundary conditions. From the calculations it is evident that the source can be considered to be either the variation of the vector potential formula_19, if one chooses to introduce it, or the variation of the magnetic field formula_16, if one does not want to use the vector potential. In the classical context the vector potential has always been considered a mathematical aid, unlike in the quantum case, in which Richard Feynman argued for its existence as a physical reality.
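The quasi-static result above can be illustrated with a short Python sketch that evaluates the field outside the solenoid and the resulting e.m.f. around a circle of radius r. SI units and a slowly varying sinusoidal current are assumed purely for illustration; the parameter values are arbitrary.

# Sketch: quasi-static induced field outside an ideal solenoid, following the
# article's result E(r) = -(a^2 / 2r) dB/dt with B = mu0 * n * I(t).
import math

MU0 = 4e-7 * math.pi       # vacuum permeability, H/m

def dB_dt(n_turns_per_m, I0, omega, t):
    # B(t) = mu0 * n * I0 * sin(omega t)  ->  dB/dt = mu0 * n * I0 * omega * cos(omega t)
    return MU0 * n_turns_per_m * I0 * omega * math.cos(omega * t)

def E_outside(r, a, n_turns_per_m, I0, omega, t):
    """Azimuthal electric field at radius r > a (a is the solenoid radius)."""
    return -(a**2 / (2 * r)) * dB_dt(n_turns_per_m, I0, omega, t)

def emf_around(r, a, n_turns_per_m, I0, omega, t):
    """e.m.f. as the line integral of E around the circle of radius r."""
    return E_outside(r, a, n_turns_per_m, I0, omega, t) * 2 * math.pi * r

print(emf_around(r=0.10, a=0.02, n_turns_per_m=1000, I0=1.0,
                 omega=2 * math.pi * 1.0, t=0.0))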
[ { "math_id": 0, "text": "I(t)" }, { "math_id": 1, "text": "\\mathbf B = \\mu n I(t)" }, { "math_id": 2, "text": "\\begin{cases}\n\\nabla \\times \\mathbf{E} &= -\\dfrac{\\partial\\mathbf B}{\\partial t} \\\\ \n\\nabla \\cdot \\mathbf{B} &= 0\n\\end{cases}" }, { "math_id": 3, "text": "\\mathbf E = - \\mathbf \\nabla \\phi - \\frac{\\partial \\mathbf A}{\\partial t}" }, { "math_id": 4, "text": "\\mathbf E = - \\frac{\\partial \\mathbf A}{\\partial t}" }, { "math_id": 5, "text": "\\mathbf B" }, { "math_id": 6, "text": "\\int_{S} \\mathbf B \\cdot d \\mathbf S = \\int_S \\nabla \\times \\mathbf A \\cdot d \\mathbf S = \\oint_{l} \\mathbf A \\cdot d\\mathbf l" }, { "math_id": 7, "text": "l" }, { "math_id": 8, "text": "S" }, { "math_id": 9, "text": "a" }, { "math_id": 10, "text": "r > a" }, { "math_id": 11, "text": "\\pi a^2 \\mathbf B" }, { "math_id": 12, "text": "C(l) = 2\\pi r \\mathbf A(r)" }, { "math_id": 13, "text": "\\mathbf A(r) = \\frac{1}{2} a^2 \\frac{\\mathbf B}{r}" }, { "math_id": 14, "text": "\\mathbf A(r)" }, { "math_id": 15, "text": " \\int_l \\mathbf E \\cdot dl =-\\frac{d}{dt} \\int_{S} {\\mathbf B \\cdot dS}= -\\frac{d}{dt} \\int_{\\pi a^2} {\\mathbf B \\cdot dS} " }, { "math_id": 16, "text": " \\mathbf B" }, { "math_id": 17, "text": " \\mathbf E 2\\pi r= - \\pi a^2 \\frac{d \\mathbf B}{dt} \\quad \\Rightarrow \\quad \\mathbf E= - \\frac{a^2}{2r} \\frac{d \\mathbf B}{dt}" }, { "math_id": 18, "text": " \\mathbf E" }, { "math_id": 19, "text": "\\mathbf A" } ]
https://en.wikipedia.org/wiki?curid=60965378
60968880
Weak supervision
A paradigm in machine learning Weak supervision is a paradigm in machine learning, the relevance and notability of which increased with the advent of large language models due to large amount of data required to train them. It is characterized by using a combination of a small amount of human-labeled data (exclusively used in more expensive and time-consuming supervised learning paradigm), followed by a large amount of unlabeled data (used exclusively in unsupervised learning paradigm). In other words, the desired output values are provided only for a subset of the training data. The remaining data is unlabeled or imprecisely labeled. Intuitively, it can be seen as an exam and labeled data as sample problems that the teacher solves for the class as an aid in solving another set of problems. In the transductive setting, these unsolved problems act as exam questions. In the inductive setting, they become practice problems of the sort that will make up the exam. Technically, it could be viewed as performing clustering and then labeling the clusters with the labeled data, pushing the decision boundary away from high-density regions, or learning an underlying one-dimensional manifold where the data reside. Problem. &lt;templatestyles src="Machine learning/styles.css"/&gt; The acquisition of labeled data for a learning problem often requires a skilled human agent (e.g. to transcribe an audio segment) or a physical experiment (e.g. determining the 3D structure of a protein or determining whether there is oil at a particular location). The cost associated with the labeling process thus may render large, fully labeled training sets infeasible, whereas acquisition of unlabeled data is relatively inexpensive. In such situations, semi-supervised learning can be of great practical value. Semi-supervised learning is also of theoretical interest in machine learning and as a model for human learning. Technique. More formally, semi-supervised learning assumes a set of formula_0 independently identically distributed examples formula_1 with corresponding labels formula_2 and formula_3 unlabeled examples formula_4 are processed. Semi-supervised learning combines this information to surpass the classification performance that can be obtained either by discarding the unlabeled data and doing supervised learning or by discarding the labels and doing unsupervised learning. Semi-supervised learning may refer to either transductive learning or inductive learning. The goal of transductive learning is to infer the correct labels for the given unlabeled data formula_5 only. The goal of inductive learning is to infer the correct mapping from formula_6 to formula_7. It is unnecessary (and, according to Vapnik's principle, imprudent) to perform transductive learning by way of inferring a classification rule over the entire input space; however, in practice, algorithms formally designed for transduction or induction are often used interchangeably. Assumptions. In order to make any use of unlabeled data, some relationship to the underlying distribution of data must exist. Semi-supervised learning algorithms make use of at least one of the following assumptions: Continuity / smoothness assumption. "Points that are close to each other are more likely to share a label." This is also generally assumed in supervised learning and yields a preference for geometrically simple decision boundaries. 
In the case of semi-supervised learning, the smoothness assumption additionally yields a preference for decision boundaries in low-density regions, so few points are close to each other but in different classes. Cluster assumption. "The data tend to form discrete clusters, and points in the same cluster are more likely to share a label" (although data that shares a label may spread across multiple clusters). This is a special case of the smoothness assumption and gives rise to feature learning with clustering algorithms. Manifold assumption. "The data lie approximately on a manifold of much lower dimension than the input space." In this case learning the manifold using both the labeled and unlabeled data can avoid the curse of dimensionality. Then learning can proceed using distances and densities defined on the manifold. The manifold assumption is practical when high-dimensional data are generated by some process that may be hard to model directly, but which has only a few degrees of freedom. For instance, human voice is controlled by a few vocal folds, and images of various facial expressions are controlled by a few muscles. In these cases, it is better to consider distances and smoothness in the natural space of the generating problem, rather than in the space of all possible acoustic waves or images, respectively. History. The heuristic approach of "self-training" (also known as "self-learning" or "self-labeling") is historically the oldest approach to semi-supervised learning, with examples of applications starting in the 1960s. The transductive learning framework was formally introduced by Vladimir Vapnik in the 1970s. Interest in inductive learning using generative models also began in the 1970s. A "probably approximately correct" learning bound for semi-supervised learning of a Gaussian mixture was demonstrated by Ratsaby and Venkatesh in 1995. Methods. Generative models. Generative approaches to statistical learning first seek to estimate formula_8, the distribution of data points belonging to each class. The probability formula_9 that a given point formula_10 has label formula_11 is then proportional to formula_12 by Bayes' rule. Semi-supervised learning with generative models can be viewed either as an extension of supervised learning (classification plus information about formula_13) or as an extension of unsupervised learning (clustering plus some labels). Generative models assume that the distributions take some particular form formula_14 parameterized by the vector formula_15. If these assumptions are incorrect, the unlabeled data may actually decrease the accuracy of the solution relative to what would have been obtained from labeled data alone. However, if the assumptions are correct, then the unlabeled data necessarily improves performance. The unlabeled data are distributed according to a mixture of individual-class distributions. In order to learn the mixture distribution from the unlabeled data, it must be identifiable, that is, different parameters must yield different summed distributions. Gaussian mixture distributions are identifiable and commonly used for generative models. The parameterized joint distribution can be written as formula_16 by using the chain rule. Each parameter vector formula_15 is associated with a decision function formula_17. The parameter is then chosen based on fit to both the labeled and unlabeled data, weighted by formula_18: formula_19 Low-density separation. 
Another major class of methods attempts to place boundaries in regions with few data points (labeled or unlabeled). One of the most commonly used algorithms is the transductive support vector machine, or TSVM (which, despite its name, may be used for inductive learning as well). Whereas support vector machines for supervised learning seek a decision boundary with maximal margin over the labeled data, the goal of TSVM is a labeling of the unlabeled data such that the decision boundary has maximal margin over all of the data. In addition to the standard hinge loss formula_20 for labeled data, a loss function formula_21 is introduced over the unlabeled data by letting formula_22. TSVM then selects formula_23 from a reproducing kernel Hilbert space formula_24 by minimizing the regularized empirical risk: formula_25 An exact solution is intractable due to the non-convex term formula_21, so research focuses on useful approximations. Other approaches that implement low-density separation include Gaussian process models, information regularization, and entropy minimization (of which TSVM is a special case). Laplacian regularization. Laplacian regularization has been historically approached through graph-Laplacian. Graph-based methods for semi-supervised learning use a graph representation of the data, with a node for each labeled and unlabeled example. The graph may be constructed using domain knowledge or similarity of examples; two common methods are to connect each data point to its formula_26 nearest neighbors or to examples within some distance formula_27. The weight formula_28 of an edge between formula_29 and formula_30 is then set to formula_31. Within the framework of manifold regularization, the graph serves as a proxy for the manifold. A term is added to the standard Tikhonov regularization problem to enforce smoothness of the solution relative to the manifold (in the intrinsic space of the problem) as well as relative to the ambient input space. The minimization problem becomes formula_32 where formula_24 is a reproducing kernel Hilbert space and formula_33 is the manifold on which the data lie. The regularization parameters formula_34 and formula_35 control smoothness in the ambient and intrinsic spaces respectively. The graph is used to approximate the intrinsic regularization term. Defining the graph Laplacian formula_36 where formula_37 and formula_38 is the vector formula_39, we have formula_40. The graph-based approach to Laplacian regularization is to put in relation with finite difference method. The Laplacian can also be used to extend the supervised learning algorithms: regularized least squares and support vector machines (SVM) to semi-supervised versions Laplacian regularized least squares and Laplacian SVM. Heuristic approaches. Some methods for semi-supervised learning are not intrinsically geared to learning from both unlabeled and labeled data, but instead make use of unlabeled data within a supervised learning framework. For instance, the labeled and unlabeled examples formula_41 may inform a choice of representation, distance metric, or kernel for the data in an unsupervised first step. Then supervised learning proceeds from only the labeled examples. In this vein, some methods learn a low-dimensional representation using the supervised data and then apply either low-density separation or graph-based methods to the learned representation. 
Iteratively refining the representation and then performing semi-supervised learning on said representation may further improve performance. "Self-training" is a wrapper method for semi-supervised learning. First a supervised learning algorithm is trained based on the labeled data only. This classifier is then applied to the unlabeled data to generate more labeled examples as input for the supervised learning algorithm. Generally only the labels the classifier is most confident in are added at each step. In natural language processing, a common self-training algorithm is the Yarowsky algorithm for problems like word sense disambiguation, accent restoration, and spelling correction. Co-training is an extension of self-training in which multiple classifiers are trained on different (ideally disjoint) sets of features and generate labeled examples for one another. In human cognition. Human responses to formal semi-supervised learning problems have yielded varying conclusions about the degree of influence of the unlabeled data. More natural learning problems may also be viewed as instances of semi-supervised learning. Much of human concept learning involves a small amount of direct instruction (e.g. parental labeling of objects during childhood) combined with large amounts of unlabeled experience (e.g. observation of objects without naming or counting them, or at least without feedback). Human infants are sensitive to the structure of unlabeled natural categories such as images of dogs and cats or male and female faces. Infants and children take into account not only unlabeled examples, but the sampling process from which labeled examples arise.
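A minimal self-training loop of the kind described above can be sketched as follows; scikit-learn's LogisticRegression is assumed as the base supervised learner, and the confidence threshold and synthetic data are illustrative choices rather than anything prescribed by the methods discussed here.

# Minimal self-training sketch (the "wrapper" method described above).
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=10):
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = LogisticRegression()
    for _ in range(max_rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        conf = proba.max(axis=1)
        confident = conf >= threshold          # keep only high-confidence pseudo-labels
        if not confident.any():
            break
        pseudo_y = proba.argmax(axis=1)[confident]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, clf.classes_[pseudo_y]])
        X_unlab = X_unlab[~confident]
    return clf

# Illustrative synthetic data: two Gaussian blobs, mostly unlabeled.
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 2)) + np.array([[2, 2]] * 10 + [[-2, -2]] * 10)
y_lab = np.array([1] * 10 + [0] * 10)
X_unlab = rng.normal(size=(200, 2)) + np.where(rng.random((200, 1)) < 0.5, 2, -2)
model = self_train(X_lab, y_lab, X_unlab)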
[ { "math_id": 0, "text": "l" }, { "math_id": 1, "text": "x_1,\\dots,x_l \\in X" }, { "math_id": 2, "text": "y_1,\\dots,y_l \\in Y" }, { "math_id": 3, "text": "u" }, { "math_id": 4, "text": "x_{l+1},\\dots,x_{l+u} \\in X" }, { "math_id": 5, "text": "x_{l+1},\\dots,x_{l+u}" }, { "math_id": 6, "text": "X" }, { "math_id": 7, "text": "Y" }, { "math_id": 8, "text": "p(x|y)" }, { "math_id": 9, "text": "p(y|x)" }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "y" }, { "math_id": 12, "text": "p(x|y)p(y)" }, { "math_id": 13, "text": "p(x)" }, { "math_id": 14, "text": "p(x|y,\\theta)" }, { "math_id": 15, "text": "\\theta" }, { "math_id": 16, "text": "p(x,y|\\theta)=p(y|\\theta)p(x|y,\\theta)" }, { "math_id": 17, "text": "f_\\theta(x) = \\underset{y}{\\operatorname{argmax}}\\ p(y|x,\\theta)" }, { "math_id": 18, "text": "\\lambda" }, { "math_id": 19, "text": "\\underset{\\Theta}{\\operatorname{argmax}}\\left( \\log p(\\{x_i,y_i\\}_{i=1}^l | \\theta) + \\lambda \\log p(\\{x_i\\}_{i=l+1}^{l+u}|\\theta)\\right) " }, { "math_id": 20, "text": "(1-yf(x))_+" }, { "math_id": 21, "text": "(1-|f(x)|)_+" }, { "math_id": 22, "text": "y=\\operatorname{sign}{f(x)}" }, { "math_id": 23, "text": "f^*(x) = h^*(x) + b" }, { "math_id": 24, "text": "\\mathcal{H}" }, { "math_id": 25, "text": "f^* = \\underset{f}{\\operatorname{argmin}}\\left( \n\\displaystyle \\sum_{i=1}^l(1-y_if(x_i))_+ + \\lambda_1 \\|h\\|_\\mathcal{H}^2 + \\lambda_2 \\sum_{i=l+1}^{l+u} (1-|f(x_i)|)_+\n\\right) " }, { "math_id": 26, "text": "k" }, { "math_id": 27, "text": "\\epsilon" }, { "math_id": 28, "text": "W_{ij}" }, { "math_id": 29, "text": "x_i" }, { "math_id": 30, "text": "x_j" }, { "math_id": 31, "text": "e^{-\\|x_i-x_j\\|^2 / \\epsilon^2}" }, { "math_id": 32, "text": "\\underset{f\\in\\mathcal{H}}{\\operatorname{argmin}}\\left(\n\\frac{1}{l}\\displaystyle\\sum_{i=1}^l V(f(x_i),y_i) + \n\\lambda_A \\|f\\|^2_\\mathcal{H} + \n\\lambda_I \\int_\\mathcal{M}\\|\\nabla_\\mathcal{M} f(x)\\|^2dp(x)\n\\right) " }, { "math_id": 33, "text": "\\mathcal{M}" }, { "math_id": 34, "text": "\\lambda_A" }, { "math_id": 35, "text": "\\lambda_I" }, { "math_id": 36, "text": "L = D - W" }, { "math_id": 37, "text": "D_{ii} = \\sum_{j=1}^{l+u} W_{ij}" }, { "math_id": 38, "text": "\\mathbf{f}" }, { "math_id": 39, "text": "[f(x_1)\\dots f(x_{l+u})]" }, { "math_id": 40, "text": "\\mathbf{f}^T L \\mathbf{f} = \\displaystyle\\sum_{i,j=1}^{l+u}W_{ij}(f_i-f_j)^2 \\approx \\int_\\mathcal{M}\\|\\nabla_\\mathcal{M} f(x)\\|^2dp(x)" }, { "math_id": 41, "text": "x_1,\\dots,x_{l+u}" } ]
https://en.wikipedia.org/wiki?curid=60968880
609717
Aluminium–silicon alloys
Aluminium–silicon alloys or Silumin is a general name for a group of lightweight, high-strength aluminium alloys based on an aluminum–silicon system (AlSi) that consist predominantly of aluminum - with silicon as the quantitatively most important alloying element. Pure AlSi alloys cannot be hardened, the commonly used alloys AlSiCu (with copper) and AlSiMg (with magnesium) can be hardened. The hardening mechanism corresponds to that of AlCu and AlMgSi. AlSi alloys are by far the most important of all aluminum cast materials. They are suitable for all casting processes and have excellent casting properties. Important areas of application are in car parts, including engine blocks and pistons. In addition, their use as a functional material for high-energy heat storage in electric vehicles is currently being focused on. Alloying elements. Aluminium-silicon alloys typically contain 3% to 25% silicon content. Casting is the primary use of aluminum-silicon alloys, but they can also be utilized in rapid solidification processes and powder metallurgy. Alloys used by powder metallurgy, rather than casting, may contain even more silicon, up to 50%. Silumin has a high resistance to corrosion, making it useful in humid environments. The addition of silicon to aluminum also makes it less viscous when in liquid form, which, together with its low cost (as both component elements are relatively cheap to extract), makes it a very good casting alloy. Silumin with good castability may give a stronger finished casting than a potentially stronger alloy that is more difficult to cast. All aluminum alloys also contain iron as an admixture. It is generally undesirable because it lowers strength and elongation at break. Together with Al and Si it forms the formula_0-phase AlFeSi, which is present in the structure in the form of small needles. However, iron also prevents the castings from sticking to the molds in die casting, so that special die-casting alloys contain a small amount of iron, while iron is avoided as far as possible in other alloys. Manganese also reduces the tendency to stick, but affects the mechanical properties less than iron. Manganese forms a phase with other elements that is in the form of globulitic (round) grains. Copper occurs in almost all technical alloys, at least as an admixture. From a content of 0.05% Cu, the corrosion resistance is reduced. Additions of about 1% Cu are alloyed to increase strength through solid solution strengthening. This also improves machinability. In the case of the AlSiCu alloys, higher proportions of copper are also added, which means that the materials can be hardened (see Aluminum-copper alloy). Together with silicon, magnesium forms the Mg2Si (magnesium silicide) phase, which is the basis of hardenability, similar to aluminum-magnesium-silicon alloys (AlMgSi). In these there is an excess of Mg, so the structure consists of aluminum mixed crystal with magnesium and Mg2Si. In the AlSiMg alloys, on the other hand, there is an excess of silicon and the structure consists of aluminum mixed crystal, silicon and Mg2Si. Small additions of titanium and boron serve to refine the grain. Pure aluminium–silicon alloys. Aluminum forms a eutectic with silicon, which is at 577 °C, with a Si content of 12.5% or 12.6%. Up to 1.65% Si can be dissolved in aluminum at this temperature. However, the solubility decreases rapidly with temperature. At 500 °C it is still 0.8% Si, at 400 °C 0.3% Si and at 250 °C only 0.05% Si. At room temperature, silicon is practically insoluble. 
Aluminum cannot be dissolved in silicon at all, not even at high temperatures. Only in the molten state are the two completely miscible. Increases in strength due to solid solution strengthening are therefore negligible. Pure AlSi alloys are smelted from primary aluminium, while AlSi alloys with other elements are usually smelted from secondary aluminium. The pure AlSi alloys are of medium strength and non-hardenable, but corrosion resistant, even in salt water environments. The exact properties depend on whether the composition of the alloy is above, near or below the eutectic point. Castability increases with increasing Si content and is best at about 17% Si; the mechanical properties are best at 6% to 12% Si. Otherwise, AlSi alloys generally have favorable casting properties: the shrinkage is only 1.25% and the influence of the wall thickness is small. Hypereutectic alloys with a silicon content of 16 to 19%, such as Alusil, can be used in high-wear applications such as pistons, cylinder liners and internal combustion engine blocks. The metal is etched after casting, exposing hard, wear-resistant silicon precipitates. The rest of the surface becomes slightly porous and retains oil. Overall this makes for an excellent bearing surface, at lower cost than traditional bronze bearing bushes. Hypoeutectic alloys. Hypoeutectic alloys have a silicon content of less than 12%. In these alloys, the aluminum solidifies first. As the temperature falls and the proportion of solidified aluminum increases, the silicon content of the residual melt increases until the eutectic point is reached. Then the entire residual melt solidifies as a eutectic. The microstructure is consequently characterized by primary aluminium, which is often present in the form of dendrites, and the eutectic of the residual melt lying between them. The lower the silicon content, the larger the dendrites. In pure AlSi alloys, the eutectic is often present in a degenerate form. Instead of the fine structure that is otherwise typical of eutectics, with its good mechanical properties, AlSi forms a coarse-grained structure on slow cooling, in which silicon forms large plates or needles. These can sometimes be seen with the naked eye and make the material brittle. This is not a problem in chill casting, since the cooling rates are high enough to avoid degeneration. In sand casting in particular, with its slow cooling rates, additional elements are added to the melt to prevent degeneration; sodium, strontium and antimony are suitable. These elements are added to the melt at around 720 °C to 780 °C and cause supercooling that reduces the diffusion of silicon, so that a fine eutectic forms, which gives higher strength and elongation at break. Eutectic and near-eutectic alloys. Alloys with 11% Si to 13% Si are counted among the eutectic alloys. Annealing improves elongation and fatigue strength. Solidification is shell-forming in untreated alloys and smooth-walled in refined alloys, resulting in very good castability. Above all, the flowability and mold-filling ability are very good, which is why eutectic alloys are suitable for thin-walled parts. Hypereutectic alloys. Alloys with more than 13% Si are referred to as hypereutectic (or over-eutectic). The Si content is usually up to 17%, with special piston alloys also over 20%. Hypereutectic alloys have very low thermal expansion and are very wear resistant. 
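As a rough illustration of the hypoeutectic and hypereutectic solidification just described, the lever rule can be used to estimate how much of a casting solidifies as primary phase and how much as eutectic. The Python sketch below is illustrative only: it assumes equilibrium conditions and uses the phase-boundary values quoted above (eutectic at 12.6% Si, maximum solubility 1.65% Si in aluminum); real castings deviate from this idealization.

```python
# Lever-rule estimate of phase fractions just below the eutectic temperature (577 °C).
# Assumed equilibrium values from the text: eutectic at 12.6 wt% Si,
# maximum solubility of Si in the aluminum solid solution 1.65 wt%.
EUTECTIC_SI = 12.6
MAX_SOLUBILITY_SI = 1.65
PURE_SI = 100.0

def phase_fractions(alloy_si: float) -> dict:
    """Return approximate weight fractions of primary phase and eutectic."""
    if alloy_si < MAX_SOLUBILITY_SI or alloy_si > PURE_SI:
        raise ValueError("composition outside the two-phase estimate")
    if alloy_si <= EUTECTIC_SI:      # hypoeutectic: primary aluminum + eutectic
        primary = (EUTECTIC_SI - alloy_si) / (EUTECTIC_SI - MAX_SOLUBILITY_SI)
    else:                            # hypereutectic: primary silicon + eutectic
        primary = (alloy_si - EUTECTIC_SI) / (PURE_SI - EUTECTIC_SI)
    return {"primary": primary, "eutectic": 1.0 - primary}

print(phase_fractions(7.0))   # a typical hypoeutectic casting composition (~7% Si)
print(phase_fractions(17.0))  # a hypereutectic piston-type composition
```

For the 7% Si alloy this gives roughly half primary aluminum and half eutectic, while the 17% Si alloy is almost entirely eutectic with a few percent of primary silicon, which is consistent with the microstructures described above.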
In contrast to many other alloys, AlSi alloys do not show their maximum fluidity near the eutectic, but at 14 to 16% Si, or, with overheating of the melt, at 17% to 18% Si. The tendency to hot cracking is minimal in the range from 10% to 14% Si. In hypereutectic alloys, silicon crystals solidify first in the melt, until the remaining melt solidifies as a eutectic. For grain refinement, copper-phosphorus alloys are used. The hard and brittle silicon leads to increased tool wear during subsequent machining, which is why diamond tools are sometimes used (see also Machinability). Aluminium–silicon–magnesium alloys. AlSiMg alloys with small additions of magnesium (below 0.3 to 0.6% Mg) can be hardened both cold and warm. The proportion of magnesium decreases with increasing silicon content, which is between 5% Si and 10% Si. They are related to the AlMgSi alloys: both are based on the fact that magnesium silicide Mg2Si is precipitated, which is present in the material in the form of finely divided particles and thus increases the strength. In addition, magnesium increases the elongation at break. In contrast to AlSiCu, which can also be hardened, these alloys are corrosion-resistant and easy to cast. However, copper is present as an impurity in some AlSiMg alloys, which reduces corrosion resistance. This applies above all to materials that have been melted from secondary aluminium. Aluminium–silicon–copper alloys. AlSiCu alloys can also be hardened by heat treatment and are additionally high-strength, but they are susceptible to corrosion and somewhat less castable, though still adequately so. They are often smelted from secondary aluminium. The hardening is based on the same mechanism as in the AlCu alloys. The copper content is 1% to 4%, that of silicon 4% to 10%. Small additions of magnesium improve strength. Compositions of standardized varieties. All data are in percent by mass; the rest is aluminum. Standardized compositions are defined for both wrought alloys and cast alloys. 4000 series. The 4000 series alloys are alloyed with silicon. Variations of aluminium–silicon alloys intended for casting (and therefore not included in the 4000 series) are also known as silumin. Applications. Within the Aluminum Association numeric designation system, Silumin corresponds to alloys of two systems: 3xx.x, aluminum–silicon alloys also containing magnesium and/or copper, and 4xx.x, binary aluminum–silicon alloys. Copper increases strength, but reduces corrosion resistance. In general, AlSi alloys are mainly used in foundries, especially for vehicle construction. Wrought alloys are very rare; they are used as a filler metal (welding wire) or as a solder in brazing. In some cases, forged AlSi pistons are also produced for aviation. AlSi eutectic casting alloys are used for machine parts, cylinder heads, cylinder crankcases, impellers and ribbed bodies. Hypereutectic (high-silicon) alloys are used for engine parts because of their low thermal expansion and high strength and wear resistance. This also includes special piston alloys with around 25% Si. Alloys with additions of magnesium (AlSiMg) can be hardened by heat treatment. An example use case is wheel rims produced by low-pressure casting, chosen for their good strength, corrosion resistance and elongation at break. Alloys with about 10% Si are used for cylinder heads, switch housings, intake manifolds, transformer tanks, wheel suspensions and oil pans. Alloys with 5% Si to 7% Si are used for chassis parts and wheels. At about 9% Si, they are suitable for structural components and body nodes. 
The copper-containing AlSiCu alloys are used for gear housings, crankcases and cylinder heads because of their heat resistance and hardenability. In addition to the use of AlSi alloys as a structural material, in which the mechanical properties are paramount, another area of application is latent heat storage. In the phase change of the alloy at 577 °C, thermal energy can be stored in the form of the enthalpy of fusion. AlSi can therefore also be used as a metallic phase change material (mPCM). Compared to other phase change materials, metals are characterized by a high specific energy density combined with high thermal conductivity. The latter is important for the rapid entry and exit of heat in the storage material and thus increases the performance of a heat storage system. These advantageous properties of mPCMs such as AlSi are of particular importance for vehicle applications, since low masses and volumes as well as high thermal performance are the main goals there. By using storage systems based on mPCMs, the range of electric cars can be increased by storing the thermal energy needed for heating in the mPCM instead of drawing it from the traction battery. Near-eutectic AlSi melts are also used for hot-dip aluminizing. In a continuous strip-coating process analogous to hot-dip galvanizing, steel strips are finished with a heat-resistant metallic coating 10 to 25 μm thick. Hot-dip aluminized sheet steel is an inexpensive material for thermally stressed components. Unlike zinc coatings, the coating does not provide cathodic protection under atmospheric conditions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
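To give a feel for the energy involved in such a latent heat store, the following Python sketch adds up the latent and sensible heat for a given mass of AlSi. The material constants are illustrative assumptions, not values from this article: an enthalpy of fusion on the order of 500 kJ/kg and a mean specific heat of about 1 kJ/(kg·K) are used as plausible round numbers for a near-eutectic AlSi alloy.

```python
# Order-of-magnitude estimate of the heat stored in an AlSi latent heat store.
# The first two constants are assumed, illustrative values; real figures depend on
# the exact alloy composition and the temperature range of operation.
LATENT_HEAT_KJ_PER_KG = 500.0      # assumed enthalpy of fusion of near-eutectic AlSi
SPECIFIC_HEAT_KJ_PER_KG_K = 1.0    # assumed mean specific heat of the alloy
MELTING_POINT_C = 577.0            # eutectic temperature quoted in the article

def stored_heat_kj(mass_kg: float, t_low_c: float, t_high_c: float) -> float:
    """Sensible heat over [t_low, t_high] plus latent heat if the melting point is crossed."""
    sensible = mass_kg * SPECIFIC_HEAT_KJ_PER_KG_K * (t_high_c - t_low_c)
    latent = mass_kg * LATENT_HEAT_KJ_PER_KG if t_low_c < MELTING_POINT_C <= t_high_c else 0.0
    return sensible + latent

# 10 kg of storage material cycled between 500 °C and 600 °C:
energy_kj = stored_heat_kj(10.0, 500.0, 600.0)
print(f"{energy_kj:.0f} kJ  (~{energy_kj / 3600:.1f} kWh)")
```

Under these assumptions, 10 kg of storage alloy holds roughly 6 MJ (about 1.7 kWh), most of it in the enthalpy of fusion, which illustrates why crossing the phase change dominates the storage capacity.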
[ { "math_id": 0, "text": "\\beta" } ]
https://en.wikipedia.org/wiki?curid=609717
609737
Duality (mathematics)
General concept and operation in mathematics In mathematics, a duality translates concepts, theorems or mathematical structures into other concepts, theorems or structures in a one-to-one fashion, often (but not always) by means of an involution operation: if the dual of A is B, then the dual of B is A. In other cases the dual of the dual – the double dual or bidual – is not necessarily identical to the original (also called "primal"). Such involutions sometimes have fixed points, so that the dual of A is A itself. For example, Desargues' theorem is self-dual in this sense under the "standard duality in projective geometry". In mathematical contexts, "duality" has numerous meanings. It has been described as "a very pervasive and important concept in (modern) mathematics" and "an important general theme that has manifestations in almost every area of mathematics". Many mathematical dualities between objects of two types correspond to pairings, bilinear functions from an object of one type and another object of the second type to some family of scalars. For instance, "linear algebra duality" corresponds in this way to bilinear maps from pairs of vector spaces to scalars, the "duality between distributions and the associated test functions" corresponds to the pairing in which one integrates a distribution against a test function, and "Poincaré duality" corresponds similarly to intersection number, viewed as a pairing between submanifolds of a given manifold. From a category theory viewpoint, duality can also be seen as a functor, at least in the realm of vector spaces. This functor assigns to each space its dual space, and the pullback construction assigns to each arrow "f": "V" → "W" its dual "f"∗: "W"∗ → "V"∗. Introductory examples. In the words of Michael Atiyah, &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Duality in mathematics is not a theorem, but a "principle". The following list of examples shows the common features of many dualities, but also indicates that the precise meaning of duality may vary from case to case. Complement of a subset. A simple duality arises from considering subsets of a fixed set S. To any subset "A" of "S", the complement "A"∁ consists of all those elements in S that are not contained in A. It is again a subset of S. Taking the complement has the following properties: applying it twice gives back the original set, ("A"∁)∁ = "A", and an inclusion of sets "A" ⊆ "B" is turned into an inclusion in the opposite direction, "B"∁ ⊆ "A"∁. This duality appears in topology as a duality between open and closed subsets of some fixed topological space X: a subset U of X is closed if and only if its complement in X is open. Because of this, many theorems about closed sets are dual to theorems about open sets. For example, any union of open sets is open, so dually, any intersection of closed sets is closed. The interior of a set is the largest open set contained in it, and the closure of the set is the smallest closed set that contains it. Because of the duality, the complement of the interior of any set U is equal to the closure of the complement of U. Dual cone. A duality in geometry is provided by the dual cone construction. Given a set formula_0 of points in the plane formula_1 (or more generally points in formula_2), the dual cone is defined as the set formula_3 consisting of those points formula_4 satisfying formula_5 for all points formula_6 in formula_0. Unlike for the complement of sets mentioned above, it is not in general true that applying the dual cone construction twice gives back the original set formula_0. 
Instead, formula_7 is the smallest cone containing formula_0, which may be bigger than formula_0. Therefore this duality is weaker than the one above, in that applying the operation twice gives back a possibly bigger set: every formula_0 is contained in formula_7. The other two properties carry over without change: an inclusion formula_8 is still turned into an inclusion in the opposite direction formula_9, and given two subsets formula_0 and formula_10 of the plane, formula_0 is contained in formula_11 if and only if formula_10 is contained in formula_12. Dual vector space. A very important example of a duality arises in linear algebra by associating to any vector space V its dual vector space "V"∗. Its elements are the linear functionals formula_13, where K is the field over which V is defined. The three properties of the dual cone carry over to this type of duality by replacing subsets of formula_1 by vector spaces and inclusions of such subsets by linear maps. A particular feature of this duality is that V and "V"∗ are isomorphic for certain objects, namely finite-dimensional vector spaces. However, this is in a sense a lucky coincidence, for giving such an isomorphism requires a certain choice, for example the choice of a basis of V. This is also true in the case that V is a Hilbert space, "via" the Riesz representation theorem. Galois theory. In all the dualities discussed before, the dual of an object is of the same kind as the object itself. For example, the dual of a vector space is again a vector space. Many duality statements are not of this kind. Instead, such dualities reveal a close relation between objects of seemingly different nature. One example of such a more general duality is from Galois theory. For a fixed Galois extension "K" / "F", one may associate the Galois group Gal("K"/"E") to any intermediate field E (i.e., "F" ⊆ "E" ⊆ "K"). This group is a subgroup of the Galois group G = Gal(K/F). Conversely, to any such subgroup "H" ⊆ "G" there is the fixed field "K""H" consisting of elements fixed by the elements in H. Compared to the above, this duality has the following features: an extension of intermediate fields "E" ⊆ "E"′ gives rise to an inclusion of Galois groups in the opposite direction, Gal("K"/"E"′) ⊆ Gal("K"/"E"), and the two assignments are inverse to each other, which is the content of the fundamental theorem of Galois theory. Order-reversing dualities. Given a poset P = (X, ≤) (short for partially ordered set; i.e., a set that has a notion of ordering but in which two elements cannot necessarily be placed in order relative to each other), the dual poset "P"d = (X, ≥) comprises the same ground set but the converse relation. Familiar examples of dual partial orders include the subset and superset relations on a collection of sets, the "divides" and "is a multiple of" relations on the integers, and the descendant and ancestor relations on a set of people. A "duality transform" is an involutive antiautomorphism f of a partially ordered set S, that is, an order-reversing involution "f" : "S" → "S". In several important cases these simple properties determine the transform uniquely up to some simple symmetries. For example, if "f"1, "f"2 are two duality transforms then their composition is an order automorphism of S; thus, any two duality transforms differ only by an order automorphism. For example, all order automorphisms of a power set S = 2"R" are induced by permutations of R. A concept defined for a partial order P will correspond to a "dual concept" on the dual poset "P"d. For instance, a minimal element of P will be a maximal element of "P"d: minimality and maximality are dual concepts in order theory. Other pairs of dual concepts are upper and lower bounds, lower sets and upper sets, and ideals and filters. In topology, open sets and closed sets are dual concepts: the complement of an open set is closed, and vice versa. In matroid theory, the family of sets complementary to the independent sets of a given matroid themselves form another matroid, called the dual matroid. 
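The complement operation discussed at the beginning of the article is precisely such a duality transform on the power set ordered by inclusion. The following Python sketch is illustrative only; it checks the two defining properties, involution and order reversal, by brute force on a small ground set.

```python
# Brute-force check that set complementation is an order-reversing involution
# on the power set of a small ground set, ordered by inclusion.
from itertools import chain, combinations

GROUND = frozenset({1, 2, 3, 4})

def power_set(s):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def complement(a):
    return GROUND - a

subsets = power_set(GROUND)

# Involution: applying the transform twice gives back the original subset.
assert all(complement(complement(a)) == a for a in subsets)

# Order reversal: A <= B (inclusion) holds exactly when complement(B) <= complement(A).
assert all((a <= b) == (complement(b) <= complement(a))
           for a in subsets for b in subsets)

print("complementation is an order-reversing involution on the power set")
```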
Dimension-reversing dualities. There are many distinct but interrelated dualities in which geometric or topological objects correspond to other objects of the same type, but with a reversal of the dimensions of the features of the objects. A classical example of this is the duality of the Platonic solids, in which the cube and the octahedron form a dual pair, the dodecahedron and the icosahedron form a dual pair, and the tetrahedron is self-dual. The dual polyhedron of any of these polyhedra may be formed as the convex hull of the center points of each face of the primal polyhedron, so the vertices of the dual correspond one-for-one with the faces of the primal. Similarly, each edge of the dual corresponds to an edge of the primal, and each face of the dual corresponds to a vertex of the primal. These correspondences are incidence-preserving: if two parts of the primal polyhedron touch each other, so do the corresponding two parts of the dual polyhedron. More generally, using the concept of polar reciprocation, any convex polyhedron, or more generally any convex polytope, corresponds to a dual polyhedron or dual polytope, with an i-dimensional feature of an n-dimensional polytope corresponding to an ("n" − "i" − 1)-dimensional feature of the dual polytope. The incidence-preserving nature of the duality is reflected in the fact that the face lattices of the primal and dual polyhedra or polytopes are themselves order-theoretic duals. Duality of polytopes and order-theoretic duality are both involutions: the dual polytope of the dual polytope of any polytope is the original polytope, and reversing all order-relations twice returns to the original order. Choosing a different center of polarity leads to geometrically different dual polytopes, but all have the same combinatorial structure. From any three-dimensional polyhedron, one can form a planar graph, the graph of its vertices and edges. The dual polyhedron has a dual graph, a graph with one vertex for each face of the polyhedron and with one edge for every two adjacent faces. The same concept of planar graph duality may be generalized to graphs that are drawn in the plane but that do not come from a three-dimensional polyhedron, or more generally to graph embeddings on surfaces of higher genus: one may draw a dual graph by placing one vertex within each region bounded by a cycle of edges in the embedding, and drawing an edge connecting any two regions that share a boundary edge. An important example of this type comes from computational geometry: the duality for any finite set S of points in the plane between the Delaunay triangulation of S and the Voronoi diagram of S. As with dual polyhedra and dual polytopes, the duality of graphs on surfaces is a dimension-reversing involution: each vertex in the primal embedded graph corresponds to a region of the dual embedding, each edge in the primal is crossed by an edge in the dual, and each region of the primal corresponds to a vertex of the dual. The dual graph depends on how the primal graph is embedded: different planar embeddings of a single graph may lead to different dual graphs. Matroid duality is an algebraic extension of planar graph duality, in the sense that the dual matroid of the graphic matroid of a planar graph is isomorphic to the graphic matroid of the dual graph. A kind of geometric duality also occurs in optimization theory, but not one that reverses dimensions. A linear program may be specified by a system of real variables (the coordinates for a point in Euclidean space formula_2), a system of linear constraints (specifying that the point lie in a halfspace; the intersection of these halfspaces is a convex polytope, the feasible region of the program), and a linear function (what to optimize). Every linear program has a dual problem with the same optimal solution, but the variables in the dual problem correspond to constraints in the primal problem and vice versa. 
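A small numerical example makes this correspondence concrete. The sketch below uses illustrative numbers (not taken from the article), sets up a primal minimization problem and its dual maximization problem, and checks with SciPy that both attain the same optimal value.

```python
# Primal:  minimize c·x  subject to  A x >= b,  x >= 0
# Dual:    maximize b·y  subject to  A^T y <= c,  y >= 0
# Strong duality: both problems attain the same optimal value (both are feasible here).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([3.0, 2.0])

# linprog minimizes and expects "<=" constraints, so A x >= b becomes -A x <= -b.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

# The dual maximizes b·y, i.e. minimizes -b·y, with constraints A^T y <= c.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

print("primal optimum:", primal.fun)    # 9.0, attained at x = (1, 3)
print("dual optimum:  ", -dual.fun)     # 9.0, attained at y = (1, 1)
assert np.isclose(primal.fun, -dual.fun)
```

Each dual variable is attached to one primal constraint and each primal variable to one dual constraint, which is exactly the variable-constraint exchange stated above.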
Duality in logic and set theory. In logic, functions or relations A and B are considered dual if A(¬x) = ¬B(x), where ¬ is logical negation. The basic duality of this type is the duality of the ∃ and ∀ quantifiers in classical logic. These are dual because ∃"x".¬"P"("x") and ¬∀"x"."P"("x") are equivalent for all predicates P in classical logic: if there exists an x for which P fails to hold, then it is false that P holds for all x (but the converse does not hold constructively). From this fundamental logical duality follow several others: for instance, conjunction ∧ and disjunction ∨ are dual, as expressed by De Morgan's laws ¬("x" ∧ "y") = ¬"x" ∨ ¬"y" and ¬("x" ∨ "y") = ¬"x" ∧ ¬"y". Other analogous dualities follow from these: for example, set-theoretic union and intersection are dual under the set complement operator. Bidual. The dual of the dual, called the bidual or double dual, depending on context, is often identical to the original (also called "primal"), and duality is an involution. In this case the bidual is not usually distinguished, and instead one only refers to the primal and dual. For example, the dual poset of the dual poset is exactly the original poset, since the converse relation is defined by an involution. In other cases, the bidual is not identical with the primal, though there is often a close connection. For example, the dual cone of the dual cone of a set contains the primal set (it is the smallest cone containing the primal set), and is equal if and only if the primal set is a cone. An important case is for vector spaces, where there is a map from the primal space to the double dual, "V" → "V"∗∗, known as the "canonical evaluation map". For finite-dimensional vector spaces this is an isomorphism, but these are not identical spaces: they are different sets. In category theory, this is generalized by the notion of dual objects discussed below, and a "natural transformation" from the identity functor to the double dual functor. For vector spaces (considered algebraically), this map is always an injection; see the discussion of dual vector spaces below. This can be generalized algebraically to a dual module. There is still a canonical evaluation map, but it is not always injective; if it is, this is known as a torsionless module; if it is an isomorphism, the module is called reflexive. For topological vector spaces (including normed vector spaces), there is a separate notion of a topological dual, denoted "V"′ to distinguish it from the algebraic dual V*, with different possible topologies on the dual, each of which defines a different bidual space "V"′′. In these cases the canonical evaluation map "V" → "V"′′ is not in general an isomorphism. If it is, this is known (for certain locally convex vector spaces with the strong dual space topology) as a reflexive space. In other cases, showing a relation between the primal and bidual is a significant result, as in Pontryagin duality (a locally compact abelian group is naturally isomorphic to its bidual). Dual objects. A group of dualities can be described by endowing, for any mathematical object X, the set of morphisms into some fixed object D with a structure similar to that of X. This is sometimes called internal Hom. In general, this yields a true duality only for specific choices of D, in which case X* = Hom (X, D) is referred to as the "dual" of X. There is always a map from X to the "bidual", that is to say, the dual of the dual, formula_14 It assigns to some "x" ∈ "X" the map that associates to any map "f" : "X" → "D" (i.e., an element in Hom("X", "D")) the value "f"("x"). Depending on the concrete duality considered and also depending on the object X, this map may or may not be an isomorphism. 
Dual vector spaces revisited. The construction of the dual vector space formula_15 mentioned in the introduction is an example of such a duality. Indeed, the set of morphisms, i.e., linear maps, forms a vector space in its own right. The map "V" → "V"∗∗ mentioned above is always injective. It is surjective, and therefore an isomorphism, if and only if the dimension of V is finite. This fact characterizes finite-dimensional vector spaces without referring to a basis. Isomorphisms of "V" and "V"∗ and inner product spaces. A vector space "V" is isomorphic to "V"∗ precisely if "V" is finite-dimensional. In this case, such an isomorphism is equivalent to a non-degenerate bilinear form formula_16 In this case "V" is called an inner product space. For example, if "K" is the field of real or complex numbers, any positive definite bilinear form gives rise to such an isomorphism. In Riemannian geometry, "V" is taken to be the tangent space of a manifold and such positive bilinear forms are called Riemannian metrics. Their purpose is to measure angles and distances. Thus, duality is a foundational basis of this branch of geometry. Another application of inner product spaces is the Hodge star, which provides a correspondence between the elements of the exterior algebra. For an "n"-dimensional vector space, the Hodge star operator maps "k"-forms to ("n" − "k")-forms. This can be used to formulate Maxwell's equations. In this guise, the duality inherent in the inner product space exchanges the role of magnetic and electric fields. Duality in projective geometry. In some projective planes, it is possible to find geometric transformations that map each point of the projective plane to a line, and each line of the projective plane to a point, in an incidence-preserving way. For such planes there arises a general principle of duality in projective planes: given any theorem in such a plane projective geometry, exchanging the terms "point" and "line" everywhere results in a new, equally valid theorem. A simple example is that the statement "two points determine a unique line, the line passing through these points" has the dual statement that "two lines determine a unique point, the intersection point of these two lines". For further examples, see Dual theorems. A conceptual explanation of this phenomenon in some planes (notably field planes) is offered by the dual vector space. In fact, the points in the projective plane formula_17 correspond to one-dimensional subvector spaces formula_18 while the lines in the projective plane correspond to subvector spaces formula_19 of dimension 2. The duality in such projective geometries stems from assigning to a one-dimensional subspace formula_20 the subspace of formula_21 consisting of those linear maps formula_22 which satisfy formula_23. As a consequence of the dimension formula of linear algebra, this space is two-dimensional, i.e., it corresponds to a line in the projective plane associated to formula_21. The (positive definite) bilinear form formula_24 yields an identification of this projective plane with formula_17. Concretely, the duality assigns to formula_18 its orthogonal complement formula_25. The explicit formulas in duality in projective geometry arise by means of this identification. 
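In homogeneous coordinates this identification takes a very concrete form: a point and a line are incident exactly when the standard bilinear pairing of their coordinate triples vanishes, and the line through two points and the intersection point of two lines are computed by the same cross-product formula. The following Python sketch is purely illustrative.

```python
# Point-line duality in the real projective plane, in homogeneous coordinates.
# A point p = [p0, p1, p2] is dual to the line {x : p0*x0 + p1*x1 + p2*x2 = 0}, and vice versa.
import numpy as np

def join(p, q):
    """Homogeneous coordinates of the line through two points."""
    return np.cross(p, q)

def meet(l, m):
    """Homogeneous coordinates of the intersection point of two lines (same formula)."""
    return np.cross(l, m)

def incident(point, line, tol=1e-9):
    """A point lies on a line exactly when the dual pairing vanishes."""
    return abs(np.dot(point, line)) < tol

p = np.array([1.0, 2.0, 1.0])
q = np.array([3.0, -1.0, 1.0])

line_pq = join(p, q)
assert incident(p, line_pq) and incident(q, line_pq)

# Dual statement: reading p and q as lines, their intersection point has the same
# coordinates as the line joining the points p and q.
assert np.allclose(meet(p, q), line_pq)

print("two points determine a line; dually, two lines determine a point")
```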
Topological vector spaces and Hilbert spaces. In the realm of topological vector spaces, a similar construction exists, replacing the dual by the topological dual vector space. There are several notions of topological dual space, and each of them gives rise to a certain concept of duality. A topological vector space formula_26 that is canonically isomorphic to its bidual formula_27 is called a reflexive space: formula_28 Examples include finite-dimensional normed vector spaces and Hilbert spaces; a Hilbert space is identified with its dual via the map formula_29, in accordance with the Riesz representation theorem. Important topological duals also arise in distribution theory: the distributions formula_30 form the dual of the space of smooth compactly supported test functions on U, the tempered distributions formula_31 form the dual of the Schwartz space, and the dual formula_32 of the space of smooth functions on U consists of the compactly supported distributions. Further dual objects. The dual lattice of a lattice L is given by formula_33, the set of linear functions on the real vector space containing the lattice that map the points of the lattice to the integers formula_34. This is used in the construction of toric varieties. The Pontryagin dual of a locally compact topological group "G" is given by formula_35, the continuous group homomorphisms with values in the circle (with multiplication of complex numbers as group operation). Dual categories. Opposite category and adjoint functors. In another group of dualities, the objects of one theory are translated into objects of another theory and the maps between objects in the first theory are translated into morphisms in the second theory, but with direction reversed. Using the parlance of category theory, this amounts to a contravariant functor "F" between two categories C and D, "F" : "C" → "D", which for any two objects "X" and "Y" of "C" gives a map Hom("X", "Y") → Hom("F"("Y"), "F"("X")). That functor may or may not be an equivalence of categories. There are various situations where such a functor is an equivalence between the opposite category "C"op of C and D. Using a duality of this type, every statement in the first theory can be translated into a "dual" statement in the second theory, where the direction of all arrows has to be reversed. Therefore, any duality between categories C and D is formally the same as an equivalence between "C" and "D"op (equivalently, between "C"op and "D"). However, in many circumstances the opposite categories have no inherent meaning, which makes duality an additional, separate concept. A category that is equivalent to its dual is called "self-dual". An example of a self-dual category is the category of Hilbert spaces. Many category-theoretic notions come in pairs in the sense that they correspond to each other while considering the opposite category. For example, Cartesian products and disjoint unions of sets are dual to each other in the sense that Hom("X", "A" × "B") = Hom("X", "A") × Hom("X", "B") and Hom("A" ⊔ "B", "X") = Hom("A", "X") × Hom("B", "X") for any set X. This is a particular case of a more general duality phenomenon, under which limits in a category C correspond to colimits in the opposite category "C"op; further concrete examples of this are epimorphisms vs. monomorphisms, in particular factor modules (or groups etc.) vs. submodules, direct products vs. direct sums (also called coproducts to emphasize the duality aspect). Therefore, in some cases, proofs of certain statements can be halved, using such a duality phenomenon. Further notions related by such a categorical duality are projective and injective modules in homological algebra, fibrations and cofibrations in topology and, more generally, model categories. Two functors "F" : "D" → "C" and "G" : "C" → "D" are adjoint if for all objects "c" in "C" and "d" in "D" the sets Hom("F"("d"), "c") and Hom("d", "G"("c")) are identified with each other in a natural way. Actually, the correspondence of limits and colimits is an example of adjoints, since there is an adjunction between the colimit functor, which assigns to any diagram in C indexed by some category I its colimit, and the diagonal functor, which maps any object c of C to the constant diagram that has c at all places: the colimit functor is left adjoint to the diagonal functor. Dually, the diagonal functor is left adjoint to the limit functor. Spaces and functions. 
Gelfand duality is a duality between commutative C*-algebras "A" and compact Hausdorff spaces "X": it assigns to "X" the space of continuous functions (which vanish at infinity) from "X" to C, the complex numbers. Conversely, the space "X" can be reconstructed from "A" as the spectrum of "A". Both Gelfand and Pontryagin duality can be deduced in a largely formal, category-theoretic way. In a similar vein there is a duality in algebraic geometry between commutative rings and affine schemes: to every commutative ring "A" there is an affine spectrum, Spec "A". Conversely, given an affine scheme "S", one gets back a ring by taking global sections of the structure sheaf O"S". In addition, ring homomorphisms are in one-to-one correspondence with morphisms of affine schemes, so that there is an equivalence (Commutative rings)op ≅ (affine schemes) Affine schemes are the local building blocks of schemes. The previous result therefore says that the local theory of schemes is the same as commutative algebra, the study of commutative rings. Noncommutative geometry draws inspiration from Gelfand duality and studies noncommutative C*-algebras as if they were functions on some imagined space. Tannaka–Krein duality is a non-commutative analogue of Pontryagin duality. Galois connections. In a number of situations, the two categories which are dual to each other actually arise from partially ordered sets, i.e., there is some notion of an object "being smaller" than another one. A duality that respects the orderings in question is known as a Galois connection. An example is the standard duality in Galois theory mentioned in the introduction: a bigger field extension corresponds, under the mapping that assigns to any extension "L" ⊃ "K" (inside some fixed bigger field Ω) the Galois group Gal (Ω / "L"), to a smaller group. The collection of all open subsets of a topological space "X" forms a complete Heyting algebra. There is a duality, known as Stone duality, connecting sober spaces and spatial locales. Pontryagin duality. Pontryagin duality gives a duality on the category of locally compact abelian groups: given any such group "G", the character group χ("G") = Hom ("G", "S"1) given by continuous group homomorphisms from "G" to the circle group "S"1 can be endowed with the compact-open topology. Pontryagin duality states that the character group is again locally compact abelian and that "G" ≅ χ(χ("G")). Moreover, discrete groups correspond to compact abelian groups; finite groups correspond to finite groups. On the one hand, Pontryagin duality is a special case of Gelfand duality. On the other hand, it is the conceptual reason for Fourier analysis; see below. Analytic dualities. In analysis, problems are frequently solved by passing to the dual description of functions and operators. The Fourier transform switches between functions on a vector space and its dual: formula_36 and conversely formula_37 If "f" is an "L"2-function on R or R"N", say, then so is formula_38, and formula_39. Moreover, the transform interchanges operations of multiplication and convolution on the corresponding function spaces. A conceptual explanation of the Fourier transform is obtained by the aforementioned Pontryagin duality, applied to the locally compact groups R (or R"N" etc.): any character of R is given by ξ ↦ "e"−2"πixξ". The dualizing character of the Fourier transform has many other manifestations, for example, in alternative descriptions of quantum mechanical systems in terms of coordinate and momentum representations. 
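The discrete analogue of these statements can be checked numerically. In the unitary normalization of the discrete Fourier transform, applying the transform twice returns the input sequence reversed (the discrete counterpart of formula_39), and the transform preserves the L2 norm. The Python sketch below is illustrative only and uses NumPy's FFT.

```python
# Discrete check of two Fourier-duality statements:
#   1. applying the (unitary) transform twice reverses the signal: (FFf)[n] = f[-n mod N]
#   2. the transform preserves the L2 norm (Plancherel/Parseval)
import numpy as np

rng = np.random.default_rng(0)
N = 64
f = rng.normal(size=N) + 1j * rng.normal(size=N)

F = lambda x: np.fft.fft(x, norm="ortho")    # unitary normalization of the DFT

double = F(F(f))
reversed_f = f[(-np.arange(N)) % N]          # f[0], f[N-1], f[N-2], ..., f[1]
assert np.allclose(double, reversed_f)

assert np.isclose(np.linalg.norm(F(f)), np.linalg.norm(f))

print("double transform = reversal; the norm is preserved")
```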
Homology and cohomology. Theorems showing that certain objects of interest are the dual spaces (in the sense of linear algebra) of other objects of interest are often called "dualities". Many of these dualities are given by a bilinear pairing of two "K"-vector spaces "A" ⊗ "B" → "K". For perfect pairings, there is, therefore, an isomorphism of "A" to the dual of "B". Poincaré duality. Poincaré duality of a smooth compact complex manifold "X" is given by a pairing of singular cohomology with C-coefficients (equivalently, sheaf cohomology of the constant sheaf C) H"i"(X) ⊗ H2"n"−"i"(X) → C, where "n" is the (complex) dimension of "X". Poincaré duality can also be expressed as a relation of singular homology and de Rham cohomology, by asserting that the map formula_40 (integrating a differential "k"-form over a (2"n" − "k")-dimensional (real) cycle) is a perfect pairing. Poincaré duality also reverses dimensions; it corresponds to the fact that, if a topological manifold is represented as a cell complex, then the dual of the complex (a higher-dimensional generalization of the planar graph dual) represents the same manifold. In Poincaré duality, this homeomorphism is reflected in an isomorphism of the "k"th homology group and the ("n" − "k")th cohomology group. Duality in algebraic and arithmetic geometry. The same duality pattern holds for a smooth projective variety over a separably closed field, using l-adic cohomology with Qℓ-coefficients instead. This is further generalized to possibly singular varieties, using intersection cohomology instead, a duality called Verdier duality. Serre duality and coherent duality are similar to the statements above, but apply to the cohomology of coherent sheaves instead. With increasing level of generality, it turns out, an increasing amount of technical background is helpful or necessary to understand these theorems: the modern formulation of these dualities can be done using derived categories and certain direct and inverse image functors of sheaves (with respect to the classical analytical topology on manifolds for Poincaré duality, l-adic sheaves and the étale topology in the second case, and with respect to coherent sheaves for coherent duality). Yet another group of similar duality statements is encountered in arithmetic: étale cohomology of finite, local and global fields (also known as Galois cohomology, since étale cohomology over a field is equivalent to group cohomology of the (absolute) Galois group of the field) admits similar pairings. The absolute Galois group "G"(F"q") of a finite field, for example, is isomorphic to formula_41, the profinite completion of Z, the integers. Therefore, the perfect pairing (for any "G"-module "M") H"n"("G", "M") × H1−"n" ("G", Hom ("M", Q/Z)) → Q/Z is a direct consequence of Pontryagin duality of finite groups. For local and global fields, similar statements exist (local duality and global or Poitou–Tate duality). See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C" }, { "math_id": 1, "text": "\\mathbb R^2" }, { "math_id": 2, "text": "\\mathbb R^n" }, { "math_id": 3, "text": "C^* \\subseteq \\mathbb R^2" }, { "math_id": 4, "text": "(x_1, x_2)" }, { "math_id": 5, "text": "x_1 c_1 + x_2 c_2 \\ge 0" }, { "math_id": 6, "text": "(c_1, c_2)" }, { "math_id": 7, "text": "C^{**}" }, { "math_id": 8, "text": "C \\subseteq D" }, { "math_id": 9, "text": "D^* \\subseteq C^*" }, { "math_id": 10, "text": "D" }, { "math_id": 11, "text": "D^*" }, { "math_id": 12, "text": "C^*" }, { "math_id": 13, "text": "\\varphi: V \\to K" }, { "math_id": 14, "text": "X \\to X^{**} := (X^*)^* = \\operatorname{Hom}(\\operatorname{Hom}(X, D), D)." }, { "math_id": 15, "text": "V^* = \\operatorname{Hom}(V, K)" }, { "math_id": 16, "text": "\\varphi: V \\times V \\to K" }, { "math_id": 17, "text": "\\mathbb{RP}^2" }, { "math_id": 18, "text": "V \\subset \\mathbb R^3" }, { "math_id": 19, "text": "W" }, { "math_id": 20, "text": "V" }, { "math_id": 21, "text": "(\\mathbb R^3)^*" }, { "math_id": 22, "text": "f: \\mathbb R^3 \\to \\mathbb R" }, { "math_id": 23, "text": "f (V) = 0" }, { "math_id": 24, "text": "\\langle \\cdot , \\cdot \\rangle : \\R^3 \\times \\R^3 \\to \\R, \\langle x , y \\rangle = \\sum_{i=1}^3 x_i y_i" }, { "math_id": 25, "text": "\\left\\{w \\in \\R^3, \\langle v, w \\rangle = 0 \\text{ for all } v \\in V\\right\\}" }, { "math_id": 26, "text": "X" }, { "math_id": 27, "text": "X''" }, { "math_id": 28, "text": "X\\cong X''." }, { "math_id": 29, "text": "H \\to H^*, v \\mapsto (w \\mapsto \\langle w,v \\rangle)," }, { "math_id": 30, "text": "{\\mathcal D}'(U)" }, { "math_id": 31, "text": "{\\mathcal S}'(\\R^n)" }, { "math_id": 32, "text": "{\\mathcal C}^\\infty(U)'" }, { "math_id": 33, "text": "\\operatorname{Hom} (L, \\mathbb{Z})," }, { "math_id": 34, "text": "\\mathbb{Z}" }, { "math_id": 35, "text": "\\operatorname{Hom} (G, S^1)," }, { "math_id": 36, "text": "\\widehat{f}(\\xi) := \\int_{-\\infty}^\\infty f(x)\\ e^{- 2\\pi i x \\xi} \\, dx, " }, { "math_id": 37, "text": "f(x) = \\int_{-\\infty}^\\infty \\widehat{f}(\\xi)\\ e^{2 \\pi i x \\xi} \\, d\\xi." }, { "math_id": 38, "text": "\\widehat{f}" }, { "math_id": 39, "text": "f(-x) = \\widehat{\\widehat{f}}(x)" }, { "math_id": 40, "text": "(\\gamma, \\omega) \\mapsto \\int_\\gamma \\omega" }, { "math_id": 41, "text": "\\widehat {\\mathbf Z}" } ]
https://en.wikipedia.org/wiki?curid=609737
60976257
Eighth power
In arithmetic and algebra the eighth power of a number "n" is the result of multiplying eight instances of "n" together. So: "n"8 = "n" × "n" × "n" × "n" × "n" × "n" × "n" × "n". Eighth powers are also formed by multiplying a number by its seventh power, or the fourth power of a number by itself. The sequence of eighth powers of integers is: 0, 1, 256, 6561, 65536, 390625, 1679616, 5764801, 16777216, 43046721, 100000000, 214358881, 429981696, 815730721, 1475789056, 2562890625, 4294967296, 6975757441, 11019960576, 16983563041, 25600000000, 37822859361, 54875873536, 78310985281, 110075314176, 152587890625 ... (sequence in the OEIS) In the archaic notation of Robert Recorde, the eighth power of a number was called the "zenzizenzizenzic". Algebra and number theory. Polynomial equations of degree 8 are octic equations. These have the form formula_0 The smallest known eighth power that can be written as a sum of eight eighth powers is formula_1 The sum of the reciprocals of the nonzero eighth powers is the Riemann zeta function evaluated at 8, which can be expressed in terms of the eighth power of pi: formula_2 (OEIS: ) This is an example of a more general expression for evaluating the Riemann zeta function at positive even integers, in terms of the Bernoulli numbers: formula_3 Physics. In aeroacoustics, Lighthill's eighth power law states that the power of the sound created by a turbulent motion, far from the turbulence, is proportional to the eighth power of the characteristic turbulent velocity. The ordered phase of the two-dimensional Ising model exhibits an inverse eighth power dependence of the order parameter upon the reduced temperature. The Casimir–Polder force between two molecules decays as the inverse eighth power of the distance between them. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
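The identities quoted above can be checked directly; the short Python sketch below is purely a verification aid, using exact integer arithmetic for the sum of eighth powers and a partial sum to approximate ζ(8).

```python
# Verify the eighth-power identity quoted above and approximate zeta(8) = pi^8 / 9450.
from math import pi

terms = [1324, 1190, 1088, 748, 524, 478, 223, 90]
assert sum(t**8 for t in terms) == 1409**8     # exact integer arithmetic

zeta8_partial = sum(1 / n**8 for n in range(1, 100_000))
print(f"partial sum of 1/n^8 : {zeta8_partial:.10f}")
print(f"pi^8 / 9450          : {pi**8 / 9450:.10f}")   # matches the value quoted above
```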
[ { "math_id": 0, "text": "ax^8+bx^7+cx^6+dx^5+ex^4+fx^3+gx^2+hx+k=0.\\," }, { "math_id": 1, "text": "1409^8 = 1324^8 + 1190^8 + 1088^8 + 748^8 + 524^8 + 478^8 + 223^8 + 90^8." }, { "math_id": 2, "text": "\\zeta(8) = \\frac{1}{1^8} + \\frac{1}{2^8} + \\frac{1}{3^8} + \\cdots = \\frac{\\pi^8}{9450} = 1.00407 \\dots" }, { "math_id": 3, "text": "\\zeta(2n) = (-1)^{n+1}\\frac{B_{2n}(2\\pi)^{2n}}{2(2n)!}." } ]
https://en.wikipedia.org/wiki?curid=60976257
609808
Kinesin
Eukaryotic motor protein A kinesin is a protein belonging to a class of motor proteins found in eukaryotic cells. Kinesins move along microtubule (MT) filaments and are powered by the hydrolysis of adenosine triphosphate (ATP) (thus kinesins are ATPases, a type of enzyme). The active movement of kinesins supports several cellular functions including mitosis, meiosis and transport of cellular cargo, such as in axonal transport, and intraflagellar transport. Most kinesins walk towards the plus end of a microtubule, which, in most cells, entails transporting cargo such as protein and membrane components from the center of the cell towards the periphery. This form of transport is known as anterograde transport. In contrast, dyneins are motor proteins that move toward the minus end of a microtubule in retrograde transport. Discovery. The first kinesins to be discovered were microtubule-based anterograde intracellular transport motors in 1985, based on their motility in cytoplasm extruded from the giant axon of the squid. The founding member of this superfamily, kinesin-1, was isolated as a heterotetrameric fast axonal organelle transport motor consisting of four parts: two identical motor subunits (called Kinesin Heavy Chain (KHC) molecules) and two other molecules each known as a Kinesin Light Chain (KLC). These were discovered via microtubule affinity purification from neuronal cell extracts. Subsequently, a different, heterotrimeric plus-end-directed MT-based motor named kinesin-2, consisting of two distinct KHC-related motor subunits and an accessory "KAP" subunit, was purified from echinoderm egg/embryo extracts and is best known for its role in transporting protein complexes (intraflagellar transport particles) along axonemes during ciliogenesis. Molecular genetic and genomic approaches have led to the recognition that the kinesins form a diverse superfamily of motors that are responsible for multiple intracellular motility events in eukaryotic cells. For example, the genomes of mammals encode more than 40 kinesin proteins, organized into at least 14 families named kinesin-1 through kinesin-14. Structure. Overall structure. Members of the kinesin superfamily vary in shape but the prototypical kinesin-1 motor consists of two Kinesin Heavy Chain (KHC) molecules which form a protein dimer (molecule pair) that binds two light chains (KLCs), which are unique for different cargos. The heavy chain of kinesin-1 comprises a globular head (the motor domain) at the amino terminal end connected via a short, flexible neck linker to the stalk – a long, central alpha-helical coiled coil domain – that ends in a carboxy terminal tail domain which associates with the light-chains. The stalks of two KHCs intertwine to form a coiled coil that directs dimerization of the two KHCs. In most cases transported cargo binds to the kinesin light chains, at the TPR motif sequence of the KLC, but in some cases cargo binds to the C-terminal domains of the heavy chains. Kinesin motor domain. The head is the signature of kinesin and its amino acid sequence is well conserved among various kinesins. Each head has two separate binding sites: one for the microtubule and the other for ATP. ATP binding and hydrolysis as well as ADP release change the conformation of the microtubule-binding domains and the orientation of the neck linker with respect to the head; this results in the motion of the kinesin. 
Several structural elements in the head, including a central beta-sheet domain and the Switch I and II domains, have been implicated as mediating the interactions between the two binding sites and the neck domain. Kinesins are structurally related to G proteins, which hydrolyze GTP instead of ATP. Several structural elements are shared between the two families, notably the Switch I and Switch II domains. Basic kinesin regulation. Kinesins tend to have low basal enzymatic activity, which becomes significant when they are activated by microtubules. In addition, many members of the kinesin superfamily can be self-inhibited by the binding of the tail domain to the motor domain. Such self-inhibition can then be relieved via additional regulation such as binding to cargo, cargo adapters or other microtubule-associated proteins. Cargo transport. In the cell, small molecules, such as gases and glucose, diffuse to where they are needed. Large molecules synthesised in the cell body, intracellular components such as vesicles, and organelles such as mitochondria are too large (and the cytosol too crowded) to be able to diffuse to their destinations. Motor proteins fulfill the role of transporting large cargo about the cell to their required destinations. Kinesins are motor proteins that transport such cargo by walking unidirectionally along microtubule tracks, hydrolysing one molecule of adenosine triphosphate (ATP) at each step. It was thought that ATP hydrolysis powered each step, the energy released propelling the head forwards to the next binding site. However, it has been proposed that the head diffuses forward and that the force of binding to the microtubule is what pulls the cargo along. In addition, viruses such as HIV exploit kinesins to allow virus particle shuttling after assembly. There is significant evidence that cargoes in vivo are transported by multiple motors. Direction of motion. Motor proteins travel in a specific direction along a microtubule. Microtubules are polar, meaning that the heads only bind to the microtubule in one orientation, while ATP binding gives each step its direction through a process known as neck linker zippering. It has long been known that kinesins move cargo towards the plus (+) end of a microtubule, which is known as anterograde (or orthograde) transport. However, it has recently been discovered that in budding yeast cells the kinesin Cin8 (a member of the kinesin-5 family) can also move toward the minus end, i.e. perform retrograde transport. This means that these unique yeast kinesin homotetramers have the novel ability to move bidirectionally. So far, kinesin has only been shown to move toward the minus end when in a group, with motors sliding in the antiparallel direction in an attempt to separate microtubules. This dual directionality has been observed under identical conditions where free Cin8 molecules move towards the minus end, but cross-linking Cin8 motors move toward the plus ends of each cross-linked microtubule. One specific study tested the speed at which Cin8 motors moved; the results yielded a range of about 25–55 nm/s, in the direction of the spindle poles. On an individual basis, it has been found that by varying ionic conditions Cin8 motors can become as fast as 380 nm/s. It is suggested that the bidirectionality of yeast kinesin-5 motors such as Cin8 and Cut7 is a result of coupling with other Cin8 motors and helps to fulfill the role of dynein in budding yeast, as opposed to the human homologue of these motors, the plus-directed Eg5. 
Kinesin-14 family proteins (such as "Drosophila melanogaster" NCD, budding yeast KAR3, and "Arabidopsis thaliana" ATK5) have likewise been found to walk in the opposite direction, toward the microtubule minus end. This is not typical of kinesins; rather, it is an exception to the normal direction of movement. Another type of motor protein, known as dyneins, move towards the minus end of the microtubule. Thus, they transport cargo from the periphery of the cell towards the center. An example of this would be transport occurring from the terminal boutons of a neuronal axon to the cell body (soma). This is known as "retrograde transport". Mechanism of movement. In 2023, direct visualization of kinesin "walking" along a microtubule in real time was reported. In a "hand-over-hand" mechanism, the kinesin heads step past one another, alternating the lead position. Thus in each step the leading head becomes the trailing head, while the trailing head becomes the leading head. Theoretical modeling. A number of theoretical models of the molecular motor protein kinesin have been proposed. Many challenges are encountered in theoretical investigations given the remaining uncertainties about the roles of protein structures, the precise way energy from ATP is transformed into mechanical work, and the roles played by thermal fluctuations. This is a rather active area of research. There is a particular need for approaches that make a better link between the molecular architecture of the protein and the data obtained from experimental investigations. The single-molecule dynamics are already well described, but it seems that these nanoscale machines typically work in large teams. Descriptions of the single-molecule dynamics are based on the distinct chemical states of the motor and on observations of its mechanical steps. For small concentrations of adenosine diphosphate, the motor's behaviour is governed by the competition of two chemomechanical motor cycles which determine the motor's stall force. A third cycle becomes important for large ADP concentrations. Models with a single cycle have been discussed too. Seiferth et al. demonstrated how quantities such as the velocity or the entropy production of a motor change when adjacent states are merged in a multi-cyclic model until eventually the number of cycles is reduced. Recent experimental research has shown that kinesins, while moving along microtubules, interact with each other, the interactions being short-range and weakly attractive (1.6 ± 0.5 "k"BT). One model that has been developed takes these particle interactions into account, with the dynamic rates changing according to the interaction energy. If the energy is positive, the rate of creating bonds (q) will be higher, while the rate of breaking bonds (r) will be lower. The rates of entry into and exit from the microtubule are also changed by the energy (see figure 1 in reference 30). If the second site is occupied, the rate of entry will be α·q, and if the last-but-one site is occupied, the rate of exit will be β·r. This theoretical approach agrees with the results of Monte Carlo simulations for this model, especially for the limiting case of very large negative energy. The results of the usual totally asymmetric simple exclusion process (TASEP) can be recovered from this model by setting the interaction energy equal to zero. formula_0 
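The interacting lattice-gas picture described above can be sketched in a few lines of code. The following Python program is a toy Monte Carlo illustration only: the lattice length, the boundary rates α and β, the bare hopping rate and the interaction energy are assumed values chosen for demonstration, and the random-sequential update rule is a simplification rather than the full model of the cited work. The rates of bond creation and breaking obey formula_0 by construction.

```python
# Toy Monte Carlo sketch of a TASEP-like lattice gas with short-range interactions,
# in the spirit of the interacting-kinesin model described above.
# All parameter values are illustrative assumptions, not fitted to experiments.
import math
import random

L = 200                 # lattice sites along one protofilament
ALPHA, BETA = 0.3, 0.5  # entry / exit rates at the two ends
P = 1.0                 # bare forward hopping rate
E_KT = 1.6              # interaction energy in units of kB*T (weakly attractive)

Q = P * math.exp(+E_KT / 2)   # rate of moves that create a bond with a neighbour
R = P * math.exp(-E_KT / 2)   # rate of moves that break a bond, so Q / R = exp(E / kB*T)
MAX_RATE = max(ALPHA * Q, BETA * R, Q, R, P, ALPHA, BETA)

def step(lattice, rng):
    """One random-sequential update; rates are turned into acceptance probabilities."""
    i = rng.randrange(-1, L)                    # -1 encodes an entry attempt
    if i == -1:                                 # entry at site 0
        if lattice[0] == 0:
            rate = ALPHA * Q if lattice[1] else ALPHA
            if rng.random() < rate / MAX_RATE:
                lattice[0] = 1
    elif i == L - 1:                            # exit from the last site
        if lattice[L - 1] == 1:
            rate = BETA * R if lattice[L - 2] else BETA
            if rng.random() < rate / MAX_RATE:
                lattice[L - 1] = 0
    elif lattice[i] == 1 and lattice[i + 1] == 0:
        breaks = i > 0 and lattice[i - 1] == 1          # leaves a neighbour behind
        creates = i + 2 < L and lattice[i + 2] == 1     # lands next to a neighbour
        rate = P * (R / P if breaks else 1.0) * (Q / P if creates else 1.0)
        if rng.random() < rate / MAX_RATE:
            lattice[i], lattice[i + 1] = 0, 1

rng = random.Random(1)
lattice = [0] * L
for _ in range(2_000_000):
    step(lattice, rng)
print("steady-state coverage ≈", sum(lattice) / L)
```

Setting E_KT to zero reduces Q and R to the bare rate P and recovers the ordinary TASEP, as stated above.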
Mitosis. In recent years, it has been found that microtubule-based molecular motors (including a number of kinesins) have a role in mitosis (cell division). Kinesins are important for proper spindle length and are involved in sliding microtubules apart within the spindle during prometaphase and metaphase, as well as depolymerizing microtubule minus ends at centrosomes during anaphase. Specifically, kinesin-5 family proteins act within the spindle to slide microtubules apart, while kinesin-13 family proteins act to depolymerize microtubules. Kinesin superfamily. Human kinesin superfamily members include proteins which, in the standardized nomenclature developed by the community of kinesin researchers, are organized into 14 families named kinesin-1 through kinesin-14, together with the kinesin-1 light chains and a kinesin-2 associated protein. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "{q\\over r} = e^{E\\over K_B T}" } ]
https://en.wikipedia.org/wiki?curid=609808
60992857
Federated learning
Decentralized machine learning Federated learning (also known as collaborative learning) is a sub-field of machine learning focusing on settings in which multiple entities (often referred to as clients) collaboratively train a model while ensuring that their data remains decentralized. This stands in contrast to machine learning settings in which data is centrally stored. One of the primary defining characteristics of federated learning is data heterogeneity. Due to the decentralized nature of the clients' data, there is no guarantee that data samples held by each client are independently and identically distributed. Federated learning is generally concerned with and motivated by issues such as data privacy, data minimization, and data access rights. Its applications involve a variety of research areas including defense, telecommunications, the Internet of Things, and pharmaceuticals. Definition. Federated learning aims at training a machine learning algorithm, for instance deep neural networks, on multiple local datasets contained in local nodes without explicitly exchanging data samples. The general principle consists of training local models on local data samples and exchanging parameters (e.g. the weights and biases of a deep neural network) between these local nodes at some frequency to generate a global model shared by all nodes. The main difference between federated learning and distributed learning lies in the assumptions made on the properties of the local datasets, as distributed learning originally aims at parallelizing computing power, whereas federated learning originally aims at training on heterogeneous datasets. While distributed learning also aims at training a single model on multiple servers, a common underlying assumption is that the local datasets are independent and identically distributed (i.i.d.) and roughly have the same size. None of these hypotheses are made for federated learning; instead, the datasets are typically heterogeneous and their sizes may span several orders of magnitude. Moreover, the clients involved in federated learning may be unreliable, as they are subject to more failures or drop out, since they commonly rely on less powerful communication media (e.g. Wi-Fi) and battery-powered systems (e.g. smartphones and IoT devices), compared to distributed learning where nodes are typically datacenters that have powerful computational capabilities and are connected to one another with fast networks. Mathematical formulation. The objective function for federated learning is as follows: formula_0 where formula_1 is the number of nodes, formula_2 are the weights of the model as viewed by node formula_3, and formula_4 is node formula_3's local objective function, which describes how the model weights formula_2 conform to node formula_3's local dataset. The goal of federated learning is to train a common model on all of the nodes' local datasets, in other words, to find model weights that minimize the weighted average of the local objective functions across all nodes. Centralized federated learning. In the centralized federated learning setting, a central server is used to orchestrate the different steps of the algorithms and coordinate all the participating nodes during the learning process. The server is responsible for node selection at the beginning of the training process and for the aggregation of the received model updates. Since all the selected nodes have to send updates to a single entity, the server may become a bottleneck of the system. 
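As a concrete reading of the mathematical formulation above, the global objective is an average of per-node empirical losses, here weighted by the size of each node's local dataset. The NumPy sketch below is illustrative only; the quadratic local losses and the size-based weighting are assumptions made for the example rather than part of the article's formulas.

```python
# Global federated objective as a data-weighted average of local objectives.
# Each node's local objective is here a simple least-squares loss on its own data.
import numpy as np

rng = np.random.default_rng(0)
num_nodes = 5
local_data = [
    (rng.normal(size=(n, 3)), rng.normal(size=n))     # (features, targets) per node
    for n in rng.integers(20, 200, size=num_nodes)
]

def local_objective(w, X, y):
    return np.mean((X @ w - y) ** 2)

def global_objective(w):
    sizes = np.array([len(y) for _, y in local_data])
    weights = sizes / sizes.sum()                      # weight nodes by dataset size
    return float(sum(p * local_objective(w, X, y)
                     for p, (X, y) in zip(weights, local_data)))

w = np.zeros(3)
print("global loss at w = 0:", global_objective(w))
```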
Decentralized federated learning. In the decentralized federated learning setting, the nodes are able to coordinate themselves to obtain the global model. This setup prevents single points of failure, as the model updates are exchanged only between interconnected nodes without the orchestration of a central server. Nevertheless, the specific network topology may affect the performance of the learning process. See blockchain-based federated learning and the references therein. Heterogeneous federated learning. An increasing number of application domains involve a large set of heterogeneous clients, e.g., mobile phones and IoT devices. Most existing federated learning strategies assume that local models share the same global model architecture. Recently, a new federated learning framework named HeteroFL was developed to address heterogeneous clients equipped with very different computation and communication capabilities. The HeteroFL technique can enable the training of heterogeneous local models with dynamically varying computation and non-IID data complexities while still producing a single accurate global inference model. Main features. Iterative learning. To ensure good task performance of a final, central machine learning model, federated learning relies on an iterative process broken up into an atomic set of client-server interactions known as a federated learning round. Each round of this process consists of transmitting the current global model state to participating nodes, training local models on these local nodes to produce a set of potential model updates at each node, and then aggregating and processing these local updates into a single global update and applying it to the global model. In the methodology below, a central server is used for aggregation, while local nodes perform local training depending on the central server's orders. However, other strategies lead to the same results without central servers, in a peer-to-peer approach, using gossip or consensus methodologies. Assuming a federated round composed of one iteration of the learning process, the learning procedure can be summarized as follows: the central server initializes the model and selects the participating nodes, the current global model is transmitted to these nodes, each node trains the model locally on its own data, and the server finally aggregates the received local updates into a new global model. The procedure considered before assumes synchronized model updates. Recent federated learning developments introduced novel techniques to tackle asynchronicity during the training process, or training with dynamically varying models. Compared to synchronous approaches, where local models are exchanged once the computations have been performed for all layers of the neural network, asynchronous ones leverage the properties of neural networks to exchange model updates as soon as the computations of a certain layer are available. These techniques are also commonly referred to as split learning and they can be applied both at training and inference time regardless of centralized or decentralized federated learning settings. Non-IID data. In most cases, the assumption of independent and identically distributed samples across local nodes does not hold for federated learning setups. Under this setting, the performance of the training process may vary significantly according to the unbalanced local data samples as well as the particular probability distribution of the training examples (i.e., features and labels) stored at the local nodes. To further investigate the effects of non-IID data, the following description considers the main categories presented in the preprint by Peter Kairouz et al. from 2019. 
The description of non-IID data relies on the analysis of the joint probability between features and labels for each node. This allows decoupling of each contribution according to the specific distribution available at the local nodes. The main categories of non-IID data can be summarized as follows: covariate shift (differences in the feature distributions across nodes), prior probability shift (differences in the label distributions across nodes), concept drift (same label, different features), concept shift (same features, different labels), and unbalancedness (large differences in the amount of data held by each node). The loss in accuracy due to non-IID data can be bounded by using more sophisticated means of data normalization than batch normalization. Algorithmic hyper-parameters. Network topology. The way the statistical local outputs are pooled and the way the nodes communicate with each other can change from the centralized model explained in the previous section. This leads to a variety of federated learning approaches: for instance, no central orchestrating server, or stochastic communication. In particular, orchestrator-less distributed networks are one important variation. In this case, there is no central server dispatching queries to local nodes and aggregating local models. Each local node sends its outputs to several randomly selected others, which aggregate their results locally. This restrains the number of transactions, thereby sometimes reducing training time and computing cost. Federated learning parameters. Once the topology of the node network is chosen, one can control different parameters of the federated learning process (in addition to the machine learning model's own hyperparameters) to optimize learning: the number of federated learning rounds formula_8, the total number of nodes formula_1 used in the process, the fraction formula_9 of nodes used at each iteration, and the local batch size formula_10 used at each learning iteration. Other model-dependent parameters can also be tinkered with, such as the number of iterations formula_11 for local training before pooling and the local learning rate formula_12. Those parameters have to be optimized depending on the constraints of the machine learning application (e.g., available computing power, available memory, bandwidth). For instance, stochastically choosing a limited fraction formula_9 of nodes for each iteration diminishes computing cost and may prevent overfitting, in the same way that stochastic gradient descent can reduce overfitting. Technical limitations. Federated learning requires frequent communication between nodes during the learning process. Thus, it requires not only enough local computing power and memory, but also high bandwidth connections to be able to exchange parameters of the machine learning model. However, the technology also avoids data communication, which can require significant resources before starting centralized machine learning. Nevertheless, the devices typically employed in federated learning are communication-constrained: for example, IoT devices or smartphones are generally connected to Wi-Fi networks, so, even if the models are commonly less expensive to transmit than raw data, federated learning mechanisms may not be suitable in their general form. Federated learning raises several statistical challenges, including heterogeneity and bias across the local datasets, temporal drift of the local distributions, strongly unbalanced dataset sizes, partial or total loss of model updates due to node failures, and the possible lack of labels or annotations on the client side. Federated learning variations. A number of different algorithms for federated optimization have been proposed. Federated stochastic gradient descent (FedSGD). Deep learning training mainly relies on variants of stochastic gradient descent, where gradients are computed on a random subset of the total dataset and then used to make one step of the gradient descent. Federated stochastic gradient descent is the direct transposition of this algorithm to the federated setting, but using a random fraction formula_9 of the nodes and all the data on each selected node. The gradients are averaged by the server proportionally to the number of training samples on each node, and used to make a gradient descent step.
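As a hedged illustration of the FedSGD update just described, the sketch below selects a random fraction formula_9 of toy least-squares clients, averages their full local gradients in proportion to their sample counts, and applies a single gradient-descent step on the server. The function names and the toy objective are assumptions made for this sketch, not code from any reference implementation.

```python
# Hedged FedSGD sketch: one server-side gradient step per round, built on toy
# least-squares clients; not a reference implementation of any cited system.
import numpy as np

rng = np.random.default_rng(1)
clients = [(rng.normal(size=(25, 3)), rng.normal(size=25)) for _ in range(6)]

def local_gradient(x, data):
    A, b = data
    return A.T @ (A @ x - b) / len(b)          # gradient of the local loss f_i

def fedsgd_round(x_global, clients, C=0.5, lr=0.1):
    k = max(1, int(C * len(clients)))          # random fraction C of the nodes
    chosen = rng.choice(len(clients), size=k, replace=False)
    grads = [local_gradient(x_global, clients[i]) for i in chosen]
    sizes = [len(clients[i][1]) for i in chosen]
    avg_grad = np.average(grads, axis=0, weights=sizes)  # weight by sample count
    return x_global - lr * avg_grad            # single gradient-descent step

x = np.zeros(3)
for _ in range(50):
    x = fedsgd_round(x, clients)
print(x)
```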
Federated averaging. Federated averaging (FedAvg) is a generalization of FedSGD which allows local nodes to perform more than one batch update on local data and exchanges the updated weights rather than the gradients. The rationale behind this generalization is that in FedSGD, if all local nodes start from the same initialization, averaging the gradients is strictly equivalent to averaging the weights themselves. Further, averaging tuned weights coming from the same initialization does not necessarily hurt the resulting averaged model's performance. Variations of FedAvg based on adaptive optimizers such as ADAM and AdaGrad have been proposed, and generally outperform FedAvg. Federated Learning with Dynamic Regularization (FedDyn). Federated learning methods suffer when the device datasets are heterogeneously distributed. A fundamental dilemma in the heterogeneously distributed device setting is that minimizing the device loss functions is not the same as minimizing the global loss objective. In 2021, Acar et al. introduced the FedDyn method as a solution to the heterogeneous dataset setting. FedDyn dynamically regularizes each device's loss function so that the modified device losses converge to the actual global loss. Since the local losses are aligned, FedDyn is robust to different heterogeneity levels and it can safely perform full minimization on each device. Theoretically, FedDyn converges to the optimum (a stationary point for nonconvex losses) while being agnostic to the heterogeneity levels. These claims are verified with extensive experiments on various datasets. Minimizing the number of communications is the gold standard for comparison in federated learning. One may also want to decrease the local computation level per device in each round. FedDynOneGD is an extension of FedDyn with lower local compute requirements. FedDynOneGD calculates only one gradient per device in each round and updates the model with a regularized version of the gradient. Hence, the computation complexity is linear in the local dataset size. Moreover, the gradient computation can be parallelized within each device, which is different from successive SGD steps. Theoretically, FedDynOneGD achieves the same convergence guarantees as FedDyn with less local computation. Personalized Federated Learning by Pruning (Sub-FedAvg). Federated learning methods cannot achieve good global performance under non-IID settings, which motivates the participating clients to yield personalized models in federation. Recently, Vahidian et al. introduced Sub-FedAvg, opening a new personalized FL algorithmic paradigm by proposing hybrid pruning (structured + unstructured pruning) with averaging on the intersection of clients' drawn subnetworks, which simultaneously handles communication efficiency, resource constraints and personalized model accuracy. Sub-FedAvg is the first work which shows the existence of personalized winning tickets for clients in federated learning through experiments. Moreover, it also proposes two algorithms on how to effectively draw the personalized subnetworks. Sub-FedAvg tries to extend the "lottery ticket hypothesis", which was formulated for centrally trained neural networks, to neural networks trained with federated learning, leading to this open research problem: “Do winning tickets exist for clients’ neural networks being trained in federated learning? If yes, how to effectively draw the personalized subnetworks for each client?”
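For comparison with the FedSGD sketch above, the following hedged sketch implements a FedAvg-style round in which each client performs several local updates and the server averages the resulting weights, weighted by local sample counts. All names and the least-squares stand-in objective are illustrative assumptions, not code from the papers cited in this section.

```python
# Hedged FedAvg-style sketch: several local updates per client, then weight
# averaging on the server; names and the toy objective are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
clients = [(rng.normal(size=(30, 3)), rng.normal(size=30)) for _ in range(5)]

def local_training(x, data, local_steps=5, lr=0.05):
    A, b = data
    for _ in range(local_steps):               # more than one local update
        x = x - lr * A.T @ (A @ x - b) / len(b)
    return x

def fedavg_round(x_global, clients):
    local_models = [local_training(x_global.copy(), data) for data in clients]
    sizes = [len(b) for _, b in clients]
    # Exchange the updated weights (not gradients) and average them,
    # weighting each client by its number of training samples.
    return np.average(local_models, axis=0, weights=sizes)

x = np.zeros(3)
for _ in range(20):
    x = fedavg_round(x, clients)
print(x)
```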
Dynamic Aggregation - Inverse Distance Aggregation. IDA (Inverse Distance Aggregation) is a novel adaptive weighting approach for clients based on meta-information which handles unbalanced and non-IID data. It uses the distance of the model parameters as a strategy to minimize the effect of outliers and improve the model's convergence rate. Hybrid Federated Dual Coordinate Ascent (HyFDCA). Very few methods for hybrid federated learning, where clients only hold subsets of both features and samples, exist. Yet, this scenario is very important in practical settings. Hybrid Federated Dual Coordinate Ascent (HyFDCA) is a novel algorithm proposed in 2024 that solves convex problems in the hybrid FL setting. This algorithm extends CoCoA, a primal-dual distributed optimization algorithm introduced by Jaggi et al. (2014) and Smith et al. (2017), to the case where both samples and features are partitioned across clients. HyFDCA claims several improvements over existing algorithms. The only other algorithm that focuses on hybrid FL is HyFEM, proposed by Zhang et al. (2020). This algorithm uses a feature matching formulation that balances clients building accurate local models and the server learning an accurate global model. This requires a matching regularizer constant that must be tuned based on user goals and results in disparate local and global models. Furthermore, the convergence results provided for HyFEM only prove convergence of the matching formulation, not of the original global problem. This work is substantially different from HyFDCA's approach, which uses data on local clients to build a global model that converges to the same solution as if the model were trained centrally. Furthermore, the local and global models are synchronized and do not require the adjustment of a matching parameter between local and global models. However, HyFEM is suitable for a vast array of architectures including deep learning architectures, whereas HyFDCA is designed for convex problems like logistic regression and support vector machines. HyFDCA is empirically benchmarked against the aforementioned HyFEM as well as the popular FedAvg in solving convex problems (specifically classification problems) for several popular datasets (MNIST, Covtype, and News20). The authors found that HyFDCA converges to a lower loss value and higher validation accuracy in less overall time in 33 of 36 comparisons examined, and in 36 of 36 comparisons with respect to the number of outer iterations. Lastly, HyFDCA only requires tuning of one hyperparameter, the number of inner iterations, as opposed to FedAvg (which requires tuning three) or HyFEM (which requires tuning four). Besides FedAvg and HyFEM being quite difficult to tune, which in turn greatly affects convergence, HyFDCA's single hyperparameter allows for simpler practical implementations and hyperparameter selection methodologies.
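Returning to the inverse distance aggregation (IDA) idea introduced at the start of this section, the following hedged sketch shows one plausible way to derive aggregation weights from the distance of each client's parameters to the current mean model. The exact weighting used by the IDA authors may differ, and all names here are illustrative.

```python
# Hedged sketch of inverse-distance aggregation weights: clients whose
# parameters lie far from the current mean receive less weight, damping
# outliers. Illustrative only; not the authors' implementation.
import numpy as np

def inverse_distance_weights(local_models, eps=1e-8):
    stacked = np.stack(local_models)            # shape: (clients, parameters)
    dists = np.linalg.norm(stacked - stacked.mean(axis=0), axis=1)
    w = 1.0 / (dists + eps)                     # inverse distance to the mean
    return w / w.sum()

def ida_aggregate(local_models):
    w = inverse_distance_weights(local_models)
    return np.average(np.stack(local_models), axis=0, weights=w)

rng = np.random.default_rng(3)
models = [rng.normal(size=4) for _ in range(9)] + [rng.normal(size=4) * 50]
print(inverse_distance_weights(models))         # the outlier gets a tiny weight
```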
Federated ViT using Dynamic Aggregation (FED-REV). Federated learning (FL) provides training of a global shared model using decentralized data sources on edge nodes while preserving data privacy. However, its performance in computer vision applications using convolutional neural networks (CNNs) falls considerably behind that of centralized training, due to limited communication resources and low processing capability at the edge nodes. Pure vision transformer (ViT) models, by contrast, can outperform CNNs by almost four times in computational efficiency and accuracy. FED-REV is an FL model with a reconstructive strategy that illustrates how attention-based structures (pure vision transformers) enhance FL accuracy over large and diverse data distributed over edge nodes. Its reconstruction strategy determines the influence of the dimensions of each stage of the vision transformer and then reduces their dimension complexity, which lowers the computation cost of edge devices while preserving the accuracy achieved by the pure vision transformer. Current research topics. Federated learning started to emerge as an important research topic in 2015 and 2016, with the first publications on federated averaging in telecommunication settings. Before that, in a thesis titled "A Framework for Multi-source Prefetching Through Adaptive Weight", an approach was proposed to aggregate predictions from multiple models trained at three locations of a request-response cycle. Another important aspect of active research is the reduction of the communication burden during the federated learning process. In 2017 and 2018, publications emphasized the development of resource allocation strategies, especially to reduce communication requirements between nodes with gossip algorithms, as well as the characterization of robustness to differential privacy attacks. Other research activities focus on the reduction of the bandwidth during training through sparsification and quantization methods, where the machine learning models are sparsified and/or compressed before they are shared with other nodes. Developing ultra-light DNN architectures is essential for device-/edge-learning, and recent work recognises both the energy efficiency requirements for future federated learning and the need to compress deep learning, especially during learning. Recent research advancements are starting to consider real-world propagating channels, as previous implementations assumed ideal channels. Another active direction of research is the development of federated learning for training heterogeneous local models with varying computation complexities while producing a single powerful global inference model. A learning framework named Assisted learning was recently developed to improve each agent's learning capabilities without transmitting private data, models, or even learning objectives. Compared with federated learning, which often requires a central controller to orchestrate the learning and optimization, Assisted learning aims to provide protocols for the agents to optimize and learn among themselves without a global model. Use cases. Federated learning typically applies when individual actors need to train models on larger datasets than their own, but cannot afford to share the data in itself with others (e.g., for legal, strategic or economic reasons). The technology nevertheless requires good connections between local servers and minimum computational power for each node. Transportation: self-driving cars. Self-driving cars encapsulate many machine learning technologies to function: computer vision for analyzing obstacles, machine learning for adapting their pace to the environment (e.g., bumpiness of the road). Due to the potentially high number of self-driving cars and the need for them to quickly respond to real-world situations, the traditional cloud approach may generate safety risks. Federated learning can represent a solution for limiting the volume of data transfer and accelerating learning processes. Industry 4.0: smart manufacturing.
In Industry 4.0, there is a widespread adoption of machine learning techniques to improve the efficiency and effectiveness of industrial processes while guaranteeing a high level of safety. Nevertheless, the privacy of sensitive data of industrial and manufacturing companies is of paramount importance. Federated learning algorithms can be applied to these problems as they do not disclose any sensitive data. In addition, FL has also been implemented for PM2.5 prediction to support smart city sensing applications. Medicine: digital health. Federated learning seeks to address the problem of data governance and privacy by training algorithms collaboratively without exchanging the data itself. Today's standard approach of centralizing data from multiple centers comes at the cost of critical concerns regarding patient privacy and data protection. To solve this problem, the ability to train machine learning models at scale across multiple medical institutions without moving the data is a critical technology. Nature Digital Medicine published the paper "The Future of Digital Health with Federated Learning" in September 2020, in which the authors explore how federated learning may provide a solution for the future of digital health, and highlight the challenges and considerations that need to be addressed. Recently, a collaboration of 20 different institutions around the world validated the utility of training AI models using federated learning. In a paper published in Nature Medicine, "Federated learning for predicting clinical outcomes in patients with COVID-19", they showcased the accuracy and generalizability of a federated AI model for the prediction of oxygen needs in patients with COVID-19 infections. Furthermore, in the published paper "A Systematic Review of Federated Learning in the Healthcare Area: From the Perspective of Data Properties and Applications", the authors provide a set of challenges for FL from a medical data-centric perspective. A coalition from industry and academia has developed MedPerf, an open source platform that enables validation of medical AI models on real world data. The platform relies technically on federated evaluation of AI models, aiming to alleviate concerns about patient privacy, and conceptually on diverse benchmark committees for building the specifications of neutral, clinically impactful benchmarks. Robotics. Robotics includes a wide range of applications of machine learning methods: from perception and decision-making to control. As robotic technologies have been increasingly deployed from simple and repetitive tasks (e.g. repetitive manipulation) to complex and unpredictable tasks (e.g. autonomous navigation), the need for machine learning grows. Federated learning provides a solution that improves over conventional machine learning training methods. In one study, mobile robots learned navigation over diverse environments using an FL-based method, helping generalization. In another, federated learning was applied to improve multi-robot navigation under limited-communication-bandwidth scenarios, which is a current challenge in real-world learning-based robotic tasks. In a third, federated learning was used to learn vision-based navigation, helping better sim-to-real transfer. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f(\\mathbf x_1, \\dots, \\mathbf x_K) = \\dfrac{1}{K} \\sum_{i=1}^K f_i(\\mathbf x_i)" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "\\mathbf x_i" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "f_i" }, { "math_id": 5, "text": "f(\\mathbf x_1, \\dots, \\mathbf x_K)" }, { "math_id": 6, "text": "\\mathbf x_1, \\dots, \\mathbf x_K" }, { "math_id": 7, "text": "\\mathbf x" }, { "math_id": 8, "text": "T" }, { "math_id": 9, "text": "C" }, { "math_id": 10, "text": "B" }, { "math_id": 11, "text": "N" }, { "math_id": 12, "text": "\\eta" } ]
https://en.wikipedia.org/wiki?curid=60992857
6099726
Electronic filter topology
Electronic filter circuits defined by component connection Electronic filter topology defines electronic filter circuits without taking note of the values of the components used but only the manner in which those components are connected. Filter design characterises filter circuits primarily by their transfer function rather than their topology. Transfer functions may be linear or nonlinear. Common types of linear filter transfer function are; high-pass, low-pass, bandpass, band-reject or notch and all-pass. Once the transfer function for a filter is chosen, the particular topology to implement such a prototype filter can be selected so that, for example, one might choose to design a Butterworth filter using the Sallen–Key topology. Filter topologies may be divided into passive and active types. Passive topologies are composed exclusively of passive components: resistors, capacitors, and inductors. Active topologies also include active components (such as transistors, op amps, and other integrated circuits) that require power. Further, topologies may be implemented either in unbalanced form or else in balanced form when employed in balanced circuits. Implementations such as electronic mixers and stereo sound may require arrays of identical circuits. Passive topologies. Passive filters have been long in development and use. Most are built from simple two-port networks called "sections". There is no formal definition of a section except that it must have at least one series component and one shunt component. Sections are invariably connected in a "cascade" or "daisy-chain" topology, consisting of additional copies of the same section or of completely different sections. The rules of series and parallel impedance would combine two sections consisting only of series components or shunt components into a single section. Some passive filters, consisting of only one or two filter sections, are given special names including the L-section, T-section and Π-section, which are unbalanced filters, and the C-section, H-section and box-section, which are balanced. All are built upon a very simple "ladder" topology (see below). The chart at the bottom of the page shows these various topologies in terms of general constant k filters. Filters designed using network synthesis usually repeat the simplest form of L-section topology though component values may change in each section. Image designed filters, on the other hand, keep the same basic component values from section to section though the topology may vary and tend to make use of more complex sections. L-sections are never symmetrical but two L-sections back-to-back form a symmetrical topology and many other sections are symmetrical in form. Ladder topologies. Ladder topology, often called Cauer topology after Wilhelm Cauer (inventor of the elliptic filter), was in fact first used by George Campbell (inventor of the constant k filter). Campbell published in 1922 but had clearly been using the topology for some time before this. Cauer first picked up on ladders (published 1926) inspired by the work of Foster (1924). There are two forms of basic ladder topologies: unbalanced and balanced. Cauer topology is usually thought of as an unbalanced ladder topology. A ladder network consists of cascaded asymmetrical L-sections (unbalanced) or C-sections (balanced). In low pass form the topology would consist of series inductors and shunt capacitors. Other bandforms would have an equally simple topology transformed from the lowpass topology. 
The transformed network will have shunt admittances that are dual networks of the series impedances if they were duals in the starting network - which is the case with series inductors and shunt capacitors. Modified ladder topologies. Image filter design commonly uses modifications of the basic ladder topology. These topologies, invented by Otto Zobel, have the same passbands as the ladder on which they are based but their transfer functions are modified to improve some parameter such as impedance matching, stopband rejection or passband-to-stopband transition steepness. Usually the design applies some transform to a simple ladder topology: the resulting topology is ladder-like but no longer obeys the rule that shunt admittances are the dual network of series impedances: it invariably becomes more complex with higher component count. Such topologies include the m-type (m-derived), mm'-type and general mn-type filters described below. The m-type (m-derived) filter is by far the most commonly used modified image ladder topology. There are two m-type topologies for each of the basic ladder topologies; the series-derived and shunt-derived topologies. These have identical transfer functions to each other but different image impedances. Where a filter is being designed with more than one passband, the m-type topology will result in a filter where each passband has an analogous frequency-domain response. It is possible to generalise the m-type topology for filters with more than one passband using parameters m1, m2, m3 etc., which are not equal to each other, resulting in general mn-type filters which have bandforms that can differ in different parts of the frequency spectrum. The mm'-type topology can be thought of as a double m-type design. Like the m-type it has the same bandform but offers further improved transfer characteristics. It is, however, a rarely used design due to increased component count and complexity as well as its normally requiring basic ladder and m-type sections in the same filter for impedance matching reasons. It is normally only found in a composite filter. Bridged-T topologies. Zobel constant resistance filters use a topology that is somewhat different from other filter types, distinguished by having a constant input resistance at all frequencies and in that they use resistive components in the design of their sections. The higher component and section count of these designs usually limits their use to equalisation applications. The topologies usually associated with constant resistance filters are the bridged-T and its variants, all described in the Zobel network article. The bridged-T topology is also used in sections intended to produce a signal delay, but in this case no resistive components are used in the design. Lattice topology. Both the T-section (from ladder topology) and the bridge-T (from Zobel topology) can be transformed into a lattice topology filter section but in both cases this results in high component count and complexity. The most common application of lattice filters (X-sections) is in all-pass filters used for phase equalisation. Although T and bridged-T sections can always be transformed into X-sections, the reverse is not always possible because of the possibility of negative values of inductance and capacitance arising in the transform. Lattice topology is identical to the more familiar bridge topology, the difference being merely the drawn representation on the page rather than any real difference in topology, circuitry or function.
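Before turning to active topologies, the following hedged numerical sketch illustrates the passive low-pass ladder discussed above by cascading identical series-inductor/shunt-capacitor L-sections as ABCD (transmission) matrices between equal source and load terminations. The component values and section count are arbitrary choices for illustration, not a designed filter.

```python
# Hedged sketch of an unbalanced low-pass ladder: identical L-sections
# (series inductor, shunt capacitor) cascaded as ABCD matrices between equal
# source and load resistances. Component values are arbitrary illustrations.
import numpy as np

R = 50.0            # source and load resistance, ohms
L = 100e-6          # series inductance per section, henries
C = 40e-9           # shunt capacitance per section, farads

def series_inductor(w):
    return np.array([[1.0, 1j * w * L], [0.0, 1.0]])

def shunt_capacitor(w):
    return np.array([[1.0, 0.0], [1j * w * C, 1.0]])

def ladder_gain(w, sections=3):
    M = np.eye(2, dtype=complex)
    for _ in range(sections):                       # cascaded L-sections
        M = M @ series_inductor(w) @ shunt_capacitor(w)
    A, B, Cm, D = M.ravel()
    # Doubly terminated voltage transfer, normalised to 1 in the passband.
    return 2 * R / (A * R + B + Cm * R * R + D * R)

for f in (1e3, 1e4, 1e5, 1e6):
    w = 2 * np.pi * f
    print(f"{f:>9.0f} Hz   |H| = {abs(ladder_gain(w)):.4f}")
```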
Active topologies. Multiple feedback topology. Multiple feedback topology is an electronic filter topology which is used to implement an electronic filter by adding two poles to the transfer function. A diagram of the circuit topology for a second order low pass filter is shown in the figure on the right. The transfer function of the multiple feedback topology circuit, like that of all second-order linear filters, is: formula_0. In an MF filter, formula_1, formula_2 and formula_3; formula_4 is the Q factor, formula_5 is the DC voltage gain, and formula_6 is the corner frequency. To find suitable component values that achieve the desired filter properties, a similar approach can be followed as in the Design choices section of the alternative Sallen–Key topology. Biquad filter topology. "For the digital implementation of a biquad filter, see Digital biquad filter." A biquad filter is a type of linear filter that implements a transfer function that is the ratio of two quadratic functions. The name "biquad" is short for "biquadratic". Any second-order filter topology can be referred to as a "biquad", such as the MFB or Sallen-Key. However, there is also a specific "biquad" topology. It is also sometimes called the 'ring of 3' circuit. Biquad filters are typically active and implemented with a single-amplifier biquad (SAB) or two-integrator-loop topology. The SAB topology is sensitive to component choice and can be more difficult to adjust. Hence, usually the term biquad refers to the two-integrator-loop state variable filter topology. Tow-Thomas filter. For example, the basic configuration in Figure 1 can be used as either a low-pass or bandpass filter depending on where the output signal is taken from. The second-order low-pass transfer function is given by formula_7 where the low-pass gain is formula_8. The second-order bandpass transfer function is given by formula_9, with bandpass gain formula_10. In both cases, the natural frequency is formula_11 and the quality factor is formula_12. The bandwidth is approximated by formula_13, and Q is sometimes expressed as a damping constant formula_14. If a noninverting low-pass filter is required, the output can be taken at the output of the second operational amplifier, after the order of the second integrator and the inverter has been switched. If a noninverting bandpass filter is required, the order of the second integrator and the inverter can be switched, and the output taken at the output of the inverter's operational amplifier. Akerberg-Mossberg filter. Figure 2 shows a variant of the Tow-Thomas topology, known as the Akerberg-Mossberg topology, that uses an actively compensated Miller integrator, which improves filter performance. Sallen–Key topology. The Sallen-Key design is a non-inverting second-order filter with the option of high Q and passband gain. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "H(s) = \\frac{V_o}{V_i} = -\\frac{1}{As^2+Bs+C} = \\frac{K {\\omega_0}^2}{s^{2}+\\frac{\\omega_{0}}{Q}s+{\\omega_0}^2}" }, { "math_id": 1, "text": "A = (R_1 R_3 C_2 C_5)\\," }, { "math_id": 2, "text": "B = R_3 C_5 + R_1 C_5 + R_1 R_3 C_5/R_4\\," }, { "math_id": 3, "text": "C = R_1/R_4\\," }, { "math_id": 4, "text": "Q = \\frac{ \\sqrt{R_3 R_4 C_2 C_5} }{ ( R_4 + R_3 + |K| R_3 ) C_5 } " }, { "math_id": 5, "text": "K = -R_4/R_1\\," }, { "math_id": 6, "text": "\\omega_{0} = 2 \\pi f_{0} = 1 / \\sqrt{R_3 R_4 C_2 C_5} " }, { "math_id": 7, "text": "H(s)=\\frac{G_\\mathrm{lpf}{\\omega_0}^2}{s^{2}+\\frac{\\omega_{0}}{Q}s+{\\omega_0}^2}" }, { "math_id": 8, "text": "G_\\mathrm{lpf}=-R_{2}/R_{1}" }, { "math_id": 9, "text": "H(s)=\\frac{G_\\mathrm{bpf}\\frac{\\omega_{0}}{Q}s}{s^{2}+\\frac{\\omega_{0}}{Q}s+{\\omega_0}^2}" }, { "math_id": 10, "text": "G_\\mathrm{bpf}=-R_{3}/R_{1}" }, { "math_id": 11, "text": "\\omega_{0}=1/\\sqrt{R_2 R_4 C_1 C_2}" }, { "math_id": 12, "text": "Q=\\sqrt{\\frac{{R_3}^2 C_1}{R_2 R_4 C_2}}" }, { "math_id": 13, "text": "B=\\omega_{0}/Q" }, { "math_id": 14, "text": "\\zeta=1/2Q" } ]
https://en.wikipedia.org/wiki?curid=6099726
60999460
Harmonic mean p-value
Statistical method for multiple testing The harmonic mean "p"-value (HMP) is a statistical technique for addressing the multiple comparisons problem that controls the strong-sense family-wise error rate (this claim has been disputed). It improves on the power of Bonferroni correction by performing combined tests, i.e. by testing whether "groups" of "p"-values are statistically significant, like Fisher's method. However, it avoids the restrictive assumption that the "p"-values are independent, unlike Fisher's method. Consequently, it controls the false positive rate when tests are dependent, at the expense of less power (i.e. a higher false negative rate) when tests are independent. Besides providing an alternative to approaches such as Bonferroni correction that controls the stringent family-wise error rate, it also provides an alternative to the widely-used Benjamini-Hochberg procedure (BH) for controlling the less-stringent false discovery rate. This is because the power of the HMP to detect significant "groups" of hypotheses is greater than the power of BH to detect significant "individual" hypotheses. There are two versions of the technique: (i) direct interpretation of the HMP as an approximate "p"-value and (ii) a procedure for transforming the HMP into an asymptotically exact "p"-value. The approach provides a multilevel test procedure in which the smallest groups of "p"-values that are statistically significant may be sought. Direct interpretation of the harmonic mean "p"-value. The weighted harmonic mean of "p"-values formula_0 is defined as formula_1 where formula_2 are weights that must sum to one, i.e. formula_3. Equal weights may be chosen, in which case formula_4. In general, interpreting the HMP directly as a "p"-value is anti-conservative, meaning that the false positive rate is higher than expected. However, as the HMP becomes smaller, under certain assumptions, the discrepancy decreases, so that direct interpretation of significance achieves a false positive rate close to that implied for sufficiently small values (e.g. formula_5). The HMP is never anti-conservative by more than a factor of formula_6 for small formula_7, or formula_8 for large formula_7. However, these bounds represent worst case scenarios under arbitrary dependence that are likely to be conservative in practice. Rather than applying these bounds, asymptotically exact "p"-values can be produced by transforming the HMP. Asymptotically exact harmonic mean "p"-value procedure. Generalized central limit theorem shows that an asymptotically exact "p"-value, formula_9, can be computed from the HMP, formula_10, using the formula formula_11Subject to the assumptions of generalized central limit theorem, this transformed "p"-value becomes exact as the number of tests, formula_7, becomes large. The computation uses the Landau distribution, whose density function can be writtenformula_12The test is implemented by the codice_0 command of the codice_1 R package; a tutorial is available online. Equivalently, one can compare the HMP to a table of critical values (Table 1). The table illustrates that the smaller the false positive rate, and the smaller the number of tests, the closer the critical value is to the false positive rate. Multiple testing via the multilevel test procedure. If the HMP is significant at some level formula_14 for a group of formula_7 "p"-values, one may search all subsets of the formula_7 "p"-values for the smallest significant group, while maintaining the strong-sense family-wise error rate. 
Formally, this constitutes a closed-testing procedure. When formula_14 is small (e.g. formula_15), the following multilevel test based on direct interpretation of the HMP controls the strong-sense family-wise error rate at level approximately formula_16 In step 1, for any subset formula_17 of the "p"-values, compute its harmonic mean formula_18 In step 2, reject the null hypothesis that none of the "p"-values in formula_17 is significant if formula_19, where formula_20. An asymptotically exact version of the above replaces formula_21 in step 2 with formula_22 where formula_7 gives the number of "p"-values, not just those in subset formula_17. Since direct interpretation of the HMP is faster, a two-pass procedure may be used to identify subsets of "p"-values that are likely to be significant using direct interpretation, subject to confirmation using the asymptotically exact formula. Properties of the HMP. The HMP has a range of properties that arise from generalized central limit theorem, notably robustness to positive dependence among the "p"-values. When the HMP is not significant, neither is any subset of the constituent tests. Conversely, when the multilevel test deems a subset of "p"-values to be significant, the HMP for all the "p"-values combined is likely to be significant; this is certain when the HMP is interpreted directly. When the goal is to assess the significance of "individual" "p"-values, so that combined tests concerning "groups" of "p"-values are of no interest, the HMP is equivalent to the Bonferroni procedure but subject to the more stringent significance threshold formula_23 (Table 1). The HMP assumes the individual "p"-values have (not necessarily independent) standard uniform distributions when their null hypotheses are true. Large numbers of underpowered tests can therefore harm the power of the HMP. While the choice of weights is unimportant for the validity of the HMP under the null hypothesis, the weights influence the power of the procedure. Supplementary Methods §5C of the original publication and an online tutorial consider the issue in more detail. Bayesian interpretations of the HMP. The HMP was conceived by analogy to Bayesian model averaging and can be interpreted as inversely proportional to a model-averaged Bayes factor when combining "p"-values from likelihood ratio tests. The harmonic mean rule-of-thumb. I. J. Good reported an empirical relationship between the Bayes factor and the "p"-value from a likelihood ratio test. For a null hypothesis formula_24 nested in a more general alternative hypothesis formula_25 he observed that often, formula_26 where formula_27 denotes the Bayes factor in favour of formula_28 versus formula_29 Extrapolating, he proposed a rule of thumb in which the HMP is taken to be inversely proportional to the model-averaged Bayes factor for a collection of formula_7 tests with common null hypothesis: formula_30 For Good, his rule-of-thumb supported an interchangeability between Bayesian and classical approaches to hypothesis testing. Bayesian calibration of "p"-values. If the distributions of the "p"-values under the alternative hypotheses follow Beta distributions with parameters formula_31, a form considered by Sellke, Bayarri and Berger, then the inverse proportionality between the model-averaged Bayes factor and the HMP can be formalized as formula_32 where formula_33 is the prior probability of alternative hypothesis formula_34 with formula_35 the mean of the "p"-value formula_37 under that alternative is formula_36, and the weights are taken to be formula_38 in which formula_39 and formula_40. The approximation works best for well-powered tests (formula_41).
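Before turning to the Bayes-factor bound below, the following hedged sketch computes the weighted HMP and performs the direct-interpretation multilevel test defined above by exhaustive search over subsets, which is only practical for a small number of tests. It does not implement the asymptotically exact Landau-tail transformation, for which the R package mentioned earlier can be used; all names are illustrative.

```python
# Hedged sketch of the weighted HMP and the direct-interpretation multilevel
# test above; the exhaustive subset search is only practical for small L and
# the asymptotically exact Landau-tail correction is not implemented here.
from itertools import combinations

def subset_hmp(p, w, R):
    wR = sum(w[i] for i in R)
    return wR / sum(w[i] / p[i] for i in R)         # subset harmonic mean

def significant_subsets(p, alpha=0.05, w=None):
    L = len(p)
    w = [1.0 / L] * L if w is None else list(w)
    hits = []
    for r in range(1, L + 1):
        for R in combinations(range(L), r):
            wR = sum(w[i] for i in R)
            if subset_hmp(p, w, R) <= alpha * wR:   # direct interpretation
                hits.append(R)
    return hits

p_values = [0.0002, 0.02, 0.8, 0.3, 0.6]
# Here exactly the subsets containing the first (very small) p-value pass.
print(significant_subsets(p_values))
```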
The harmonic mean "p"-value as a bound on the Bayes factor. For likelihood ratio tests with exactly two degrees of freedom, Wilks' theorem implies that formula_42, where formula_43 is the maximized likelihood ratio in favour of alternative hypothesis formula_34 and therefore formula_44, where formula_45 is the weighted mean maximized likelihood ratio, using weights formula_46 Since formula_43 is an upper bound on the Bayes factor, formula_27, then formula_47 is an upper bound on the model-averaged Bayes factor: formula_48 While the equivalence holds only for two degrees of freedom, the relationship between formula_13 and formula_49 and therefore formula_50 behaves similarly for other degrees of freedom. Under the assumption that the distributions of the "p"-values under the alternative hypotheses follow Beta distributions with parameters formula_51 and that the weights formula_52 the HMP provides a tighter upper bound on the model-averaged Bayes factor: formula_53 a result that again reproduces the inverse proportionality of Good's empirical relationship.
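The identity underlying this bound can be checked numerically: when the "p"-values are reciprocals of the maximized likelihood ratios, the reciprocal of the weighted HMP equals the weighted mean maximized likelihood ratio. The values below are arbitrary illustrations.

```python
# Numerical check of the identity behind the bound: with p_i = 1/R_i, the
# reciprocal of the weighted HMP equals the weighted mean of the maximized
# likelihood ratios R_i. The numbers are arbitrary illustrations.
import numpy as np

R = np.array([3.0, 12.0, 1.5, 40.0])     # maximized likelihood ratios
w = np.array([0.4, 0.3, 0.2, 0.1])       # weights summing to one
p = 1.0 / R                               # p_i = 1/R_i (two degrees of freedom)

hmp = w.sum() / np.sum(w / p)
print(1.0 / hmp, np.sum(w * R))           # identical: the weighted mean of R_i
```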
[ { "math_id": 0, "text": "p_1, \\dots, p_L" }, { "math_id": 1, "text": "\n\\overset{\\circ}{p} = \\frac{\\sum_{i=1}^L w_{i}}{\\sum_{i=1}^L w_{i}/p_{i}}, \n" }, { "math_id": 2, "text": "w_1, \\dots, w_L" }, { "math_id": 3, "text": "\\sum_{i=1}^L w_i=1" }, { "math_id": 4, "text": "w_i=1/L" }, { "math_id": 5, "text": "\\overset{\\circ}{p}<0.05" }, { "math_id": 6, "text": "e\\,\\log L" }, { "math_id": 7, "text": "L" }, { "math_id": 8, "text": "\\log L" }, { "math_id": 9, "text": "p_{\\overset{\\circ}{p}}" }, { "math_id": 10, "text": "\\overset{\\circ}{p}" }, { "math_id": 11, "text": "p_{\\overset{\\circ}{p}} = \\int_{1/\\overset{\\circ}{p}}^\\infty f_\\textrm{Landau}\\left(x\\,|\\,\\log L+0.874,\\frac{\\pi}{2}\\right) \\mathrm{d} x. " }, { "math_id": 12, "text": "f_\\textrm{Landau}(x\\,|\\,\\mu,\\sigma) = \\frac{1}{\\pi\\sigma}\\int_0^\\infty \\textrm{e}^{\n-t\\frac{(x-\\mu)}{\\sigma} -\\frac{2}{\\pi}t \\log t\n}\\,\\sin(2t)\\,\\textrm{d}t." }, { "math_id": 13, "text": "\\overset{\\circ}{p}" }, { "math_id": 14, "text": "\\alpha" }, { "math_id": 15, "text": "\\alpha<0.05" }, { "math_id": 16, "text": "\\alpha:" }, { "math_id": 17, "text": "\\mathcal{R}" }, { "math_id": 18, "text": "\n\\overset{\\circ}{p}_\\mathcal{R} = \\frac{\\sum_{i\\in\\mathcal{R}} w_{i}}{\\sum_{i\\in\\mathcal{R}} w_{i}/p_{i}}.\n" }, { "math_id": 19, "text": "\\overset{\\circ}{p}_\\mathcal{R}\\leq\\alpha\\,w_\\mathcal{R}" }, { "math_id": 20, "text": "w_\\mathcal{R}=\\sum_{i\\in\\mathcal{R}}w_i" }, { "math_id": 21, "text": "\\overset{\\circ}{p}_\\mathcal{R}" }, { "math_id": 22, "text": "p_{\\overset{\\circ}{p}_\\mathcal{R}} = \\max\\left\\{\\overset{\\circ}{p}_\\mathcal{R}, w_{\\mathcal{R}} \\int_{w_{\\mathcal{R}}/\\overset{\\circ}{p}_\\mathcal{R}}^\\infty f_\\textrm{Landau}\\left(x\\,|\\,\\log L +0.874,\\frac{\\pi}{2}\\right) \\mathrm{d} x\\right\\}, " }, { "math_id": 23, "text": "\\alpha_L<\\alpha" }, { "math_id": 24, "text": "H_0" }, { "math_id": 25, "text": "H_A," }, { "math_id": 26, "text": "\\textrm{BF}_i\\approx \\frac{1}{\\gamma\\,p_i},\\quad3\\frac{1}{3}<\\gamma<30," }, { "math_id": 27, "text": "\\textrm{BF}_i" }, { "math_id": 28, "text": "H_A" }, { "math_id": 29, "text": "H_0." }, { "math_id": 30, "text": "\\overline{\\textrm{BF}}=\\sum_{i=1}^L w_i\\,\\textrm{BF}_i \\approx \\sum_{i=1}^L \\frac{w_i}{\\gamma\\,p_i} = \\frac{1}{\\gamma\\,\\overset{\\circ}{p}}." }, { "math_id": 31, "text": "\\left(0<\\xi_i<1, 1\\right)" }, { "math_id": 32, "text": "\\overline{\\textrm{BF}}=\\sum_{i=1}^L \\mu_i\\,\\textrm{BF}_i=\\sum_{i=1}^L \\mu_i\\,\\xi_i\\,p_i^{\\xi_i-1}\\approx\\bar\\xi\\sum_{i=1}^L w_i\\,p_i^{-1}=\\frac{\\bar\\xi}{\\overset{\\circ}{p}}," }, { "math_id": 33, "text": "\\mu_i" }, { "math_id": 34, "text": "i," }, { "math_id": 35, "text": "\\sum_{i=1}^L\\mu_i=1," }, { "math_id": 36, "text": "\\xi_i/(1+\\xi_i)" }, { "math_id": 37, "text": "p_i" }, { "math_id": 38, "text": "w_i=u_i/\\bar\\xi" }, { "math_id": 39, "text": "u_i = \\left(\\mu_i\\,\\xi_i\\right)^{1/(1-\\xi_i)}" }, { "math_id": 40, "text": "\\bar\\xi = \\sum_{i=1}^L u_i" }, { "math_id": 41, "text": "\\xi_i\\ll 1" }, { "math_id": 42, "text": "p_i=1/R_i" }, { "math_id": 43, "text": "R_i" }, { "math_id": 44, "text": "\\overset{\\circ}{p}=1/\\bar{R}" }, { "math_id": 45, "text": "\\bar{R}" }, { "math_id": 46, "text": "w_1,\\dots,w_L." }, { "math_id": 47, "text": "1/\\overset{\\circ}{p}" }, { "math_id": 48, "text": "\\overline{\\textrm{BF}}\\leq\\frac{1}{\\overset{\\circ}{p}}." 
}, { "math_id": 49, "text": "\\bar{R}," }, { "math_id": 50, "text": "\\overline{\\textrm{BF}}," }, { "math_id": 51, "text": "\\left(1, \\kappa_i>1\\right)," }, { "math_id": 52, "text": "w_i=\\mu_i," }, { "math_id": 53, "text": "\\overline{\\textrm{BF}}\\leq \\frac{1}{e\\,\\overset{\\circ}{p}}," } ]
https://en.wikipedia.org/wiki?curid=60999460
6100522
Dirichlet density
Concept in number theory In mathematics, the Dirichlet density (or analytic density) of a set of primes, named after Peter Gustav Lejeune Dirichlet, is a measure of the size of the set that is easier to use than the natural density. Definition. If "A" is a subset of the prime numbers, the Dirichlet density of "A" is the limit formula_0 if it exists. Note that since formula_1 as formula_2 (see Prime zeta function), this is also equal to formula_3 This expression is usually the order of the "pole" of formula_4 at "s" = 1, (though in general it is not really a pole as it has non-integral order), at least if this function is a holomorphic function times a (real) power of "s"−1 near "s" = 1. For example, if "A" is the set of all primes, it is the Riemann zeta function which has a pole of order 1 at "s" = 1, so the set of all primes has Dirichlet density 1. More generally, one can define the Dirichlet density of a sequence of primes (or prime powers), possibly with repetitions, in the same way. Properties. If a subset of primes "A" has a natural density, given by the limit of (number of elements of "A" less than "N")/(number of primes less than "N") then it also has a Dirichlet density, and the two densities are the same. However it is usually easier to show that a set of primes has a Dirichlet density, and this is good enough for many purposes. For example, in proving Dirichlet's theorem on arithmetic progressions, it is easy to show that the set of primes in an arithmetic progression "a" + "nb" (for "a", "b" coprime) has Dirichlet density 1/φ("b"), which is enough to show that there are an infinite number of such primes, but harder to show that this is the natural density. Roughly speaking, proving that some set of primes has a non-zero Dirichlet density usually involves showing that certain "L"-functions do not vanish at the point "s" = 1, while showing that they have a natural density involves showing that the "L"-functions have no zeros on the line Re("s") = 1. In practice, if some "naturally occurring" set of primes has a Dirichlet density, then it also has a natural density, but it is possible to find artificial counterexamples: for example, the set of primes whose first decimal digit is 1 has no natural density, but has Dirichlet density log(2)/log(10).
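As a rough numerical illustration of the definition above, the following sketch estimates the Dirichlet density of the primes congruent to 1 mod 4 (which equals 1/2 by Dirichlet's theorem) by evaluating the defining ratio at values of "s" slightly above 1, truncating both sums at a finite bound. The truncation makes this a heuristic check only, not a proof, and the bound is an arbitrary choice.

```python
# Rough numerical illustration of the definition: estimate the Dirichlet
# density of primes congruent to 1 mod 4 (expected value 1/2). Truncating the
# sums at a finite bound makes this a heuristic check only.
from sympy import primerange

def dirichlet_ratio(s, bound=10**5):
    num = den = 0.0
    for p in primerange(2, bound):
        term = p ** (-s)
        den += term
        if p % 4 == 1:
            num += term
    return num / den

for s in (1.5, 1.2, 1.05):
    print(s, dirichlet_ratio(s))          # values approach 1/2 as s -> 1+
```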
[ { "math_id": 0, "text": " \\lim_{s\\rightarrow 1^+} \\frac{\\sum_{p\\in A}{1\\over p^s}}{\\sum_{p} \\frac{1}{p^s}}" }, { "math_id": 1, "text": "\\textstyle{\\sum_{p}\\frac{1}{p^s}\\sim \\log(\\frac{1}{s-1})}" }, { "math_id": 2, "text": "s\\rightarrow 1^+" }, { "math_id": 3, "text": "\\lim_{s\\rightarrow 1^+}{\\sum_{p\\in A}{1\\over p^s}\\over \\log(\\frac{1}{s-1})}." }, { "math_id": 4, "text": "\\prod_{p\\in A}{1\\over 1-p^{-s}}" } ]
https://en.wikipedia.org/wiki?curid=6100522
61009235
Gap-Hamming problem
Problem in communication complexity theory In communication complexity, the gap-Hamming problem asks, if Alice and Bob are each given a (potentially different) string, what is the minimal number of bits that they need to exchange in order for Alice to approximately compute the Hamming distance between their strings. The solution to the problem roughly states that, if Alice and Bob are each given a string, then any communication protocol used to compute the Hamming distance between their strings does (asymptotically) no better than Bob sending his whole string to Alice. More specifically, if Alice and Bob are each given formula_0-bit strings, there exists no communication protocol that lets Alice compute the hamming distance between their strings to within formula_1 using less than formula_2 bits. The gap-Hamming problem has applications to proving lower bounds for many streaming algorithms, including moment frequency estimation and entropy estimation. Formal statement. In this problem, Alice and Bob each receive a string, formula_3 and formula_4, respectively, while Alice is required to compute the (partial) function, formula_5 using the least amount of communication possible. Here, formula_6 indicates that Alice can return either of formula_7 and formula_8 is the Hamming distance between formula_9 and formula_10. In other words, Alice needs to return whether Bob's string is significantly similar or significantly different from hers while minimizing the number of bits she exchanges with Bob. The problem's solution states that computing formula_11 requires at least formula_2 communication. In particular, it requires formula_2 communication even when formula_9 and formula_10 are chosen uniformly at random from formula_12. History. The gap-Hamming problem was originally proposed by Indyk and Woodruff in the early 2000's, who initially proved a linear lower bound on the "one-way" communication complexity of the problem (where Alice is only allowed to receive data from Bob) and conjectured a linear lower bound in the general case. The question of the infinite-round case (in which Alice and Bob are allowed to exchange as many messages as desired) remained open until Chakrabarti and Regev proved, via an anti-concentration argument, that the general problem also has linear lower bound complexity, thus settling the original question completely. This result was followed by a series of other papers that sought to simplify or find new approaches to proving the desired lower bound, notably first by Vidick and later by Sherstov, and, recently, with an information-theoretic approach by Hadar, Liu, Polyanskiy, and Shayevitz. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\\pm\\sqrt{n}" }, { "math_id": 2, "text": "\\Omega(n)" }, { "math_id": 3, "text": "x \\in \\{\\pm 1\\}^n" }, { "math_id": 4, "text": "y \\in \\{\\pm 1\\}^n" }, { "math_id": 5, "text": "\\operatorname{GHD}(x, y) = \\begin{cases}\n+1 & D_H(x, y) \\ge \\frac{n}{2} + \\sqrt{n}\\\\\n-1 & D_H(x, y) \\le \\frac{n}{2} - \\sqrt{n}\\\\\n* & \\text{otherwise},\n\\end{cases}" }, { "math_id": 6, "text": "*" }, { "math_id": 7, "text": "\\pm 1" }, { "math_id": 8, "text": "D_H(x, y)" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "y" }, { "math_id": 11, "text": "\\operatorname{GHD}" }, { "math_id": 12, "text": "\\{\\pm 1\\}^n" } ]
https://en.wikipedia.org/wiki?curid=61009235
6101309
Quantities of information
The mathematical theory of information is based on probability theory and statistics, and measures information with several quantities of information. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. The most common unit of information is the "bit", or more correctly the shannon, based on the binary logarithm. Although "bit" is more frequently used in place of "shannon", its name is not distinguished from the bit as used in data-processing to refer to a binary value or stream regardless of its entropy (information content) Other units include the nat, based on the natural logarithm, and the hartley, based on the base 10 or common logarithm. In what follows, an expression of the form formula_2 is considered by convention to be equal to zero whenever formula_3 is zero. This is justified because formula_4 for any logarithmic base. Self-information. Shannon derived a measure of information content called the self-information or "surprisal" of a message formula_5: formula_6 where formula_7 is the probability that message formula_5 is chosen from all possible choices in the message space formula_8. The base of the logarithm only affects a scaling factor and, consequently, the units in which the measured information content is expressed. If the logarithm is base 2, the measure of information is expressed in units of shannons or more often simply "bits" (a bit in other contexts is rather defined as a "binary digit", whose average information content is at most 1 shannon). Information from a source is gained by a recipient only if the recipient did not already have that information to begin with. Messages that convey information over a certain (P=1) event (or one which is "known" with certainty, for instance, through a back-channel) provide no information, as the above equation indicates. Infrequently occurring messages contain more information than more frequently occurring messages. It can also be shown that a compound message of two (or more) unrelated messages would have a quantity of information that is the sum of the measures of information of each message individually. That can be derived using this definition by considering a compound message formula_9 providing information regarding the values of two random variables M and N using a message which is the concatenation of the elementary messages "m" and "n", each of whose information content are given by formula_10 and formula_11 respectively. If the messages "m" and "n" each depend only on M and N, and the processes M and N are independent, then since formula_12 (the definition of statistical independence) it is clear from the above definition that formula_13. An example: The weather forecast broadcast is: "Tonight's forecast: Dark. Continued darkness until widely scattered light in the morning." This message contains almost no information. However, a forecast of a snowstorm would certainly contain information since such does not happen every evening. There would be an even greater amount of information in an accurate forecast of snow for a warm location, such as Miami. The amount of information in a forecast of snow for a location where it never snows (impossible event) is the highest (infinity). Entropy. The entropy of a discrete message space formula_8 is a measure of the amount of uncertainty one has about which message will be chosen. 
It is defined as the average self-information of a message formula_5 from that message space: formula_14 where formula_15 denotes the expected value operation. An important property of entropy is that it is maximized when all the messages in the message space are equiprobable (e.g. formula_16). In this case formula_17. Sometimes the function formula_18 is expressed in terms of the probabilities of the distribution: formula_19 where each formula_20 and formula_21 An important special case of this is the binary entropy function: formula_22 Joint entropy. The joint entropy of two discrete random variables formula_0 and formula_1 is defined as the entropy of the joint distribution of formula_0 and formula_1: formula_23 If formula_0 and formula_1 are independent, then the joint entropy is simply the sum of their individual entropies. Conditional entropy (equivocation). Given a particular value of a random variable formula_1, the conditional entropy of formula_0 given formula_24 is defined as: formula_25 where formula_26 is the conditional probability of formula_27 given formula_28. The conditional entropy of formula_0 given formula_1, also called the equivocation of formula_0 about formula_1 is then given by: formula_29 This uses the conditional expectation from probability theory. A basic property of the conditional entropy is that: formula_30 Kullback–Leibler divergence (information gain). The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions, a "true" probability distribution formula_3, and an arbitrary probability distribution formula_31. If we compress data in a manner that assumes formula_31 is the distribution underlying some data, when, in reality, formula_3 is the correct distribution, Kullback–Leibler divergence is the number of average additional bits per datum necessary for compression, or, mathematically, formula_32 It is in some sense the "distance" from formula_31 to formula_3, although it is not a true metric due to its not being symmetric. Mutual information (transinformation). It turns out that one of the most useful and important measures of information is the mutual information, or transinformation. This is a measure of how much information can be obtained about one random variable by observing another. The mutual information of formula_0 relative to formula_1 (which represents conceptually the average amount of information about formula_0 that can be gained by observing formula_1) is given by: formula_33 A basic property of the mutual information is that: formula_34 That is, knowing formula_1, we can save an average of formula_35 bits in encoding formula_0 compared to not knowing formula_1. Mutual information is symmetric: formula_36 Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) of the posterior probability distribution of formula_0 given the value of formula_1 to the prior distribution on formula_0: formula_37 In other words, this is a measure of how much, on the average, the probability distribution on formula_0 will change if we are given the value of formula_1. 
This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution: formula_38 Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution. Differential entropy. The basic measures of discrete entropy have been extended by analogy to continuous spaces by replacing sums with integrals and probability mass functions with probability density functions. Although, in both cases, mutual information expresses the number of bits of information common to the two sources in question, the analogy does "not" imply identical properties; for example, differential entropy may be negative. The differential analogies of entropy, joint entropy, conditional entropy, and mutual information are defined as follows: formula_39 formula_40 formula_41 formula_42 formula_43 where formula_44 is the joint density function, formula_45 and formula_46 are the marginal distributions, and formula_47 is the conditional distribution.
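The discrete quantities defined above can be computed directly from a joint probability table. The following sketch uses an arbitrary two-by-two joint distribution and verifies that the mutual information obtained from formula_34 agrees with the Kullback–Leibler form formula_38; the table itself is an invented example.

```python
# Sketch computing the discrete quantities defined above from a small joint
# distribution p(x, y); the probability table is an arbitrary example.
import numpy as np

p_xy = np.array([[0.25, 0.25],
                 [0.40, 0.10]])                  # joint distribution, sums to 1

def H(dist):
    dist = dist[dist > 0]
    return -np.sum(dist * np.log2(dist))         # entropy in shannons (bits)

p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)    # marginal distributions

H_X, H_Y, H_XY = H(p_x), H(p_y), H(p_xy.ravel())
H_X_given_Y = H_XY - H_Y                         # conditional entropy (equivocation)
I_via_entropies = H_X - H_X_given_Y              # mutual information

# The same value via the KL divergence from p(x)p(y) to p(x, y):
I_via_kl = np.sum(p_xy * np.log2(p_xy / np.outer(p_x, p_y)))
print(round(I_via_entropies, 6), round(I_via_kl, 6))
```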
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "p \\log p \\," }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "\\lim_{p \\rightarrow 0+} p \\log p = 0" }, { "math_id": 5, "text": "m" }, { "math_id": 6, "text": " \\operatorname{I}(m) = \\log \\left( \\frac{1}{p(m)} \\right) = - \\log( p(m) ) \\, " }, { "math_id": 7, "text": "p(m) = \\mathrm{Pr}(M=m)" }, { "math_id": 8, "text": "M" }, { "math_id": 9, "text": " m\\&n " }, { "math_id": 10, "text": " \\operatorname{I}(m) " }, { "math_id": 11, "text": " \\operatorname{I}(n) " }, { "math_id": 12, "text": "P(m\\&n) = P(m) P(n)" }, { "math_id": 13, "text": " \\operatorname{I}(m\\&n) = \\operatorname{I}(m) + \\operatorname{I}(n) " }, { "math_id": 14, "text": " \\Eta(M) = \\mathbb{E} \\left[\\operatorname{I}(M)\\right] \n= \\sum_{m \\in M} p(m) \\operatorname{I}(m) = -\\sum_{m \\in M} p(m) \\log p(m)" }, { "math_id": 15, "text": " \\mathbb{E} [ - ] " }, { "math_id": 16, "text": " p(m) = 1/|M| " }, { "math_id": 17, "text": "\\Eta(M) = \\log |M|" }, { "math_id": 18, "text": "\\Eta" }, { "math_id": 19, "text": "\\Eta(p_1, p_2, \\ldots , p_k) = -\\sum_{i=1}^k p_i \\log p_i," }, { "math_id": 20, "text": "p_i \\geq 0" }, { "math_id": 21, "text": " \\sum_{i=1}^k p_i = 1" }, { "math_id": 22, "text": "\\Eta_\\mbox{b}(p) = \\Eta(p, 1-p) = - p \\log p - (1-p)\\log (1-p)\\," }, { "math_id": 23, "text": "\\Eta(X, Y) = \\mathbb{E}_{X,Y} \\left[-\\log p(x,y)\\right] \n= - \\sum_{x, y} p(x, y) \\log p(x, y) \\," }, { "math_id": 24, "text": "Y=y" }, { "math_id": 25, "text": " \\Eta(X|y) = \\mathbb{E}_{\\left[X|Y \\right]} [-\\log p(x|y)] \n= -\\sum_{x \\in X} p(x|y) \\log p(x|y)" }, { "math_id": 26, "text": "p(x|y) = \\frac{p(x,y)}{p(y)}" }, { "math_id": 27, "text": "x" }, { "math_id": 28, "text": "y" }, { "math_id": 29, "text": " \\Eta(X|Y) = \\mathbb{E}_Y \\left[\\Eta\\left(X|y \\right)\\right] = -\\sum_{y \\in Y} p(y) \\sum_{x \\in X} p(x|y) \\log p(x|y) = \\sum_{x,y} p(x,y) \\log \\frac{p(y)}{p(x,y)}." }, { "math_id": 30, "text": " \\Eta(X|Y) = \\Eta(X,Y) - \\Eta(Y) .\\," }, { "math_id": 31, "text": "q" }, { "math_id": 32, "text": "D_{\\mathrm{KL}}\\bigl(p(X) \\| q(X) \\bigr) = \\sum_{x \\in X} p(x) \\log \\frac{p(x)}{q(x)}." }, { "math_id": 33, "text": "\\operatorname{I}(X;Y) = \\sum_{y\\in Y} p(y)\\sum_{x\\in X} {p(x|y) \\log \\frac{p(x|y)}{p(x)}} = \\sum_{x,y} p(x,y) \\log \\frac{p(x,y)}{p(x)\\, p(y)}." }, { "math_id": 34, "text": "\\operatorname{I}(X;Y) = \\Eta(X) - \\Eta(X|Y).\\," }, { "math_id": 35, "text": "\\operatorname{I}(X; Y)" }, { "math_id": 36, "text": "\\operatorname{I}(X;Y) = \\operatorname{I}(Y;X)\n= \\Eta(X) + \\Eta(Y) - \\Eta(X,Y).\\," }, { "math_id": 37, "text": "\\operatorname{I}(X;Y) = \\mathbb{E}_{p(y)} \\left[D_{\\mathrm{KL}}\\bigl( p(X|Y=y) \\| p(X) \\bigr)\\right]." }, { "math_id": 38, "text": "\\operatorname{I}(X; Y) = D_{\\mathrm{KL}}\\bigl(p(X,Y) \\| p(X)p(Y) \\bigr)." }, { "math_id": 39, "text": " h(X) = -\\int_X f(x) \\log f(x) \\,dx " }, { "math_id": 40, "text": " h(X,Y) = -\\int_Y \\int_X f(x,y) \\log f(x,y) \\,dx \\,dy" }, { "math_id": 41, "text": " h(X|y) = -\\int_X f(x|y) \\log f(x|y) \\,dx " }, { "math_id": 42, "text": " h(X|Y) = \\int_Y \\int_X f(x,y) \\log \\frac{f(y)}{f(x,y)} \\,dx \\,dy" }, { "math_id": 43, "text": " \\operatorname{I}(X;Y) = \\int_Y \\int_X f(x,y) \\log \\frac{f(x,y)}{f(x)f(y)} \\,dx \\,dy " }, { "math_id": 44, "text": "f(x,y)" }, { "math_id": 45, "text": "f(x)" }, { "math_id": 46, "text": "f(y)" }, { "math_id": 47, "text": "f(x|y)" } ]
https://en.wikipedia.org/wiki?curid=6101309
610165
Cusp form
In number theory, a branch of mathematics, a cusp form is a particular kind of modular form with a zero constant coefficient in the Fourier series expansion. Introduction. A cusp form is distinguished in the case of modular forms for the modular group by the vanishing of the constant coefficient "a"0 in the Fourier series expansion (see "q"-expansion) formula_0 This Fourier expansion exists as a consequence of the presence in the modular group's action on the upper half-plane via the transformation formula_1 For other groups, there may be some translation through several units, in which case the Fourier expansion is in terms of a different parameter. In all cases, though, the limit as "q" → 0 is the limit in the upper half-plane as the imaginary part of "z" → ∞. Taking the quotient by the modular group, this limit corresponds to a cusp of a modular curve (in the sense of a point added for compactification). So, the definition amounts to saying that a cusp form is a modular form that vanishes at a cusp. In the case of other groups, there may be several cusps, and the definition becomes a modular form vanishing at "all" cusps. This may involve several expansions. Dimension. The dimensions of spaces of cusp forms are, in principle, computable via the Riemann–Roch theorem. For example, the Ramanujan tau function "τ"("n") arises as the sequence of Fourier coefficients of the cusp form of weight 12 for the modular group, with "a"1 = 1. The space of such forms has dimension 1, which means this definition is possible; and that accounts for the action of Hecke operators on the space being by scalar multiplication (Mordell's proof of Ramanujan's identities). Explicitly it is the modular discriminant formula_2 which represents (up to a normalizing constant) the discriminant of the cubic on the right side of the Weierstrass equation of an elliptic curve; and the 24-th power of the Dedekind eta function. The Fourier coefficients here are written formula_3 and called 'Ramanujan's tau function', with the normalization "τ"(1) = 1. Related concepts. In the larger picture of automorphic forms, the cusp forms are complementary to Eisenstein series, in a "discrete spectrum"/"continuous spectrum", or "discrete series representation"/"induced representation" distinction typical in different parts of spectral theory. That is, Eisenstein series can be 'designed' to take on given values at cusps. There is a large general theory, depending though on the quite intricate theory of parabolic subgroups, and corresponding cuspidal representations. Consider formula_4 a standard parabolic subgroup of some reductive group formula_5 (over formula_6, the adele ring), an automorphic form formula_7 on formula_8 is called cuspidal if for all parabolic subgroups formula_9 such that formula_10 we have formula_11, where formula_12 is the standard minimal parabolic subgroup. The notation formula_13 for formula_4 is defined as formula_14.
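As a concrete illustration of the Fourier coefficients discussed above, the following Python sketch expands the standard product formula for the modular discriminant, Delta(q) = q * prod_{m >= 1} (1 - q^m)^24 (equivalently, the 24th power of the Dedekind eta function mentioned above), as a truncated power series and reads off the first few values of Ramanujan's tau function. The truncation bound N is an arbitrary choice for the demonstration.

```python
# Sketch: first few Fourier coefficients tau(n) of the weight-12 cusp form
# Delta(q) = q * prod_{m >= 1} (1 - q^m)^24, via truncated series multiplication.
N = 10  # how many coefficients to compute (arbitrary demo bound)

# coefficients of q^0 .. q^(N-1) of prod_{m} (1 - q^m)^24, truncated at q^(N-1)
series = [0] * N
series[0] = 1
for m in range(1, N + 1):
    for _ in range(24):                      # multiply by (1 - q^m), 24 times
        new = series[:]
        for k in range(m, N):
            new[k] -= series[k - m]
        series = new

# Multiplying by q shifts indices: tau(n) is the coefficient of q^(n-1) above.
tau = {n: series[n - 1] for n in range(1, N + 1)}
print(tau)  # expected: tau(1)=1, tau(2)=-24, tau(3)=252, tau(4)=-1472, tau(5)=4830
```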
[ { "math_id": 0, "text": "\\sum a_n q^n." }, { "math_id": 1, "text": "z\\mapsto z+1." }, { "math_id": 2, "text": "\\Delta(z,q)," }, { "math_id": 3, "text": "\\tau(n)" }, { "math_id": 4, "text": "P=MU" }, { "math_id": 5, "text": "G" }, { "math_id": 6, "text": "\\mathbb{A}" }, { "math_id": 7, "text": "\\phi" }, { "math_id": 8, "text": "U(\\mathbb{A})M(k)\\backslash G" }, { "math_id": 9, "text": "P'" }, { "math_id": 10, "text": "P_0\\subset P'\\subsetneq P" }, { "math_id": 11, "text": "\\phi_{P'}=0" }, { "math_id": 12, "text": "P_0" }, { "math_id": 13, "text": "\\phi_{P}" }, { "math_id": 14, "text": "\\phi_P (g) =\\int_{U(k)\\backslash U(\\mathbb{A})} \\phi(ug) du" } ]
https://en.wikipedia.org/wiki?curid=610165
61017112
K-D heap
A K-D heap is a data structure in computer science which implements a multidimensional priority queue without requiring additional space. It is a generalization of the heap. It allows for efficient insertion, query of the minimum element, and deletion of the minimum element in any of the k dimensions, and therefore includes the double-ended heap as a special case. Structure. Given a collection of "n" items, where each has formula_0 keys (or priorities), the K-D heap organizes them into a binary tree which satisfies two conditions: The property of "k-d heap order" is analogous to that of the heap property for regular heaps. A heap maintains k-d heap order if: One consequence of this structure is that the smallest 1-st property-element will trivially be in the root, and moreover all the smallest "i"-th property elements for every "i" will be in the first "k" levels. Operations. Creating a K-D heap from "n" items takes "O(n)" time. The following operations are supported: Importantly, the hidden constant in these operations is exponentially large relative to formula_0, the number of dimensions, so K-D heaps are not practical for applications with very many dimensions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
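The consequence noted above, that for every dimension the minimum element lies in the first k levels, already suggests how a find-min query can be answered. The following Python sketch is only a toy illustration under that assumption (it is not the full K-D heap machinery, and the example data and function name are hypothetical): it scans the at most 2^k - 1 nodes of the first k levels of an array-stored binary tree, which also makes the exponential dependence on k visible.

```python
# Hedged sketch, not the complete K-D heap algorithm: assume the heap is stored
# as an array-based binary tree whose first k levels contain, for every one of
# the k dimensions, the minimum element in that dimension.
def find_min(heap, k, d):
    """heap: list of k-tuples in level order; d: dimension index, 0 <= d < k."""
    first_levels = heap[: 2**k - 1]          # nodes in the first k levels
    return min(first_levels, key=lambda item: item[d])

# Toy data for illustration only (assumed to satisfy k-d heap order), k = 2.
items = [(1, 9), (2, 7), (3, 1), (5, 8), (4, 6)]
print(find_min(items, k=2, d=1))             # smallest 2nd key found in the first 2 levels
```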
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "(i \\mod k) +1" } ]
https://en.wikipedia.org/wiki?curid=61017112
6101721
Clamper (electronics)
Electronic circuit that fixes voltage A clamper (or clamping circuit or clamp) is an electronic circuit that fixes either the positive or the negative peak excursions of a signal to a defined voltage by adding a variable positive or negative DC voltage to it. The clamper does not restrict the peak-to-peak excursion of the signal (clipping); it moves the whole signal up or down so as to place its peaks at the reference level. A diode clamp (a simple, common type) consists of a diode, which conducts electric current in only one direction and prevents the signal exceeding the reference value; and a capacitor, which provides a DC offset from the stored charge. The capacitor forms a time constant with a resistor load, which determines the range of frequencies over which the clamper will be effective. General function. A clamper will bind the upper or lower extreme of a waveform to a fixed DC voltage level. These circuits are also known as DC voltage restorers. Clampers can be constructed in both positive and negative polarities. When unbiased, clamping circuits will fix the voltage lower limit (or upper limit, in the case of negative clampers) to 0 volts. These circuits clamp a peak of a waveform to a specific DC level compared with a capacitively coupled signal, which swings about its average DC level. The clamping network is one that will "clamp" a signal to a different DC level. The network must have a capacitor, a diode, and optionally a resistive element and/or load, but it can also employ an independent DC supply to introduce an additional shift. The magnitude of R and C must be chosen such that the time constant RC is large enough to ensure that the voltage across the capacitor does not discharge significantly during the interval the diode is nonconducting. Types. Clamp circuits are categorised by their operation: negative or positive, and biased or unbiased. A positive clamp circuit (negative peak clamper) outputs a purely positive waveform from an input signal; it offsets the input signal so that all of the waveform is greater than 0 V. A negative clamp is the opposite of this—this clamp outputs a purely negative waveform from an input signal. A bias voltage between the diode and ground offsets the output voltage by that amount. For example, an input signal of peak value 5 V (VINpeak = 5 V) is applied to a positive clamp with a bias of 3 V (VBIAS = 3 V), the peak output voltage will be: VOUTpeak = 2 × VINpeak + VBIAS VOUTpeak = 2 × 5 V + 3 V VOUTpeak = 13 V Positive unbiased. In the negative cycle of the input AC signal, the diode is forward biased and conducts, charging the capacitor to the peak negative value of VIN. During the positive cycle, the diode is reverse biased and thus does not conduct. The output voltage is therefore equal to the voltage stored in the capacitor plus the input voltage, so VOUT = VIN + VINpeak. This is also called a Villard circuit. Negative unbiased. A negative unbiased clamp is the opposite of the equivalent positive clamp. In the positive cycle of the input AC signal, the diode is forward biased and conducts, charging the capacitor to the peak positive value of VIN. During the negative cycle, the diode is reverse biased and thus does not conduct. The output voltage is therefore equal to the voltage stored in the capacitor plus the input voltage again, so VOUT = VIN − VINpeak. Positive biased. A positive biased voltage clamp is identical to an equivalent unbiased clamp but with the output voltage offset by the bias amount VBIAS. 
Thus, VOUT = VIN + (VINpeak + VBIAS). Negative biased. A negative biased voltage clamp is likewise identical to an equivalent unbiased clamp but with the output voltage offset in the negative direction by the bias amount VBIAS. Thus, VOUT = VIN − (VINpeak + VBIAS). Op-amp circuit. The figure shows an op-amp-based clamp circuit with a non-zero reference clamping voltage. The advantage here is that the clamping level is at precisely the reference voltage. There is no need to take into account the forward voltage drop of the diode (which is necessary in the preceding simple circuits as this adds to the reference voltage). The effect of the diode voltage drop on the circuit output will be divided down by the gain of the amplifier, resulting in an insignificant error. The circuit also has a great improvement in linearity at small input signals in comparison to the simple diode circuit and is largely unaffected by changes in the load. Clamping for input protection. Clamping can be used to adapt an input signal to a device that cannot make use of or may be damaged by the signal range of the original input. Principles of operation. During the first negative phase of the AC input voltage, the capacitor in a positive clamper circuit charges rapidly. As "V"in becomes positive, the capacitor serves as a voltage doubler; since it has stored the equivalent of "V"in during the negative cycle, it provides nearly that voltage during the positive cycle. This essentially doubles the voltage seen by the load. As "V"in becomes negative, the capacitor acts as a battery of the same voltage of "V"in. The voltage source and the capacitor counteract each other, resulting in a net voltage of zero as seen by the load. Loading. For passive type clampers with a capacitor, followed by a diode in parallel with the load, the load can significantly affect performance. The magnitude of "R" and "C" are chosen so that the time constant, formula_0, is large enough to ensure that the voltage across the capacitor does not discharge significantly during the diode's non-conducting interval. A load resistance that is too low (heavy load) will partially discharge the capacitor and cause the waveform peaks to drift off the intended clamp voltage. This effect is greatest at low frequencies. At a higher frequency, there is less time between cycles for the capacitor to discharge. The capacitor cannot be made arbitrarily large to overcome load discharge. During the conducting interval, the capacitor must be recharged. The time taken to do this is governed by a different time constant, this time set by the capacitance and the internal impedance of the driving circuit. Since the peak voltage is reached in one quarter cycle and then starts to fall again, the capacitor must be recharged in a quarter cycle. This requirement calls for a low value of capacitance. The two conflicting requirements for capacitance value may be irreconcilable in applications with a high driving impedance and low load impedance. In such cases, an active circuit must be used such as the op-amp circuit described above. Biased versus non-biased. By using a voltage source and resistor, the clamper can be biased to bind the output voltage to a different value. The voltage supplied to the potentiometer will be equal to the offset from zero (assuming an ideal diode) in the case of either a positive or negative clamper (the clamper type will determine the direction of the offset). 
If a negative voltage is supplied to either a positive or a negative clamper, the waveform will cross the x-axis and be bound to a value of this magnitude on the opposite side. Zener diodes can also be used in place of a voltage source and potentiometer, hence setting the offset at the Zener voltage. Examples. Clamping circuits were common in analog television receivers. These sets have a DC restorer circuit, which returns the voltage of the video signal during the "back porch" of the line blanking (retrace) period to 0 V. Low-frequency interference, especially power line hum, induced onto the signal spoils the rendering of the image and, in extreme cases, causes the set to lose synchronization. This interference can be effectively removed via this method. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
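The idealized clamping relations quoted above (ideal diode, fully charged capacitor, no droop under load) can be illustrated numerically. The following Python sketch applies the positive- and negative-biased clamping relations to a sine input with the same values as the worked example (VINpeak = 5 V, VBIAS = 3 V); the 50 Hz frequency and the sampling step are arbitrary choices.

```python
import math

V_IN_PEAK = 5.0    # volts
V_BIAS = 3.0       # volts
f = 50.0           # hertz (illustrative)

def v_in(t):
    return V_IN_PEAK * math.sin(2 * math.pi * f * t)

def positive_biased_clamp(v, v_peak=V_IN_PEAK, v_bias=V_BIAS):
    # VOUT = VIN + (VINpeak + VBIAS): whole waveform shifted up
    return v + (v_peak + v_bias)

def negative_biased_clamp(v, v_peak=V_IN_PEAK, v_bias=V_BIAS):
    # VOUT = VIN - (VINpeak + VBIAS): whole waveform shifted down
    return v - (v_peak + v_bias)

samples = [v_in(t / 1000.0) for t in range(0, 20)]   # one 20 ms period, 1 ms steps
out = [positive_biased_clamp(v) for v in samples]
print(max(out))   # about 2*V_IN_PEAK + V_BIAS = 13 V, as in the worked example
print(min(out))   # about V_BIAS = 3 V: the negative peak is clamped near the bias level
print(max(negative_biased_clamp(v) for v in samples))   # about -3 V for the negative-biased clamp
```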
[ { "math_id": 0, "text": "\\tau = RC " } ]
https://en.wikipedia.org/wiki?curid=6101721
61019132
Social golfer problem
Mathematics problem In discrete mathematics, the social golfer problem (SGP) is a combinatorial-design problem derived from a question posted in the usenet newsgroup "sci.op-research" in May 1998. The problem is as follows: 32 golfers play golf once a week in groups of 4. Schedule these golfers to play for as many weeks as possible without any two golfers playing together in a group more than once. More generally, this problem can be defined for any formula_0 golfers who play in formula_1 groups of formula_2 golfers for formula_3 weeks. The solution involves either affirming or denying the existence of a schedule and, if such a schedule exists, determining the number of unique schedules and constructing them. Challenges. The SGP is a challenging problem to solve for two main reasons: First is the large search space resulting from the combinatorial and highly symmetrical nature of the problem. There are a total of formula_4 schedules in the search space. For each schedule, the weeks formula_5, the groups within each week formula_6, the players within each group formula_7, and the individual players formula_8 can all be permuted. This leads to a total of formula_9 isomorphisms, schedules that are identical through any of these symmetry operations. Due to its high symmetry, the SGP is commonly used as a standard benchmark in symmetry breaking in constraint programming (symmetry-breaking constraints). Second is the choice of variables. The SGP can be seen as an optimization problem to maximize the number of weeks in the schedule. Hence, incorrectly defined initial points and other variables in the model can lead the process to an area in the search space with no solution. Solutions. The SGP is the Steiner system S(2,4,32) because 32 golfers are divided into groups of 4 and both the group and week assignments of any 2 golfers can be uniquely identified. Soon after the problem was proposed in 1998, a solution for 9 weeks was found and the existence of a solution for 11 weeks was proven to be impossible. In the case of the latter, note that each player must play with 3 unique players each week. For a schedule lasting 11 weeks, a player would be grouped with a total of formula_10 other players. Since there are only 31 other players available, this is not possible. A solution for 10 weeks could be obtained from results already published in 1996. It was independently rediscovered using a different method in 2004. There are many approaches to solving the SGP, namely design theory techniques, SAT formulations (propositional satisfiability problem), constraint-based approaches, metaheuristic methods, and the radix approach. The radix approach assigns golfers into groups based on the addition of numbers in base formula_11. Variables in the general case of the SGP can be redefined as formula_12 golfers who play in formula_13 groups of formula_2 golfers for any number formula_11. The maximum number of weeks that these golfers can play without regrouping any two golfers is formula_14. Applications. Working in groups is encouraged in classrooms because it fosters active learning and development of critical-thinking and communication skills. The SGP has been used to assign students into groups in undergraduate chemistry classes and breakout rooms in online meeting software to maximize student interaction and socialization. The SGP has also been used as a model to study tournament scheduling.
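Whatever method is used to construct a schedule, checking that a candidate schedule is feasible is straightforward: no pair of golfers may appear together in a group more than once. The following Python sketch implements that check; the tiny 4-golfer instance at the end is purely illustrative and is not the 32-golfer problem.

```python
from itertools import combinations

def is_valid_schedule(schedule):
    """schedule: list of weeks; each week is a list of groups; each group a list of golfers.
    Returns True if no two golfers are grouped together more than once."""
    seen_pairs = set()
    for week in schedule:
        for group in week:
            for pair in combinations(sorted(group), 2):
                if pair in seen_pairs:
                    return False          # this pair already played together
                seen_pairs.add(pair)
    return True

# Tiny illustrative instance (not the 32-golfer problem): 4 golfers, groups of 2.
week1 = [[0, 1], [2, 3]]
week2 = [[0, 2], [1, 3]]
week3 = [[0, 3], [1, 2]]
print(is_valid_schedule([week1, week2, week3]))           # True: 3 weeks are feasible
print(is_valid_schedule([week1, week2, week3, week1]))    # False: pairs repeat
```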
[ { "math_id": 0, "text": "n = g \\times s" }, { "math_id": 1, "text": "g" }, { "math_id": 2, "text": "s" }, { "math_id": 3, "text": "w" }, { "math_id": 4, "text": "(n!)^w" }, { "math_id": 5, "text": "(w!)" }, { "math_id": 6, "text": "(g!)" }, { "math_id": 7, "text": "(s!)" }, { "math_id": 8, "text": "(n!)" }, { "math_id": 9, "text": "w! \\times g! \\times s! \\times n!" }, { "math_id": 10, "text": "3 \\times 11 = 33" }, { "math_id": 11, "text": "k" }, { "math_id": 12, "text": "n = s^k" }, { "math_id": 13, "text": "g = s^{k-1}" }, { "math_id": 14, "text": "(s^k-1)/(s-1)" } ]
https://en.wikipedia.org/wiki?curid=61019132
6101938
Markov chain mixing time
In probability theory, the mixing time of a Markov chain is the time until the Markov chain is "close" to its steady state distribution. More precisely, a fundamental result about Markov chains is that a finite state irreducible aperiodic chain has a unique stationary distribution π and, regardless of the initial state, the time-"t" distribution of the chain converges to π as "t" tends to infinity. Mixing time refers to any of several variant formalizations of the idea: how large must "t" be until the time-"t" distribution is approximately π? One variant, "total variation distance mixing time", is defined as the smallest "t" such that the total variation distance of probability measures is small: formula_0 Choosing a different formula_1, as long as formula_2, can only change the mixing time up to a constant factor (depending on formula_1) and so one often fixes formula_3 and simply writes formula_4. This is the sense in which Dave Bayer and Persi Diaconis (1992) proved that the number of riffle shuffles needed to mix an ordinary 52 card deck is 7. Mathematical theory focuses on how mixing times change as a function of the size of the structure underlying the chain. For an formula_5-card deck, the number of riffle shuffles needed grows as formula_6. The most developed theory concerns randomized algorithms for #P-complete algorithmic counting problems such as the number of graph colorings of a given formula_5 vertex graph. Such problems can, for sufficiently large number of colors, be answered using the Markov chain Monte Carlo method and showing that the mixing time grows only as formula_7 . This example and the shuffling example possess the rapid mixing property, that the mixing time grows at most polynomially fast in formula_8(number of states of the chain). Tools for proving rapid mixing include arguments based on conductance and the method of coupling. In broader uses of the Markov chain Monte Carlo method, rigorous justification of simulation results would require a theoretical bound on mixing time, and many interesting practical cases have resisted such theoretical analysis.
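For a small chain the total variation mixing time can be computed directly from the definition by iterating the transition matrix from every starting state. The following Python sketch does this for an illustrative example (a lazy random walk on an 8-cycle, which is doubly stochastic, so its stationary distribution is uniform) with the conventional cutoff of 1/4; the chain and the cutoff are arbitrary choices, not taken from the text.

```python
# Sketch: brute-force total-variation mixing time t_mix(1/4) of a small chain.
n = 8
P = [[0.0] * n for _ in range(n)]
for x in range(n):
    P[x][x] = 0.5                 # lazy random walk on an n-cycle
    P[x][(x - 1) % n] = 0.25
    P[x][(x + 1) % n] = 0.25
pi = [1.0 / n] * n                # doubly stochastic, so pi is uniform

def step(dist):
    """Advance a distribution one step: (dist P)(y) = sum_x dist(x) P(x, y)."""
    return [sum(dist[x] * P[x][y] for x in range(n)) for y in range(n)]

def tv(p, q):
    """Total variation distance, i.e. half the L1 distance."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def mixing_time(eps=0.25, t_max=10000):
    # one delta distribution per starting state, advanced in parallel
    dists = [[1.0 if i == x else 0.0 for i in range(n)] for x in range(n)]
    for t in range(1, t_max + 1):
        dists = [step(d) for d in dists]
        if max(tv(d, pi) for d in dists) <= eps:   # worst case over starting states
            return t
    return None

print(mixing_time())   # smallest t with worst-case TV distance at most 1/4
```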
[ { "math_id": 0, "text": "t_{\\operatorname{mix}}(\\varepsilon) = \\min \\left\\{ t \\geq 0 : \\max_{x \\in S} \\Big[ \\max_{A \\subseteq S} \\left|\\Pr(X_t \\in A \\mid X_0 = x) - \\pi(A) \\right|\\Big] \\leq \\varepsilon \\right\\}. " }, { "math_id": 1, "text": "\\varepsilon" }, { "math_id": 2, "text": "\\varepsilon < 1/2" }, { "math_id": 3, "text": "\\varepsilon = 1/4" }, { "math_id": 4, "text": "t_{\\mathrm{mix}}" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "1.5 \\log_2 n" }, { "math_id": 7, "text": "n \\log(n)" }, { "math_id": 8, "text": "\\log" } ]
https://en.wikipedia.org/wiki?curid=6101938
610202
Fine structure
Details in the emission spectrum of an atom In atomic physics, the fine structure describes the splitting of the spectral lines of atoms due to electron spin and relativistic corrections to the non-relativistic Schrödinger equation. It was first measured precisely for the hydrogen atom by Albert A. Michelson and Edward W. Morley in 1887, laying the basis for the theoretical treatment by Arnold Sommerfeld, introducing the fine-structure constant. Background. Gross structure. The "gross structure" of line spectra is the structure predicted by the quantum mechanics of non-relativistic electrons with no spin. For a hydrogenic atom, the gross structure energy levels only depend on the principal quantum number "n". However, a more accurate model takes into account relativistic and spin effects, which break the degeneracy of the energy levels and split the spectral lines. The scale of the fine structure splitting relative to the gross structure energies is on the order of ("Zα")2, where "Z" is the atomic number and "α" is the fine-structure constant, a dimensionless number equal to approximately 1/137. Relativistic corrections. The fine structure energy corrections can be obtained by using perturbation theory. To perform this calculation one must add three corrective terms to the Hamiltonian: the leading order relativistic correction to the kinetic energy, the correction due to the spin–orbit coupling, and the Darwin term coming from the quantum fluctuating motion or zitterbewegung of the electron. These corrections can also be obtained from the non-relativistic limit of the Dirac equation, since Dirac's theory naturally incorporates relativity and spin interactions. Hydrogen atom. This section discusses the analytical solutions for the hydrogen atom as the problem is analytically solvable and is the base model for energy level calculations in more complex atoms. Kinetic energy relativistic correction. The gross structure assumes the kinetic energy term of the Hamiltonian takes the same form as in classical mechanics, which for a single electron means formula_0 where V is the potential energy, formula_1 is the momentum, and formula_2 is the electron rest mass. However, when considering a more accurate theory of nature via special relativity, we must use a relativistic form of the kinetic energy, formula_3 where the first term is the total relativistic energy, and the second term is the rest energy of the electron (formula_4 is the speed of light). Expanding the square root for large values of formula_4, we find formula_5 Although there are an infinite number of terms in this series, the later terms are much smaller than earlier terms, and so we can ignore all but the first two. Since the first term above is already part of the classical Hamiltonian, the first order "correction" to the Hamiltonian is formula_6 Using this as a perturbation, we can calculate the first order energy corrections due to relativistic effects. formula_7 where formula_8 is the unperturbed wave function. Recalling the unperturbed Hamiltonian, we see formula_9 We can use this result to further calculate the relativistic correction: formula_10 For the hydrogen atom, formula_11 formula_12 and formula_13 where formula_14 is the elementary charge, formula_15 is the vacuum permittivity, formula_16 is the Bohr radius, formula_17 is the principal quantum number, formula_18 is the azimuthal quantum number and formula_19 is the distance of the electron from the nucleus. 
Therefore, the first order relativistic correction for the hydrogen atom is formula_20 where we have used: formula_21 On final calculation, the order of magnitude for the relativistic correction to the ground state is formula_22. Spin–orbit coupling. For a hydrogen-like atom with formula_23 protons (formula_24 for hydrogen), orbital angular momentum formula_25 and electron spin formula_26, the spin–orbit term is given by: formula_27 where formula_28 is the spin g-factor. The spin–orbit correction can be understood by shifting from the standard frame of reference (where the electron orbits the nucleus) into one where the electron is stationary and the nucleus instead orbits it. In this case the orbiting nucleus functions as an effective current loop, which in turn will generate a magnetic field. However, the electron itself has a magnetic moment due to its intrinsic angular momentum. The two magnetic vectors, formula_29 and formula_30, couple together so that there is a certain energy cost depending on their relative orientation. This gives rise to the energy correction of the form formula_31 Notice that an important factor of 2 has to be added to the calculation, called the Thomas precession, which comes from the relativistic calculation that changes back to the electron's frame from the nucleus frame. Since formula_32 by the Kramers–Pasternack relations and formula_33, the expectation value for the Hamiltonian is: formula_34 Thus the order of magnitude for the spin–orbital coupling is: formula_35 When weak external magnetic fields are applied, the spin–orbit coupling contributes to the Zeeman effect. Darwin term. There is one last term in the non-relativistic expansion of the Dirac equation. It is referred to as the Darwin term, as it was first derived by Charles Galton Darwin, and is given by: formula_36 The Darwin term affects only the s orbitals. This is because the wave function of an electron with formula_37 vanishes at the origin, so the delta function has no effect. For example, it gives the 2s orbital the same energy as the 2p orbital by raising the 2s state. The Darwin term changes the potential energy of the electron. It can be interpreted as a smearing out of the electrostatic interaction between the electron and nucleus due to zitterbewegung, or rapid quantum oscillations, of the electron. This can be demonstrated by a short calculation. Quantum fluctuations allow for the creation of virtual electron-positron pairs with a lifetime estimated by the uncertainty principle formula_38. The distance the particles can move during this time is formula_39, the Compton wavelength. The electrons of the atom interact with those pairs. This yields a fluctuating electron position formula_40. Using a Taylor expansion, the effect on the potential formula_41 can be estimated: formula_42 Averaging over the fluctuations formula_43 formula_44 gives the average potential formula_45 Approximating formula_46, this yields the perturbation of the potential due to fluctuations: formula_47 To compare with the expression above, plug in the Coulomb potential: formula_48 This is only slightly different. Another mechanism that affects only the s-state is the Lamb shift, a further, smaller correction that arises in quantum electrodynamics and should not be confused with the Darwin term. The Darwin term gives the s-state and p-state the same energy, but the Lamb shift makes the s-state higher in energy than the p-state. Total effect. 
The full Hamiltonian is given by formula_49 where formula_50 is the Hamiltonian from the Coulomb interaction. The total effect, obtained by summing up the three components, is given by the following expression: formula_51 where formula_52 is the total angular momentum quantum number (formula_53 if formula_54 and formula_55 otherwise). It is worth noting that this expression was first obtained by Sommerfeld based on the old Bohr theory; i.e., before modern quantum mechanics was formulated. Exact relativistic energies. The total effect can also be obtained by using the Dirac equation, which treats the electron relativistically from the outset. The exact energies are given by formula_56 This expression, which contains all higher order terms that were left out in the other calculations, expands to first order to give the energy corrections derived from perturbation theory. However, this equation does not contain the hyperfine structure corrections, which are due to interactions with the nuclear spin. Other corrections from quantum field theory such as the Lamb shift and the anomalous magnetic dipole moment of the electron are not included. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
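The expressions above are easy to evaluate numerically. The following Python sketch computes, for hydrogen (Z = 1) and n = 2, the first-order fine-structure correction and the difference between the exact Dirac energy and the gross-structure energy, which agree closely; the physical constants are rounded values inserted for illustration.

```python
import math

alpha = 1.0 / 137.035999     # fine-structure constant (rounded)
mc2 = 510998.95              # electron rest energy in eV (rounded)
Z = 1

def E_gross(n):
    """Gross-structure (Bohr) energy in eV."""
    return -0.5 * mc2 * (Z * alpha) ** 2 / n ** 2

def dE_fine(n, j):
    """First-order fine-structure correction (the Sommerfeld expression) in eV."""
    return (E_gross(n) * (Z * alpha) ** 2 / n) * (1.0 / (j + 0.5) - 3.0 / (4 * n))

def E_dirac(n, j):
    """Exact Dirac energy (relative to the rest energy) in eV."""
    denom = n - j - 0.5 + math.sqrt((j + 0.5) ** 2 - alpha ** 2)
    return -mc2 * (1.0 - 1.0 / math.sqrt(1.0 + (alpha / denom) ** 2))

for j in (0.5, 1.5):
    # perturbative correction vs. exact result minus the gross-structure energy
    print(j, dE_fine(2, j), E_dirac(2, j) - E_gross(2))

# The n = 2 splitting between j = 3/2 and j = 1/2 comes out near 4.5e-5 eV.
print(dE_fine(2, 1.5) - dE_fine(2, 0.5))
```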
[ { "math_id": 0, "text": "\\mathcal{H}^0 = \\frac{p^{2}}{2m_e} + V" }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "m_e" }, { "math_id": 3, "text": "T = \\sqrt{p^2 c^2 + m_e^2 c^4} - m_e c^2 = m_e c^2 \\left[ \\sqrt{1 + \\frac{p^{2}}{m_e^2 c^2}}-1\\right]" }, { "math_id": 4, "text": "c" }, { "math_id": 5, "text": "T = \\frac{p^2}{2m_e} - \\frac{p^4}{8m_e^3 c^2} + \\cdots" }, { "math_id": 6, "text": "\\mathcal{H}' = -\\frac{p^{4}}{8m_e^{3}c^{2}}" }, { "math_id": 7, "text": "E_n^{(1)} = \\left\\langle\\psi^0\\right\\vert \\mathcal{H}' \\left\\vert\\psi^0\\right\\rangle = -\\frac{1}{8m_e^3c^2} \\left\\langle\\psi^0\\right\\vert p^4 \\left\\vert\\psi^0\\right\\rangle = -\\frac{1}{8m_e^3c^2} \\left\\langle\\psi^0\\right\\vert p^2 p^2 \\left\\vert\\psi^0\\right\\rangle" }, { "math_id": 8, "text": "\\psi^{0}" }, { "math_id": 9, "text": "\\begin{align}\n \\mathcal{H}^0 \\left\\vert\\psi^0\\right\\rangle &= E_n \\left\\vert\\psi^0\\right\\rangle \\\\\n \\left(\\frac{p^2}{2m_e} + V\\right)\\left\\vert\\psi^0\\right\\rangle &= E_n \\left\\vert\\psi^0\\right\\rangle \\\\\n p^2 \\left\\vert\\psi^0\\right\\rangle &= 2m_e(E_n - V)\\left\\vert\\psi^0\\right\\rangle\n\\end{align}" }, { "math_id": 10, "text": "\\begin{align}\n E_n^{(1)} &= -\\frac{1}{8m_e^3 c^2}\\left\\langle\\psi^0\\right\\vert p^2 p^2 \\left\\vert\\psi^{0}\\right\\rangle \\\\[1ex]\n &= -\\frac{1}{8m_e^3 c^2}\\left\\langle\\psi^0\\right\\vert (2m_e)^2 (E_n - V)^2\\left\\vert\\psi^0\\right\\rangle \\\\[1ex]\n &= -\\frac{1}{2m_ec^2}\\left(E_n^2 - 2E_n\\langle V\\rangle + \\left\\langle V^2\\right\\rangle \\right)\n\\end{align}" }, { "math_id": 11, "text": "V(r) = \\frac{-e^2}{4\\pi \\varepsilon_0 r}," }, { "math_id": 12, "text": "\\left\\langle \\frac{1}{r} \\right\\rangle = \\frac{1}{a_0 n^2}," }, { "math_id": 13, "text": "\\left\\langle \\frac{1}{r^2} \\right\\rangle = \\frac{1}{(\\ell + 1/2) n^3 a_0^2}," }, { "math_id": 14, "text": "e" }, { "math_id": 15, "text": "\\varepsilon_0" }, { "math_id": 16, "text": "a_0" }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "\\ell" }, { "math_id": 19, "text": "r" }, { "math_id": 20, "text": "\\begin{align}\n E_{n}^{(1)} &= -\\frac{1}{2m_ec^2}\\left(E_n^2 + 2E_n\\frac{e^2}{4\\pi \\varepsilon_0}\\frac{1}{a_0 n^2} + \\frac{1}{16\\pi^2 \\varepsilon_0^2}\\frac{e^4}{\\left(\\ell + \\frac{1}{2}\\right) n^3 a_0^2}\\right) \\\\\n &= -\\frac{E_n^2}{2m_ec^2}\\left(\\frac{4n}{\\ell + \\frac{1}{2}} - 3\\right)\n\\end{align}" }, { "math_id": 21, "text": " E_n = - \\frac{e^2}{8 \\pi \\varepsilon_0 a_0 n^2} " }, { "math_id": 22, "text": " -9.056 \\times 10^{-4}\\ \\text{eV}" }, { "math_id": 23, "text": "Z" }, { "math_id": 24, "text": "Z = 1" }, { "math_id": 25, "text": "\\mathbf L" }, { "math_id": 26, "text": "\\mathbf S" }, { "math_id": 27, "text": "\\mathcal{H}_\\mathrm{SO} = \\left(\\frac{Ze^2}{4\\pi\\varepsilon_0}\\right) \\left(\\frac{g_s - 1}{2m_e^2 c^2}\\right)\\frac{\\mathbf{L} \\cdot \\mathbf{S}}{r^3}" }, { "math_id": 28, "text": "g_s" }, { "math_id": 29, "text": "\\mathbf{B}" }, { "math_id": 30, "text": "\\boldsymbol\\mu_s" }, { "math_id": 31, "text": " \\Delta E_{\\mathrm{SO}} = \\xi (r) \\mathbf L \\cdot \\mathbf S" }, { "math_id": 32, "text": "\n\\left\\langle \\frac{1}{r^3} \\right\\rangle = \\frac{Z^3}{n^3 a_0^3} \\frac{1}{\\ell \\left(\\ell + \\frac{1}{2}\\right) (\\ell + 1)}\n" }, { "math_id": 33, "text": "\n\\left\\langle \\mathbf L \\cdot \\mathbf S \\right\\rangle = \\frac{\\hbar^2}{2} \\left[j(j + 1) - \\ell(\\ell + 1) - s(s + 1)\\right]\n" }, { "math_id": 34, "text": 
"\\left\\langle \\mathcal{H}_{\\mathrm{SO}} \\right\\rangle = \\frac{E_n{}^2}{m_e c^2} ~n~ \\frac{j(j + 1) - \\ell(\\ell + 1) - \\frac{3}{4}}{\\ell \\left( \\ell + \\frac{1}{2}\\right) (\\ell + 1) }" }, { "math_id": 35, "text": "\\frac{Z^4}{n^3 \\left(j + \\frac{1}{2}\\right)\\left(j + 1\\right)} 10^{-4}\\text{ eV}" }, { "math_id": 36, "text": "\\begin{align}\n \\mathcal{H}_{\\text{Darwin}} &= \\frac{\\hbar^2}{8m_e^2 c^2}\\,4\\pi\\left(\\frac{Ze^2}{4\\pi \\varepsilon_0}\\right) \\delta^3{\\left(\\mathbf r\\right)} \\\\\n \\langle \\mathcal{H}_{\\text{Darwin}} \\rangle &= \\frac{\\hbar^2}{8m_e^2 c^2}\\,4\\pi\\left(\\frac{Ze^2}{4\\pi \\varepsilon_0}\\right)| \\psi(0)|^2 \\\\[3pt]\n \\psi (0) &= \\begin{cases}\n 0 & \\text{ for } \\ell > 0 \\\\\n \\frac{1}{\\sqrt{4\\pi}} \\, 2 \\left( \\frac {Z}{n a_0} \\right)^\\frac{3}{2} & \\text{ for } \\ell = 0\n \\end{cases}\\\\[2pt]\n \\mathcal{H}_{\\text{Darwin}} &= \\frac{2n}{m_e c^2}\\,E_n^2\n\\end{align}" }, { "math_id": 37, "text": "\\ell > 0" }, { "math_id": 38, "text": "\\Delta t \\approx \\hbar/\\Delta E \\approx \\hbar/mc^2" }, { "math_id": 39, "text": "\\xi \\approx c\\Delta t \\approx \\hbar/mc = \\lambda_c" }, { "math_id": 40, "text": "\\mathbf r + \\boldsymbol \\xi" }, { "math_id": 41, "text": "U" }, { "math_id": 42, "text": " U(\\mathbf r + \\boldsymbol\\xi) \\approx U(\\mathbf r) + \\xi\\cdot\\nabla U(\\mathbf r) + \\frac 1 2 \\sum_{i,j} \\xi_i \\xi_j \\partial_i \\partial_j U(\\mathbf r) " }, { "math_id": 43, "text": "\\boldsymbol \\xi" }, { "math_id": 44, "text": " \\overline\\xi = 0, \\quad \\overline{\\xi_i\\xi_j} = \\frac 1 3 \\overline{\\boldsymbol\\xi^2} \\delta_{ij}, " }, { "math_id": 45, "text": " \\overline{U\\left(\\mathbf r + \\boldsymbol\\xi\\right)} = U{\\left(\\mathbf r\\right)} + \\frac{1}{6} \\overline{\\boldsymbol\\xi^2} \\nabla^2 U\\left(\\mathbf r\\right). " }, { "math_id": 46, "text": "\\overline{\\boldsymbol\\xi^2} \\approx \\lambda_c^2" }, { "math_id": 47, "text": " \\delta U \\approx \\frac16 \\lambda_c^2 \\nabla^2 U = \\frac{\\hbar^2}{6m_e^2c^2}\\nabla^2 U " }, { "math_id": 48, "text": "\n \\nabla^2 U = -\\nabla^2 \\frac{Z e^2}{4\\pi\\varepsilon_0 r} = 4\\pi \\left(\\frac{Z e^2}{4\\pi\\varepsilon_0}\\right) \\delta^3(\\mathbf r)\n \\quad\\Rightarrow\\quad \\delta U \\approx \\frac{\\hbar^2}{6m_e^2c^2} 4\\pi \\left(\\frac{Z e^2}{4\\pi\\varepsilon_0}\\right) \\delta^3(\\mathbf r)\n" }, { "math_id": 49, "text": "\\mathcal{H}=\\mathcal{H}_\\text{Coulomb} + \\mathcal{H}_{\\text{kinetic}}+\\mathcal{H}_{\\mathrm{SO}}+\\mathcal{H}_{\\text{Darwin}}," }, { "math_id": 50, "text": "\\mathcal{H}_\\text{Coulomb}" }, { "math_id": 51, "text": "\\Delta E = \\frac{E_{n}(Z\\alpha)^{2}}{n}\\left( \\frac{1}{j + \\frac{1}{2}} - \\frac{3}{4n} \\right)\\,," }, { "math_id": 52, "text": "j" }, { "math_id": 53, "text": "j = 1/2" }, { "math_id": 54, "text": "\\ell = 0" }, { "math_id": 55, "text": "j = \\ell \\pm 1/2" }, { "math_id": 56, "text": "E_{j\\,n} = -m_\\text{e}c^2\\left[1 - \\left(1 + \\left[\\frac{\\alpha}{n - j - \\frac{1}{2} + \\sqrt{\\left(j + \\frac{1}{2}\\right)^2 - \\alpha^2}}\\right]^2\\right)^{-\\frac{1}{2}}\\right]." } ]
https://en.wikipedia.org/wiki?curid=610202
61020908
Cyclical monotonicity
Mathematics concept In mathematics, cyclical monotonicity is a generalization of the notion of monotonicity to the case of vector-valued functions. Definition. Let formula_0 denote the inner product on an inner product space formula_1 and let formula_2 be a nonempty subset of formula_1. A correspondence formula_3 is called "cyclically monotone" if for every set of points formula_4 with formula_5 it holds that formula_6 Properties. For scalar functions of one variable, the definition above is equivalent to the usual notion of monotonicity. Gradients of convex functions are cyclically monotone. In fact, the converse is true. Suppose formula_2 is convex and formula_7 is a correspondence with nonempty values. Then if formula_8 is cyclically monotone, there exists an upper semicontinuous convex function formula_9 such that formula_10 for every formula_11, where formula_12 denotes the subgradient of formula_13 at formula_14. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
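The statement that gradients of convex functions are cyclically monotone can be checked numerically on examples. The following Python sketch evaluates the defining cyclic sum for the gradient f(x) = Ax of the convex quadratic F(x) = (1/2) x^T A x with a symmetric positive-definite A; the matrix, the cycle length and the random test points are arbitrary illustrative choices, and the sum should be non-negative for every cycle.

```python
import random

A = [[2.0, 1.0],
     [1.0, 3.0]]          # symmetric, positive definite (illustrative choice)

def f(x):
    """Gradient of F(x) = 0.5 * x^T A x, namely A x."""
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cyclic_sum(points):
    """sum_k <x_{k+1}, f(x_{k+1}) - f(x_k)> over the closed cycle of points."""
    m = len(points)
    total = 0.0
    for k in range(m):
        x_k, x_next = points[k], points[(k + 1) % m]   # wrap around: x_{m+1} = x_1
        total += dot(x_next, [a - b for a, b in zip(f(x_next), f(x_k))])
    return total

random.seed(0)
for _ in range(5):
    cycle = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(4)]
    print(cyclic_sum(cycle) >= -1e-12)   # expected True for every random cycle
```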
[ { "math_id": 0, "text": "\\langle\\cdot,\\cdot\\rangle" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "U" }, { "math_id": 3, "text": "f: U \\rightrightarrows X" }, { "math_id": 4, "text": "x_1,\\dots,x_{m+1} \\in U" }, { "math_id": 5, "text": "x_{m+1}=x_1" }, { "math_id": 6, "text": "\\sum_{k=1}^m \\langle x_{k+1},f(x_{k+1})-f(x_k)\\rangle\\geq 0." }, { "math_id": 7, "text": "f: U \\rightrightarrows \\mathbb{R}^n" }, { "math_id": 8, "text": "f" }, { "math_id": 9, "text": "F:U\\to \\mathbb{R}" }, { "math_id": 10, "text": "f(x)\\subset \\partial F(x)" }, { "math_id": 11, "text": "x\\in U" }, { "math_id": 12, "text": "\\partial F(x)" }, { "math_id": 13, "text": "F" }, { "math_id": 14, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=61020908
61023018
Crack growth equation
A crack growth equation is used for calculating the size of a fatigue crack growing from cyclic loads. The growth of a fatigue crack can result in catastrophic failure, particularly in the case of aircraft. When many growing fatigue cracks interact with one another it is known as widespread fatigue damage. A crack growth equation can be used to ensure safety, both in the design phase and during operation, by predicting the size of cracks. In critical structure, loads can be recorded and used to predict the size of cracks to ensure maintenance or retirement occurs prior to any of the cracks failing. Safety factors are used to reduce the predicted fatigue life to a service fatigue life because of the sensitivity of the fatigue life to the size and shape of crack initiating defects and the variability between assumed loading and actual loading experienced by a component. "Fatigue life" can be divided into an initiation period and a crack growth period. Crack growth equations are used to predict the crack size starting from a given initial flaw and are typically based on experimental data obtained from constant amplitude fatigue tests. One of the earliest crack growth equations based on the stress intensity factor range of a load cycle (formula_0) is the Paris–Erdogan equation formula_1 where formula_2 is the crack length and formula_3 is the fatigue crack growth for a single load cycle formula_4. A variety of crack growth equations similar to the Paris–Erdogan equation have been developed to include factors that affect the crack growth rate such as stress ratio, overloads and load history effects. The stress intensity range can be calculated from the maximum and minimum stress intensity for a cycle formula_5 A "geometry factor" formula_6 is used to relate the far field stress formula_7 to the crack tip stress intensity using formula_8. There are standard references containing the geometry factors for many different configurations. History of crack propagation equations. Many crack propagation equations have been proposed over the years to improve prediction accuracy and incorporate a variety of effects. The works of Head, Frost and Dugdale, McEvily and Illg, and Liu on fatigue crack-growth behaviour laid the foundation in this topic. The general form of these crack propagation equations may be expressed as formula_9 where, the crack length is denoted by formula_2, the number of cycles of load applied is given by formula_4, the stress range by formula_10, and the material parameters by formula_11. For symmetrical configurations, the length of the crack from the line of symmetry is defined as formula_2 and is half of the total crack length formula_12. Crack growth equations of the form formula_13 are not a true differential equation as they do not model the process of crack growth in a continuous manner throughout the loading cycle. As such, separate cycle counting or identification algorithms such as the commonly used rainflow-counting algorithm, are required to identify the maximum and minimum values in a cycle. Although developed for the stress/strain-life methods rainflow counting has also been shown to work for crack growth. There have been a small number of true derivative fatigue crack growth equations that have also been developed. Factors affecting crack growth rate. Regimes. Figure 1 shows a typical plot of the rate of crack growth as a function of the alternating stress intensity or crack tip driving force formula_0 plotted on log scales. 
The crack growth rate behaviour with respect to the alternating stress intensity can be explained in different regimes (see figure 1) as follows. Regime A: At low growth rates, variations in microstructure, mean stress (or load ratio), and environment have significant effects on the crack propagation rates. It is observed at low load ratios that the growth rate is most sensitive to microstructure and in low strength materials it is most sensitive to load ratio. Regime B: At the mid-range of growth rates, variations in microstructure, mean stress (or load ratio), thickness, and environment have no significant effects on the crack propagation rates. Regime C: At high growth rates, crack propagation is highly sensitive to the variations in microstructure, mean stress (or load ratio), and thickness. Environmental effects have relatively little influence. Stress ratio effect. Cycles with higher stress ratio formula_14 have an increased rate of crack growth. This effect is often explained using the crack closure concept, which describes the observation that the crack faces can remain in contact with each other at loads above zero. This reduces the effective stress intensity factor range and the fatigue crack growth rate. Sequence effects. A formula_13 equation gives the rate of growth for a single cycle, but when the loading is not constant amplitude, changes in the loading can lead to temporary increases or decreases in the rate of growth. Additional equations have been developed to deal with some of these cases. The rate of growth is retarded when an overload occurs in a loading sequence. These loads generate a plastic zone that may delay the rate of growth. Two notable equations for modelling the delays occurring while the crack grows through the overload region are: formula_15 with formula_16 where formula_17 is the plastic zone corresponding to the ith cycle that occurs after the overload and formula_18 is the distance between the crack and the extent of the plastic zone at the overload. Crack growth equations. Threshold equation. To predict the crack growth rate in the near-threshold region, the following relation has been used formula_19 Paris–Erdoğan equation. To predict the crack growth rate in the intermediate regime, the Paris–Erdoğan equation is used formula_20 Forman equation. In 1967, Forman proposed the following relation to account for the increased growth rates due to stress ratio and when approaching the fracture toughness formula_21 formula_22 McEvily–Groeger equation. McEvily and Groeger proposed the following power-law relationship which considers the effects of both high and low values of formula_0 formula_23. NASGRO equation. The NASGRO equation is used in the crack growth programs AFGROW, FASTRAN and NASGRO. It is a general equation that covers the lower growth rate near the threshold formula_24 and the increased growth rate approaching the fracture toughness formula_25, as well as allowing for the mean stress effect by including the stress ratio formula_26. The NASGRO equation is formula_27 where formula_28, formula_29, formula_30, formula_31, formula_32, formula_24 and formula_25 are the equation coefficients. McClintock equation. In 1967, McClintock developed an equation for the upper limit of crack growth based on the cyclic crack tip opening displacement formula_33 formula_34 where formula_35 is the flow stress, formula_36 is the Young's modulus and formula_6 is a constant typically in the range 0.1–0.5. Walker equation. 
To account for the stress ratio effect, Walker suggested a modified form of the Paris–Erdogan equation formula_37 where, formula_38 is a material parameter which represents the influence of stress ratio on the fatigue crack growth rate. Typically, formula_38 takes a value around formula_39, but can vary between formula_40. In general, it is assumed that compressive portion of the loading cycle formula_41 has no effect on the crack growth by considering formula_42 which gives formula_43 This can be physically explained by considering that the crack closes at zero load and does not behave like a crack under compressive loads. In very ductile materials like Man-Ten steel, compressive loading does contribute to the crack growth according to formula_44. Elber equation. Elber modified the Paris–Erdogan equation to allow for crack closure with the introduction of the "opening" stress intensity level formula_45 at which contact occurs. Below this level there is no movement at the crack tip and hence no growth. This effect has been used to explain the stress ratio effect and the increased rate of growth observed with short cracks. Elber's equation is formula_46 formula_47 Ductile and brittle materials equation. The general form of the fatigue-crack growth rate in ductile and brittle materials is given by formula_48 where, formula_30 and formula_31 are material parameters. Based on different crack-advance and crack-tip shielding mechanisms in metals, ceramics, and intermetallics, it is observed that the fatigue crack growth rate in metals is significantly dependent on formula_0 term, in ceramics on formula_49, and intermetallics have almost similar dependence on formula_0 and formula_49 terms. Prediction of fatigue life. Computer programs. There are many computer programs that implement crack growth equations such as "Nasgro", AFGROW and Fastran. In addition, there are also programs that implement a probabilistic approach to crack growth that calculate the probability of failure throughout the life of a component. Crack growth programs grow a crack from an initial flaw size until it exceeds the fracture toughness of a material and fails. Because the fracture toughness depends on the boundary conditions, the fracture toughness may change from plane strain conditions for a semi-circular surface crack to plane stress conditions for a through crack. The fracture toughness for plane stress conditions is typically twice as large as that for plane strain. However, because of the rapid rate of growth of a crack near the end of its life, variations in fracture toughness do not significantly alter the life of a component. Crack growth programs typically provide a choice of: Analytical solution. The stress intensity factor is given by formula_50 where formula_51 is the applied uniform tensile stress acting on the specimen in the direction perpendicular to the crack plane, formula_2 is the crack length and formula_6 is a dimensionless parameter that depends on the geometry of the specimen. The alternating stress intensity becomes formula_52 where formula_53 is the range of the cyclic stress amplitude. By assuming the initial crack size to be formula_54, the critical crack size formula_55 before the specimen fails can be computed using formula_56 as formula_57 The above equation in formula_58 is implicit in nature and can be solved numerically if necessary. Case I. 
For formula_59 crack closure has negligible effect on the crack growth rate and the Paris–Erdogan equation can be used to compute the fatigue life of a specimen before it reaches the critical crack size formula_55 as formula_60 Crack growth model with constant value of 𝛽 and R = 0. For the Griffith-Irwin crack growth model or center crack of length formula_61 in an infinite sheet as shown in the figure 2, we have formula_62 and is independent of the crack length. Also, formula_63 can be considered to be independent of the crack length. By assuming formula_64 the above integral simplifies to formula_65 by integrating the above expression for formula_66 and formula_67 cases, the total number of load cycles formula_68 are given by formula_69 Now, for formula_70 and critical crack size to be very large in comparison to the initial crack size formula_71 will give formula_72 The above analytical expressions for the total number of load cycles to fracture formula_73 are obtained by assuming formula_74. For the cases, where formula_6 is dependent on the crack size such as the Single Edge Notch Tension (SENT), Center Cracked Tension (CCT) geometries, numerical integration can be used to compute formula_75. Case II. For formula_76 crack closure phenomenon has an effect on the crack growth rate and we can invoke Walker equation to compute the fatigue life of a specimen before it reaches the critical crack size formula_55 as formula_77 Numerical calculation. This scheme is useful when formula_6 is dependent on the crack size formula_2. The initial crack size is considered to be formula_78. The stress intensity factor at the current crack size formula_2 is computed using the maximum applied stress as formula_79If formula_49 is less than the fracture toughness formula_80, the crack has not reached its critical size formula_58 and the simulation is continued with the current crack size to calculate the alternating stress intensity as formula_81 Now, by substituting the stress intensity factor in Paris–Erdogan equation, the increment in the crack size formula_82 is computed as formula_83 where formula_84 is cycle step size. The new crack size becomes formula_85 where index formula_86 refers to the current iteration step. The new crack size is used to calculate the stress intensity at maximum applied stress for the next iteration. This iterative process is continued until formula_87 Once this failure criterion is met, the simulation is stopped. The schematic representation of the fatigue life prediction process is shown in figure 3. Example. The stress intensity factor in a SENT specimen (see, figure 4) under fatigue crack growth is given by formula_88 The following parameters are considered for the calculation formula_89 mm, formula_90 mm, formula_91 mm, formula_92, formula_93, formula_94MPa,formula_95, formula_96. The critical crack length, formula_97, can be computed when formula_98 as formula_99 By solving the above equation, the critical crack length is obtained as formula_100. Now, invoking the Paris–Erdogan equation gives formula_101 By numerical integration of the above expression, the total number of load cycles to failure is obtained as formula_102. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
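The worked SENT example above can be reproduced with a few lines of numerics. The following Python sketch finds the critical crack length by bisection on the condition K_max = K_Ic and then evaluates the life integral of the Paris–Erdogan equation by the trapezoidal rule; the bisection and step counts are arbitrary choices, and the results should come out close to the quoted 26.7 mm and 1.2 million cycles.

```python
import math

# Parameters of the SENT example above. Units: m, MPa, MPa*sqrt(m).
W = 0.100          # specimen width
a0 = 0.005         # initial crack length
d_sigma = 20.0     # stress range
R = 0.7
K_Ic = 30.0        # fracture toughness
C = 4.6774e-11     # Paris coefficient, m/cycle per (MPa*sqrt(m))^m
m = 3.874          # Paris exponent

def beta(a):
    r = a / W
    return 0.265 * (1 - r) ** 4 + (0.857 + 0.265 * r) / (1 - r) ** 1.5

def K_max(a, sigma_max):
    return beta(a) * sigma_max * math.sqrt(math.pi * a)

# Critical crack length from K_max = K_Ic, with sigma_max = d_sigma / (1 - R).
sigma_max = d_sigma / (1 - R)
lo, hi = a0, 0.99 * W
for _ in range(100):                       # bisection: K_max(a) increases with a
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if K_max(mid, sigma_max) < K_Ic else (lo, mid)
a_c = 0.5 * (lo + hi)
print("a_c =", a_c)                        # about 0.0267 m (26.7 mm)

# Fatigue life: N_f = integral over a of da / (C * (beta * d_sigma * sqrt(pi*a))^m).
def integrand(a):
    dK = beta(a) * d_sigma * math.sqrt(math.pi * a)
    return 1.0 / (C * dK ** m)

steps = 20000
h = (a_c - a0) / steps
N_f = h * (0.5 * (integrand(a0) + integrand(a_c))
           + sum(integrand(a0 + i * h) for i in range(1, steps)))
print("N_f =", N_f)                        # about 1.2e6 cycles, as quoted above
```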
[ { "math_id": 0, "text": "\\Delta K" }, { "math_id": 1, "text": "\n{da \\over dN} = C(\\Delta K)^m\n" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "{\\rm d}a/{\\rm d}N" }, { "math_id": 4, "text": "N" }, { "math_id": 5, "text": "\\Delta K = K_\\text{max} - K_\\text{min}" }, { "math_id": 6, "text": "\\beta" }, { "math_id": 7, "text": "\\sigma" }, { "math_id": 8, "text": "\nK = \\beta \\sigma \\sqrt{\\pi a}\n" }, { "math_id": 9, "text": "\n{da \\over dN} = f(\\Delta \\sigma, a, C_{i}),\n" }, { "math_id": 10, "text": "\\Delta \\sigma" }, { "math_id": 11, "text": "C_{i}" }, { "math_id": 12, "text": "2a" }, { "math_id": 13, "text": "da/dN" }, { "math_id": 14, "text": " R = K_{\\text{min}}}/{K_{\\text{max}} \\equiv P_{\\text{min}}/{P_{\\text{max}}}" }, { "math_id": 15, "text": "\n\\left(\\frac{da}{dN}\\right)_\\text{VA} = \\beta \\left(\\frac{da}{dN}\\right)_\\text{CA} " }, { "math_id": 16, "text": " \\beta = \\left(\\frac{r_\\text{pi}}{r_\\text{max}}\\right)^k " }, { "math_id": 17, "text": "r_\\text{pi}" }, { "math_id": 18, "text": "r_\\text{max}" }, { "math_id": 19, "text": "{da \\over dN} = A\\left(\\Delta K - \\Delta K_{\\text{th}}\\right)^{p}." }, { "math_id": 20, "text": "{da \\over dN} = C\\left(\\Delta K\\right)^{m}." }, { "math_id": 21, "text": "K_\\text{c}" }, { "math_id": 22, "text": "{da \\over dN} = \\frac{C(\\Delta K)^n}{(1-R)K_\\text{c} - \\Delta K}" }, { "math_id": 23, "text": "{da \\over dN}= A(\\Delta K - \\Delta K_{\\text{th}})^2\\Big[1 + \\frac{\\Delta K}{K_{\\text{Ic}} - K_{\\text{max}}}\\Big]" }, { "math_id": 24, "text": "\\Delta K_\\text{th}" }, { "math_id": 25, "text": "K_\\text{crit}" }, { "math_id": 26, "text": "R" }, { "math_id": 27, "text": "\n\\frac{da}{dN} = C \\left[ \\left( \\frac{1 - f}{1 - R}\\right) \\Delta K \\right] ^ {n} {\\left(1 - \\frac{\\Delta K_\\text{th}}{\\Delta K} \\right) ^ {p} \\over \\left (1 - \\frac{K_{\\max}}{K_\\text{crit}} \\right) ^ {q}}\n" }, { "math_id": 28, "text": "C" }, { "math_id": 29, "text": "f" }, { "math_id": 30, "text": "n" }, { "math_id": 31, "text": "p" }, { "math_id": 32, "text": "q" }, { "math_id": 33, "text": "\\Delta \\text{CTOD}" }, { "math_id": 34, "text": "{da \\over dN} \\propto \\Delta \\text{CTOD} \\approx \\beta {(\\Delta K)^2 \\over {2 \\sigma_0 E'}}" }, { "math_id": 35, "text": "\\sigma_0" }, { "math_id": 36, "text": "E'" }, { "math_id": 37, "text": "{da \\over dN} = C\\Big(\\overline{\\Delta K}\\Big)^{m} = C\\bigg(\\frac{\\Delta K}{(1-R)^{1-\\gamma}}\\bigg)^{m} = C\\big(K_{\\text{max}}(1-R)^{\\gamma}\\big)^{m}" }, { "math_id": 38, "text": "\\gamma" }, { "math_id": 39, "text": "0.5" }, { "math_id": 40, "text": "0.3-1.0" }, { "math_id": 41, "text": "\\big(R < 0\\big)" }, { "math_id": 42, "text": "\\gamma = 0," }, { "math_id": 43, "text": "\\overline{\\Delta K} = K_{\\text{max}}." 
}, { "math_id": 44, "text": "\\gamma = 0.22" }, { "math_id": 45, "text": "K_\\text{op}" }, { "math_id": 46, "text": "\n\\Delta K_\\text{eff} = K_\\text{max} - K_\\text{op}\n" }, { "math_id": 47, "text": "\n{da \\over dN} = C(\\Delta K_\\text{eff})^m\n" }, { "math_id": 48, "text": "{da \\over dN} \\propto (K_{\\text{max}})^{n}(\\Delta K)^{p}," }, { "math_id": 49, "text": "K_{\\text{max}}" }, { "math_id": 50, "text": "K = \\beta \\sigma \\sqrt{\\pi a}, " }, { "math_id": 51, "text": "\\sigma " }, { "math_id": 52, "text": "\\begin{align}\n\\Delta K &= \\begin{cases}\n\\beta (\\sigma_{\\text{max}} - \\sigma_{\\text{min}}) \\sqrt{\\pi a} = \\beta \\Delta \\sigma \\sqrt{\\pi a}, \\qquad R \\geq 0 \\\\\n\\beta \\sigma_{\\text{max}} \\sqrt{\\pi a}, \\qquad R < 0\n\\end{cases},\n\\end{align} " }, { "math_id": 53, "text": "\\Delta \\sigma " }, { "math_id": 54, "text": "a_{0} " }, { "math_id": 55, "text": "a_{c} " }, { "math_id": 56, "text": "\\big(K = K_{\\text{max}} = K_{\\text{Ic}}\\big) " }, { "math_id": 57, "text": "\\begin{align}\nK_{\\text{Ic}} &= \\beta \\sigma_{\\text{max}}\\sqrt{\\pi a_{c}}, \\\\\n\\Rightarrow a_{c} &= \\frac{1}{\\pi}\\bigg(\\frac{K_{\\text{Ic}}}{\\beta \\sigma_{\\text{max}}}\\bigg)^2.\n\\end{align} " }, { "math_id": 58, "text": "a_{c}" }, { "math_id": 59, "text": "R \\geq 0.7," }, { "math_id": 60, "text": "\\begin{align}\n{da \\over dN} &= C(\\Delta K)^m = C\\bigg(\\beta \\Delta \\sigma \\sqrt{\\pi a}\\bigg)^m, \\\\\n\\Rightarrow N_{f} &= \\frac{1}{(\\sqrt{\\pi}\\Delta \\sigma)^{m}}\\int_{a_{0}}^{a_{c}} \\frac{da}{(C\\sqrt{a}\\beta)^{m}}.\n\\end{align} " }, { "math_id": 61, "text": "2a " }, { "math_id": 62, "text": "\\beta=1 " }, { "math_id": 63, "text": "C " }, { "math_id": 64, "text": "\\beta = \\text{constant}, " }, { "math_id": 65, "text": "N_{f} = \\frac{1}{C(\\sqrt{\\pi}\\beta \\Delta \\sigma)^{m}}\\int_{a_{0}}^{a_{c}} \\frac{da}{(\\sqrt{a})^{m}}, " }, { "math_id": 66, "text": "m \\neq 2" }, { "math_id": 67, "text": "m =2" }, { "math_id": 68, "text": "N_{f}" }, { "math_id": 69, "text": "\\begin{align}\nN_{f} &= \\frac{2}{(m-2)C(\\sqrt{\\pi}\\beta \\Delta \\sigma)^{m}}\\Bigg[\\frac{1}{(a_{0})^{\\frac{m-2}{2}}} - \\frac{1}{(a_{c})^{\\frac{m-2}{2}}}\\Bigg], \\qquad m \\neq 2, \\\\\nN_{f} &= \\frac{1}{\\pi C (\\beta\\Delta \\sigma)^2 } \\ln \\frac{a_{c}}{a_{0}}, \\qquad m = 2.\n\\end{align}" }, { "math_id": 70, "text": "m > 2 " }, { "math_id": 71, "text": "\\big(a_{c} >> a_{0}\\big) " }, { "math_id": 72, "text": "N_{f} = \\frac{2}{(m-2)C(\\sqrt{\\pi}\\Delta \\sigma \\beta)^{m}}(a_{0})^{\\frac{2-m}{2}}. " }, { "math_id": 73, "text": "\\big(N_{f}\\big) " }, { "math_id": 74, "text": "Y = \\text{constant} " }, { "math_id": 75, "text": "N_{f} " }, { "math_id": 76, "text": "R < 0.7," }, { "math_id": 77, "text": "\\begin{align}\n{da \\over dN} &= C\\bigg(\\frac{\\Delta K}{(1-R)^{1-\\gamma}}\\bigg)^m = \\frac{C}{(1-R)^{m(1-\\gamma)}}\\bigg( \\beta\\Delta \\sigma \\sqrt{\\pi a}\\bigg)^m, \\\\\n\\Rightarrow N_{f} &= \\frac{(1-R)^{m(1-\\gamma)}}{(\\sqrt{\\pi}\\Delta \\sigma)^{m}}\\int_{a_{0}}^{a_{c}} \\frac{da}{(C\\sqrt{a}\\beta)^{m}}.\n\\end{align} " }, { "math_id": 78, "text": "a_{0}" }, { "math_id": 79, "text": "\\begin{align}\nK_{\\text{max}} &= \\beta \\sigma_{\\text{max}} \\sqrt{\\pi a}.\n\\end{align}" }, { "math_id": 80, "text": "K_{\\text{Ic}}" }, { "math_id": 81, "text": "\\Delta K = \\beta\\Delta \\sigma \\sqrt{\\pi a}." 
}, { "math_id": 82, "text": "\\Delta a " }, { "math_id": 83, "text": "\\Delta a = C(\\Delta K)^m \\Delta N," }, { "math_id": 84, "text": "\\Delta N " }, { "math_id": 85, "text": "a_{i+1} = a_{i} + \\Delta a, " }, { "math_id": 86, "text": "i " }, { "math_id": 87, "text": "K_{\\text{max}} \\geq K_{\\text{Ic}}. " }, { "math_id": 88, "text": "\\begin{align}\nK_{I} &= \\beta\\sigma \\sqrt{\\pi a} = \\sigma \\sqrt{\\pi a}\\Bigg[0.265\\bigg[1 - \\frac{a}{W}\\bigg]^{4} + \\frac{0.857 + 0.265 \\frac{a}{W}}{\\big[1 - \\frac{a}{W}\\big]^{\\frac{3}{2}}}\\Bigg], \\\\\n\\Delta K_{I} &= K_{\\text{max}} - K_{\\text{min}} = \\beta \\Delta \\sigma \\sqrt{\\pi a}.\n\\end{align}" }, { "math_id": 89, "text": "a_0 = 5" }, { "math_id": 90, "text": "W = 100" }, { "math_id": 91, "text": "h = 200" }, { "math_id": 92, "text": "K_{\\text{Ic}} = 30\\text{ MPa}\\sqrt{\\text{m}}" }, { "math_id": 93, "text": "R = \\frac{K_{\\text{min}}}{K_{\\text{max}}} = 0.7" }, { "math_id": 94, "text": "\\Delta \\sigma = 20" }, { "math_id": 95, "text": "C = 4.6774 \\times 10^{-11} \\frac{\\text{m}}{\\text{cycle}}\\frac{1}{(\\text{MPa}\\sqrt{\\text{m}})^m}" }, { "math_id": 96, "text": "m = 3.874" }, { "math_id": 97, "text": "a = a_{c}" }, { "math_id": 98, "text": "K_{\\text{max}} = K_{\\text{Ic}}" }, { "math_id": 99, "text": "a_{c} = \\frac{1}{\\pi}\\Bigg(\\frac{0.45}\\beta\\Bigg)^2." }, { "math_id": 100, "text": "a_{c} = 26.7 \\text{mm}" }, { "math_id": 101, "text": "N_{f} = \\frac{1}{C (\\Delta \\sigma)^{m}(\\sqrt{\\pi})^{m}}\\int_{a_{0}}^{a_{c}} \\frac{da}{a^{\\frac{m}{2}}\\Bigg[0.265\\bigg[1 - \\frac{a}{W}\\bigg]^{4} + \\frac{0.857 + 0.265 \\frac{a}{W}}{\\big[1 - \\frac{a}{W}\\big]^{\\frac{3}{2}}}\\Bigg]^m} " }, { "math_id": 102, "text": "N_{f} = 1.2085 \\times 10^{6} \\text{ cycles}" } ]
https://en.wikipedia.org/wiki?curid=61023018
61023040
Combinatorial modelling
Combinatorial modelling is the process of identifying a suitable mathematical model with which to reformulate a problem. These combinatorial models provide, through combinatorial theory, the operations needed to solve the problem. Implicit combinatorial models. Simple combinatorial problems are the ones that can be solved by applying just one combinatorial operation (variations, permutations, combinations, …). These problems can be classified into three different models, called implicit combinatorial models. Selection. A selection problem requires choosing a sample of "k" elements out of a set of "n" elements. It is necessary to know whether the order in which the objects are selected matters and whether an object can be selected more than once. This table shows the operations that the model provides to get the number of different samples for each of the selections: Examples. "1.- At a party there are 50 people. Everybody shakes everybody’s hand once. How many handshakes take place in total?" What we need to do is calculate the number of all possible pairs of party guests, that is, a sample of 2 people out of the 50 guests, so formula_0 and formula_1. A pair will be the same no matter the order of the two people. A handshake must be carried out by two different people (no repetition). So, it is required to select a non ordered sample of 2 elements out of a set of 50 elements, in which repetition is not allowed. That is all we need to know to choose the right operation, and the result is: formula_2 "2.- Unfortunately, you can’t remember the code for your four-digit lock. You only know that you didn’t use any digit more than once. How many different ways do you have to try?" We need to choose a sample of 4 digits out of the set of 10 digits (base 10), so formula_3 and formula_4. The digits must be ordered in a certain way to get the correct number, so we want to select an ordered sample. As the statement says, no digit was chosen more than once, so our sample will not have repeated digits. So, it is required to select an ordered sample of 4 elements out of a set of 10 elements, in which repetition is not allowed. That is all we need to know to choose the right operation, and the result is: formula_5 "3.- A boy wants to buy 20 invitation cards to give to his friends for his birthday party. There are 3 types of cards in the store, and he likes them all. In how many ways can he buy the 20 cards?" It is required to choose a sample of 20 invitation cards out of the set of 3 types of cards, so formula_6 and formula_7. The order in which he chooses the different types of invitations does not matter. As the same type of card can be selected more than once, there will be repetitions in our sample of invitation cards. So, we want to select a non ordered sample of 20 elements (formula_6) out of a set of 3 elements (formula_7), in which repetition is allowed. That is all we need to know to choose the right operation, and the result is: formula_8 Distribution. In a distribution problem it is required to place "k" objects into "n" boxes or containers. In order to choose the right operation out of the ones that the model provides, it is necessary to know: whether the objects are distinguishable or not; whether the boxes are distinguishable or not; whether the order in which the objects are placed inside each box matters; and which conditions the distribution must satisfy, for example that every box holds at most one object (an injective distribution, requiring formula_9), at least one object (a surjective distribution, requiring formula_10), or exactly one object (a bijective distribution, requiring formula_11). The following table shows the operations that the model provides to get the number of different ways of distributing the objects for each of the distributions: formula_12 Stirling numbers of the second kind formula_13 Number of partitions of the integer "k" into "n" parts formula_14 Lah numbers (Stirling numbers of the third kind) Examples. 
"1.- A maths teacher has to give 3 studentships among his students. 7 of them got an 'outstanding' grade, so they are the candidates to get them. In how many ways can he distribute the grants?" Let's consider the 3 studentships are objects that have to be distributed into 7 boxes, which are the students. As the objects are identical studentships, they are indistinguishable. The boxes are distinguishable, as they are different students. Every studentship must be given to a different student, so every box must have at most 1 object. Furthermore, the order in which the objects are placed in a boxes does not matter, because there cannot be more than one on each box. So, it is a non ordered injective distribution of 3 indistinguishable objects (formula_15) into 7 distinguishable boxes (formula_16). That is all we need to know to choose the right operation, and the result is: formula_17 "2.- A group of 8 friends rent a 5-room cottage to spend their holidays. If the rooms are identical and no one can be empty, in how many ways can they be distributed in the cottage?" Let's consider the friends are objects that have to be distributed into 5 boxes, which are the rooms. As the objects are different people, they are distinguishable. The boxes are indistinguishable, as they are identical rooms. We can consider it as a non ordered distribution, because the ordered in which everyone is placed in the rooms does not matter. No room can be empty, so every box must have at least 1 object. So, it is a non ordered surjective distribution of 8 distinguishable objects (formula_18) into 5 indistinguishable boxes (formula_19). That is all we need to know to choose the right operation, and the result is: formula_20 "3.- 12 people are done shopping in a supermarket where 4 cashiers are working at the moment. In how many different ways can they be distributed into the checkouts?" Let's consider the people are objects that have to be distributed into boxes, which are the check-outs. As the people and the checkouts are different, the objects and the boxes are distinguishable. The order in which the objects are placed in the boxes matter, because they are people getting into queues. The statement does not mention any restriction. So, it is an ordered distribution with no restrictions of 12 distinguishable objects (formula_21) into 4 distinguishable boxes (formula_22). That is all we need to know to choose the right operation, and the result is: formula_23 Partition. A partition problem requires to divide a set of "k" elements into "n" subsets. This model is related to the distribution one, as we can consider the objects inside every box as subsets of the set of objects to distribute. So, each of the 24 distributions described previously has a matching kind of partition into subsets. So, a partition problem can be solved by transforming it into a distribution one and applying the correspondent operation provided by the distribution model (previous table). Following this method, we will get the number of possible ways of dividing the set. The relation between these two models is described in the following table: This relation let us transform the table provided by the distribution model into a new one that can be used to know the different ways of dividing a set of "k" elements into "n" subsets: Examples. "1.- A group of 3 classmates have to make a thesis about 8 different maths topics. In how many ways can they split the work to do?" We need to divide the set of 8 topics into 3 subsets. 
These subsets will be the topics that each of the students will work on. The elements in the set (topics) are distinguishable. The subsets must be ordered because each one will correspond to a different student, but the topics inside every subset do not have to be ordered because each student can decide which order to follow when working on the thesis. The statement does not mention any restriction on the subsets. So, it is required to divide a set of 8 elements (formula_18) into 3 ordered subsets (formula_24) of non ordered elements. That is all we need to know to choose the right operation, and the result is: formula_25
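The counts in the worked examples above can be checked with a short computation. The following Python sketch is purely illustrative (it relies only on the standard library; the variable names are ours and not part of any established package) and reproduces the results of the selection, distribution and partition examples:

import math

# Selection model.
# 1) Handshakes among 50 guests: non ordered sample without repetition -> combinations.
assert math.comb(50, 2) == 1225
# 2) Four-digit code with no repeated digit: ordered sample without repetition -> variations.
assert math.perm(10, 4) == 5040
# 3) 20 cards chosen from 3 types: non ordered sample with repetition -> CR(3,20) = C(3+20-1, 20).
assert math.comb(3 + 20 - 1, 20) == 231

# Distribution model.
# 1) 3 identical studentships to 7 students, at most one each -> C(7,3).
assert math.comb(7, 3) == 35
# 2) 8 friends into 5 identical rooms, none empty -> Stirling number of the second kind S(8,5).
stirling = sum((-1) ** i * math.comb(5, i) * (5 - i) ** 8 for i in range(6)) // math.factorial(5)
assert stirling == 1050
# 3) 12 people into 4 queues, order inside each queue matters -> 12! * CR(4,12).
assert math.factorial(12) * math.comb(4 + 12 - 1, 12) == 217_945_728_000

# Partition model: 8 topics into 3 ordered subsets of non ordered elements -> 3**8.
assert 3 ** 8 == 6561
print("all worked examples check out")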
[ { "math_id": 0, "text": "k=2 " }, { "math_id": 1, "text": "n=50" }, { "math_id": 2, "text": "C_{50,2}=\\frac{50!}{2!\\cdot(50-2)!}=1225" }, { "math_id": 3, "text": "k=4\n" }, { "math_id": 4, "text": "n=10" }, { "math_id": 5, "text": "V_{10,4}=\\frac{10!}{(10-4)!}=5040" }, { "math_id": 6, "text": "k=20\n\n" }, { "math_id": 7, "text": "n=3\n" }, { "math_id": 8, "text": "CR_{3,20}=C_{22,20}=\\frac{22!}{20!\\cdot(22-20)!}=231" }, { "math_id": 9, "text": "k\\leq n" }, { "math_id": 10, "text": "k\\geq n" }, { "math_id": 11, "text": "k=n" }, { "math_id": 12, "text": "S(k,n)=\\left\\{ {k \\atop n} \\right\\}:" }, { "math_id": 13, "text": "P(k,n):" }, { "math_id": 14, "text": "L(k,n):" }, { "math_id": 15, "text": "k=3" }, { "math_id": 16, "text": "n=7" }, { "math_id": 17, "text": "C_{7,3}=\\frac{7!}{3!\\cdot(7-3)!}=35" }, { "math_id": 18, "text": "k=8" }, { "math_id": 19, "text": "n=5" }, { "math_id": 20, "text": "S(8,5)= \\left\\{{8 \\atop 5}\\right\\} =\\frac {1}{5!}\\sum _{i=0}^5(-1)^i \\binom {5}{i}(5-i)^8 = 1050" }, { "math_id": 21, "text": "k=12" }, { "math_id": 22, "text": "n=4\n" }, { "math_id": 23, "text": "12!\\cdot CR_{4,12}=12!\\cdot C_{15,12}=12!\\cdot\\frac{15!}{12!\\cdot(15-12)!}=217945728000" }, { "math_id": 24, "text": "n=3" }, { "math_id": 25, "text": "VR_{3,8}=3^8=6561" } ]
https://en.wikipedia.org/wiki?curid=61023040
61033016
STEREO experiment
Experiment investigating oscillation of neutrinos The STEREO experiment (Search for Sterile Reactor Neutrino Oscillations) investigated the possible oscillation of neutrinos from a nuclear reactor into light so-called sterile neutrinos. It was located at the Institut Laue–Langevin (ILL) in Grenoble, France. The experiment took data from November 2016 to November 2020. The final results of the experiment rejected the hypothesis of a light sterile neutrino. Detector. Measuring principle. The STEREO detector is placed 10 m from the research reactor at the ILL. The research reactor has a thermal power of 58 MW. STEREO is designed to measure the neutrino flux and spectrum near the reactor. To detect the neutrinos radiated from the reactor, the detector is filled with 1800 litres of organic liquid scintillator doped with gadolinium. Inside the scintillator, neutrinos are captured via the process of inverse beta decay formula_0 In this process a positron is produced. When the positron moves through the scintillator, a light signal is produced, which is detected by the 48 photomultiplier tubes (PMTs) placed at the top of the detector cells. The capture of the neutron, which is also produced in the inverse beta decay, produces a second coincidence signal. The expected distance between the oscillation maximum and minimum of light sterile neutrinos is about 2 m. To see the oscillation, the detector is divided into 6 separate detector cells, each of which measures the energy spectrum of the detected neutrinos. By comparing the measured spectra, a possible oscillation could be discovered (see Figure 2). The STEREO experiment detects formula_1 neutrinos per day. Detector shielding. Neutrinos only interact weakly. Therefore, neutrino detectors such as STEREO need to be very sensitive and need good shielding from additional background signals to be able to detect neutrinos precisely. To achieve this high sensitivity, the 6 inner detector cells are surrounded by a liquid scintillator (without gadolinium) which acts as a "Gamma-Catcher" detecting incoming and outgoing gamma radiation. This significantly increases the detection efficiency as well as the energy resolution of the detector. A Cherenkov detector filled with water is placed on top of the detector to detect cosmic muons, which are produced in the atmosphere and would otherwise act as a large background source. To shield the detector from radioactive sources in surrounding experiments, it is surrounded by many layers (65 t) of mostly lead and polyethylene, but also iron, steel and boron carbide (B4C). Motivation. Although neutrino oscillation is a phenomenon that is quite well understood today, there are still some experimental observations that question the completeness of our understanding. The most prominent of these observations is the so-called "reactor antineutrino anomaly" (RAA) (see Figure 3). A number of short-baseline reactor-neutrino experiments have measured a significantly lower electron antineutrino flux than the theoretical predictions (a 2.7σ deviation). Further experimental anomalies are the unexpected appearance of electron antineutrinos in a short-baseline muon antineutrino beam (LSND anomaly), as well as the disappearance of electron neutrinos at short distances during the calibration phase of the GALLEX and SAGE experiments, known as the gallium neutrino anomaly. 
These anomalies could signify that our understanding of neutrino oscillations is not yet complete and that neutrinos oscillate into an additional, fourth neutrino species. However, measurements of the decay width of the Z boson at the Large Electron–Positron Collider (LEP) exclude the existence of a light fourth "active" (i.e. weakly interacting) neutrino. Hence oscillation into additional light "sterile" neutrinos is considered a possible explanation of the observed anomalies. In addition, sterile neutrinos appear in many prominent extensions of the Standard Model of particle physics, e.g. in the type I seesaw mechanism. Results. Initial results were released in 2018, based on a dataset of 66 days with the reactor turned on. Most of the parameter space that could account for the RAA was excluded at a 90% confidence level. The final results were published in 2023. 107,588 antineutrinos were detected from October 2017 until November 2020. The sterile neutrino explanation of the RAA was rejected up to a few eV² for the squared mass splitting between the standard and sterile neutrino states (see Figure 4). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\overline{\\nu}_e + p \\rightarrow n + e^+" }, { "math_id": 1, "text": "\\sim400" } ]
https://en.wikipedia.org/wiki?curid=61033016
61043737
Degenerate Higher-Order Scalar-Tensor theories
Theory of gravity Degenerate Higher-Order Scalar-Tensor theories (or DHOST theories) are theories of modified gravity. They have a Lagrangian containing second-order derivatives of a scalar field but do not generate ghosts (excitations with negative kinetic energy), because they only contain one propagating scalar mode (as well as the two usual tensor modes). History. DHOST theories were introduced in 2015 by David Langlois and Karim Noui. They are a generalisation of Beyond Horndeski (or GLPV) theories, which are themselves a generalisation of Horndeski theories. The equations of motion of Horndeski theories contain only two derivatives of the metric and the scalar field, and it was believed that only equations of motion of this form would not contain an extra scalar degree of freedom (which would lead to unwanted ghosts). However, it was subsequently shown that a class of theories, now named Beyond Horndeski theories, also avoids the extra degree of freedom. Originally, only theories quadratic in the second derivative of the scalar field were studied, but DHOST theories up to cubic order have since been analysed. A well-known specific example of a DHOST theory is mimetic gravity, introduced in 2013 by Chamseddine and Mukhanov. Action. All DHOST theories depend on a scalar field formula_0. The general action of DHOST theories is given by formula_1 where formula_2 is the kinetic energy of the scalar field, formula_3 denotes its second covariant derivative, and the quadratic terms in formula_4 are given by formula_5 where formula_6 and the cubic terms are given by formula_7 where formula_8 The formula_9 and formula_10 are arbitrary functions of formula_0 and formula_2. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\phi" }, { "math_id": 1, "text": "\\begin{aligned} S[g, \\phi]=\\int d^{4} x \\sqrt{-g}\\left[f_{0}(X, \\phi)+f_{1}(X, \\phi) \\square \\phi+f_{2}(X, \\phi) R+C_{(2)}^{\\mu \\nu \\rho \\sigma} \\phi_{\\mu \\nu} \\phi_{\\rho \\sigma}+\\right. f_{3}(X, \\phi) G_{\\mu \\nu} \\phi^{\\mu \\nu}+C_{(3)}^{\\mu \\nu \\rho \\sigma \\alpha \\beta} \\phi_{\\mu \\nu} \\phi_{\\rho \\sigma} \\phi_{\\alpha \\beta} ] ,\\end{aligned}" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "\\phi_{\\mu\\nu}=\\nabla_\\mu \\nabla_\\nu \\phi" }, { "math_id": 4, "text": "\\phi_{\\mu\\nu}" }, { "math_id": 5, "text": "C_{(2)}^{\\mu \\nu \\rho \\sigma} \\phi_{\\mu \\nu} \\phi_{\\rho \\sigma}=\\sum_{A=1}^{5} a_{A}(X, \\phi) L_{A}^{(2)}," }, { "math_id": 6, "text": "\\begin{array}{l}{L_{1}^{(2)}=\\phi_{\\mu \\nu} \\phi^{\\mu \\nu}, \\quad L_{2}^{(2)}=(\\square \\phi)^{2}, \\quad L_{3}^{(2)}=(\\square \\phi) \\phi^{\\mu} \\phi_{\\mu \\nu} \\phi^{\\nu}}, \\quad {L_{4}^{(2)}=\\phi^{\\mu} \\phi_{\\mu \\rho} \\phi^{\\rho \\nu} \\phi_{\\nu}, \\quad L_{5}^{(2)}=\\left(\\phi^{\\mu} \\phi_{\\mu \\nu} \\phi^{\\nu}\\right)^{2}},\\end{array}" }, { "math_id": 7, "text": "C_{(3)}^{\\mu \\nu \\rho \\sigma \\alpha \\beta} \\phi_{\\mu \\nu} \\phi_{\\rho \\sigma} \\phi_{\\alpha \\beta}=\\sum_{A=1}^{10} b_{A}(X, \\phi) L_{A}^{(3)}," }, { "math_id": 8, "text": "\\begin{array}{l}{L_{1}^{(3)}=(\\square \\phi)^{3}, \\quad L_{2}^{(3)}=(\\square \\phi) \\phi_{\\mu \\nu} \\phi^{\\mu \\nu}, \\quad L_{3}^{(3)}=\\phi_{\\mu \\nu} \\phi^{\\nu \\rho} \\phi_{\\rho}^{\\mu}, \\quad L_{4}^{(3)}=(\\square \\phi)^{2} \\phi_{\\mu} \\phi^{\\mu \\nu} \\phi_{\\nu}}, \\\\ {L_{5}^{(3)}=\\square \\phi \\phi_{\\mu} \\phi^{\\mu \\nu} \\phi_{\\nu \\rho} \\phi^{\\rho}, \\quad L_{6}^{(3)}=\\phi_{\\mu \\nu} \\phi^{\\mu \\nu} \\phi_{\\rho} \\phi^{\\rho \\sigma} \\phi_{\\sigma}, \\quad L_{7}^{(3)}=\\phi_{\\mu} \\phi^{\\mu \\nu} \\phi_{\\nu \\rho} \\phi^{\\rho \\sigma} \\phi_{\\sigma}}, \\\\ {L_{8}^{(3)}=\\phi_{\\mu} \\phi^{\\mu \\nu} \\phi_{\\nu \\rho} \\phi^{\\rho} \\phi_{\\sigma} \\phi^{\\sigma \\lambda} \\phi_{\\lambda}, \\quad L_{9}^{(3)}=\\square \\phi\\left(\\phi_{\\mu} \\phi^{\\mu \\nu} \\phi_{\\nu}\\right)^{2}, \\quad L_{10}^{(3)}=\\left(\\phi_{\\mu} \\phi^{\\mu \\nu} \\phi_{\\nu}\\right)^{3}}.\\end{array}" }, { "math_id": 9, "text": "a_A" }, { "math_id": 10, "text": "b_A" } ]
https://en.wikipedia.org/wiki?curid=61043737
610583
Sinc function
Special mathematical function defined as sin(x)/x In mathematics, physics and engineering, the sinc function, denoted by sinc("x"), has two forms, normalized and unnormalized. In mathematics, the historical unnormalized sinc function is defined for "x" ≠ 0 by formula_0 Alternatively, the unnormalized sinc function is often called the sampling function, indicated as Sa("x"). In digital signal processing and information theory, the normalized sinc function is commonly defined for "x" ≠ 0 by formula_1 In either case, the value at "x" = 0 is defined to be the limiting value formula_2 for all real "a" ≠ 0 (the limit can be proven using the squeeze theorem). The normalization causes the definite integral of the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value of π). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values of x. The normalized sinc function is the Fourier transform of the rectangular function with no scaling. It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal. The only difference between the two definitions is in the scaling of the independent variable (the x axis) by a factor of π. In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1. The sinc function is then analytic everywhere and hence an entire function. The function has also been called the cardinal sine or sine cardinal function. The term "sinc" was introduced by Philip M. Woodward in his 1952 article "Information theory and inverse probability in telecommunication", in which he said that the function "occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own", and his 1953 book "Probability and Information Theory, with Applications to Radar". The function itself was first mathematically derived in this form by Lord Rayleigh in his expression (Rayleigh's formula) for the zeroth-order spherical Bessel function of the first kind. Properties. The zero crossings of the unnormalized sinc are at non-zero integer multiples of π, while zero crossings of the normalized sinc occur at non-zero integers. The local maxima and minima of the unnormalized sinc correspond to its intersections with the cosine function. That is, sin("ξ")/"ξ" = cos("ξ") for all points ξ where the derivative of sin("x")/"x" is zero and thus a local extremum is reached. This follows from the derivative of the sinc function: formula_3 The first few terms of the infinite series for the x coordinate of the n-th extremum with positive x coordinate are formula_4 where formula_5 and where odd n lead to a local minimum, and even n to a local maximum. Because of symmetry around the y axis, there exist extrema with x coordinates −"xn". In addition, there is an absolute maximum at "ξ"0 = (0, 1). The normalized sinc function has a simple representation as the infinite product: formula_6 and is related to the gamma function Γ("x") through Euler's reflection formula: formula_7 Euler discovered that formula_8 and because of the product-to-sum identity formula_9 Euler's product can be recast as a sum formula_10 The continuous Fourier transform of the normalized sinc (to ordinary frequency) is rect("f"): formula_11 where the rectangular function is 1 for arguments between −1/2 and 1/2, and zero otherwise. 
This corresponds to the fact that the sinc filter is the ideal (brick-wall, meaning rectangular frequency response) low-pass filter. This Fourier integral, including the special case formula_12 is an improper integral (see Dirichlet integral) and not a convergent Lebesgue integral, as formula_13 The normalized sinc function has properties that make it ideal in relationship to interpolation of sampled bandlimited functions: Other properties of the two sinc functions include: Relationship to the Dirac delta distribution. The normalized sinc function can be used as a "nascent delta function", meaning that the following weak limit holds: formula_21 This is not an ordinary limit, since the left side does not converge. Rather, it means that formula_22 for every Schwartz function, as can be seen from the Fourier inversion theorem. In the above expression, as "a" → 0, the number of oscillations per unit length of the sinc function approaches infinity. Nevertheless, the expression always oscillates inside an envelope of ±1/(π"x"), regardless of the value of a. This complicates the informal picture of "δ"("x") as being zero for all x except at the point "x" = 0, and illustrates the problem of thinking of the delta function as a function rather than as a distribution. A similar situation is found in the Gibbs phenomenon. Summation. All sums in this section refer to the unnormalized sinc function. The sum of sinc("n") over integer n from 1 to ∞ equals (π − 1)/2: formula_23 The sum of the squares also equals (π − 1)/2: formula_24 When the signs of the addends alternate and begin with +, the sum equals 1/2: formula_25 The alternating sums of the squares and cubes also equal 1/2: formula_26 formula_27 Series expansion. The Taylor series of the unnormalized sinc function can be obtained from that of the sine (which also yields its value of 1 at "x" = 0): formula_28 The series converges for all x. The normalized version follows easily: formula_29 Euler famously compared this series to the expansion of the infinite product form to solve the Basel problem. Higher dimensions. The product of 1-D sinc functions readily provides a multivariate sinc function for the square Cartesian grid (lattice): sincC("x", "y") = sinc("x") sinc("y"), whose Fourier transform is the indicator function of a square in the frequency space (i.e., the brick wall defined in 2-D space). The sinc function for a non-Cartesian lattice (e.g., hexagonal lattice) is a function whose Fourier transform is the indicator function of the Brillouin zone of that lattice. For example, the sinc function for the hexagonal lattice is a function whose Fourier transform is the indicator function of the unit hexagon in the frequency space. For a non-Cartesian lattice this function cannot be obtained by a simple tensor product. However, the explicit formula for the sinc function for the hexagonal, body-centered cubic, face-centered cubic and other higher-dimensional lattices can be derived using the geometric properties of Brillouin zones and their connection to zonotopes. For example, a hexagonal lattice can be generated by the (integer) linear span of the vectors formula_30 Denoting formula_31 one can derive the sinc function for this hexagonal lattice as formula_32 This construction can be used to design Lanczos windows for general multidimensional lattices. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
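The summation identities above converge rather slowly, but they are easy to check numerically with partial sums. A minimal NumPy sketch (the truncation point N = 10^6 and the particular checks are our own choices, not part of the article; note that numpy.sinc is the normalized form, so the unnormalized sinc is built by hand):

import numpy as np

n = np.arange(1, 1_000_001)              # 1, 2, ..., N
s = np.sin(n) / n                        # unnormalized sinc(n) = sin(n)/n
sign = np.where(n % 2 == 1, 1.0, -1.0)   # (-1)**(n+1)

print(s.sum(), (np.pi - 1) / 2)          # sum of sinc(n)        -> (pi - 1)/2
print((s ** 2).sum(), (np.pi - 1) / 2)   # sum of sinc(n)^2      -> (pi - 1)/2
print((sign * s).sum(), 0.5)             # alternating sum       -> 1/2
print((sign * s ** 2).sum(), 0.5)        # alternating squares   -> 1/2
print((sign * s ** 3).sum(), 0.5)        # alternating cubes     -> 1/2

# numpy.sinc is the normalized sinc: value 1 at x = 0 and zeros at the non-zero integers.
print(np.sinc(np.arange(5)))             # -> [1, ~0, ~0, ~0, ~0] up to rounding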
[ { "math_id": 0, "text": "\\operatorname{sinc}x = \\frac{\\sin x}{x}." }, { "math_id": 1, "text": "\\operatorname{sinc}x = \\frac{\\sin(\\pi x)}{\\pi x}." }, { "math_id": 2, "text": "\\operatorname{sinc}0 := \\lim_{x \\to 0}\\frac{\\sin(a x)}{a x} = 1" }, { "math_id": 3, "text": "\\frac{d}{dx}\\operatorname{sinc}(x) = \\frac{\\cos(x) - \\operatorname{sinc}(x)}{x}." }, { "math_id": 4, "text": "x_n = q - q^{-1} - \\frac{2}{3} q^{-3} - \\frac{13}{15} q^{-5} - \\frac{146}{105} q^{-7} - \\cdots," }, { "math_id": 5, "text": "q = \\left(n + \\frac{1}{2}\\right) \\pi," }, { "math_id": 6, "text": "\\frac{\\sin(\\pi x)}{\\pi x} = \\prod_{n=1}^\\infty \\left(1 - \\frac{x^2}{n^2}\\right)" }, { "math_id": 7, "text": "\\frac{\\sin(\\pi x)}{\\pi x} = \\frac{1}{\\Gamma(1 + x)\\Gamma(1 - x)}." }, { "math_id": 8, "text": "\\frac{\\sin(x)}{x} = \\prod_{n=1}^\\infty \\cos\\left(\\frac{x}{2^n}\\right)," }, { "math_id": 9, "text": "\\prod_{n=1}^k \\cos\\left(\\frac{x}{2^n}\\right) = \\frac{1}{2^{k-1}} \\sum_{n=1}^{2^{k-1}} \\cos\\left(\\frac{n - 1/2}{2^{k-1}} x \\right),\\quad \\forall k \\ge 1," }, { "math_id": 10, "text": "\\frac{\\sin(x)}{x} = \\lim_{N\\to\\infty} \\frac{1}{N} \\sum_{n=1}^N \\cos\\left(\\frac{n - 1/2}{N} x\\right)." }, { "math_id": 11, "text": "\\int_{-\\infty}^\\infty \\operatorname{sinc}(t) \\, e^{-i 2 \\pi f t}\\,dt = \\operatorname{rect}(f)," }, { "math_id": 12, "text": "\\int_{-\\infty}^\\infty \\frac{\\sin(\\pi x)}{\\pi x} \\, dx = \\operatorname{rect}(0) = 1" }, { "math_id": 13, "text": "\\int_{-\\infty}^\\infty \\left|\\frac{\\sin(\\pi x)}{\\pi x} \\right| \\,dx = +\\infty." }, { "math_id": 14, "text": "\\int_0^x \\frac{\\sin(\\theta)}{\\theta}\\,d\\theta = \\operatorname{Si}(x)." }, { "math_id": 15, "text": "x \\frac{d^2 y}{d x^2} + 2 \\frac{d y}{d x} + \\lambda^2 x y = 0." }, { "math_id": 16, "text": "\\int_{-\\infty}^\\infty \\frac{\\sin^2(\\theta)}{\\theta^2}\\,d\\theta = \\pi \\quad \\Rightarrow \\quad \\int_{-\\infty}^\\infty \\operatorname{sinc}^2(x)\\,dx = 1," }, { "math_id": 17, "text": "\\int_{-\\infty}^\\infty \\frac{\\sin(\\theta)}{\\theta}\\,d\\theta = \\int_{-\\infty}^\\infty \\left( \\frac{\\sin(\\theta)}{\\theta} \\right)^2 \\,d\\theta = \\pi." }, { "math_id": 18, "text": "\\int_{-\\infty}^\\infty \\frac{\\sin^3(\\theta)}{\\theta^3}\\,d\\theta = \\frac{3\\pi}{4}." }, { "math_id": 19, "text": "\\int_{-\\infty}^\\infty \\frac{\\sin^4(\\theta)}{\\theta^4}\\,d\\theta = \\frac{2\\pi}{3}." }, { "math_id": 20, "text": "\\int_0^\\infty \\frac{dx}{x^n + 1} = 1 + 2\\sum_{k=1}^\\infty \\frac{(-1)^{k+1}}{(kn)^2 - 1} = \\frac{1}{\\operatorname{sinc}(\\frac{\\pi}{n})}." }, { "math_id": 21, "text": "\\lim_{a \\to 0} \\frac{\\sin\\left(\\frac{\\pi x}{a}\\right)}{\\pi x} = \\lim_{a \\to 0}\\frac{1}{a} \\operatorname{sinc}\\left(\\frac{x}{a}\\right) = \\delta(x)." }, { "math_id": 22, "text": "\\lim_{a \\to 0}\\int_{-\\infty}^\\infty \\frac{1}{a} \\operatorname{sinc}\\left(\\frac{x}{a}\\right) \\varphi(x) \\,dx = \\varphi(0)" }, { "math_id": 23, "text": "\\sum_{n=1}^\\infty \\operatorname{sinc}(n) = \\operatorname{sinc}(1) + \\operatorname{sinc}(2) + \\operatorname{sinc}(3) + \\operatorname{sinc}(4) +\\cdots = \\frac{\\pi - 1}{2}." }, { "math_id": 24, "text": "\\sum_{n=1}^\\infty \\operatorname{sinc}^2(n) = \\operatorname{sinc}^2(1) + \\operatorname{sinc}^2(2) + \\operatorname{sinc}^2(3) + \\operatorname{sinc}^2(4) + \\cdots = \\frac{\\pi - 1}{2}." 
}, { "math_id": 25, "text": "\\sum_{n=1}^\\infty (-1)^{n+1}\\,\\operatorname{sinc}(n) = \\operatorname{sinc}(1) - \\operatorname{sinc}(2) + \\operatorname{sinc}(3) - \\operatorname{sinc}(4) + \\cdots = \\frac{1}{2}." }, { "math_id": 26, "text": "\\sum_{n=1}^\\infty (-1)^{n+1}\\,\\operatorname{sinc}^2(n) = \\operatorname{sinc}^2(1) - \\operatorname{sinc}^2(2) + \\operatorname{sinc}^2(3) - \\operatorname{sinc}^2(4) + \\cdots = \\frac{1}{2}," }, { "math_id": 27, "text": "\\sum_{n=1}^\\infty (-1)^{n+1}\\,\\operatorname{sinc}^3(n) = \\operatorname{sinc}^3(1) - \\operatorname{sinc}^3(2) + \\operatorname{sinc}^3(3) - \\operatorname{sinc}^3(4) + \\cdots = \\frac{1}{2}." }, { "math_id": 28, "text": "\\frac{\\sin x}{x} = \\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n}}{(2n+1)!} = 1 - \\frac{x^2}{3!} + \\frac{x^4}{5!} - \\frac{x^6}{7!} + \\cdots" }, { "math_id": 29, "text": "\\frac{\\sin \\pi x}{\\pi x} = 1 - \\frac{\\pi^2x^2}{3!} + \\frac{\\pi^4x^4}{5!} - \\frac{\\pi^6x^6}{7!} + \\cdots" }, { "math_id": 30, "text": "\n \\mathbf{u}_1 = \\begin{bmatrix} \\frac{1}{2} \\\\ \\frac{\\sqrt{3}}{2} \\end{bmatrix} \\quad \\text{and} \\quad\n \\mathbf{u}_2 = \\begin{bmatrix} \\frac{1}{2} \\\\ -\\frac{\\sqrt{3}}{2} \\end{bmatrix}.\n" }, { "math_id": 31, "text": "\n \\boldsymbol{\\xi}_1 = \\tfrac{2}{3} \\mathbf{u}_1, \\quad\n \\boldsymbol{\\xi}_2 = \\tfrac{2}{3} \\mathbf{u}_2, \\quad\n \\boldsymbol{\\xi}_3 = -\\tfrac{2}{3} (\\mathbf{u}_1 + \\mathbf{u}_2), \\quad\n \\mathbf{x} = \\begin{bmatrix} x \\\\ y\\end{bmatrix},\n" }, { "math_id": 32, "text": "\\begin{align}\n \\operatorname{sinc}_\\text{H}(\\mathbf{x}) = \\tfrac{1}{3} \\big(\n & \\cos\\left(\\pi\\boldsymbol{\\xi}_1\\cdot\\mathbf{x}\\right) \\operatorname{sinc}\\left(\\boldsymbol{\\xi}_2\\cdot\\mathbf{x}\\right) \\operatorname{sinc}\\left(\\boldsymbol{\\xi}_3\\cdot\\mathbf{x}\\right) \\\\\n & {} + \\cos\\left(\\pi\\boldsymbol{\\xi}_2\\cdot\\mathbf{x}\\right) \\operatorname{sinc}\\left(\\boldsymbol{\\xi}_3\\cdot\\mathbf{x}\\right) \\operatorname{sinc}\\left(\\boldsymbol{\\xi}_1\\cdot\\mathbf{x}\\right) \\\\\n & {} + \\cos\\left(\\pi\\boldsymbol{\\xi}_3\\cdot\\mathbf{x}\\right) \\operatorname{sinc}\\left(\\boldsymbol{\\xi}_1\\cdot\\mathbf{x}\\right) \\operatorname{sinc}\\left(\\boldsymbol{\\xi}_2\\cdot\\mathbf{x}\\right)\n \\big).\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=610583
6105873
Tarski–Grothendieck set theory
System of mathematical set theory Tarski–Grothendieck set theory (TG, named after mathematicians Alfred Tarski and Alexander Grothendieck) is an axiomatic set theory. It is a non-conservative extension of Zermelo–Fraenkel set theory (ZFC) and is distinguished from other axiomatic set theories by the inclusion of Tarski's axiom, which states that for each set there is a Grothendieck universe it belongs to (see below). Tarski's axiom implies the existence of inaccessible cardinals, providing a richer ontology than ZFC. For example, adding this axiom supports category theory. The Mizar system and Metamath use Tarski–Grothendieck set theory for formal verification of proofs. Axioms. Tarski–Grothendieck set theory starts with conventional Zermelo–Fraenkel set theory and then adds “Tarski's axiom”. We will use the axioms, definitions, and notation of Mizar to describe it. Mizar's basic objects and processes are fully formal; they are described informally below. First, let us assume that: TG includes the following axioms, which are conventional because they are also part of ZFC: It is Tarski's axiom that distinguishes TG from other axiomatic set theories. Tarski's axiom also implies the axioms of infinity, choice, and power set. It also implies the existence of inaccessible cardinals, thanks to which the ontology of TG is much richer than that of conventional set theories such as ZFC. Tarski's axiom states that for every set formula_4 there exists a set formula_5 whose members include: - formula_4 itself; - every element of every member of formula_5; - every subset of every member of formula_5; - the power set of every member of formula_5; - every subset of formula_5 of cardinality less than that of formula_5. More formally: formula_6 where “formula_7” denotes the power class of "x" and “formula_8” denotes equinumerosity. What Tarski's axiom states (in the vernacular) is that for each set formula_4 there is a Grothendieck universe it belongs to. That formula_5 looks much like a “universal set” for formula_4 – it not only has as members the powerset of formula_4, and all subsets of formula_4, it also has the powerset of that powerset and so on – its members are closed under the operations of taking powerset or taking a subset. It's like a “universal set” except that of course it is not a member of itself and is not a set of all sets. That's the guaranteed Grothendieck universe it belongs to. And then any such formula_5 is itself a member of an even larger “almost universal set” and so on. It's one of the strong cardinality axioms guaranteeing vastly more sets than one normally assumes to exist. Implementation in the Mizar system. The Mizar language, underlying the implementation of TG and providing its logical syntax, is typed and the types are assumed to be non-empty. Hence, the theory is implicitly taken to be non-empty. The existence axioms, e.g. the existence of the unordered pair, are also implemented indirectly by the definition of term constructors. The system includes equality, the membership predicate and the following standard definitions: Implementation in Metamath. The Metamath system supports arbitrary higher-order logics, but it is typically used with the "set.mm" definitions of axioms. The ax-groth axiom adds Tarski's axiom, which in Metamath is defined as follows: ⊢ ∃y(x ∈ y ∧ ∀z ∈ y (∀w(w ⊆ z → w ∈ y) ∧ ∃w ∈ y ∀v(v ⊆ z → v ∈ w)) ∧ ∀z(z ⊆ y → (z ≈ y ∨ z ∈ y)))
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\{A\\}" }, { "math_id": 2, "text": "F" }, { "math_id": 3, "text": "F(x)" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "y" }, { "math_id": 6, "text": "\\forall x \\exists y [x \\in y \\land \\forall z \\in y(z \\subseteq y \\land \\mathcal P(z) \\subseteq y \\land \\mathcal P(z) \\in y) \\land \\forall z \\in \\mathcal P(y) (\\neg( z \\approx y) \\to z \\in y)]" }, { "math_id": 7, "text": "\\mathcal P(x)" }, { "math_id": 8, "text": "\\approx" }, { "math_id": 9, "text": "\\{a,b\\} = \\{b,a\\}" }, { "math_id": 10, "text": "\\{\\{a,b\\},\\{a\\}\\} = (a,b) \\neq (b,a)" }, { "math_id": 11, "text": "Y" } ]
https://en.wikipedia.org/wiki?curid=6105873
61067122
2 Samuel 21
Second Book of Samuel chapter 2 Samuel 21 is the twenty-first chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 21–24 containing the appendices to the Books of Samuel. Text. This chapter was originally written in the Hebrew language. It is divided into 22 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 1Q7 (1QSam; 50 BCE) with extant verses 16–19 and 4Q51 (4QSama; 100–50 BCE) with extant verses 1, 3–6, 8–9, 12, 15–17. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The miscellaneous collection of narratives, lists, and poems in 2 Samuel 21–24 forms the appendices to the Books of Samuel, arranged not chronologically, but carefully crafted into a concentric three-tiered structure as follows: A. National crisis (21:1-14) – David's penultimate public act B. Lists of David's warriors and accounts of heroic deeds (21:15–22) – David's decline and his exit from military affairs C. Poem (22:1–51) – A penultimate testament: David sings a song C'. Poem (23:1–7) – David's ultimate testament B'. Lists of David's warriors and accounts of heroic deeds (23:8–39) – David's decline and his exit from military affairs A'. National crisis (24:1–25) – David's final public act These chapters center on two poems: the Psalm of David in 22:2–51, a review of the mighty acts of God, and the oracle in 23:1–7, an assurance that the Davidic dynasty was to endure, with the incipit of David's second poem (23:1), "These are the last words of David", serving as a notice that the 'David Narrative' is drawing to a close. Directly framing the central poems are the warrior exploits in 21:15–22 and again in 23:8–39 (accompanied by a warrior list), and bracketing in the outer circle are a famine story (21:1–14) and a plague story (24:11–25). The episode related to the Gibeonites in 21:1-14 links to the relationship between David and the house of Saul in the preceding chapter. The final section containing the plague story in 2 Samuel 24 links to the building of Solomon's temple, so it is appropriately placed right before 1 Kings. After these episodes the next story is King Solomon's succession, and only then does King David die (1 Kings 1–2). David Avenges the Gibeonites (21:1–14). This section initiates the closing portrait of David by reprising several events from 1 and 2 Samuel, reaching back to Saul's rise to power, his rescue of the people of Jabesh-gilead (1 Samuel 9–11), David's pact with Jonathan (1 Samuel 20:12–17; 20:42), Saul's death and his stealthy burial by the people of Jabesh-Gilead (1 Samuel 31). 
Long ago David lamented Saul's demise (2 Samuel 1:17-27); now he provided him with a proper burial, a sign of his enduring loyalty to the king he succeeded. A continuous three-year famine caused by drought led David to enquire of YHWH, and he received the answer that it was linked to the blood-guilt incurred by the house of Saul for putting the Gibeonites to death (verse 1). In Joshua 9 it is recorded that the Gibeonites, who were "Amorites" (verse 2; the inhabitants of the land of Canaan before the Israelite occupation), had an irrevocable treaty with the Israelites to be left alive (verses 19–20), so breaching the treaty would lead to national calamity, as is evident from biblical and extrabiblical documents. Saul may have been aggravated by their settlement in Benjaminite territory because he wanted to make Gibeon his capital. There is no supporting account that Saul slaughtered the Gibeonites, but the statement is credible when compared to his dealings with the priests of Nob (1 Samuel 22:6–23). David wished to expiate the sin of Saul with a royal sacrifice (cf. 2 Kings 3:26–27), made 'at the beginning of barley harvest' (verse 9). David's motives in allowing the deaths of the Saulides certainly would come under suspicion, but this narrative (together with its sequel in 2 Samuel 9:1–13) shows that David was not acting solely to gain political advantage, but out of concern for the welfare of the land and in obedience to YHWH's will, for his actions were also tempered by his kindness to Mephibosheth (2 Samuel 9:1–13). After the episode David secured an honorable burial for Saul and Jonathan, as well as for those executed on this occasion. This whole episode contrasts David and Saul in their fidelity to their oaths, drawing on a theme that was introduced in the first allusions to David in 1 Samuel 13:14 ("the Lord has sought out a man after his own heart") and in 1 Samuel 15:28 ("The Lord has torn the kingdom of Israel from you this very day, and has given it to a neighbor of yours, who is better than you"). After multiple events in the David Narrative illustrate how David proved to be more faithful to God than Saul, this episode provides a final example: Saul's killing of the Gibeonites violated an oath that Israel had sworn to them (2 Samuel 21:2), but David preserved Mephibosheth alive (21:7) to keep the oath he had sworn to Jonathan. In one of his final acts, David had to resolve the problem that Saul's infidelity to his oath had left behind. Philistine giants destroyed (21:15–22). This section provides a summary of clashes with persons of extraordinary size called 'descendants of the giants' during the wars against the Philistines. The first giant, Ishbi-benob, had hefty armour similar to Goliath's (1 Samuel 17:7); he was killed by Abishai. The second giant, Saph, is given no details other than that he was killed by Sibbecai the Hushathite, who was one of David's elite 'Thirty' (2 Samuel 23:27, following the Septuagint in place of "Mebunnai" in the Masoretic Text). The third giant was someone related to Goliath the Gittite (cf. 1 Samuel 17); he was killed by Elhanan, a Bethlehemite. The unnamed fourth giant possessed some abnormal physical characteristics; he was killed by Jonathan, David's nephew. "And there was again a battle in Gob with the Philistines, where Elhanan the son of Jaareoregim, a Bethlehemite, slew the brother of Goliath the Gittite, the staff of whose spear was like a weaver's beam." Verse 19. 
The parallel verse in 1 Chronicles 20, written much later than 2 Samuel, provides clarification of this verse. The comparison of the two versions is as follows: (A: 2 Samuel 21:19; B: 1 Chronicles 20:5; Hebrew text is read from right to left) A: ויך אלחנן בן־יערי ארגים בית הלחמי את גלית transliteration: wa·yaḵ ’el·ḥā·nān ben-ya‘·rê ’ō·rə·ḡîm bêṯ ha·laḥ·mî, ’êṯ gā·lə·yāṯ English: "and slew Elhanan ben Jaare-Oregim bet-ha-Lahmi, (brother) of Goliath" B: ויך אלחנן בן־יעיר את־לחמי אחי גלית transliteration: wa·yaḵ ’el·ḥā·nān ben-yā·‘îr ’eṯ-laḥ·mî, ’ă·ḥî gā·lə·yāṯ English: "and slew Elhanan ben Jair Lahmi, brother of Goliath" The words marking the relation to Goliath differ: 2 Samuel 21 has the word "’êṯ", which can be translated as "together with; related to", whereas the newer version (1 Chronicles 20) uses the word "’ă·ḥî", meaning "brother". Thus, Elhanan killed the brother of Goliath, whereas Goliath was killed earlier by David (1 Samuel 17). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61067122
61067126
2 Samuel 24
Second Book of Samuel chapter 2 Samuel 24 is the twenty-fourth (and the final) chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 21–24 containing the appendices to the Books of Samuel. Text. This chapter was originally written in the Hebrew language. It is divided into 25 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 16–22. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The miscellaneous collection of narratives, lists, and poems in 2 Samuel 21–24 forms the appendices to the Books of Samuel, arranged not chronologically, but carefully crafted into a concentric three-tiered structure as follows: A. National crisis (21:1-14) – David's penultimate public act B. Lists of David's warriors and accounts of heroic deeds (21:15–22) – David's decline and his exit from military affairs C. Poem (22:1–51) – A penultimate testament: David sings a song C'. Poem (23:1–7) – David's ultimate testament B'. Lists of David's warriors and accounts of heroic deeds (23:8–39) – David's decline and his exit from military affairs A'. National crisis (24:1–25) – David's final public act These chapters center on two poems: the Psalm of David in 22:2–51, a review of the mighty acts of God, and the oracle in 23:1–7, an assurance that the Davidic dynasty was to endure, with the incipit of David's second poem (23:1), "These are the last words of David", serving as a notice that the 'David Narrative' is drawing to a close. Directly framing the central poems are the warrior exploits in 21:15–22 and again in 23:8–39 (accompanied by a warrior list), and bracketing in the outer circle are a famine story (21:1–14) and a plague story (24:11–25), both of which were caused by divine anger in response to a transgression by a king (Saul and David, respectively). The episode related to the Gibeonites in 21:1–14 links to the relationship between David and the house of Saul in the preceding chapter. The final section containing the plague story in 2 Samuel 24 links to the building of Solomon's temple, so it is appropriately placed right before 1 Kings. After these episodes the next story is King Solomon's succession, and only then does King David die (1 Kings 1–2). This chapter has the following structure: A. The Lord's anger (24:1) B. David's order, Joab's obedience (24:2–9) C. David acknowledges his sin (24:10) D. The penalty (24:11–13) E. David's choice (24:14) D'. The penalty exacted (24:15–16) C'. David acknowledges his sin (24:17) B'. Gad's order, David's obedience (24:18–25a) A'. 
The Lord's anger is appeased (24:25b) The center of this chapter is David's choice of his punishment as he left it to God's mercy. This is bracketed by the punishment choices and the punishment exacted (D/D' sections). The C/C' sections contain David's double confession. David's order and Joab's obedience (B section) parallels Gad's order and David's obedience (B' section). The inclusion (A/A' sections) is God's anger that raged at the beginning and was appeased at the end. David’s military census (24:1–9). Verse 1 suggests, from a theological perspective, that David's census was incited so that God could punish Israel for a sin committed previously, whereas the Chronicler states, from a human perspective, that it was Satan who incited David to count the people (1 Chronicles 21:1). Joab possibly sensed the danger of moving from 'a charismatic levy to a human organization' (verse 3) as there was a 'religious taboo' on counting people (cf. Exodus 30:11–16). The reference to those 'able to draw the sword' (verse 9, cf. Numbers 1:2–3) indicates an enrollment for military service, which may neglect rules of purity (cf. Joshua 3:5; Deuteronomy 23:9–14). "And again the anger of the Lord was kindled against Israel, and he moved David against them to say, Go, number Israel and Judah." Judgment for David’s sin (24:10–17). After David realized that he had sinned against God, he was given a choice through the prophet Gad (verses 11–14) between three possible punishments, varying in length from three years to three days, but on a reverse scale of intensity. David left the choice to God's mercy, and the punishment came as a pestilence (verse 15). David built an altar (24:18–25). This last section contains David's purchase of Araunah's threshing-floor, which is an aetiological narrative explaining what would become the site of Solomon's temple (cf. the pillar at Bethel, Genesis 28:11-22, and the altar at Ophrah, Judges 6:11-24). Traditionally a threshing-floor could be a site of theophany (Judges 6:37) and a place for receiving divine messages (2 Kings 22:10), as was also the case extrabiblically at Ugarit, but the text does not claim that Araunah's threshing-floor was originally a Jebusite sanctuary. It was the appearance of an angel (verse 16) and the erection of an altar (verses 18, 25) that made it a sanctuary. David's conversation with Araunah for purchasing the place recalls Abraham's conversation with the Hittites for the purchase of the cave of Machpelah (Genesis 23). In both cases the offer of a gift was rejected and a formal purchase was made (1 Chronicles 21:24 states explicitly that a gift from a non-Israelite could not be accepted for a site of the Jerusalem temple). David's response to God's words led to the erection of an altar offering pleasing sacrifice to God, which averted the plague (verse 25). The accounts in this chapter at the end of the Books of Samuel, ending with the erection of a holocaust altar on Araunah's threshing-floor, were to be continued in the next book (the Books of Kings) with the accounts of the building of Solomon's temple. "Then the king said to Araunah, "No, but I will surely buy it from you for a price; nor will I offer burnt offerings to the LORD my God with that which costs me nothing."" "So David bought the threshing floor and the oxen for fifty shekels of silver." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. 
&lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61067126
61067687
Buckmaster equation
In mathematics, the Buckmaster equation is a second-order nonlinear partial differential equation, named after John D. Buckmaster, who derived the equation in 1977. It models the surface of a thin sheet of viscous liquid. The equation had been derived earlier by S. H. Smith and by P. Smith, but those earlier derivations focused on the steady version of the equation. The Buckmaster equation is formula_0 where formula_1 is a known parameter. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
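As a numerical illustration of the equation above, a simple explicit finite-difference scheme can be used to evolve a smooth initial profile. The Python sketch below is only a sketch under assumed choices (the grid spacing, time step, value of formula_1, Gaussian bump used as initial data and periodic boundaries are all arbitrary, and no stability analysis is attempted):

import numpy as np

lam = 1.0                        # the parameter lambda in u_t = (u^4)_xx + lam * (u^3)_x
nx, dx, dt, steps = 200, 0.05, 1e-5, 20000

x = np.arange(nx) * dx
u = 1.0 + 0.1 * np.exp(-((x - 0.5 * nx * dx) / 0.5) ** 2)   # smooth bump as initial data

for _ in range(steps):
    u4, u3 = u ** 4, u ** 3
    lap = (np.roll(u4, -1) - 2.0 * u4 + np.roll(u4, 1)) / dx ** 2   # (u^4)_xx, centred, periodic
    adv = (np.roll(u3, -1) - np.roll(u3, 1)) / (2.0 * dx)           # (u^3)_x,  centred, periodic
    u = u + dt * (lap + lam * adv)

print(u.min(), u.max())          # the bump diffuses and drifts; values remain of order 1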
[ { "math_id": 0, "text": "u_t = (u^4)_{xx} + \\lambda (u^3)_x" }, { "math_id": 1, "text": "\\lambda" } ]
https://en.wikipedia.org/wiki?curid=61067687
6106797
Charge carrier density
Charge carriers per volume; such as electrons, ions, "holes" or others Charge carrier density, also known as carrier concentration, denotes the number of charge carriers per volume. In SI units, it is measured in m−3. As with any density, in principle it can depend on position. However, usually carrier concentration is given as a single number, and represents the average carrier density over the whole material. The charge carrier density enters equations for electrical conductivity, for related phenomena such as thermal conductivity, and for chemical bonding such as the covalent bond. Calculation. The carrier density is usually obtained theoretically by integrating the density of states over the energy range of charge carriers in the material (e.g. integrating over the conduction band for electrons, integrating over the valence band for holes). If the total number of charge carriers is known, the carrier density can be found by simply dividing by the volume. To show this mathematically, charge carrier density is a particle density, so integrating it over a volume formula_0 gives the number of charge carriers formula_1 in that volume formula_2 where formula_3 is the position-dependent charge carrier density. If the density does not depend on position and is instead equal to a constant formula_4 this equation simplifies to formula_5 Semiconductors. The carrier density is important for semiconductors, where it is an important quantity for the process of chemical doping. Using band theory, the electron density, formula_4, is the number of electrons per unit volume in the conduction band. For holes, formula_6 is the number of holes per unit volume in the valence band. To calculate this number for electrons, we start with the idea that the total density of conduction-band electrons, formula_4, is obtained by adding up the conduction electron density across the different energies in the band, from the bottom of the band formula_7 to the top of the band formula_8. formula_9 Because electrons are fermions, the density of conduction electrons at any particular energy, formula_10 is the product of the density of states, formula_11 or how many conducting states are possible, with the Fermi–Dirac distribution, formula_12 which tells us the portion of those states which will actually have electrons in them: formula_13 In order to simplify the calculation, instead of treating the electrons as fermions, according to the Fermi–Dirac distribution, we instead treat them as a classical non-interacting gas, which is described by the Maxwell–Boltzmann distribution. This approximation has a negligible effect when formula_14, which is true for semiconductors near room temperature. This approximation is invalid at very low temperatures or for an extremely small band gap. formula_15 The three-dimensional density of states is: formula_16 After combination and simplification, these expressions lead to: formula_17 Here formula_18 is the effective mass of the electrons in that particular semiconductor, and the quantity formula_19 is the difference in energy between the conduction band and the Fermi level, which is half the band gap, formula_20: formula_21 A similar expression can be derived for holes. The carrier concentration can be calculated by treating electrons moving back and forth across the bandgap just like the equilibrium of a reversible reaction from chemistry, leading to an electronic mass action law. 
The mass action law defines a quantity formula_22 called the intrinsic carrier concentration, which for undoped materials satisfies: formula_23 The following table lists a few values of the intrinsic carrier concentration for intrinsic semiconductors, in order of increasing band gap. These carrier concentrations will change if these materials are doped. For example, doping pure silicon with a small amount of phosphorus will increase the carrier density of electrons, "n". Then, since "n" > "p", the doped silicon will be an n-type extrinsic semiconductor. Doping pure silicon with a small amount of boron will increase the carrier density of holes, so then "p" > "n", and it will be a p-type extrinsic semiconductor. Metals. The carrier density is also applicable to metals, where it can be estimated from the simple Drude model. In this case, the carrier density (in this context, also called the free electron density) can be estimated by: formula_24 where formula_25 is the Avogadro constant, "Z" is the number of valence electrons, formula_26 is the mass density of the material, and formula_27 is the atomic mass. Since metals can display multiple oxidation numbers, the exact definition of how many "valence electrons" an element should have in elemental form is somewhat arbitrary, but the following table lists the free electron densities given in Ashcroft and Mermin, which were calculated using the formula above based on reasonable assumptions about valence, formula_28, and with mass densities, formula_26, calculated from experimental crystallography data. The values for n among metals inferred for example by the Hall effect are often of the same order of magnitude, but this simple model cannot predict carrier density to very high accuracy. Measurement. The density of charge carriers can be determined in many cases using the Hall effect, the voltage of which depends inversely on the carrier density. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
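Both estimates above are straightforward to evaluate. The Python sketch below computes the Drude free-electron density of copper from formula_24 and an order-of-magnitude intrinsic carrier concentration of silicon from the band-theory expression for formula_4 with formula_19 set to half the band gap; the material parameters used (valence Z = 1 and ρ = 8.96 g/cm³ for copper, an effective mass equal to the free-electron mass and a 1.12 eV gap for silicon) are illustrative assumptions rather than values quoted in this article:

import math

# Drude estimate for copper: n = N_A * Z * rho_m / m_a
N_A = 6.022e23                      # 1/mol
Z, rho, m_a = 1, 8.96, 63.546       # assumed valence, g/cm^3, g/mol
n_cu = N_A * Z * rho / m_a          # electrons per cm^3
print(f"Cu free electron density ~ {n_cu:.2e} cm^-3")    # about 8.5e22 cm^-3

# Band-theory estimate for intrinsic silicon at 300 K:
# n0 = 2 * (m* k T / (2 pi hbar^2))^(3/2) * exp(-(Ec - Ef)/(k T)), with Ec - Ef = Eg/2.
k = 1.380649e-23                    # J/K
hbar = 1.054571817e-34              # J s
m_eff = 9.1093837e-31               # kg, effective mass approximated by the free-electron mass
T = 300.0                           # K
Eg = 1.12 * 1.602176634e-19         # J, assumed band gap of silicon
prefactor = 2.0 * (m_eff * k * T / (2.0 * math.pi * hbar ** 2)) ** 1.5   # m^-3
n_i = prefactor * math.exp(-Eg / (2.0 * k * T))                           # m^-3
print(f"Si intrinsic carrier concentration ~ {n_i:.1e} m^-3")             # order of 1e16 m^-3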
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "N=\\int_V n(\\mathbf r) \\,dV." }, { "math_id": 3, "text": "n(\\mathbf r)" }, { "math_id": 4, "text": "n_0" }, { "math_id": 5, "text": "N = V \\cdot n_0." }, { "math_id": 6, "text": "p_0" }, { "math_id": 7, "text": "E_c" }, { "math_id": 8, "text": "E_\\text{top}" }, { "math_id": 9, "text": "n_0 = \\int_{E_c}^{E_\\text{top}}N(E) \\, dE" }, { "math_id": 10, "text": "N(E)" }, { "math_id": 11, "text": "g(E)" }, { "math_id": 12, "text": "f(E)" }, { "math_id": 13, "text": "N(E) = g(E) f(E)" }, { "math_id": 14, "text": "|E-E_f| \\gg k_\\text{B} T" }, { "math_id": 15, "text": " f(E)=\\frac{1}{1+e^{\\frac{E-E_f}{k_\\text{B} T}}} \\approx e^{-\\frac{E-E_f}{k_\\text{B} T}}" }, { "math_id": 16, "text": "g(E) = \\frac {1}{2\\pi^2} \\left(\\frac{2m^*}{\\hbar^2}\\right)^\\frac{3}{2}\\sqrt{E - E_0}" }, { "math_id": 17, "text": "n_0 = 2 \\left(\\frac{ m^* k_\\text{B} T}{2 \\pi \\hbar^2}\\right)^{3/2} e^{-\\frac{E_c - E_f}{k_\\text{B} T}}" }, { "math_id": 18, "text": "m^*" }, { "math_id": 19, "text": "E_c-E_f" }, { "math_id": 20, "text": "E_g" }, { "math_id": 21, "text": "E_g=2(E_c-E_f)" }, { "math_id": 22, "text": "n_i" }, { "math_id": 23, "text": "n_i=n_0=p_0" }, { "math_id": 24, "text": " n=\\frac{N_\\text{A} Z \\rho_m}{m_a}" }, { "math_id": 25, "text": "N_\\text{A}" }, { "math_id": 26, "text": "\\rho_m" }, { "math_id": 27, "text": "m_a" }, { "math_id": 28, "text": "Z" } ]
https://en.wikipedia.org/wiki?curid=6106797
61068319
Dreicer field
The Dreicer field (or Dreicer electric field) is the critical electric field above which electrons in a collisional plasma can be accelerated to become runaway electrons. It is named after Harry Dreicer, who derived the expression in 1959 and expanded on the concept of runaway generation in 1960. The Dreicer field is an important parameter in the study of tokamaks, where suppressing runaway generation is a key concern for nuclear fusion. The Dreicer field is given by formula_0 where formula_1 is the electron density, formula_2 is the elementary charge, formula_3 is the Coulomb logarithm, formula_4 is the vacuum permittivity, formula_5 is the electron mass and formula_6 is the electron thermal speed. The expression is derived by considering the balance between the accelerating electric force and the collisional drag acting on a single electron within the plasma. Recent experiments have shown that the electric field required to accelerate electrons to runaway is significantly larger than the theoretically calculated Dreicer field, and new models have been proposed to explain the discrepancy. References.
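For orientation, the expression above can be evaluated numerically. The following Python sketch is an illustration only: it takes the electron thermal speed as sqrt(k_B T_e / m_e), which is one of several conventions and changes the numerical prefactor, and the plasma parameters in the example call are arbitrary tokamak-like assumptions.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
M_E = 9.1093837015e-31       # electron mass, kg

def dreicer_field(n_e, T_e_eV, coulomb_log=15.0):
    """Dreicer field in V/m for electron density n_e (m^-3) and temperature T_e (eV).

    The electron thermal speed is taken here as v_Te = sqrt(k_B*T_e/m_e); other
    conventions (e.g. sqrt(2*k_B*T_e/m_e)) change the numerical prefactor.
    """
    T_e_J = T_e_eV * E_CHARGE          # k_B * T_e expressed in joules
    v_Te_sq = T_e_J / M_E              # thermal speed squared
    return (n_e * E_CHARGE**3 * coulomb_log
            / (4.0 * math.pi * EPS0**2 * M_E * v_Te_sq))

# Illustrative (assumed) numbers: n_e = 1e20 m^-3, T_e = 1 keV, ln(Lambda) = 15.
print(f"E_D = {dreicer_field(1e20, 1000.0):.1f} V/m")
```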
[ { "math_id": 0, "text": "E_D = \\frac{1}{4\\pi\\epsilon_0^2} \\frac{ne^3 \\ln\\Lambda}{m_ev_{Te}^2}" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "e" }, { "math_id": 3, "text": "\\ln\\Lambda" }, { "math_id": 4, "text": "\\epsilon_0" }, { "math_id": 5, "text": "m_e" }, { "math_id": 6, "text": "v_{Te}" } ]
https://en.wikipedia.org/wiki?curid=61068319
610752
Autoregressive conditional heteroskedasticity
Time series model In econometrics, the autoregressive conditional heteroskedasticity (ARCH) model is a statistical model for time series data that describes the variance of the current error term or innovation as a function of the actual sizes of the previous time periods' error terms; often the variance is related to the squares of the previous innovations. The ARCH model is appropriate when the error variance in a time series follows an autoregressive (AR) model; if an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model. ARCH models are commonly employed in modeling financial time series that exhibit time-varying volatility and volatility clustering, i.e. periods of swings interspersed with periods of relative calm. ARCH-type models are sometimes considered to be in the family of stochastic volatility models, although this is strictly incorrect since at time "t" the volatility is completely predetermined (deterministic) given previous values. Model specification. To model a time series using an ARCH process, let formula_0denote the error terms (return residuals, with respect to a mean process), i.e. the series terms. These formula_0 are split into a stochastic piece formula_1 and a time-dependent standard deviation formula_2 characterizing the typical size of the terms so that formula_3 The random variable formula_1 is a strong white noise process. The series formula_4 is modeled by formula_5, where formula_6 and formula_7. An ARCH("q") model can be estimated using ordinary least squares. A method for testing whether the residuals formula_8 exhibit time-varying heteroskedasticity using the Lagrange multiplier test was proposed by Engle (1982). This procedure is as follows: GARCH. If an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model. In that case, the GARCH ("p", "q") model (where "p" is the order of the GARCH terms formula_18 and "q" is the order of the ARCH terms formula_19 ), following the notation of the original paper, is given by formula_20 formula_21 formula_22 Generally, when testing for heteroskedasticity in econometric models, the best test is the White test. However, when dealing with time series data, this means to test for ARCH and GARCH errors. Exponentially weighted moving average (EWMA) is an alternative model in a separate class of exponential smoothing models. As an alternative to GARCH modelling it has some attractive properties such as a greater weight upon more recent observations, but also drawbacks such as an arbitrary decay factor that introduces subjectivity into the estimation. GARCH("p", "q") model specification. The lag length "p" of a GARCH("p", "q") process is established in three steps: NGARCH. NAGARCH. Nonlinear Asymmetric GARCH(1,1) (NAGARCH) is a model with the specification: formula_28, where formula_29 and formula_30, which ensures the non-negativity and stationarity of the variance process. For stock returns, parameter formula_31 is usually estimated to be positive; in this case, it reflects a phenomenon commonly referred to as the "leverage effect", signifying that negative returns increase future volatility by a larger amount than positive returns of the same magnitude. This model should not be confused with the NARCH model, together with the NGARCH extension, introduced by Higgins and Bera in 1992. IGARCH. 
Integrated Generalized Autoregressive Conditional heteroskedasticity (IGARCH) is a restricted version of the GARCH model, where the persistent parameters sum up to one, and imports a unit root in the GARCH process. The condition for this is formula_32. EGARCH. The exponential generalized autoregressive conditional heteroskedastic (EGARCH) model by Nelson &amp; Cao (1991) is another form of the GARCH model. Formally, an EGARCH(p,q): formula_33 where formula_34, formula_35 is the conditional variance, formula_36, formula_37, formula_38, formula_39 and formula_40 are coefficients. formula_41 may be a standard normal variable or come from a generalized error distribution. The formulation for formula_42 allows the sign and the magnitude of formula_41 to have separate effects on the volatility. This is particularly useful in an asset pricing context. Since formula_43 may be negative, there are no sign restrictions for the parameters. GARCH-M. The GARCH-in-mean (GARCH-M) model adds a heteroskedasticity term into the mean equation. It has the specification: formula_44 The residual formula_45 is defined as: formula_46 QGARCH. The Quadratic GARCH (QGARCH) model by Sentana (1995) is used to model asymmetric effects of positive and negative shocks. In the example of a GARCH(1,1) model, the residual process formula_47 is formula_48 where formula_49 is i.i.d. and formula_50 GJR-GARCH. Similar to QGARCH, the Glosten-Jagannathan-Runkle GARCH (GJR-GARCH) model by Glosten, Jagannathan and Runkle (1993) also models asymmetry in the ARCH process. The suggestion is to model formula_51 where formula_49 is i.i.d., and formula_52 where formula_53 if formula_54, and formula_55 if formula_56. TGARCH model. The Threshold GARCH (TGARCH) model by Zakoian (1994) is similar to GJR GARCH. The specification is one on conditional standard deviation instead of conditional variance: formula_57 where formula_58 if formula_59, and formula_60 if formula_61. Likewise, formula_62 if formula_61, and formula_63 if formula_59. fGARCH. Hentschel's fGARCH model, also known as Family GARCH, is an omnibus model that nests a variety of other popular symmetric and asymmetric GARCH models including APARCH, GJR, AVGARCH, NGARCH, etc. COGARCH. In 2004, Claudia Klüppelberg, Alexander Lindner and Ross Maller proposed a continuous-time generalization of the discrete-time GARCH(1,1) process. The idea is to start with the GARCH(1,1) model equations formula_64 formula_65 and then to replace the strong white noise process formula_49 by the infinitesimal increments formula_66 of a Lévy process formula_67, and the squared noise process formula_68 by the increments formula_69, where formula_70 is the purely discontinuous part of the quadratic variation process of formula_71. The result is the following system of stochastic differential equations: formula_72 formula_73 where the positive parameters formula_74, formula_75 and formula_76 are determined by formula_77, formula_78 and formula_79. Now given some initial condition formula_80, the system above has a pathwise unique solution formula_81 which is then called the continuous-time GARCH (COGARCH) model. ZD-GARCH. Unlike GARCH model, the Zero-Drift GARCH (ZD-GARCH) model by Li, Zhang, Zhu and Ling (2018) lets the drift term formula_82 in the first order GARCH model. The ZD-GARCH model is to model formula_51, where formula_49 is i.i.d., and formula_83 The ZD-GARCH model does not require formula_84, and hence it nests the Exponentially weighted moving average (EWMA) model in "RiskMetrics". 
Since the drift term formula_82, the ZD-GARCH model is always non-stationary, and its statistical inference methods are quite different from those for the classical GARCH model. Based on the historical data, the parameters formula_85 and formula_86 can be estimated by the generalized QMLE method. Spatial GARCH. Spatial GARCH processes by Otto, Schmid and Garthoff (2018) are considered as the spatial equivalent to the temporal generalized autoregressive conditional heteroscedasticity (GARCH) models. In contrast to the temporal ARCH model, in which the distribution is known given the full information set for the prior periods, the distribution is not straightforward in the spatial and spatiotemporal setting due to the interdependence between neighboring spatial locations. The spatial model is given by formula_87 and formula_88 where formula_89 denotes the formula_90-th spatial location and formula_91 refers to the formula_92-th entry of a spatial weight matrix and formula_93 for formula_94. The spatial weight matrix defines which locations are considered to be adjacent. Gaussian process-driven GARCH. In a different vein, the machine learning community has proposed the use of Gaussian process regression models to obtain a GARCH scheme. This results in a nonparametric modelling scheme, which allows for: (i) advanced robustness to overfitting, since the model marginalises over its parameters to perform inference, under a Bayesian inference rationale; and (ii) capturing highly-nonlinear dependencies without increasing model complexity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
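To make the recursions above concrete, the following Python sketch simulates returns from the conditional-variance recursion of a GARCH(1,1) model, with an optional GJR-style asymmetry term. It is an illustration only; the parameter values are arbitrary assumptions chosen so that the variance process is stationary.

```python
import random

def simulate_garch(n, omega=0.05, alpha=0.05, beta=0.90, phi=0.0, seed=0):
    """Simulate eps_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha*eps_{t-1}^2 + beta*sigma_{t-1}^2
                + phi*eps_{t-1}^2 * I(eps_{t-1} < 0).

    phi = 0 gives a plain GARCH(1,1); phi > 0 adds a GJR-GARCH-style
    asymmetry term.  All parameter values here are illustrative assumptions.
    """
    rng = random.Random(seed)
    eps, sigma2 = [], []
    prev_eps = 0.0
    prev_sigma2 = omega / (1.0 - alpha - beta)   # start near the unconditional variance
    for _ in range(n):
        s2 = omega + alpha * prev_eps**2 + beta * prev_sigma2
        if prev_eps < 0:
            s2 += phi * prev_eps**2              # extra impact of negative shocks
        z = rng.gauss(0.0, 1.0)                  # strong white noise z_t
        e = (s2 ** 0.5) * z                      # eps_t = sigma_t * z_t
        sigma2.append(s2)
        eps.append(e)
        prev_eps, prev_sigma2 = e, s2
    return eps, sigma2

returns, variances = simulate_garch(1000, phi=0.05)
print(f"sample variance of simulated returns: {sum(r * r for r in returns) / len(returns):.4f}")
```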
[ { "math_id": 0, "text": " ~\\epsilon_t~ " }, { "math_id": 1, "text": "z_t" }, { "math_id": 2, "text": "\\sigma_t" }, { "math_id": 3, "text": " ~\\epsilon_t=\\sigma_t z_t ~" }, { "math_id": 4, "text": " \\sigma_t^2 " }, { "math_id": 5, "text": " \\sigma_t^2=\\alpha_0+\\alpha_1 \\epsilon_{t-1}^2+\\cdots+\\alpha_q \\epsilon_{t-q}^2 = \\alpha_0 + \\sum_{i=1}^q \\alpha_{i} \\epsilon_{t-i}^2 " }, { "math_id": 6, "text": " ~\\alpha_0>0~ " }, { "math_id": 7, "text": " \\alpha_i\\ge 0,~i>0" }, { "math_id": 8, "text": " \\epsilon_t " }, { "math_id": 9, "text": " y_t = a_0 + a_1 y_{t-1} + \\cdots + a_q y_{t-q} + \\epsilon_t = a_0 + \\sum_{i=1}^q a_i y_{t-i} + \\epsilon_t " }, { "math_id": 10, "text": " \\hat \\epsilon^2 " }, { "math_id": 11, "text": " \\hat \\epsilon_t^2 = \\alpha_0 + \\sum_{i=1}^{q} \\alpha_i \\hat \\epsilon_{t-i}^2" }, { "math_id": 12, "text": " \\alpha_i = 0 " }, { "math_id": 13, "text": " i = 1, \\cdots, q " }, { "math_id": 14, "text": " \\alpha_i " }, { "math_id": 15, "text": " \\chi^2 " }, { "math_id": 16, "text": " T' " }, { "math_id": 17, "text": " T'=T-q " }, { "math_id": 18, "text": " ~\\sigma^2 " }, { "math_id": 19, "text": " ~\\epsilon^2 " }, { "math_id": 20, "text": " y_t=x'_t b +\\epsilon_t " }, { "math_id": 21, "text": " \\epsilon_t| \\psi_{t-1} \\sim\\mathcal{N}(0, \\sigma^2_t) " }, { "math_id": 22, "text": " \\sigma_t^2=\\omega + \\alpha_1 \\epsilon_{t-1}^2 + \\cdots + \\alpha_q \\epsilon_{t-q}^2 + \\beta_1 \\sigma_{t-1}^2 + \\cdots + \\beta_p\\sigma_{t-p}^2 = \\omega + \\sum_{i=1}^q \\alpha_i \\epsilon_{t-i}^2 + \\sum_{i=1}^p \\beta_i \\sigma_{t-i}^2 " }, { "math_id": 23, "text": " \\epsilon^2 " }, { "math_id": 24, "text": " \\rho = {{\\sum^T_{t=i+1} (\\hat \\epsilon^2_t - \\hat \\sigma^2_t) (\\hat \\epsilon^2_{t-1} - \\hat \\sigma^2_{t-1})} \\over {\\sum^T_{t=1} (\\hat \\epsilon^2_t - \\hat \\sigma^2_t)^2}} " }, { "math_id": 25, "text": " \\rho (i) " }, { "math_id": 26, "text": " 1/\\sqrt{T} " }, { "math_id": 27, "text": " \\epsilon^2_t " }, { "math_id": 28, "text": " ~\\sigma_{t}^2= ~\\omega + ~\\alpha (~\\epsilon_{t-1} - ~\\theta~\\sigma_{t-1})^2 + ~\\beta ~\\sigma_{t-1}^2" }, { "math_id": 29, "text": " ~\\alpha\\geq 0 , ~\\beta \\geq 0 , ~\\omega > 0 " }, { "math_id": 30, "text": " ~\\alpha (1 + ~\\theta^2) + ~\\beta < 1 " }, { "math_id": 31, "text": "~ \\theta" }, { "math_id": 32, "text": "\n\\sum^p_{i=1} ~\\beta_{i} +\\sum_{i=1}^q~\\alpha_{i} = 1\n" }, { "math_id": 33, "text": "\\log\\sigma_{t}^2=\\omega+\\sum_{k=1}^{q}\\beta_{k}g(Z_{t-k})+\\sum_{k=1}^{p}\\alpha_{k}\\log\\sigma_{t-k}^{2}" }, { "math_id": 34, "text": "g(Z_{t})=\\theta Z_{t}+\\lambda(|Z_{t}|-E(|Z_{t}|))" }, { "math_id": 35, "text": "\\sigma_{t}^{2}" }, { "math_id": 36, "text": "\\omega" }, { "math_id": 37, "text": "\\beta" }, { "math_id": 38, "text": "\\alpha" }, { "math_id": 39, "text": "\\theta" }, { "math_id": 40, "text": "\\lambda" }, { "math_id": 41, "text": "Z_{t}" }, { "math_id": 42, "text": "g(Z_{t})" }, { "math_id": 43, "text": "\\log\\sigma_{t}^{2}" }, { "math_id": 44, "text": "\ny_t = ~\\beta x_t + ~\\lambda ~\\sigma_t + ~\\epsilon_t\n" }, { "math_id": 45, "text": " ~\\epsilon_t " }, { "math_id": 46, "text": "\n~\\epsilon_t = ~\\sigma_t ~\\times z_t\n" }, { "math_id": 47, "text": " ~\\sigma_t " }, { "math_id": 48, "text": "\n~\\epsilon_t = ~\\sigma_t z_t\n" }, { "math_id": 49, "text": " z_t " }, { "math_id": 50, "text": "\n~\\sigma_t^2 = K + ~\\alpha ~\\epsilon_{t-1}^2 + ~\\beta ~\\sigma_{t-1}^2 + ~\\phi ~\\epsilon_{t-1}\n" }, { "math_id": 51, "text": " ~\\epsilon_t = ~\\sigma_t z_t 
" }, { "math_id": 52, "text": "\n~\\sigma_t^2 = K + ~\\delta ~\\sigma_{t-1}^2 + ~\\alpha ~\\epsilon_{t-1}^2 + ~\\phi ~\\epsilon_{t-1}^2 I_{t-1}\n" }, { "math_id": 53, "text": " I_{t-1} = 0 " }, { "math_id": 54, "text": " ~\\epsilon_{t-1} \\ge 0 " }, { "math_id": 55, "text": " I_{t-1} = 1 " }, { "math_id": 56, "text": " ~\\epsilon_{t-1} < 0 " }, { "math_id": 57, "text": "\n~\\sigma_t = K + ~\\delta ~\\sigma_{t-1} + ~\\alpha_1^{+} ~\\epsilon_{t-1}^{+} + ~\\alpha_1^{-} ~\\epsilon_{t-1}^{-}\n" }, { "math_id": 58, "text": " ~\\epsilon_{t-1}^{+} = ~\\epsilon_{t-1} " }, { "math_id": 59, "text": " ~\\epsilon_{t-1} > 0 " }, { "math_id": 60, "text": " ~\\epsilon_{t-1}^{+} = 0 " }, { "math_id": 61, "text": " ~\\epsilon_{t-1} \\le 0 " }, { "math_id": 62, "text": " ~\\epsilon_{t-1}^{-} = ~\\epsilon_{t-1} " }, { "math_id": 63, "text": " ~\\epsilon_{t-1}^{-} = 0 " }, { "math_id": 64, "text": "\\epsilon_t = \\sigma_t z_t," }, { "math_id": 65, "text": "\\sigma_t^2 = \\alpha_0 + \\alpha_1 \\epsilon^2_{t-1} + \\beta_1 \\sigma^2_{t-1} = \\alpha_0 + \\alpha_1 \\sigma_{t-1}^2 z_{t-1}^2 + \\beta_1 \\sigma^2_{t-1}, " }, { "math_id": 66, "text": " \\mathrm{d}L_t " }, { "math_id": 67, "text": " (L_t)_{t\\geq0} " }, { "math_id": 68, "text": " z^2_t " }, { "math_id": 69, "text": " \\mathrm{d}[L,L]^\\mathrm{d}_t " }, { "math_id": 70, "text": " [L,L]^\\mathrm{d}_t = \\sum_{s\\in[0,t]} (\\Delta L_t)^2,\\quad t\\geq0, " }, { "math_id": 71, "text": " L " }, { "math_id": 72, "text": "\\mathrm{d}G_t = \\sigma_{t-} \\,\\mathrm{d}L_t," }, { "math_id": 73, "text": "\\mathrm{d}\\sigma_t^2 = (\\beta - \\eta \\sigma^2_t)\\,\\mathrm{d}t + \\varphi \\sigma_{t-}^2 \\,\\mathrm{d}[L,L]^\\mathrm{d}_t, " }, { "math_id": 74, "text": " \\beta " }, { "math_id": 75, "text": " \\eta " }, { "math_id": 76, "text": " \\varphi " }, { "math_id": 77, "text": " \\alpha_0 " }, { "math_id": 78, "text": " \\alpha_1 " }, { "math_id": 79, "text": " \\beta_1 " }, { "math_id": 80, "text": " (G_0,\\sigma^2_0) " }, { "math_id": 81, "text": " (G_t,\\sigma^2_t)_{t\\geq0} " }, { "math_id": 82, "text": " ~\\omega= 0 " }, { "math_id": 83, "text": "\n~\\sigma_t^2 = ~\\alpha_{1} ~\\epsilon_{t-1}^2 + ~\\beta_{1} ~\\sigma_{t-1}^2.\n" }, { "math_id": 84, "text": " ~\\alpha_{1} + ~\\beta_{1}= 1 " }, { "math_id": 85, "text": " ~\\alpha_{1} " }, { "math_id": 86, "text": " ~\\beta_{1} " }, { "math_id": 87, "text": " ~\\epsilon(s_i) = ~\\sigma(s_i) z(s_i) " }, { "math_id": 88, "text": "\n~\\sigma(s_i)^2 = ~\\alpha_i + \\sum_{v=1}^{n} \\rho w_{iv} \\epsilon(s_v)^2,\n" }, { "math_id": 89, "text": " ~s_i" }, { "math_id": 90, "text": " i" }, { "math_id": 91, "text": " ~w_{iv}" }, { "math_id": 92, "text": " iv" }, { "math_id": 93, "text": " w_{ii}=0" }, { "math_id": 94, "text": "~i = 1, ..., n " } ]
https://en.wikipedia.org/wiki?curid=610752
610773
Per-unit system
In power systems, expression of system quantities as fractions In the power systems analysis field of electrical engineering, a per-unit system is the expression of system quantities as fractions of a defined base unit quantity. Calculations are simplified because quantities expressed as per-unit do not change when they are referred from one side of a transformer to the other. This can be a pronounced advantage in power system analysis where large numbers of transformers may be encountered. Moreover, similar types of apparatus will have the impedances lying within a narrow numerical range when expressed as a per-unit fraction of the equipment rating, even if the unit size varies widely. Conversion of per-unit quantities to volts, ohms, or amperes requires a knowledge of the base that the per-unit quantities were referenced to. The per-unit system is used in power flow, short circuit evaluation, motor starting studies etc. The main idea of a per unit system is to absorb large differences in absolute values into base relationships. Thus, representations of elements in the system with per unit values become more uniform. A per-unit system provides units for power, voltage, current, impedance, and admittance. With the exception of impedance and admittance, any two units are independent and can be selected as base values; power and voltage are typically chosen. All quantities are specified as multiples of selected base values. For example, the base power might be the rated power of a transformer, or perhaps an arbitrarily selected power which makes power quantities in the system more convenient. The base voltage might be the nominal voltage of a bus. Different types of quantities are labeled with the same symbol (pu); it should be clear whether the quantity is a voltage, current, or other unit of measurement. Purpose. There are several reasons for using a per-unit system: The per-unit system was developed to make manual analysis of power systems easier. Although power-system analysis is now done by computer, results are often expressed as per-unit values on a convenient system-wide base. Base quantities. Generally base values of power and voltage are chosen. The base power may be the rating of a single piece of apparatus such as a motor or generator. If a system is being studied, the base power is usually chosen as a convenient round number such as 10 MVA or 100 MVA. The base voltage is chosen as the nominal rated voltage of the system. All other base quantities are derived from these two base quantities. Once the base power and the base voltage are chosen, the base current and the base impedance are determined by the natural laws of electrical circuits. The base value should only be a magnitude, while the per-unit value is a phasor. The phase angles of complex power, voltage, current, impedance, etc., are not affected by the conversion to per unit values. The purpose of using a per-unit system is to simplify conversion between different transformers. Hence, it is appropriate to illustrate the steps for finding per-unit values for voltage and impedance. First, let the base power ("S"base) of each end of a transformer become the same. Once every "S" is set on the same base, the base voltage and base impedance for every transformer can easily be obtained. Then, the real numbers of impedances and voltages can be substituted into the per-unit calculation definition to get the answers for the per-unit system. 
If the per-unit values are known, the real values can be obtained by multiplying by the base values. By convention, the following two rules are adopted for base quantities: With these two rules, a per-unit impedance remains unchanged when referred from one side of a transformer to the other. This allows the ideal transformer to be eliminated from a transformer model. Relationship between units. The relationship between units in a per-unit system depends on whether the system is single-phase or three-phase. Single-phase. Assuming that the independent base values are power and voltage, we have: formula_1 formula_2 Alternatively, the base value for power may be given in terms of reactive or apparent power, in which case we have, respectively, formula_3 or formula_4 The rest of the units can be derived from power and voltage using the equations formula_5, formula_6, formula_7 and formula_8 (Ohm's law), formula_9 being represented by formula_10. We have: formula_11 formula_12 formula_13 Three-phase. Power and voltage are specified in the same way as single-phase systems. However, due to differences in what these terms usually represent in three-phase systems, the relationships for the derived units are different. Specifically, power is given as total (not per-phase) power, and voltage is line-to-line voltage. In three-phase systems the equations formula_6 and formula_7 also hold. The apparent power formula_14 now equals formula_15 formula_16 formula_17 formula_13 Example of per-unit. As an example of how per-unit is used, consider a three-phase power transmission system that deals with powers of the order of 500 MW and uses a nominal voltage of 138 kV for transmission. We arbitrarily select formula_18, and use the nominal voltage 138 kV as the base voltage formula_19. We then have: formula_20 formula_21 formula_22 If, for example, the actual voltage at one of the buses is measured to be 136 kV, we have: formula_23 Per-unit system formulas. The following tabulation of per-unit system formulas is adapted from Beeman's "Industrial Power Systems Handbook". In transformers. It can be shown that voltages, currents, and impedances in a per-unit system will have the same values whether they are referred to primary or secondary of a transformer. For instance, for voltage, we can prove that the per unit voltages of two sides of the transformer, side 1 and side 2, are the same. Here, the per-unit voltages of the two sides are "E"1pu and "E"2pu respectively. formula_24 "E"1 and "E"2 are the voltages of sides 1 and 2 in volts. "N"1 is the number of turns the coil on side 1 has. "N"2 is the number of turns the coil on side 2 has. "V"base1 and "V"base2 are the base voltages on sides 1 and 2. formula_25 For current, we can prove that the per-unit currents of the two sides are the same below. formula_26 where "I"1,pu and "I"2,pu are the per-unit currents of sides 1 and 2 respectively. In this, the base currents "I"base1 and "I"base2 are related in the opposite way that "V"base1 and Vbase2 are related, in that formula_27 The reason for this relation is for power conservation "S"base1 = "S"base2 The full load copper loss of a transformer in per-unit form is equal to the per-unit value of its resistance: formula_28 formula_29 Therefore, it may be more useful to express the resistance in per-unit form as it also represents the full-load copper loss. As stated above, there are two degrees of freedom within the per unit system that allow the engineer to specify any per unit system. 
The degrees of freedom are the choice of the base voltage ("V"base) and the base power ("S"base). By convention, a single base power ("S"base) is chosen for both sides of the transformer and its value is equal to the rated power of the transformer. By convention, there are actually two different base voltages that are chosen, "V"base1 and "V"base2 which are equal to the rated voltages for either side of the transformer. By choosing the base quantities in this manner, the transformer can be effectively removed from the circuit as described above. For example: Take a transformer that is rated at 10 kVA and 240/100 V. The secondary side has an impedance equal to 1∠0° Ω. The base impedance on the secondary side is equal to: formula_30 This means that the per unit impedance on the secondary side is 1∠0° Ω / 1 Ω = 1∠0° pu When this impedance is referred to the other side, the impedance becomes: formula_31 The base impedance for the primary side is calculated the same way as the secondary: formula_32 This means that the per unit impedance is 5.76∠0° Ω / 5.76 Ω = 1∠0° pu, which is the same as when calculated from the other side of the transformer, as would be expected. Another useful tool for analyzing transformers is to have the base change formula that allows the engineer to go from a base impedance with one set of a base voltage and base power to another base impedance for a different set of a base voltage and base power. This becomes especially useful in real life applications where a transformer with a secondary side voltage of 1.2 kV might be connected to the primary side of another transformer whose rated voltage is 1 kV. The formula is as shown below. formula_33 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
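The bookkeeping described above is easy to mechanize. The Python sketch below uses the three-phase base relations and the base-change formula from this article; the 500 MVA / 138 kV numbers reproduce the worked example, while the transformer figures passed to the base-change call are purely illustrative assumptions.

```python
import math

def three_phase_bases(S_base_VA, V_base_V):
    """Return (I_base in A, Z_base in ohm) for a three-phase system."""
    I_base = S_base_VA / (math.sqrt(3) * V_base_V)
    Z_base = V_base_V ** 2 / S_base_VA
    return I_base, Z_base

def per_unit(value, base):
    """Express a quantity as a fraction of its base value."""
    return value / base

def change_base(Z_pu_old, V_base_old, V_base_new, S_base_old, S_base_new):
    """Re-refer a per-unit impedance to a new (V_base, S_base) pair."""
    return Z_pu_old * (V_base_old / V_base_new) ** 2 * (S_base_new / S_base_old)

# Worked example from the text: S_base = 500 MVA, V_base = 138 kV.
I_base, Z_base = three_phase_bases(500e6, 138e3)
print(f"I_base = {I_base / 1e3:.2f} kA, Z_base = {Z_base:.1f} ohm")
print(f"136 kV on a 138 kV base = {per_unit(136e3, 138e3):.4f} pu")

# Assumed transformer data for the base-change formula (illustrative only).
print(f"Z_pu on new base = {change_base(0.05, 1.2e3, 1.0e3, 5e6, 10e6):.3f} pu")
```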
[ { "math_id": 0, "text": "\\textstyle \\sqrt{3} " }, { "math_id": 1, "text": "P_{\\text{base}} = 1 \\text{ pu}" }, { "math_id": 2, "text": "V_{\\text{base}} = 1 \\text{ pu}" }, { "math_id": 3, "text": "Q_{\\text{base}} = 1 \\text{ pu}" }, { "math_id": 4, "text": "S_{\\text{base}} = 1 \\text{ pu}" }, { "math_id": 5, "text": "S = IV" }, { "math_id": 6, "text": "P = S\\cos(\\phi)" }, { "math_id": 7, "text": "Q = S\\sin(\\phi)" }, { "math_id": 8, "text": " \\underline{V} = \\underline{I} \\underline{Z}" }, { "math_id": 9, "text": "Z" }, { "math_id": 10, "text": " \\underline{Z} = R + j X = Z\\cos(\\phi) + j Z\\sin(\\phi)" }, { "math_id": 11, "text": "I_{\\text{base}} = \\frac{S_{\\text{base}}}{V_{\\text{base}}} = 1 \\text{ pu}" }, { "math_id": 12, "text": "Z_{\\text{base}} = \\frac{V_{\\text{base}}}{I_{\\text{base}}} = \\frac{V_{\\text{base}}^{2}}{I_{\\text{base}}V_{\\text{base}}} = \\frac{V_{\\text{base}}^{2}}{S_{\\text{base}}} = 1 \\text{ pu}" }, { "math_id": 13, "text": "Y_{\\text{base}} = \\frac{1}{Z_{\\text{base}}} = 1 \\text{ pu}" }, { "math_id": 14, "text": "S" }, { "math_id": 15, "text": "S_{\\text{base}}= \\sqrt{3}V_{\\text{base}} I_{\\text{base}}" }, { "math_id": 16, "text": "I_{\\text{base}} = \\frac{S_{\\text{base}}}{V_{\\text{base}} \\times \\sqrt{3}} = 1 \\text{ pu}" }, { "math_id": 17, "text": "Z_{\\text{base}} = \\frac{V_{\\text{base}}}{I_{\\text{base}} \\times \\sqrt{3}} = \\frac{{V_{\\text{base}}^2}}{S_{\\text{base}}} = 1 \\text{ pu}" }, { "math_id": 18, "text": "S_{\\mathrm{base}} = 500\\, \\mathrm{MVA}" }, { "math_id": 19, "text": "V_{\\mathrm{base}}" }, { "math_id": 20, "text": "I_{\\text{base}} = \\frac{S_{\\text{base}}}{V_{\\text{base}} \\times \\sqrt{3}} = 2.09 \\, \\mathrm{kA}" }, { "math_id": 21, "text": "Z_{\\text{base}} = \\frac{V_{\\text{base}}}{I_{\\text{base}} \\times \\sqrt{3}} = \\frac{V_{\\text{base}}^{2}}{S_{\\text{base}}} = 38.1 \\, \\Omega" }, { "math_id": 22, "text": "Y_{\\mathrm{base}} = \\frac{1}{Z_{\\mathrm{base}}} = 26.3 \\, \\mathrm{mS}" }, { "math_id": 23, "text": "V_{\\mathrm{pu}} = \\frac{V}{V_{\\mathrm{base}}} = \\frac{136 \\, \\mathrm{kV}}{138 \\, \\mathrm{kV}} = 0.9855 \\, \\mathrm{pu}" }, { "math_id": 24, "text": "\n\\begin{align}\nE_\\text{1,pu}&=\\frac{E_{1}}{V_\\text{base1}}\\\\\n&= \\frac {N_{1}E_{2}}{N_{2}V_\\text{base1}} \\\\\n&= \\frac {N_{1}E_{2}}{N_{2} \\frac{N_{1}}{N_{2}} V_\\text{base2}}\\\\\n&= \\frac {E_{2}} {V_\\text{base2}}\\\\\n&= E_\\text{2,pu}\\\\\n\\end{align}\n" }, { "math_id": 25, "text": " V_\\text{base1}=\\frac{N_{1}}{N_{2}}V_\\text{base2}" }, { "math_id": 26, "text": "\n\\begin{align}\nI_\\text{1,pu}&=\\frac{I_{1}}{I_\\text{base1}}\\\\\n&= \\frac {N_{2}I_{2}}{N_{1}I_\\text{base1}} \\\\\n&= \\frac {N_{2}I_{2}}{N_{1} \\frac{N_{2}}{N_{1}} I_{base2}}\\\\\n&= \\frac {I_{2}} {I_\\text{base2}}\\\\\n&= I_\\text{2,pu}\\\\\n\\end{align}\n" }, { "math_id": 27, "text": "\n\\begin{align}\nI_\\text{base1} &= \\frac{S_\\text{base1}} {V_\\text{base1}}\\\\\nS_\\text{base1} &= S_\\text{base2}\\\\\nV_\\text{base2} &= \\frac{N_{2}}{N_{1}} V_\\text{base1}\\\\\nI_\\text{base2} &= \\frac{S_\\text{base2}} {V_\\text{base2}}\\\\\nI_\\text{base1} &= \\frac{S_\\text{base2}} { \\frac{N_{1}}{N_{2}} V_\\text{base2}}\\\\\n&= \\frac{N_{2}}{N_{1}} I_\\text{base2}\\\\\n\\end{align}\n" }, { "math_id": 28, "text": "\n\\begin{align}\nP_\\text{cu,FL}&=\\text{full-load copper loss}\\\\\n&= I_{R1}^2R_{eq1}\\\\\n\\end{align}\n" }, { "math_id": 29, "text": "\n\\begin{align}\nP_\\text{cu,FL,pu}&=\\frac{P_\\text{cu,FL}}{P_\\text{base}}\\\\\n&= \\frac 
{I_{R1}^2R_{eq1}} {V_{R1}I_{R1}}\\\\\n&= \\frac {R_\\text{eq1}} {V_{R1}/I_{R1}}\\\\\n&= \\frac {R_\\text{eq1}} {Z_{B1}}\\\\\n&= R_\\text{eq1,pu}\\\\\n\\end{align}\n" }, { "math_id": 30, "text": "\n\\begin{align}\nZ_\\text{base,2}&=\\frac{V_\\text{base,2}^2}{S_\\text{base}}\\\\\n&= \\frac {(100\\text{ V})^2} {10000\\text{ VA}}\\\\\n&= \\text{1 }\\Omega\\\\\n\\end{align}\n" }, { "math_id": 31, "text": "\n\\begin{align}\nZ_{2}&=\\left(\\frac{240}{100}\\right)^2\\times\\text{1∠0° }\\Omega\\\\\n&= \\text {5.76∠0° }\\Omega\\\\\n\\end{align}\n" }, { "math_id": 32, "text": "\n\\begin{align}\nZ_\\text{base,1}&=\\frac{V_\\text{base,1}^2}{S_\\text{base}}\\\\\n&= \\frac {(240\\text{ V})^2} {10000\\text{ VA}}\\\\\n&= \\text{5.76 }\\Omega\\\\\n\\end{align}\n" }, { "math_id": 33, "text": "\n\\begin{align}\nZ_\\text{pu,new}&=Z_\\text{pu,old} \\times \\frac{Z_\\text{base,old}}{Z_\\text{base,new}}=Z_\\text{pu,old} \\times \\left(\\frac{V_\\text{base,old}}{V_\\text{base,new}}\\right)^2\\times\\left(\\frac{S_\\text{base,new}}{S_\\text{base,old}}\\right)\\\\\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=610773
6107814
Fold equity
Fold equity is a concept in poker strategy that is especially important when a player becomes short-stacked in a no limit (or possibly pot limit) tournament. It is the equity a player can expect to gain due to the opponent folding to his or her bets. It equates to: formula_0 The first half of the formula can be estimated based on reads on opponents or their previous actions. The second part is the equity obtained when the opponent(s) fold to your raise (i.e. the total current pot), minus the equity resulting in case your opponent(s) call your raise (i.e. your showdown equity in the post-raise pot). As the post-raise pot is larger than the current pot, fold equity can be positive as well as negative. Fold equity becomes an important concept for short stacks for the following reason. Opponents can be considered likely to call all-ins with a certain range of hands. When they will have to use a large percentage of their stack to make the call, this range can be expected to be quite narrow (it will include all the hands the caller expects to win an all-in against the bettor). As the percentage of stack needed to call becomes lower, the range of cards the caller will need becomes wider, and he or she becomes less likely to fold. Consequently, fold equity diminishes. There will be a point at which a caller will need a sufficiently small percentage of their stack to call the all-in that they will do so with any two cards. At that point, the all-in bettor will have no fold equity. Example. Alice holds A♣ 6♥ playing against one opponent, Brian, who holds 2♥ 2♦. The flop is 9♠ 7♣ 3♦. At this point, Alice has a pot equity of 31.5% and Brian has a pot equity of 68.5%. In other words, if there were no further betting and both players simply turned up their hands and were dealt the turn and river cards, Alice would be 31.5% likely to win the pot. Because Brian's hand is so weak, though, and many hands that Alice might be playing can beat him easily, he may be 70% likely to fold facing a pot-sized bet. As such, Alice's fold equity is formula_1. Consequently, Alice can consider that her hand equity "if she bets" will equal formula_2. However, Alice cannot be sure that her equity will increase if she bets because she cannot see Brian's cards. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
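The arithmetic in the example can be written out directly. The short Python sketch below simply reproduces the numbers of the Alice-and-Brian hand; the 70% fold probability is, as in the text, an assumed read rather than something that can be computed from the cards.

```python
def fold_equity(p_fold, equity_gain_if_fold):
    """Fold equity = P(opponent folds) * equity gained when they fold."""
    return p_fold * equity_gain_if_fold

# Example from the text: Alice has 31.5% pot equity, Brian 68.5%,
# and Brian is assumed to fold 70% of the time to a pot-sized bet.
alice_showdown_equity = 0.315
brian_equity = 1.0 - alice_showdown_equity      # 0.685
fe = fold_equity(0.70, brian_equity)            # 0.4795
total = alice_showdown_equity + fe              # 0.7945

print(f"Fold equity: {fe:.2%}")
print(f"Alice's equity if she bets: {total:.2%}")
```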
[ { "math_id": 0, "text": "\\text{Fold Equity}\\, = \\text{likelihood that opponent folds } * \\text{ gain in equity if opponent(s) fold}" }, { "math_id": 1, "text": "70\\% * 68.5\\% = 47.95\\%" }, { "math_id": 2, "text": "31.5\\% + 47.95\\% = 79.45\\%" } ]
https://en.wikipedia.org/wiki?curid=6107814
6108400
Betti's theorem
Betti's theorem, also known as the Maxwell–Betti reciprocal work theorem, discovered by Enrico Betti in 1872, states that for a linear elastic structure subject to two sets of forces {Pi}, i = 1, ..., n, and {Qj}, j = 1, ..., n, the work done by the set P through the displacements produced by the set Q is equal to the work done by the set Q through the displacements produced by the set P. This theorem has applications in structural engineering, where it is used to define influence lines and to derive the boundary element method. Betti's theorem is also used in the design of compliant mechanisms by the topology optimization approach. Proof. Consider a solid body subjected to a pair of external force systems, referred to as formula_0 and formula_1. Consider that each force system causes a displacement field, with the displacements measured at the external force's point of application referred to as formula_2 and formula_3. When the formula_0 force system is applied to the structure, the balance between the work performed by the external force system and the strain energy is: formula_4 The work-energy balance associated with the formula_1 force system is as follows: formula_5 Now, consider that with the formula_0 force system applied, the formula_1 force system is applied subsequently. As the formula_0 system is already applied and therefore produces no further displacement by itself, the work-energy balance assumes the following expression: formula_6 Conversely, if we consider the formula_1 force system already applied and the formula_0 external force system applied subsequently, the work-energy balance will assume the following expression: formula_7 If the work-energy balances for the cases where the external force systems are applied in isolation are respectively subtracted from those for the cases where both force systems are applied, we arrive at the following equations: formula_8 formula_9 If the solid body where the force systems are applied is formed by a linear elastic material, and if the force systems are such that only infinitesimal strains are observed in the body, then the body's constitutive equation, which may follow Hooke's law, can be expressed in the following manner: formula_10 Substituting this result into the previous set of equations leads to the following result: formula_11 formula_12 Subtracting one equation from the other, we obtain the following result: formula_13 Example. For a simple example, let m = 1 and n = 1. Consider a horizontal beam on which two points have been defined: point 1 and point 2. First we apply a vertical force P at point 1 and measure the vertical displacement of point 2, denoted formula_14. Next we remove force P and apply a vertical force Q at point 2, which produces the vertical displacement at point 1 of formula_15. Betti's reciprocity theorem states that: formula_16 References.
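Betti's reciprocity can be checked numerically for any linear elastic structure represented by a symmetric stiffness matrix: the work of force system P through the displacements caused by Q equals the work of Q through the displacements caused by P. The 2-degree-of-freedom stiffness matrix and load vectors in the Python sketch below are arbitrary assumptions, not data for a particular structure.

```python
# Minimal numerical check of Betti's theorem for a linear elastic system
# described by a symmetric stiffness matrix K (an assumed 2-DOF example).

def solve_2x2(K, f):
    """Solve K d = f for a 2x2 system by Cramer's rule."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    d0 = (f[0] * K[1][1] - K[0][1] * f[1]) / det
    d1 = (K[0][0] * f[1] - f[0] * K[1][0]) / det
    return [d0, d1]

K = [[4.0, 1.0],
     [1.0, 3.0]]           # symmetric, as required for a linear elastic structure

F_P = [10.0, 0.0]          # force system P
F_Q = [0.0, 7.0]           # force system Q

d_P = solve_2x2(K, F_P)    # displacements produced by P
d_Q = solve_2x2(K, F_Q)    # displacements produced by Q

work_P_through_dQ = sum(f * d for f, d in zip(F_P, d_Q))
work_Q_through_dP = sum(f * d for f, d in zip(F_Q, d_P))
print(work_P_through_dQ, work_Q_through_dP)   # the two numbers agree
```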
[ { "math_id": 0, "text": "F^P_i" }, { "math_id": 1, "text": "F^Q_i" }, { "math_id": 2, "text": "d^P_i" }, { "math_id": 3, "text": "d^Q_i" }, { "math_id": 4, "text": "\n\\frac{1}{2}\\sum^n_{i=1}F^P_id^P_i = \\frac{1}{2}\\int_\\Omega \\sigma^P_{ij}\\epsilon^P_{ij}\\,d\\Omega\n" }, { "math_id": 5, "text": "\n\\frac{1}{2}\\sum^n_{i=1}F^Q_id^Q_i = \\frac{1}{2}\\int_\\Omega \\sigma^Q_{ij}\\epsilon^Q_{ij}\\,d\\Omega\n" }, { "math_id": 6, "text": "\n\\frac{1}{2}\\sum^n_{i=1}F^P_id^P_i + \\frac{1}{2}\\sum^n_{i=1}F^Q_id^Q_i + \\sum^n_{i=1}F^P_id^Q_i = \\frac{1}{2}\\int_\\Omega \\sigma^P_{ij}\\epsilon^P_{ij}\\,d\\Omega + \\frac{1}{2} \\int_\\Omega \\sigma^Q_{ij}\\epsilon^Q_{ij}\\,d\\Omega + \\int_\\Omega \\sigma^P_{ij}\\epsilon^Q_{ij}\\,d\\Omega\n" }, { "math_id": 7, "text": "\n\\frac{1}{2}\\sum^n_{i=1}F^Q_id^Q_i + \\frac{1}{2}\\sum^n_{i=1}F^P_id^P_i + \\sum^n_{i=1}F^Q_id^P_i = \\frac{1}{2}\\int_\\Omega \\sigma^Q_{ij}\\epsilon^Q_{ij}\\,d\\Omega + \\frac{1}{2}\\int_\\Omega \\sigma^P_{ij}\\epsilon^P_{ij}\\,d\\Omega + \\int_\\Omega \\sigma^Q_{ij}\\epsilon^P_{ij}\\,d\\Omega\n" }, { "math_id": 8, "text": "\n\\sum^n_{i=1}F^P_id^Q_i = \\int_\\Omega \\sigma^P_{ij}\\epsilon^Q_{ij}\\,d\\Omega\n" }, { "math_id": 9, "text": "\n\\sum^n_{i=1}F^Q_id^P_i = \\int_\\Omega \\sigma^Q_{ij}\\epsilon^P_{ij}\\,d\\Omega\n" }, { "math_id": 10, "text": "\n\\sigma_{ij}=D_{ijkl}\\epsilon_{kl}\n" }, { "math_id": 11, "text": "\n\\sum^n_{i=1}F^P_id^Q_i = \\int_\\Omega D_{ijkl}\\epsilon^P_{ij}\\epsilon^Q_{kl}\\,d\\Omega\n" }, { "math_id": 12, "text": "\n\\sum^n_{i=1}F^Q_id^P_i = \\int_\\Omega D_{ijkl}\\epsilon^Q_{ij}\\epsilon^P_{kl}\\,d\\Omega\n" }, { "math_id": 13, "text": "\n\\sum^n_{i=1}F^P_id^Q_i = \\sum^n_{i=1}F^Q_id^P_i\n" }, { "math_id": 14, "text": "\\Delta_{P2}" }, { "math_id": 15, "text": "\\Delta_{Q1}" }, { "math_id": 16, "text": "P \\,\\Delta_{Q1}=Q \\,\\Delta_{P2}." } ]
https://en.wikipedia.org/wiki?curid=6108400
6108552
Ordinal notation
Type of mathematical function In mathematical logic and set theory, an ordinal notation is a partial function mapping the set of all finite sequences of symbols, themselves members of a finite alphabet, to a countable set of ordinals. A Gödel numbering is a function mapping the set of well-formed formulae (a finite sequence of symbols on which the ordinal notation function is defined) of some formal language to the natural numbers. This associates each well-formed formula with a unique natural number, called its Gödel number. If a Gödel numbering is fixed, then the subset relation on the ordinals induces an ordering on well-formed formulae which in turn induces a well-ordering on the subset of natural numbers. A recursive ordinal notation must satisfy the following two additional properties: There are many such schemes of ordinal notations, including schemes by Wilhelm Ackermann, Heinz Bachmann, Wilfried Buchholz, Georg Cantor, Solomon Feferman, Gerhard Jäger, Isles, Pfeiffer, Wolfram Pohlers, Kurt Schütte, Gaisi Takeuti (called ordinal diagrams), Oswald Veblen. Stephen Cole Kleene has a system of notations, called Kleene's O, which includes ordinal notations but it is not as well behaved as the other systems described here. Usually one proceeds by defining several functions from ordinals to ordinals and representing each such function by a symbol. In many systems, such as Veblen's well known system, the functions are normal functions, that is, they are strictly increasing and continuous in at least one of their arguments, and increasing in other arguments. Another desirable property for such functions is that the value of the function is greater than each of its arguments, so that an ordinal is always being described in terms of smaller ordinals. There are several such desirable properties. Unfortunately, no one system can have all of them since they contradict each other. A simplified example using a pairing function. As usual, we must start off with a constant symbol for zero, "0", which we may consider to be a function of arity zero. This is necessary because there are no smaller ordinals in terms of which zero can be described. The most obvious next step would be to define a unary function, "S", which takes an ordinal to the smallest ordinal greater than it; in other words, S is the successor function. In combination with zero, successor allows one to name any natural number. The third function might be defined as one that maps each ordinal to the smallest ordinal that cannot yet be described with the above two functions and previous values of this function. This would map β to ω·β except when β is a fixed point of that function plus a finite number in which case one uses ω·(β+1). The fourth function would map α to ωω·α except when α is a fixed point of that plus a finite number in which case one uses ωω·(α+1). ξ-notation. One could continue in this way, but it would give us an infinite number of functions. So instead let us merge the unary functions into a binary function. By transfinite recursion on α, we can use transfinite recursion on β to define ξ(α,β) = the smallest ordinal γ such that α &lt; γ and β &lt; γ and γ is not the value of ξ for any smaller α or for the same α with a smaller β. Thus, define ξ-notations as follows: The function ξ is defined for all pairs of ordinals and is one-to-one. It always gives values larger than its arguments and its range is all ordinals other than 0 and the epsilon numbers (ε=ωε). 
One has ξ(α, β) &lt; ξ(γ, δ) if and only if either (α = γ and β &lt; δ) or (α &lt; γ and β &lt; ξ(γ, δ)) or (α &gt; γ and ξ(α, β) ≤ δ). With this definition, the first few ξ-notations are: "0" for 0. "ξ00" for 1. "ξ0ξ00" for ξ(0,1)=2. "ξξ000" for ξ(1,0)=ω. "ξ0ξ0ξ00" for 3. "ξ0ξξ000" for ω+1. "ξξ00ξ00" for ω·2. "ξξ0ξ000" for ωω. "ξξξ0000" for formula_0 In general, ξ(0,β) = β+1. While ξ(1+α,β) = ωωα·(β+k) for k = 0 or 1 or 2 depending on special situations:&lt;br&gt; k = 2 if α is an epsilon number and β is finite.&lt;br&gt; Otherwise, k = 1 if β is a multiple of ωωα+1 plus a finite number.&lt;br&gt; Otherwise, k = 0. The ξ-notations can be used to name any ordinal less than ε0 with an alphabet of only two symbols ("0" and "ξ"). If these notations are extended by adding functions that enumerate epsilon numbers, then they will be able to name any ordinal less than the first epsilon number that cannot be named by the added functions. This last property, adding symbols within an initial segment of the ordinals gives names within that segment, is called repleteness (after Solomon Feferman). List. There are many different systems for ordinal notation introduced by various authors. It is often quite hard to convert between the different systems. Cantor. "Exponential polynomials" in 0 and ω gives a system of ordinal notation for ordinals less than ε0. There are many equivalent ways to write these; instead of exponential polynomials, one can use rooted trees, or nested parentheses, or the system described above. Veblen. The 2-variable Veblen functions can be used to give a system of ordinal notation for ordinals less than the Feferman-Schutte ordinal. The Veblen functions in a finite or transfinite number of variables give systems of ordinal notations for ordinals less than the small and large Veblen ordinals. Ackermann. described a system of ordinal notation rather weaker than the system described earlier by Veblen. The limit of his system is sometimes called the Ackermann ordinal. Bachmann. introduced the key idea of using uncountable ordinals to produce new countable ordinals. His original system was rather cumbersome to use as it required choosing a special sequence converging to each ordinal. Later systems of notation introduced by Feferman and others avoided this complication. Takeuti (ordinal diagrams). described a very powerful system of ordinal notation called "ordinal diagrams", which is hard to understand but was later simplified by Feferman. Feferman's θ functions. Feferman introduced theta functions, described in as follows. For an ordinal α, θα is a function mapping ordinals to ordinals. Often θα(β) is written as θαβ. The set "C"(α, β) is defined by induction on α to be the set of ordinals that can be generated from 0, ω1, ω2, ..., ωω, together with the ordinals less than β by the operations of ordinal addition and the functions θξ for ξ&lt;α. And the function θγ is defined to be the function enumerating the ordinals δ with δ∉"C"(γ,δ). The problem with this system is that ordinal notations and collapsing functions are not identical, and therefore this function does not qualify as an ordinal notation. An associated ordinal notation is not known. Buchholz. described the following system of ordinal notation as a simplification of Feferman's theta functions. 
Define: The functions ψ"v"(α) for α an ordinal, "v" an ordinal at most ω, are defined by induction on α as follows: where "C""v"(α) is the smallest set such that This system has about the same strength as Fefermans system, as formula_1 for "v" ≤ ω. Yet, while this system is powerful, it does not qualify as an ordinal notation. Buchholz did create an associated ordinal notation, yet it is complicated: the definition is in the main article. Kleene's O. described a system of notation for all recursive ordinals (those less than the Church–Kleene ordinal). Unfortunately, unlike the other systems described above there is in general no effective way to tell whether some natural number represents an ordinal, or whether two numbers represent the same ordinal. However, one can effectively find notations that represent the ordinal sum, product, and power (see ordinal arithmetic) of any two given notations in Kleene's formula_2; and given any notation for an ordinal, there is a recursively enumerable set of notations that contains one element for each smaller ordinal and is effectively ordered. Kleene's formula_2 denotes a canonical (and very non-computable) set of notations. It uses a subset of the natural numbers instead of finite strings of symbols, and is not recursive, therefore, once again, not qualifying as an ordinal notation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
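The ordering rule for ξ-notations quoted earlier ("ξ(α, β) &lt; ξ(γ, δ) if and only if ...") can be turned directly into a comparison procedure on notations represented as nested pairs. The Python sketch below is only an illustration; it relies on the fact, stated above, that ξ is one-to-one, so structural equality of notations coincides with equality of the ordinals they denote.

```python
# Ordinal notations built from "0" and the binary function xi, represented as
# nested tuples: ZERO for 0 and (a, b) for xi(a, b).
ZERO = ()

def xi(a, b):
    return (a, b)

def less(x, y):
    """Does the ordinal denoted by x precede the one denoted by y?"""
    if x == y:
        return False
    if x == ZERO:
        return True            # 0 is the least ordinal
    if y == ZERO:
        return False
    a, b = x
    c, d = y
    # Rule from the text: xi(a,b) < xi(c,d) iff
    #   (a = c and b < d) or (a < c and b < xi(c,d)) or (a > c and xi(a,b) <= d)
    if a == c:
        return less(b, d)
    if less(a, c):
        return less(b, y)
    return less(x, d) or x == d

# Sanity checks against the notations listed in the text:
one   = xi(ZERO, ZERO)            # 1
two   = xi(ZERO, one)             # 2
omega = xi(one, ZERO)             # omega
print(less(two, omega), less(omega, two))   # True False
```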
[ { "math_id": 0, "text": "\\omega^{\\omega^{\\omega}}." }, { "math_id": 1, "text": "\\theta\\varepsilon_{\\Omega_v+1}0 = \\psi_0(\\varepsilon_{\\Omega_v+1})" }, { "math_id": 2, "text": "\\mathcal{O}" } ]
https://en.wikipedia.org/wiki?curid=6108552
6109094
Iron(II) carbonate
Chemical compound of iron, carbon and oxygen Iron(II) carbonate, or ferrous carbonate, is a chemical compound with the formula FeCO3 that occurs naturally as the mineral siderite. At ordinary ambient temperatures, it is a green-brown ionic solid consisting of iron(II) cations Fe2+ and carbonate anions CO32-. Preparation. Ferrous carbonate can be prepared by combining solutions of salts that supply the two ions, such as iron(II) chloride and sodium carbonate: FeCl2 + Na2CO3 → FeCO3 + 2NaCl Ferrous carbonate can also be prepared from solutions of an iron(II) salt, such as iron(II) perchlorate, with sodium bicarbonate, releasing carbon dioxide: Fe(ClO4)2 + 2NaHCO3 → FeCO3 + 2NaClO4 + CO2 + H2O Sel and others used this reaction (but with FeCl2 instead of Fe(ClO4)2) at 0.2 M to prepare amorphous FeCO3. Care must be taken to exclude oxygen O2 from the solutions, because the Fe2+ ion is easily oxidized to Fe3+, especially at pH above 6.0. Ferrous carbonate also forms directly on steel or iron surfaces exposed to solutions of carbon dioxide, forming an "iron carbonate" scale: Fe + CO2 + H2O → FeCO3 + H2 Properties. The dependence of the solubility in water on temperature and ionic strength was determined by Wei Sun and others to be formula_0 where "T" is the absolute temperature in kelvins and "I" is the ionic strength of the liquid. Iron carbonate decomposes at about . Uses. Ferrous carbonate has been used as an iron dietary supplement to treat anemia. It is noted to have very poor bioavailability in cats and dogs. Toxicity. Ferrous carbonate is slightly toxic; the probable oral lethal dose is between 0.5 and 5 g/kg (between 35 and 350 g for a 70 kg person). Iron(III) carbonate. Unlike iron(II) carbonate, iron(III) carbonate has not been isolated. Attempts to produce iron(III) carbonate by the reaction of aqueous ferric ions and carbonate ions result in the production of iron(III) oxide with the release of carbon dioxide or bicarbonate.
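The fitted expression for the solubility product quoted above can be evaluated directly. The Python sketch below is an illustration; it assumes that "log" in the fit denotes the base-10 logarithm, which is how such solubility correlations are normally reported.

```python
import math

def log10_ksp_feco3(T_kelvin, ionic_strength):
    """log10 of the FeCO3 solubility product from the fit quoted in the text."""
    T = T_kelvin
    I = ionic_strength
    return (-59.3498
            - 0.041377 * T
            - 2.1963 / T
            + 24.5724 * math.log10(T)
            + 2.518 * math.sqrt(I)
            - 0.657 * I)

# Example: 25 degrees C (298.15 K) and zero ionic strength.
print(f"log10 Ksp = {log10_ksp_feco3(298.15, 0.0):.2f}")
```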
[ { "math_id": 0, "text": "\n \\log K_{\\mathit{sp}} = -59.3498 - 0.041377 T - 2.1963/T + 24.5724 \\log T + 2.518 \\sqrt{I} - 0.657 I,\n" } ]
https://en.wikipedia.org/wiki?curid=6109094
61091295
Baer function
Special functions occurring in mathematical physics Baer functions formula_0 and formula_1, named after Karl Baer, are solutions of the Baer differential equation formula_2 which arises when separation of variables is applied to the Laplace equation in paraboloidal coordinates. The Baer functions are defined as the series solutions about formula_3 which satisfy formula_4, formula_5. By substituting a power series Ansatz into the differential equation, formal series can be constructed for the Baer functions. For special values of formula_6 and formula_7, simpler solutions may exist. For instance, formula_8 Moreover, Mathieu functions are special-case solutions of the Baer equation, since the latter reduces to the Mathieu differential equation when formula_9 and formula_10, and making the change of variable formula_11. Like the Mathieu differential equation, the Baer equation has two regular singular points (at formula_12 and formula_13), and one "irregular" singular point at infinity. Thus, in contrast with many other special functions of mathematical physics, Baer functions cannot in general be expressed in terms of hypergeometric functions. The Baer wave equation is a generalization which results from separating variables in the Helmholtz equation in paraboloidal coordinates: formula_14 which reduces to the original Baer equation when formula_15. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
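Away from the singular points z = b and z = c, the Baer equation can also be integrated numerically to approximate the series solutions defined above. The SciPy-based sketch below is an illustration only: the derivative normalizations B′(0) = 1 and C′(0) = 0 are an assumed convention (the text fixes only the values at z = 0), and the parameter values p, q, b, c are arbitrary choices that keep z = 0 an ordinary point.

```python
import numpy as np
from scipy.integrate import solve_ivp

def baer_rhs(z, y, p, q, b, c):
    """First-order form of the Baer equation for y = [B, B']."""
    B, dB = y
    coeff1 = 0.5 * (1.0 / (z - b) + 1.0 / (z - c))
    coeff0 = (p * (p + 1) * z + q * (b + c)) / ((z - b) * (z - c))
    return [dB, -coeff1 * dB + coeff0 * B]

def baer_solution(p, q, b, c, z_max, y0):
    """Integrate from z = 0 (assumed to be an ordinary point) to z = z_max."""
    return solve_ivp(baer_rhs, (0.0, z_max), y0, args=(p, q, b, c),
                     dense_output=True, rtol=1e-9, atol=1e-12)

# Illustrative parameters (assumed): p = 1, q = 0, b = -1, c = 2,
# integrating on 0 <= z <= 1.5 so we stay away from the singular points.
solB = baer_solution(1, 0, -1.0, 2.0, 1.5, y0=[0.0, 1.0])  # B-type: B(0) = 0
solC = baer_solution(1, 0, -1.0, 2.0, 1.5, y0=[1.0, 0.0])  # C-type: C(0) = 1
print(solB.y[0][-1], solC.y[0][-1])
```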
[ { "math_id": 0, "text": "B_p^q(z)" }, { "math_id": 1, "text": "C_p^q(z)" }, { "math_id": 2, "text": "\n\\frac{d^2B}{dz^2} + \\frac{1}{2}\\left[\\frac{1}{z-b} + \\frac{1}{z-c} \\right]\\frac{dB}{dz} - \\left[\\frac{p(p+1)z + q(b+c)}{(z-b)(z-c)} \\right]B = 0\n" }, { "math_id": 3, "text": "z = 0" }, { "math_id": 4, "text": "B_p^q(0) = 0" }, { "math_id": 5, "text": "C_p^q(0) = 1" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "q" }, { "math_id": 8, "text": "\nB_0^0(z) = \\ln \\left[ \\frac{z+\\sqrt{(z-b)(z-c)}-(b+c)/2}{\\sqrt{bc} - (b+c)/2} \\right]\n" }, { "math_id": 9, "text": "b = 0" }, { "math_id": 10, "text": "c = 1" }, { "math_id": 11, "text": "z = \\cos^2 t" }, { "math_id": 12, "text": "z = b" }, { "math_id": 13, "text": "z = c" }, { "math_id": 14, "text": "\n\\frac{d^2B}{dz^2} + \\frac{1}{2}\\left[\\frac{1}{z-b} + \\frac{1}{z-c} \\right]\\frac{dB}{dz} + \\left[\\frac{k^2 z^2 - p(p+1)z - q(b+c)}{(z-b)(z-c)} \\right]B = 0\n" }, { "math_id": 15, "text": "k = 0" } ]
https://en.wikipedia.org/wiki?curid=61091295
61093
Deductive reasoning
Form of reasoning Deductive reasoning is the process of drawing valid inferences. An inference is valid if its conclusion follows logically from its premises, meaning that it is impossible for the premises to be true and the conclusion to be false. For example, the inference from the premises "all men are mortal" and "Socrates is a man" to the conclusion "Socrates is mortal" is deductively valid. An argument is "sound" if it is valid "and" all its premises are true. Some theorists define deduction in terms of the intentions of the author: they have to intend for the premises to offer deductive support to the conclusion. With the help of this modification, it is possible to distinguish valid from invalid deductive reasoning: it is invalid if the author's belief about the deductive support is false, but even invalid deductive reasoning is a form of deductive reasoning. Deductive logic studies under what conditions an argument is valid. According to the semantic approach, an argument is valid if there is no possible interpretation of the argument whereby its premises are true and its conclusion is false. The syntactic approach, by contrast, focuses on rules of inference, that is, schemas of drawing a conclusion from a set of premises based only on their logical form. There are various rules of inference, such as modus ponens and modus tollens. Invalid deductive arguments, which do not follow a rule of inference, are called formal fallacies. Rules of inference are definitory rules and contrast with strategic rules, which specify what inferences one needs to draw in order to arrive at an intended conclusion. Deductive reasoning contrasts with non-deductive or ampliative reasoning. For ampliative arguments, such as inductive or abductive arguments, the premises offer weaker support to their conclusion: they indicate that it is most likely, but they do not guarantee its truth. They make up for this drawback with their ability to provide genuinely new information (that is, information not already found in the premises), unlike deductive arguments. Cognitive psychology investigates the mental processes responsible for deductive reasoning. One of its topics concerns the factors determining whether people draw valid or invalid deductive inferences. One such factor is the form of the argument: for example, people draw valid inferences more successfully for arguments of the form modus ponens than of the form modus tollens. Another factor is the content of the arguments: people are more likely to believe that an argument is valid if the claim made in its conclusion is plausible. A general finding is that people tend to perform better for realistic and concrete cases than for abstract cases. Psychological theories of deductive reasoning aim to explain these findings by providing an account of the underlying psychological processes. "Mental logic theories" hold that deductive reasoning is a language-like process that happens through the manipulation of representations using rules of inference. "Mental model theories", on the other hand, claim that deductive reasoning involves models of possible states of the world without the medium of language or rules of inference. According to "dual-process theories" of reasoning, there are two qualitatively different cognitive systems responsible for reasoning. The problem of deduction is relevant to various fields and issues. 
Epistemology tries to understand how justification is transferred from the belief in the premises to the belief in the conclusion in the process of deductive reasoning. Probability logic studies how the probability of the premises of an inference affects the probability of its conclusion. The controversial thesis of deductivism denies that there are other correct forms of inference besides deduction. Natural deduction is a type of proof system based on simple and self-evident rules of inference. In philosophy, the geometrical method is a way of philosophizing that starts from a small set of self-evident axioms and tries to build a comprehensive logical system using deductive reasoning. Definition. Deductive reasoning is the psychological process of drawing deductive inferences. An inference is a set of premises together with a conclusion. This psychological process starts from the premises and reasons to a conclusion based on and supported by these premises. If the reasoning was done correctly, it results in a valid deduction: the truth of the premises ensures the truth of the conclusion. For example, in the syllogistic argument "all frogs are amphibians; no cats are amphibians; therefore, no cats are frogs" the conclusion is true because its two premises are true. But even arguments with wrong premises can be deductively valid if they obey this principle, as in "all frogs are mammals; no cats are mammals; therefore, no cats are frogs". If the premises of a valid argument are true, then it is called a sound argument. The relation between the premises and the conclusion of a deductive argument is usually referred to as "logical consequence". According to Alfred Tarski, logical consequence has 3 essential features: it is necessary, formal, and knowable a priori. It is necessary in the sense that the premises of valid deductive arguments necessitate the conclusion: it is impossible for the premises to be true and the conclusion to be false, independent of any other circumstances. Logical consequence is formal in the sense that it depends only on the form or the syntax of the premises and the conclusion. This means that the validity of a particular argument does not depend on the specific contents of this argument. If it is valid, then any argument with the same logical form is also valid, no matter how different it is on the level of its contents. Logical consequence is knowable a priori in the sense that no empirical knowledge of the world is necessary to determine whether a deduction is valid. So it is not necessary to engage in any form of empirical investigation. Some logicians define deduction in terms of possible worlds: A deductive inference is valid if and only if, there is no possible world in which its conclusion is false while its premises are true. This means that there are no counterexamples: the conclusion is true in "all" such cases, not just in "most" cases. It has been argued against this and similar definitions that they fail to distinguish between valid and invalid deductive reasoning, i.e. they leave it open whether there are invalid deductive inferences and how to define them. Some authors define deductive reasoning in psychological terms in order to avoid this problem. According to Mark Vorobey, whether an argument is deductive depends on the psychological state of the person making the argument: "An argument is deductive if, and only if, the author of the argument believes that the truth of the premises necessitates (guarantees) the truth of the conclusion". 
A similar formulation holds that the speaker "claims" or "intends" that the premises offer deductive support for their conclusion. This is sometimes categorized as a "speaker-determined" definition of deduction since it depends also on the speaker whether the argument in question is deductive or not. For "speakerless" definitions, on the other hand, only the argument itself matters independent of the speaker. One advantage of this type of formulation is that it makes it possible to distinguish between good or valid and bad or invalid deductive arguments: the argument is good if the author's belief concerning the relation between the premises and the conclusion is true, otherwise it is bad. One consequence of this approach is that deductive arguments cannot be identified by the law of inference they use. For example, an argument of the form modus ponens may be non-deductive if the author's beliefs are sufficiently confused. That brings with it an important drawback of this definition: it is difficult to apply to concrete cases since the intentions of the author are usually not explicitly stated. Deductive reasoning is studied in logic, psychology, and the cognitive sciences. Some theorists emphasize in their definition the difference between these fields. On this view, psychology studies deductive reasoning as an empirical mental process, i.e. what happens when humans engage in reasoning. But the descriptive question of how actual reasoning happens is different from the normative question of how it "should" happen or what constitutes "correct" deductive reasoning, which is studied by logic. This is sometimes expressed by stating that, strictly speaking, logic does not study deductive reasoning but the deductive relation between premises and a conclusion known as logical consequence. But this distinction is not always precisely observed in the academic literature. One important aspect of this difference is that logic is not interested in whether the conclusion of an argument is sensible. So from the premise "the printer has ink" one may draw the unhelpful conclusion "the printer has ink and the printer has ink and the printer has ink", which has little relevance from a psychological point of view. Instead, actual reasoners usually try to remove redundant or irrelevant information and make the relevant information more explicit. The psychological study of deductive reasoning is also concerned with how good people are at drawing deductive inferences and with the factors determining their performance. Deductive inferences are found both in natural language and in formal logical systems, such as propositional logic. Conceptions of deduction. Deductive arguments differ from non-deductive arguments in that the truth of their premises ensures the truth of their conclusion. There are two important conceptions of what this exactly means. They are referred to as the syntactic and the semantic approach. According to the syntactic approach, whether an argument is deductively valid depends only on its form, syntax, or structure. Two arguments have the same form if they use the same logical vocabulary in the same arrangement, even if their contents differ. For example, the arguments "if it rains then the street will be wet; it rains; therefore, the street will be wet" and "if the meat is not cooled then it will spoil; the meat is not cooled; therefore, it will spoil" have the same logical form: they follow the modus ponens. 
Their form can be expressed more abstractly as "if A then B; A; therefore B" in order to make the common syntax explicit. There are various other valid logical forms or rules of inference, like modus tollens or the disjunction elimination. The syntactic approach then holds that an argument is deductively valid if and only if its conclusion can be deduced from its premises using a valid rule of inference. One difficulty for the syntactic approach is that it is usually necessary to express the argument in a formal language in order to assess whether it is valid. This often brings with it the difficulty of translating the natural language argument into a formal language, a process that comes with various problems of its own. Another difficulty is due to the fact that the syntactic approach depends on the distinction between formal and non-formal features. While there is a wide agreement concerning the paradigmatic cases, there are also various controversial cases where it is not clear how this distinction is to be drawn. The semantic approach suggests an alternative definition of deductive validity. It is based on the idea that the sentences constituting the premises and conclusions have to be interpreted in order to determine whether the argument is valid. This means that one ascribes semantic values to the expressions used in the sentences, such as the reference to an object for singular terms or to a truth-value for atomic sentences. The semantic approach is also referred to as the model-theoretic approach since the branch of mathematics known as model theory is often used to interpret these sentences. Usually, many different interpretations are possible, such as whether a singular term refers to one object or to another. According to the semantic approach, an argument is deductively valid if and only if there is no possible interpretation where its premises are true and its conclusion is false. Some objections to the semantic approach are based on the claim that the semantics of a language cannot be expressed in the same language, i.e. that a richer metalanguage is necessary. This would imply that the semantic approach cannot provide a universal account of deduction for language as an all-encompassing medium. Rules of inference. Deductive reasoning usually happens by applying rules of inference. A rule of inference is a way or schema of drawing a conclusion from a set of premises. This happens usually based only on the logical form of the premises. A rule of inference is valid if, when applied to true premises, the conclusion cannot be false. A particular argument is valid if it follows a valid rule of inference. Deductive arguments that do not follow a valid rule of inference are called formal fallacies: the truth of their premises does not ensure the truth of their conclusion. In some cases, whether a rule of inference is valid depends on the logical system one is using. The dominant logical system is classical logic and the rules of inference listed here are all valid in classical logic. But so-called deviant logics provide a different account of which inferences are valid. For example, the rule of inference known as double negation elimination, i.e. that if a proposition is "not not true" then it is also "true", is accepted in classical logic but rejected in intuitionistic logic. Prominent rules of inference. Modus ponens. Modus ponens (also known as "affirming the antecedent" or "the law of detachment") is the primary deductive rule of inference. 
It applies to arguments that have as first premise a conditional statement (formula_0) and as second premise the antecedent (formula_1) of the conditional statement. It obtains the consequent (formula_2) of the conditional statement as its conclusion. The argument form can be stated as "formula_0; formula_1; therefore formula_2". In this form of deductive reasoning, the consequent (formula_2) obtains as the conclusion from the premises of a conditional statement (formula_0) and its antecedent (formula_1). However, the antecedent (formula_1) cannot be similarly obtained as the conclusion from the premises of the conditional statement (formula_0) and the consequent (formula_2). Such an argument commits the logical fallacy of affirming the consequent. An example of an argument using modus ponens is: "if it rains then the street will be wet; it rains; therefore, the street will be wet". Modus tollens. Modus tollens (also known as "the law of contrapositive") is a deductive rule of inference. It validates an argument that has as premises a conditional statement (formula_0) and the negation of the consequent (formula_3) and as conclusion the negation of the antecedent (formula_4). In contrast to modus ponens, reasoning with modus tollens goes in the opposite direction to that of the conditional. The general expression for modus tollens is "formula_0; formula_3; therefore formula_4". An example of an argument using modus tollens is: "if it rains then the street will be wet; the street is not wet; therefore, it does not rain". Hypothetical syllogism. A "hypothetical syllogism" is an inference that takes two conditional statements and forms a conclusion by combining the hypothesis of one statement with the conclusion of another. Its general form is "formula_0; formula_5; therefore formula_6". Because the two premises share a subformula that does not occur in the conclusion, this resembles syllogisms in term logic, although it differs in that this shared subformula is a proposition, whereas in Aristotelian logic this common element is a term and not a proposition. An example of an argument using a hypothetical syllogism is: "if it rains then the street will be wet; if the street is wet then it will be slippery; therefore, if it rains then it will be slippery". Fallacies. Various formal fallacies have been described. They are invalid forms of deductive reasoning. An additional aspect of them is that they appear to be valid on some occasions or on first impression. They may thereby seduce people into accepting and committing them. One type of formal fallacy is affirming the consequent, as in "if John is a bachelor, then he is male; John is male; therefore, John is a bachelor". This is similar to the valid rule of inference named modus ponens, but the second premise and the conclusion are switched around, which is why it is invalid. A similar formal fallacy is denying the antecedent, as in "if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore, Othello is not male". This is similar to the valid rule of inference called modus tollens, the difference being that the second premise and the conclusion are switched around. Other formal fallacies include affirming a disjunct, denying a conjunct, and the fallacy of the undistributed middle. All of them have in common that the truth of their premises does not ensure the truth of their conclusion. But it may still happen by coincidence that both the premises and the conclusion of formal fallacies are true. Definitory and strategic rules. Rules of inference are definitory rules: they determine whether an argument is deductively valid or not. But reasoners are usually not just interested in making any kind of valid argument. Instead, they often have a specific point or conclusion that they wish to prove or refute. 
So given a set of premises, they are faced with the problem of choosing the relevant rules of inference for their deduction to arrive at their intended conclusion. This issue belongs to the field of strategic rules: the question of which inferences need to be drawn to support one's conclusion. The distinction between definitory and strategic rules is not exclusive to logic: it is also found in various games. In chess, for example, the definitory rules state that bishops may only move diagonally while the strategic rules recommend that one should control the center and protect one's king if one intends to win. In this sense, definitory rules determine whether one plays chess or something else whereas strategic rules determine whether one is a good or a bad chess player. The same applies to deductive reasoning: to be an effective reasoner involves mastering both definitory and strategic rules. Validity and soundness. Deductive arguments are evaluated in terms of their "validity" and "soundness". An argument is "valid" if it is impossible for its premises to be true while its conclusion is false. In other words, the conclusion must be true if the premises are true. An argument can be “valid” even if one or more of its premises are false. An argument is "sound" if it is "valid" and the premises are true. It is possible to have a deductive argument that is logically "valid" but is not "sound". Fallacious arguments often take that form. The following is an example of an argument that is “valid”, but not “sound”: The example's first premise is false – there are people who eat carrots who are not quarterbacks – but the conclusion would necessarily be true, if the premises were true. In other words, it is impossible for the premises to be true and the conclusion false. Therefore, the argument is “valid”, but not “sound”. False generalizations – such as "Everyone who eats carrots is a quarterback" – are often used to make unsound arguments. The fact that there are some people who eat carrots but are not quarterbacks proves the flaw of the argument. In this example, the first statement uses categorical reasoning, saying that all carrot-eaters are definitely quarterbacks. This theory of deductive reasoning – also known as term logic – was developed by Aristotle, but was superseded by propositional (sentential) logic and predicate logic. Deductive reasoning can be contrasted with inductive reasoning, in regards to validity and soundness. In cases of inductive reasoning, even though the premises are true and the argument is “valid”, it is possible for the conclusion to be false (determined to be false with a counterexample or other means). Difference from ampliative reasoning. Deductive reasoning is usually contrasted with non-deductive or ampliative reasoning. The hallmark of valid deductive inferences is that it is impossible for their premises to be true and their conclusion to be false. In this way, the premises provide the strongest possible support to their conclusion. The premises of ampliative inferences also support their conclusion. But this support is weaker: they are not necessarily truth-preserving. So even for correct ampliative arguments, it is possible that their premises are true and their conclusion is false. Two important forms of ampliative reasoning are inductive and abductive reasoning. Sometimes the term "inductive reasoning" is used in a very wide sense to cover all forms of ampliative reasoning. 
However, in a more strict usage, inductive reasoning is just one form of ampliative reasoning. In the narrow sense, inductive inferences are forms of statistical generalization. They are usually based on many individual observations that all show a certain pattern. These observations are then used to form a conclusion either about a yet unobserved entity or about a general law. For abductive inferences, the premises support the conclusion because the conclusion is the best explanation of why the premises are true. The support ampliative arguments provide for their conclusion comes in degrees: some ampliative arguments are stronger than others. This is often explained in terms of probability: the premises make it more likely that the conclusion is true. Strong ampliative arguments make their conclusion very likely, but not absolutely certain. An example of ampliative reasoning is the inference from the premise "every raven in a random sample of 3200 ravens is black" to the conclusion "all ravens are black": the extensive random sample makes the conclusion very likely, but it does not exclude that there are rare exceptions. In this sense, ampliative reasoning is defeasible: it may become necessary to retract an earlier conclusion upon receiving new related information. Ampliative reasoning is very common in everyday discourse and the sciences. An important drawback of deductive reasoning is that it does not lead to genuinely new information. This means that the conclusion only repeats information already found in the premises. Ampliative reasoning, on the other hand, goes beyond the premises by arriving at genuinely new information. One difficulty for this characterization is that it makes deductive reasoning appear useless: if deduction is uninformative, it is not clear why people would engage in it and study it. It has been suggested that this problem can be solved by distinguishing between surface and depth information. On this view, deductive reasoning is uninformative on the depth level, in contrast to ampliative reasoning. But it may still be valuable on the surface level by presenting the information in the premises in a new and sometimes surprising way. A popular misconception of the relation between deduction and induction identifies their difference on the level of particular and general claims. On this view, deductive inferences start from general premises and draw particular conclusions, while inductive inferences start from particular premises and draw general conclusions. This idea is often motivated by seeing deduction and induction as two inverse processes that complement each other: deduction is "top-down" while induction is "bottom-up". But this is a misconception that does not reflect how valid deduction is defined in the field of logic: a deduction is valid if it is impossible for its premises to be true while its conclusion is false, independent of whether the premises or the conclusion are particular or general. Because of this, some deductive inferences have a general conclusion and some also have particular premises. In various fields. Cognitive psychology. Cognitive psychology studies the psychological processes responsible for deductive reasoning. It is concerned, among other things, with how good people are at drawing valid deductive inferences. This includes the study of the factors affecting their performance, their tendency to commit fallacies, and the underlying biases involved. 
A notable finding in this field is that the type of deductive inference has a significant impact on whether the correct conclusion is drawn. In a meta-analysis of 65 studies, for example, 97% of the subjects evaluated modus ponens inferences correctly, while the success rate for modus tollens was only 72%. On the other hand, even some fallacies like affirming the consequent or denying the antecedent were regarded as valid arguments by the majority of the subjects. An important factor for these mistakes is whether the conclusion seems initially plausible: the more believable the conclusion is, the higher the chance that a subject will mistake a fallacy for a valid argument. An important bias is the "matching bias", which is often illustrated using the Wason selection task. In an often-cited experiment by Peter Wason, 4 cards are presented to the participant. In one case, the visible sides show the symbols D, K, 3, and 7 on the different cards. The participant is told that every card has a letter on one side and a number on the other side, and that "[e]very card which has a D on one side has a 3 on the other side". Their task is to identify which cards need to be turned around in order to confirm or refute this conditional claim. The correct answer, only given by about 10%, is the cards D and 7. Many select card 3 instead, even though the conditional claim does not involve any requirements on what symbols can be found on the opposite side of card 3. But this result can be drastically changed if different symbols are used: the visible sides show "drinking a beer", "drinking a coke", "16 years of age", and "22 years of age" and the participants are asked to evaluate the claim "[i]f a person is drinking beer, then the person must be over 19 years of age". In this case, 74% of the participants identified correctly that the cards "drinking a beer" and "16 years of age" have to be turned around. These findings suggest that the deductive reasoning ability is heavily influenced by the content of the involved claims and not just by the abstract logical form of the task: the more realistic and concrete the cases are, the better the subjects tend to perform. Another bias is called the "negative conclusion bias", which happens when one of the premises has the form of a negative material conditional, as in "If the card does not have an A on the left, then it has a 3 on the right. The card does not have a 3 on the right. Therefore, the card has an A on the left". The increased tendency to misjudge the validity of this type of argument is not present for positive material conditionals, as in "If the card has an A on the left, then it has a 3 on the right. The card does not have a 3 on the right. Therefore, the card does not have an A on the left". Psychological theories of deductive reasoning. Various psychological theories of deductive reasoning have been proposed. These theories aim to explain how deductive reasoning works in relation to the underlying psychological processes responsible. They are often used to explain the empirical findings, such as why human reasoners are more susceptible to some types of fallacies than to others. An important distinction is between "mental logic theories", sometimes also referred to as "rule theories", and "mental model theories". "Mental logic theories" see deductive reasoning as a language-like process that happens through the manipulation of representations. 
This is done by applying syntactic rules of inference in a way very similar to how systems of natural deduction transform their premises to arrive at a conclusion. On this view, some deductions are simpler than others since they involve fewer inferential steps. This idea can be used, for example, to explain why humans have more difficulties with some deductions, like the modus tollens, than with others, like the modus ponens: because the more error-prone forms do not have a native rule of inference but need to be calculated by combining several inferential steps with other rules of inference. In such cases, the additional cognitive labor makes the inferences more open to error. "Mental model theories", on the other hand, hold that deductive reasoning involves models or mental representations of possible states of the world without the medium of language or rules of inference. In order to assess whether a deductive inference is valid, the reasoner mentally constructs models that are compatible with the premises of the inference. The conclusion is then tested by looking at these models and trying to find a counterexample in which the conclusion is false. The inference is valid if no such counterexample can be found. In order to reduce cognitive labor, only such models are represented in which the premises are true. Because of this, the evaluation of some forms of inference only requires the construction of very few models while for others, many different models are necessary. In the latter case, the additional cognitive labor required makes deductive reasoning more error-prone, thereby explaining the increased rate of error observed. This theory can also explain why some errors depend on the content rather than the form of the argument. For example, when the conclusion of an argument is very plausible, the subjects may lack the motivation to search for counterexamples among the constructed models. Both mental logic theories and mental model theories assume that there is one general-purpose reasoning mechanism that applies to all forms of deductive reasoning. But there are also alternative accounts that posit various different special-purpose reasoning mechanisms for different contents and contexts. In this sense, it has been claimed that humans possess a special mechanism for permissions and obligations, specifically for detecting cheating in social exchanges. This can be used to explain why humans are often more successful in drawing valid inferences if the contents involve human behavior in relation to social norms. Another example is the so-called dual-process theory. This theory posits that there are two distinct cognitive systems responsible for reasoning. Their interrelation can be used to explain commonly observed biases in deductive reasoning. System 1 is the older system in terms of evolution. It is based on associative learning and happens fast and automatically without demanding many cognitive resources. System 2, on the other hand, is of more recent evolutionary origin. It is slow and cognitively demanding, but also more flexible and under deliberate control. The dual-process theory posits that system 1 is the default system guiding most of our everyday reasoning in a pragmatic way. But for particularly difficult problems on the logical level, system 2 is employed. System 2 is mostly responsible for deductive reasoning. Intelligence. 
The ability of deductive reasoning is an important aspect of intelligence and many tests of intelligence include problems that call for deductive inferences. Because of this relation to intelligence, deduction is highly relevant to psychology and the cognitive sciences. But the subject of deductive reasoning is also pertinent to the computer sciences, for example, in the creation of artificial intelligence. Epistemology. Deductive reasoning plays an important role in epistemology. Epistemology is concerned with the question of justification, i.e. to point out which beliefs are justified and why. Deductive inferences are able to transfer the justification of the premises onto the conclusion. So while logic is interested in the truth-preserving nature of deduction, epistemology is interested in the justification-preserving nature of deduction. There are different theories trying to explain why deductive reasoning is justification-preserving. According to reliabilism, this is the case because deductions are truth-preserving: they are reliable processes that ensure a true conclusion given the premises are true. Some theorists hold that the thinker has to have explicit awareness of the truth-preserving nature of the inference for the justification to be transferred from the premises to the conclusion. One consequence of such a view is that, for young children, this deductive transference does not take place since they lack this specific awareness. Probability logic. Probability logic is interested in how the probability of the premises of an argument affects the probability of its conclusion. It differs from classical logic, which assumes that propositions are either true or false but does not take into consideration the probability or certainty that a proposition is true or false. History. Aristotle, a Greek philosopher, started documenting deductive reasoning in the 4th century BC. René Descartes, in his book Discourse on Method, refined the idea for the Scientific Revolution. Developing four rules to follow for proving an idea deductively, Descartes laid the foundation for the deductive portion of the scientific method. Descartes' background in geometry and mathematics influenced his ideas on the truth and reasoning, causing him to develop a system of general reasoning now used for most mathematical reasoning. Similar to postulates, Descartes believed that ideas could be self-evident and that reasoning alone must prove that observations are reliable. These ideas also lay the foundations for the ideas of rationalism. Related concepts and theories. Deductivism. Deductivism is a philosophical position that gives primacy to deductive reasoning or arguments over their non-deductive counterparts. It is often understood as the evaluative claim that only deductive inferences are "good" or "correct" inferences. This theory would have wide-reaching consequences for various fields since it implies that the rules of deduction are "the only acceptable standard of evidence". This way, the rationality or correctness of the different forms of inductive reasoning is denied. Some forms of deductivism express this in terms of degrees of reasonableness or probability. Inductive inferences are usually seen as providing a certain degree of support for their conclusion: they make it more likely that their conclusion is true. Deductivism states that such inferences are not rational: the premises either ensure their conclusion, as in deductive reasoning, or they do not provide any support at all. 
One motivation for deductivism is the problem of induction introduced by David Hume. It consists in the challenge of explaining how or whether inductive inferences based on past experiences support conclusions about future events. For example, a chicken comes to expect, based on all its past experiences, that the person entering its coop is going to feed it, until one day the person "at last wrings its neck instead". According to Karl Popper's falsificationism, deductive reasoning alone is sufficient. This is due to its truth-preserving nature: a theory can be falsified if one of its deductive consequences is false. So while inductive reasoning does not offer positive evidence for a theory, the theory still remains a viable competitor until falsified by empirical observation. In this sense, deduction alone is sufficient for discriminating between competing hypotheses about what is the case. Hypothetico-deductivism is a closely related scientific method, according to which science progresses by formulating hypotheses and then aims to falsify them by trying to make observations that run counter to their deductive consequences. Natural deduction. The term "natural deduction" refers to a class of proof systems based on self-evident rules of inference. The first systems of natural deduction were developed by Gerhard Gentzen and Stanislaw Jaskowski in the 1930s. The core motivation was to give a simple presentation of deductive reasoning that closely mirrors how reasoning actually takes place. In this sense, natural deduction stands in contrast to other less intuitive proof systems, such as Hilbert-style deductive systems, which employ axiom schemes to express logical truths. Natural deduction, on the other hand, avoids axioms schemes by including many different rules of inference that can be used to formulate proofs. These rules of inference express how logical constants behave. They are often divided into introduction rules and elimination rules. Introduction rules specify under which conditions a logical constant may be introduced into a new sentence of the proof. For example, the introduction rule for the logical constant "formula_7" (and) is "formula_8". It expresses that, given the premises "formula_9" and "formula_10" individually, one may draw the conclusion "formula_11" and thereby include it in one's proof. This way, the symbol "formula_7" is introduced into the proof. The removal of this symbol is governed by other rules of inference, such as the elimination rule "formula_12", which states that one may deduce the sentence "formula_9" from the premise "formula_13". Similar introduction and elimination rules are given for other logical constants, such as the propositional operator "formula_14", the propositional connectives "formula_15" and "formula_16", and the quantifiers "formula_17" and "formula_18". The focus on rules of inferences instead of axiom schemes is an important feature of natural deduction. But there is no general agreement on how natural deduction is to be defined. Some theorists hold that all proof systems with this feature are forms of natural deduction. This would include various forms of sequent calculi or tableau calculi. But other theorists use the term in a more narrow sense, for example, to refer to the proof systems developed by Gentzen and Jaskowski. Because of its simplicity, natural deduction is often used for teaching logic to students. Geometrical method. The geometrical method is a method of philosophy based on deductive reasoning. 
It starts from a small set of self-evident axioms and tries to build a comprehensive logical system based only on deductive inferences from these first axioms. It was initially formulated by Baruch Spinoza and came to prominence in various rationalist philosophical systems in the modern era. It gets its name from the forms of mathematical demonstration found in traditional geometry, which are usually based on axioms, definitions, and inferred theorems. An important motivation of the geometrical method is to repudiate philosophical skepticism by grounding one's philosophical system on absolutely certain axioms. Deductive reasoning is central to this endeavor because of its necessarily truth-preserving nature. This way, the certainty initially invested only in the axioms is transferred to all parts of the philosophical system. One recurrent criticism of philosophical systems built using the geometrical method is that their initial axioms are not as self-evident or certain as their defenders proclaim. This problem lies beyond the deductive reasoning itself, which only ensures that the conclusion is true if the premises are true, but not that the premises themselves are true. For example, Spinoza's philosophical system has been criticized this way based on objections raised against the causal axiom, i.e. that "the knowledge of an effect depends on and involves knowledge of its cause". A different criticism targets not the premises but the reasoning itself, which may at times implicitly assume premises that are themselves not self-evident. See also. Notes and references.
[ { "math_id": 0, "text": "P \\rightarrow Q" }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "Q" }, { "math_id": 3, "text": "\\lnot Q" }, { "math_id": 4, "text": "\\lnot P" }, { "math_id": 5, "text": "Q \\rightarrow R" }, { "math_id": 6, "text": "P \\rightarrow R" }, { "math_id": 7, "text": "\\land" }, { "math_id": 8, "text": "\\frac{A, B}{(A \\land B)}" }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "B" }, { "math_id": 11, "text": "A \\land B" }, { "math_id": 12, "text": "\\frac{(A \\land B)}{A}" }, { "math_id": 13, "text": "(A \\land B)" }, { "math_id": 14, "text": "\\lnot" }, { "math_id": 15, "text": "\\lor" }, { "math_id": 16, "text": "\\rightarrow" }, { "math_id": 17, "text": "\\exists" }, { "math_id": 18, "text": "\\forall" } ]
https://en.wikipedia.org/wiki?curid=61093
6109308
Prefix sum
Sequence in computer science In computer science, the prefix sum, cumulative sum, inclusive scan, or simply scan of a sequence of numbers "x"0, "x"1, "x"2, ... is a second sequence of numbers "y"0, "y"1, "y"2, ..., the sums of prefixes (running totals) of the input sequence: "y"0 = "x"0 "y"1 = "x"0 + "x"1 "y"2 = "x"0 + "x"1 + "x"2 For instance, the prefix sums of the natural numbers 1, 2, 3, 4, 5, ... are the triangular numbers 1, 3, 6, 10, 15, .... Prefix sums are trivial to compute in sequential models of computation, by using the formula "yi" = "y""i" − 1 + "xi" to compute each output value in sequence order. However, despite their ease of computation, prefix sums are a useful primitive in certain algorithms such as counting sort, and they form the basis of the scan higher-order function in functional programming languages. Prefix sums have also been much studied in parallel algorithms, both as a test problem to be solved and as a useful primitive to be used as a subroutine in other parallel algorithms. Abstractly, a prefix sum requires only a binary associative operator ⊕, making it useful for many applications from calculating well-separated pair decompositions of points to string processing. Mathematically, the operation of taking prefix sums can be generalized from finite to infinite sequences; in that context, a prefix sum is known as a partial sum of a series. Prefix summation or partial summation form linear operators on the vector spaces of finite or infinite sequences; their inverses are finite difference operators. Scan higher order function. In functional programming terms, the prefix sum may be generalized to any binary operation (not just the addition operation); the higher order function resulting from this generalization is called a scan, and it is closely related to the fold operation. Both the scan and the fold operations apply the given binary operation to the same sequence of values, but differ in that the scan returns the whole sequence of results from the binary operation, whereas the fold returns only the final result. For instance, the sequence of factorial numbers may be generated by a scan of the natural numbers using multiplication instead of addition: scanning 1, 2, 3, 4, 5, ... with multiplication yields 1, 2, 6, 24, 120, .... Inclusive and exclusive scans. Programming language and library implementations of scan may be either "inclusive" or "exclusive". An inclusive scan includes input "x""i" when computing output "y""i" (i.e., formula_0) while an exclusive scan does not (i.e., formula_1). In the latter case, implementations either leave "y"0 undefined or accept a separate "x"−1 value with which to seed the scan. Either type of scan can be transformed into the other: an inclusive scan can be transformed into an exclusive scan by shifting the array produced by the scan right by one element and inserting the identity value at the left of the array. Conversely, an exclusive scan can be transformed into an inclusive scan by shifting the array produced by the scan left and inserting the sum of the last element of the scan and the last element of the input array at the right of the array. A number of programming languages and libraries provide both inclusive and exclusive scan functions; for example, the directive-based OpenMP parallel programming model supports both inclusive and exclusive scans beginning with Version 5.0. 
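To make the inclusive/exclusive distinction concrete, the following JavaScript sketch computes both kinds of scan for an arbitrary binary operator; the names inclusiveScan and exclusiveScan are illustrative and not taken from any particular library.

function inclusiveScan(xs, op) {
    var ys = [];
    for (var i = 0; i < xs.length; i++) {
        ys.push(i === 0 ? xs[i] : op(ys[i - 1], xs[i]));  // y_i = y_{i-1} ⊕ x_i
    }
    return ys;
}
function exclusiveScan(xs, op, identity) {
    var ys = [identity];                                  // y_0 is the identity value
    for (var i = 0; i + 1 < xs.length; i++) {
        ys.push(op(ys[i], xs[i]));                        // y_i = x_0 ⊕ ... ⊕ x_{i-1}
    }
    return ys;
}
var add = function(a, b) { return a + b; };
var mul = function(a, b) { return a * b; };
inclusiveScan([1, 2, 3, 4, 5], add);    // [1, 3, 6, 10, 15]  (triangular numbers)
exclusiveScan([1, 2, 3, 4, 5], add, 0); // [0, 1, 3, 6, 10]
inclusiveScan([1, 2, 3, 4, 5], mul);    // [1, 2, 6, 24, 120] (factorials, a multiplicative scan)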
Parallel algorithms. There are two key algorithms for computing a prefix sum in parallel. The first offers a shorter span and more parallelism but is not work-efficient. The second is work-efficient but requires double the span and offers less parallelism. These are presented in turn below. Algorithm 1: Shorter span, more parallel. Hillis and Steele present the following parallel prefix sum algorithm:

for i <- 0 to floor(log2(n)) do
    for j <- 0 to n - 1 do in parallel
        if j < 2^i then
            x[j]^(i+1) <- x[j]^i
        else
            x[j]^(i+1) <- x[j]^i + x[j - 2^i]^i

In the above, the notation formula_2 (written x[j]^i in the pseudocode) means the value of the "j"th element of array "x" in timestep "i". With a single processor this algorithm would run in "O"("n" log "n") time. However if the machine has at least "n" processors to perform the inner loop in parallel, the algorithm as a whole runs in "O"(log "n") time, the number of iterations of the outer loop. 
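To make Algorithm 1 concrete, here is a small single-threaded JavaScript sketch that simulates its doubling steps; on a parallel machine the inner loop would run simultaneously on all elements, and the name hillisSteeleScan is illustrative only.

// Sequential simulation of the Hillis–Steele scan (Algorithm 1).
// Each round reads from the previous timestep's array and writes a new one,
// mimicking the synchronous parallel update of all elements.
function hillisSteeleScan(x) {
    var n = x.length;
    var cur = x.slice();
    for (var step = 1; step < n; step *= 2) {  // step plays the role of 2^i
        var next = cur.slice();                // elements with j < step keep their value
        for (var j = step; j < n; j++) {
            next[j] = cur[j] + cur[j - step];  // x_j^(i+1) = x_j^i + x_(j-2^i)^i
        }
        cur = next;
    }
    return cur;
}
hillisSteeleScan([1, 2, 3, 4, 5, 6, 7, 8]);    // [1, 3, 6, 10, 15, 21, 28, 36]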
Algorithm 2: Work-efficient. A work-efficient parallel prefix sum can be computed by the following steps: first, the sums of consecutive pairs of input items are computed; next, the prefix sums of this half-length sequence of pair sums are computed recursively; finally, each output value is obtained either directly as one of these recursively computed prefix sums (at the odd positions) or as the preceding recursively computed prefix sum plus a single input item (at the even positions, with "y"0 = "x"0). If the input sequence has "n" items, then the recursion continues to a depth of "O"(log "n"), which is also the bound on the parallel running time of this algorithm. The number of steps of the algorithm is "O"("n"), and it can be implemented on a parallel random access machine with "O"("n"/log "n") processors without any asymptotic slowdown by assigning multiple indices to each processor in rounds of the algorithm for which there are more elements than processors. Discussion. Each of the preceding algorithms runs in "O"(log "n") time. However, the former takes exactly log2 "n" steps, while the latter requires 2 log2 "n" − 2 steps. For the 16-input examples illustrated, Algorithm 1 is 12-way parallel (49 units of work divided by a span of 4) while Algorithm 2 is only 4-way parallel (26 units of work divided by a span of 6). However, Algorithm 2 is work-efficient—it performs only a constant factor (2) of the amount of work required by the sequential algorithm—while Algorithm 1 is work-inefficient—it performs asymptotically more work (a logarithmic factor) than is required sequentially. Consequently, Algorithm 1 is likely to perform better when abundant parallelism is available, but Algorithm 2 is likely to perform better when parallelism is more limited. Parallel algorithms for prefix sums can often be generalized to other scan operations on associative binary operations, and they can also be computed efficiently on modern parallel hardware such as a GPU. The idea of building in hardware a functional unit dedicated to computing multi-parameter prefix-sum was patented by Uzi Vishkin. Many parallel implementations follow a two-pass procedure where partial prefix sums are calculated in the first pass on each processing unit; the prefix sum of these partial sums is then calculated and broadcast back to the processing units for a second pass using the now known prefix as the initial value. Asymptotically this method takes approximately two read operations and one write operation per item. Concrete implementations of prefix sum algorithms. An implementation of a parallel prefix sum algorithm, like other parallel algorithms, has to take the parallelization architecture of the platform into account. More specifically, multiple algorithms exist which are adapted for platforms working on shared memory as well as algorithms which are well suited for platforms using distributed memory, relying on message passing as the only form of interprocess communication. Shared memory: Two-level algorithm. The following algorithm assumes a shared memory machine model; all processing elements (PEs) have access to the same memory. A version of this algorithm is implemented in the Multi-Core Standard Template Library (MCSTL), a parallel implementation of the C++ standard template library which provides adapted versions for parallel computing of various algorithms. In order to concurrently calculate the prefix sum over formula_3 data elements with formula_4 processing elements, the data is divided into formula_5 blocks, each containing formula_6 elements (for simplicity we assume that formula_5 divides formula_3). Note that although the algorithm divides the data into formula_5 blocks, only formula_4 processing elements run in parallel at a time. In a first sweep, each PE calculates a local prefix sum for its block. The last block does not need to be calculated, since these prefix sums are only calculated as offsets to the prefix sums of succeeding blocks and the last block is by definition not succeeded. The formula_4 offsets which are stored in the last position of each block are accumulated in a prefix sum of their own and stored in their succeeding positions. For formula_4 being a small number, it is faster to do this sequentially; for a large formula_4, this step could be done in parallel as well. A second sweep is performed. This time the first block does not have to be processed, since it does not need to account for the offset of a preceding block. However, in this sweep the last block is included instead and the prefix sums for each block are calculated taking the prefix sum block offsets calculated in the previous sweep into account.

function prefix_sum(elements) {
    n := size(elements)
    p := number of processing elements
    prefix_sum := [0...0] of size n
    do parallel i = 0 to p-1 {
        // i := index of current PE
        from j = i * n / (p+1) to (i+1) * n / (p+1) - 1 do {
            // This only stores the prefix sum of the local blocks
            store_prefix_sum_with_offset_in(elements, 0, prefix_sum)
        }
    }
    x = 0
    for i = 1 to p {
        // Serial accumulation of total sum of blocks
        x += prefix_sum[i * n / (p+1) - 1]
        // Build the prefix sum over the first p blocks
        prefix_sum[i * n / (p+1)] = x
        // Save the results to be used as offsets in second sweep
    }
    do parallel i = 1 to p {
        // i := index of current PE
        from j = i * n / (p+1) to (i+1) * n / (p+1) - 1 do {
            offset := prefix_sum[i * n / (p+1)]
            // Calculate the prefix sum taking the sum of preceding blocks as offset
            store_prefix_sum_with_offset_in(elements, offset, prefix_sum)
        }
    }
    return prefix_sum
}

Improvement: in case the number of blocks is so large that the serial accumulation step becomes time-consuming on a single processor, one of the parallel prefix sum algorithms described above can be applied to the block offsets to accelerate this second phase. 
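As a runnable illustration of the two sweeps, the following sequential JavaScript sketch simulates the block-based approach. It is a simplification of the scheme above: it processes every block in both sweeps rather than skipping the last block in the first sweep and the first block in the second, and the name twoLevelScan is illustrative, not the MCSTL interface.

// Sequential simulation of the two-level block approach:
// sweep 1 computes per-block prefix sums, the block totals are then
// scanned serially, and sweep 2 adds each block's offset to its elements.
function twoLevelScan(elements, numBlocks) {
    var n = elements.length;
    var result = elements.slice();
    var blockStart = function(b) { return Math.floor(b * n / numBlocks); };
    // Sweep 1: local prefix sums inside every block.
    for (var b = 0; b < numBlocks; b++) {
        for (var j = blockStart(b) + 1; j < blockStart(b + 1); j++) {
            result[j] += result[j - 1];
        }
    }
    // Serial pass: prefix sum over the block totals gives each block's offset.
    var offset = 0;
    var offsets = [];
    for (var b = 0; b < numBlocks; b++) {
        offsets.push(offset);
        offset += result[blockStart(b + 1) - 1];
    }
    // Sweep 2: add the offset of all preceding blocks to every element.
    for (var b = 1; b < numBlocks; b++) {
        for (var j = blockStart(b); j < blockStart(b + 1); j++) {
            result[j] += offsets[b];
        }
    }
    return result;
}
twoLevelScan([1, 2, 3, 4, 5, 6, 7, 8], 2); // [1, 3, 6, 10, 15, 21, 28, 36]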
Distributed memory: Hypercube algorithm. The Hypercube Prefix Sum Algorithm is well adapted for distributed memory platforms and works with the exchange of messages between the processing elements. It assumes that there are formula_7 processor elements (PEs) participating in the algorithm, equal to the number of corners in a formula_8-dimensional hypercube. Throughout the algorithm, each PE is seen as a corner in a hypothetical hypercube with knowledge of the total prefix sum formula_9 as well as the prefix sum formula_10 of all elements up to itself (according to the ordered indices among the PEs), both in its own hypercube. In a formula_8-dimensional hypercube with formula_11 PEs at the corners, the algorithm has to be repeated formula_8 times to have the formula_11 zero-dimensional hypercubes be unified into one formula_8-dimensional hypercube. Assuming a duplex communication model where the formula_9 of two adjacent PEs in different hypercubes can be exchanged in both directions in one communication step, this means formula_12 communication startups.

i := Index of own processor element (PE)
m := prefix sum of local elements of this PE
d := number of dimensions of the hypercube

x = m;     // Invariant: The prefix sum up to this PE in the current sub cube
σ = m;     // Invariant: The prefix sum of all elements in the current sub cube

for (k = 0; k <= d-1; k++) {
    y = σ @ PE(i xor 2^k)   // Get the total prefix sum of the opposing sub cube along dimension k
    σ = σ + y               // Aggregate the prefix sum of both sub cubes
    if (i & 2^k) {
        x = x + y           // Only aggregate the prefix sum from the other sub cube, if this PE is the higher index one.
    }
}

Large message sizes: pipelined binary tree. The Pipelined Binary Tree Algorithm is another algorithm for distributed memory platforms which is specifically well suited for large message sizes. Like the hypercube algorithm, it assumes a special communication structure. The processing elements (PEs) are hypothetically arranged in a binary tree (e.g. a Fibonacci Tree) with infix numeration according to their index within the PEs. Communication on such a tree always occurs between parent and child nodes. The infix numeration ensures that for any given PEj, the indices of all nodes reachable by its left subtree formula_13 are less than formula_14 and the indices formula_15 of all nodes in the right subtree are greater than formula_14. The parent's index is greater than any of the indices in PEj's subtree if PEj is a left child and smaller if PEj is a right child. This allows for the following reasoning about which partial sums each PE can form; note the distinction between subtree-local and total prefix sums. At first sight the required sums might seem to form a circular dependency, but this is not the case. Lower level PEs might require the total prefix sum of higher level PEs to calculate their total prefix sum, but higher level PEs only require subtree local prefix sums to calculate their total prefix sum. The root node as highest level node only requires the local prefix sum of its left subtree to calculate its own prefix sum. Each PE on the path from PE0 to the root PE only requires the local prefix sum of its left subtree to calculate its own prefix sum, whereas every node on the path from PEp-1 (last PE) to the PEroot requires the total prefix sum of its parent to calculate its own total prefix sum. This leads to a two-phase algorithm: Upward phase: propagate the subtree local prefix sum formula_23 to its parent for each PEj. Downward phase: propagate the exclusive (exclusive PEj as well as the PEs in its left subtree) total prefix sum formula_22 of all lower index PEs which are not included in the addressed subtree of PEj to lower level PEs in the left child subtree of PEj. Propagate the inclusive prefix sum formula_20 to the right child subtree of PEj. Note that the algorithm is run in parallel at each PE and the PEs will block upon receive until their children/parents provide them with packets. 
k := number of packets in a message m of a PE m @ {left, right, parent, this} := // Messages at the different PEs x = m @ this // Upward phase - Calculate subtree local prefix sums for j=0 to k-1: // Pipelining: For each packet of a message if hasLeftChild: blocking receive m[j] @ left // This replaces the local m[j] with the received m[j] // Aggregate inclusive local prefix sum from lower index PEs x[j] = m[j] ⨁ x[j] if hasRightChild: blocking receive m[j] @ right // We do not aggregate m[j] into the local prefix sum, since the right children are higher index PEs send x[j] ⨁ m[j] to parent else: send x[j] to parent // Downward phase for j=0 to k-1: m[j] @ this = 0 if hasParent: blocking receive m[j] @ parent // For a left child m[j] is the parents exclusive prefix sum, for a right child the inclusive prefix sum x[j] = m[j] ⨁ x[j] send m[j] to left // The total prefix sum of all PE's smaller than this or any PE in the left subtree send x[j] to right // The total prefix sum of all PE's smaller or equal than this PE Pipelining. If the message m of length n can be divided into k packets and the operator ⨁ can be used on each of the corresponding message packets separately, pipelining is possible. If the algorithm is used without pipelining, there are always only two levels (the sending PEs and the receiving PEs) of the binary tree at work while all other PEs are waiting. If there are p processing elements and a balanced binary tree is used, the tree has formula_24 levels, the length of the path from formula_25 to formula_26 is therefore formula_27 which represents the maximum number of non parallel communication operations during the upward phase, likewise, the communication on the downward path is also limited to formula_28 startups. Assuming a communication startup time of formula_29 and a bytewise transmission time of formula_30, upward and downward phase are limited to formula_31 in a non pipelined scenario. Upon division into k packets, each of size formula_32 and sending them separately, the first packet still needs formula_33 to be propagated to formula_34 as part of a local prefix sum and this will occur again for the last packet if formula_35. However, in between, all the PEs along the path can work in parallel and each third communication operation (receive left, receive right, send to parent) sends a packet to the next level, so that one phase can be completed in formula_36 communication operations and both phases together need formula_37 which is favourable for large message sizes n. The algorithm can further be optimised by making use of full-duplex or telephone model communication and overlapping the upward and the downward phase. Data structures. When a data set may be updated dynamically, it may be stored in a Fenwick tree data structure. This structure allows both the lookup of any individual prefix sum value and the modification of any array value in logarithmic time per operation. However, an earlier 1982 paper presents a data structure called Partial Sums Tree (see Section 5.1) that appears to overlap Fenwick trees; in 1982 the term prefix-sum was not yet as common as it is today. For higher-dimensional arrays, the summed area table provides a data structure based on prefix sums for computing sums of arbitrary rectangular subarrays. This can be a helpful primitive in image convolution operations. Applications. 
Counting sort is an integer sorting algorithm that uses the prefix sum of a histogram of key frequencies to calculate the position of each key in the sorted output array. It runs in linear time for integer keys that are smaller than the number of items, and is frequently used as part of radix sort, a fast algorithm for sorting integers that are less restricted in magnitude. List ranking, the problem of transforming a linked list into an array that represents the same sequence of items, can be viewed as computing a prefix sum on the sequence 1, 1, 1, ... and then mapping each item to the array position given by its prefix sum value; by combining list ranking, prefix sums, and Euler tours, many important problems on trees may be solved by efficient parallel algorithms. An early application of parallel prefix sum algorithms was in the design of binary adders, Boolean circuits that can add two n-bit binary numbers. In this application, the sequence of carry bits of the addition can be represented as a scan operation on the sequence of pairs of input bits, using the majority function to combine the previous carry with these two bits. Each bit of the output number can then be found as the exclusive or of two input bits with the corresponding carry bit. By using a circuit that performs the operations of the parallel prefix sum algorithm, it is possible to design an adder that uses "O"("n") logic gates and "O"(log "n") time steps. In the parallel random access machine model of computing, prefix sums can be used to simulate parallel algorithms that assume the ability for multiple processors to access the same memory cell at the same time, on parallel machines that forbid simultaneous access. By means of a sorting network, a set of parallel memory access requests can be ordered into a sequence such that accesses to the same cell are contiguous within the sequence; scan operations can then be used to determine which of the accesses succeed in writing to their requested cells, and to distribute the results of memory read operations to multiple processors that request the same result. In Guy Blelloch's Ph.D. thesis, parallel prefix operations form part of the formalization of the data parallelism model provided by machines such as the Connection Machine. The Connection Machine CM-1 and CM-2 provided a hypercubic network on which the Algorithm 1 above could be implemented, whereas the CM-5 provided a dedicated network to implement Algorithm 2. In the construction of Gray codes, sequences of binary values with the property that consecutive sequence values differ from each other in a single bit position, a number n can be converted into the Gray code value at position n of the sequence simply by taking the exclusive or of n and "n"/2 (the number formed by shifting n right by a single bit position). The reverse operation, decoding a Gray-coded value x into a binary number, is more complicated, but can be expressed as the prefix sum of the bits of x, where each summation operation within the prefix sum is performed modulo two. A prefix sum of this type may be performed efficiently using the bitwise Boolean operations available on modern computers, by computing the exclusive or of x with each of the numbers formed by shifting x to the left by a number of bits that is a power of two. Parallel prefix (using multiplication as the underlying associative operation) can also be used to build fast algorithms for parallel polynomial interpolation. 
In particular, it can be used to compute the divided difference coefficients of the Newton form of the interpolation polynomial. This prefix based approach can also be used to obtain the generalized divided differences for (confluent) Hermite interpolation as well as for parallel algorithms for Vandermonde systems. Prefix sum is used for load balancing as a low-cost algorithm to distribute the work between multiple processors, where the overriding goal is achieving an equal amount of work on each processor. The algorithm uses an array of weights representing the amount of work required for each item. After the prefix sum is calculated, the work item i is sent for processing to the processor unit with the number formula_38. Graphically this corresponds to an operation where the amount of work in each item is represented by the length of a linear segment, all segments are sequentially placed onto a line, and the result is cut into a number of pieces corresponding to the number of processors. References.
[ { "math_id": 0, "text": "y_i = \\bigoplus_{j=0}^i x_j" }, { "math_id": 1, "text": "y_i = \\bigoplus_{j=0}^{i-1} x_j" }, { "math_id": 2, "text": "x^i_j" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "p+1" }, { "math_id": 6, "text": "\\frac n {p+1}" }, { "math_id": 7, "text": "p=2^d" }, { "math_id": 8, "text": "d" }, { "math_id": 9, "text": "\\sigma" }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "2^d" }, { "math_id": 12, "text": "d=\\log_2 p" }, { "math_id": 13, "text": "\\mathbb{[l...j-1]}" }, { "math_id": 14, "text": "j" }, { "math_id": 15, "text": "\\mathbb{[j+1...r]}" }, { "math_id": 16, "text": "\\mathbb{\\oplus[l..j-1]}" }, { "math_id": 17, "text": "\\mathbb{\\oplus[l..j]}" }, { "math_id": 18, "text": "\\mathbb{\\oplus[j+1..r]}" }, { "math_id": 19, "text": "h > j" }, { "math_id": 20, "text": "\\mathbb{\\oplus[0..j]}" }, { "math_id": 21, "text": "\\mathbb{\\oplus[0..j..r]}" }, { "math_id": 22, "text": "\\mathbb{\\oplus[0..l-1]}" }, { "math_id": 23, "text": "\\mathbb{\\oplus[l..j..r]}" }, { "math_id": 24, "text": "\\log _{2}p" }, { "math_id": 25, "text": "PE_0" }, { "math_id": 26, "text": "PE_\\mathbb{root}" }, { "math_id": 27, "text": "\\log _{2}p - 1" }, { "math_id": 28, "text": "\\log _{2}p -1" }, { "math_id": 29, "text": "T_\\mathbb{start}" }, { "math_id": 30, "text": "T_\\mathbb{byte}" }, { "math_id": 31, "text": "(2\\log _{2}p-2)(T_\\mathbb{start} + n\\cdot T_\\mathbb{byte})" }, { "math_id": 32, "text": "\\tfrac{n}{k}" }, { "math_id": 33, "text": "(\\log _{2}p-1)\\left\n(T_\\mathbb{start} + \\frac{n}{k} \\cdot T_\\mathbb{byte}\\right)" }, { "math_id": 34, "text": "PE_{\\mathbb{root}}" }, { "math_id": 35, "text": "k > \\log_{2}p" }, { "math_id": 36, "text": "2\\log_{2}p-1 + 3(k-1)" }, { "math_id": 37, "text": "(4\\cdot\\log_{2}p-2 + 6(k-1))\\left(T_\\mathbb{start} + \\frac{n}{k} \\cdot T_\\mathbb{byte}\\right)" }, { "math_id": 38, "text": "[ \\frac {prefixSumValue_i} {{totalWork} / {numberOfProcessors}} ]" } ]
https://en.wikipedia.org/wiki?curid=6109308
61094236
Interpolation sort
Sorting algorithm in computer science Interpolation sort is a sorting algorithm that is a kind of bucket sort. It uses an interpolation formula to assign data to the buckets. A general interpolation formula is: Interpolation = INT(((Array[i] - min) / (max - min)) * (ArraySize - 1)) Algorithm. Interpolation sort (or histogram sort) is a sorting algorithm that uses an interpolation formula to disperse the data in a divide-and-conquer fashion; it can be regarded as a variant of the bucket sort algorithm. The method maintains an array that records the length of each bucket produced from the original sequence. By operating on this length array, the recursion can be replaced by iteration, which prevents the space complexity from growing to formula_2 through stacked recursive calls. The bucket lengths recorded in this array allow an auxiliary routine to dynamically allocate and release the memory for each bucket, so the space required to control the procedure is formula_3: a two-dimensional array of dynamically allocated buckets plus the array of recorded lengths. The execution complexity nevertheless remains that of an efficient sorting method, formula_4. The array of dynamically allocated buckets can be implemented with a linked list, stack, queue, associative array, tree structure, etc.; an array object such as the one provided by JavaScript is also suitable. The choice of data structure affects the speed of data access and therefore the time required for sorting. When the values to be sorted are distributed roughly uniformly, approximating an arithmetic progression, interpolation sort runs in linear time, formula_0. Histogram sort algorithm. The NIST definition: an efficient 3-pass refinement of a bucket sort algorithm. Practice. Interpolation sort implementation. JavaScript code:
Array.prototype.interpolationSort = function() {
    var divideSize = new Array();
    var end = this.length;
    divideSize[0] = end;
    while (divideSize.length > 0) { divide(this); } // Repeatedly process the bucket on top of the length stack
    function divide(A) { // Sorts the bucket occupying A[end - size .. end - 1]
        var size = divideSize.pop();
        var start = end - size;
        var min = A[start];
        var max = A[start];
        for (var i = start + 1; i < end; i++) {
            if (A[i] < min) { min = A[i]; }
            else if (A[i] > max) { max = A[i]; }
        }
        if (min == max) {
            end = start; // All keys equal: this bucket is sorted, so it becomes the end of the next one
        } else {
            var p = 0;
            var bucket = new Array(size);
            for (var i = 0; i < size; i++) { bucket[i] = new Array(); }
            for (var i = start; i < end; i++) {
                p = Math.floor(((A[i] - min) / (max - min)) * (size - 1));
                bucket[p].push(A[i]);
            }
            for (var i = 0; i < size; i++) {
                if (bucket[i].length > 0) {
                    for (var j = 0; j < bucket[i].length; j++) { A[start++] = bucket[i][j]; }
                    divideSize.push(bucket[i].length); // The length pushed last belongs to the rightmost sub-bucket
                }
            }
        }
    }
};
Interpolation sort recursive method. Worst-case space complexity: formula_1
Array.prototype.interpolationSort = function() {
    var start = 0;
    var size = this.length;
    var min = this[0];
    var max = this[0];
    for (var i = 1; i < size; i++) {
        if (this[i] < min) { min = this[i]; }
        else if (this[i] > max) { max = this[i]; }
    }
    if (min != max) {
        var bucket = new Array(size);
        for (var i = 0; i < size; i++) { bucket[i] = new Array(); }
        var interpolation = 0;
        for (var i = 0; i < size; i++) {
            interpolation = Math.floor(((this[i] - min) / (max - min)) * (size - 1));
            bucket[interpolation].push(this[i]);
        }
        for (var i = 0; i < size; i++) {
            if (bucket[i].length > 1) { bucket[i].interpolationSort(); } // Recursion
            for (var j = 0; j < bucket[i].length; j++) { this[start++] = bucket[i][j]; }
        }
    }
};
Histogram sort implementation.
Array.prototype.histogramSort = function() {
    var end = this.length;
    var sortedArray = new Array(end);
    var interpolation = new Array(end);
    var hitCount = new Array(end);
    var divideSize = new Array();
    divideSize[0] = end;
    while (divideSize.length > 0) { distribute(this); } // Repeatedly process the bucket on top of the length stack
    function distribute(A) { // Three-pass histogram step over the bucket A[end - size .. end - 1]
        var size = divideSize.pop();
        var start = end - size;
        var min = A[start];
        var max = A[start];
        for (var i = start + 1; i < end; i++) {
            if (A[i] < min) { min = A[i]; }
            else if (A[i] > max) { max = A[i]; }
        }
        if (min == max) {
            end = start; // All keys equal: this bucket is finished
        } else {
            for (var i = start; i < end; i++) { hitCount[i] = 0; }
            for (var i = start; i < end; i++) { // Pass 1: histogram of interpolated bucket indices
                interpolation[i] = start + Math.floor(((A[i] - min) / (max - min)) * (size - 1));
                hitCount[interpolation[i]]++;
            }
            for (var i = start; i < end; i++) {
                if (hitCount[i] > 0) { divideSize.push(hitCount[i]); } // Record the non-empty sub-buckets
            }
            hitCount[end - 1] = end - hitCount[end - 1]; // Pass 2: turn counts into starting offsets, from the right
            for (var i = end - 1; i > start; i--) {
                hitCount[i - 1] = hitCount[i] - hitCount[i - 1];
            }
            for (var i = start; i < end; i++) { // Pass 3: place every item at its offset
                sortedArray[hitCount[interpolation[i]]] = A[i];
                hitCount[interpolation[i]]++;
            }
            for (var i = start; i < end; i++) { A[i] = sortedArray[i]; } // Copy the distributed bucket back
        }
    }
};
Variant. Interpolation tag sort. Interpolation tag sort is a variant of interpolation sort. Applying the same bucketing and dividing method, the array data are distributed into a limited number of buckets by the mathematical interpolation formula, and each bucket is then handled again by the same procedure until the sorting is complete. Instead of recursion, interpolation tag sort uses a Boolean tag array to drive an iterative version of the procedure and to release memory as buckets are finished, avoiding the stack overflow, and resulting memory crash, that deep recursion could cause. The extra memory space required is close to formula_5: a two-dimensional array of dynamically allocated buckets and a Boolean tag array. Stacks, queues, associative arrays and tree structures can all be used as buckets; the JavaScript array object is also suitable for this sorting method. The choice of data structure affects the speed of data access and thus the time required for sorting. Linear time Θ(n) is achieved when the values in the array to be sorted are evenly distributed, since the bucket approach is not bound by the formula_6 lower limit of comparison sorting. The average performance complexity of interpolation tag sort is formula_4. Practice. JavaScript code:
Array.prototype.InterpolationTagSort = function() {
    var end = this.length;
    if (end > 1) {
        var start = 0;
        var Tag = new Array(end); // Algorithm step-1
        for (var i = 0; i < end; i++) { Tag[i] = false; }
        Divide(this);
        while (end > 1) { // Algorithm step-2
            while (Tag[--start] == false) { } // Find the next bucket's start
            Divide(this);
        }
    }
    function Divide(A) {
        var min = A[start];
        var max = A[start];
        for (var i = start + 1; i < end; i++) {
            if (A[i] < min) { min = A[i]; }
            else if (A[i] > max) { max = A[i]; }
        }
        if (min == max) { end = start; } // Algorithm step-3: start becomes the next bucket's end
        else {
            var interpolation = 0;
            var size = end - start;
            var Bucket = new Array(size); // Algorithm step-4
            for (var i = 0; i < size; i++) { Bucket[i] = new Array(); }
            for (var i = start; i < end; i++) {
                interpolation = Math.floor(((A[i] - min) / (max - min)) * (size - 1));
                Bucket[interpolation].push(A[i]);
            }
            for (var i = 0; i < size; i++) {
                if (Bucket[i].length > 0) { // Algorithm step-5
                    Tag[start] = true;
                    for (var j = 0; j < Bucket[i].length; j++) { A[start++] = Bucket[i][j]; }
                }
            }
        }
    } // Algorithm step-6
};
In-place Interpolation Tag Sort. The in-place interpolation tag sort is an in-place algorithm of interpolation sort.
In-place Interpolation Tag Sort can achieve sorting with only N swaps by maintaining N tag bits; however, the array to be sorted must be a sequence of consecutive, non-repeating integers, or a series distributed evenly enough to closely approximate an arithmetic progression, and the key values must not repeat. For example, the integers 0 to 100 can be sorted in a single pass. The number of exchanges is formula_0, the computation time complexity is formula_0, and the worst-case extra space is formula_7. If the characteristics of the series meet the conditional requirement of this sorting method, namely that the array is a sequence of consecutive integers or an arithmetical progression that does not repeat, the in-place interpolation tag sort is an excellent sorting method that is extremely fast and saves memory space. In-place Interpolation Tag Sort Algorithm. In-place Interpolation Tag Sort sorts a non-repeating consecutive integer series using only one Boolean tag array of the same length as the original array. Starting from the beginning of the array, the interpolation of the current element is calculated; it points to that element's final position, the element is swapped there, and the receiving position is marked true in the tag array. This repeats at the current index until the index holds an element whose position is already tagged, after which the index is advanced; the array is sorted when the end is reached. Algorithm process: Practice. JavaScript code:
Array.prototype.InPlaceTagSort = function() {
    var n = this.length;
    var Tag = new Array(n);
    for (var i = 0; i < n; i++) { Tag[i] = false; }
    var min = this[0];
    var max = this[0];
    for (var i = 1; i < n; i++) {
        if (this[i] < min) { min = this[i]; }
        else { if (this[i] > max) { max = this[i]; } }
    }
    if (min == max) { return; } // A single or constant key needs no sorting
    var p = 0;
    var temp = 0;
    for (var i = 0; i < n; i++) {
        while (Tag[i] == false) {
            p = Math.floor(((this[i] - min) / (max - min)) * (n - 1)); // Final position of this[i]
            temp = this[i];
            this[i] = this[p];
            this[p] = temp;
            Tag[p] = true;
        }
    }
};
needSortArray.InPlaceTagSort(); // Usage, where needSortArray holds the keys to be sorted
The origin of in-place sorting performed in O(n) time. In "Mathematical Analysis of Algorithms", Donald Knuth remarked "... that research on computational complexity is an interesting way to sharpen our tools for more routine problems we face from day to day." Knuth further pointed out that, with respect to the sorting problem, time-effective in-situ permutation is inherently connected with the problem of finding the cycle leaders, and in-situ permutations could easily be performed in formula_0 time if we were allowed to manipulate formula_8 extra "tag" bits specifying how much of the permutation has been carried out at any time. Without such tag bits, he concludes "it seems reasonable to conjecture that every algorithm will require for in-situ permutation at least formula_9 steps on the average." The in-place interpolation tag sort is a sorting algorithm of exactly the kind Knuth described: it manipulates formula_8 extra "tag" bits, finds the cycle leaders, and performs the in-situ permutation in formula_0 time. Similar sorting method. Bucket sort mixing other sorting methods and recursive algorithm. Bucket sort can be mixed with other sorting methods to complete the sorting. Combining bucket sort with insertion sort within each bucket is a fairly efficient approach, but it breaks down when the series contains a large outlier: for example, when the maximum value of the series is greater than N times the next largest value. After the series is distributed, all the elements except the maximum fall into the same bucket, the second-stage insertion sort has to do nearly all the work, and the execution complexity can fall to formula_1.
This loses the point, and the high-speed performance, of using bucket sort. Interpolation sort is instead a way of using bucket sort recursively: after each division it again uses bucket sort to disperse the series, which avoids the situation described above. For the execution complexity of recursive interpolation sort to fall to formula_1, the series would have to exhibit a factorial amplification of values throughout; in practice, there is very little chance that such a specially distributed series will occur. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "O(n)" }, { "math_id": 1, "text": "O(n^2)" }, { "math_id": 2, "text": "O(n^ 2)" }, { "math_id": 3, "text": "O(3n)" }, { "math_id": 4, "text": "O(n + k)" }, { "math_id": 5, "text": "2n+(n)bits" }, { "math_id": 6, "text": "O(n log n)" }, { "math_id": 7, "text": "O(n)bits" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "n\\log n" } ]
https://en.wikipedia.org/wiki?curid=61094236
6109428
Irregular moon
Captured satellite following an irregular orbit In astronomy, an irregular moon, irregular satellite, or irregular natural satellite is a natural satellite following a distant, inclined, and often highly elliptical and retrograde orbit. They have been captured by their parent planet, unlike regular satellites, which formed in orbit around them. Irregular moons have a stable orbit, unlike temporary satellites which often have similarly irregular orbits but will eventually depart. The term does not refer to shape; Triton, for example, is a round moon but is considered irregular due to its orbit and origins. , 228 irregular moons are known, orbiting all four of the outer planets (Jupiter, Saturn, Uranus, and Neptune). The largest of each planet are Himalia of Jupiter, Phoebe of Saturn, Sycorax of Uranus, and Triton of Neptune. Triton is rather unusual for an irregular moon; if it is excluded, then Nereid is the largest irregular moon around Neptune. It is currently thought that the irregular satellites were once independent objects orbiting the Sun before being captured by a nearby planet, early in the history of the Solar System. An alternative theory, that they originated further out in the Kuiper belt, is not supported by current observations. Definition. There is no widely accepted precise definition of an irregular satellite. Informally, satellites are considered irregular if they are far enough from the planet that the precession of their orbital plane is primarily controlled by the Sun, other planets, or other moons. In practice, the satellite's semi-major axis is compared with the radius of the planet's Hill sphere (that is, the sphere of its gravitational influence), formula_0. Irregular satellites have semi-major axes greater than 0.05 formula_0 with apoapses extending as far as to 0.65 formula_0. The radius of the Hill sphere is given in the adjacent table: Uranus and Neptune have larger Hill sphere radii than Jupiter and Saturn, despite being less massive, because they are farther from the Sun. However, no known irregular satellite has a semi-major axis exceeding 0.47 formula_0. Earth's Moon seems to be an exception: it is not usually listed as an irregular satellite even though its precession is primarily controlled by the Sun and its semi-major axis is greater than 0.05 of the radius of Earth's Hill sphere. On the other hand, Neptune's Triton, which is probably a captured object, is usually listed as irregular despite being within 0.05 of the radius of Neptune's Hill sphere, so that Triton's precession is primarily controlled by Neptune's oblateness instead of by the Sun. Neptune's Nereid and Saturn's Iapetus have semi-major axes close to 0.05 of the radius of their parent planets' Hill spheres: Nereid (with a very eccentric orbit) is usually listed as irregular, but not Iapetus. Orbits. Current distribution. The orbits of the known irregular satellites are extremely diverse, but there are certain patterns. Retrograde orbits are far more common (83%) than prograde orbits. No satellites are known with orbital inclinations higher than 60° (or smaller than 130° for retrograde satellites); moreover, apart from Nereid, no irregular moon has inclination less than 26°, and inclinations greater than 170° are only found in Saturn's system. In addition, some groupings can be identified, in which one large satellite shares a similar orbit with a few smaller ones. 
Given their distance from the planet, the orbits of the outer satellites are highly perturbed by the Sun and their orbital elements change widely over short intervals. The semi-major axis of Pasiphae, for example, changes as much as 1.5 Gm in two years (single orbit), the inclination around 10°, and the eccentricity as much as 0.4 in 24 years (twice Jupiter's orbit period). Consequently, "mean" orbital elements (averaged over time) are used to identify the groupings rather than osculating elements at the given date. (Similarly, the proper orbital elements are used to determine the families of asteroids.) Origin. Irregular satellites have been captured from heliocentric orbits. (Indeed, it appears that the irregular moons of the giant planets, the Jovian and Neptunian trojans, and grey Kuiper belt objects have a similar origin.) For this to occur, at least one of three things needs to have happened: After the capture, some of the satellites could break up leading to groupings of smaller moons following similar orbits. Resonances could further modify the orbits making these groupings less recognizable. Long-term stability. The current orbits of the irregular moons are stable, in spite of substantial perturbations near the apocenter. The cause of this stability in a number of irregulars is the fact that they orbit with a secular or Kozai resonance. In addition, simulations indicate the following conclusions: Increasing eccentricity results in smaller pericenters and large apocenters. The satellites enter the zone of the regular (larger) moons and are lost or ejected via collision and close encounters. Alternatively, the increasing perturbations by the Sun at the growing apocenters push them beyond the Hill sphere. Retrograde satellites can be found further from the planet than prograde ones. Detailed numerical integrations have shown this asymmetry. The limits are a complicated function of the inclination and eccentricity, but in general, prograde orbits with semi-major axes up to 0.47 rH (Hill sphere radius) can be stable, whereas for retrograde orbits stability can extend out to 0.67 rH. The boundary for the semimajor axis is surprisingly sharp for the prograde satellites. A satellite on a prograde, circular orbit (inclination=0°) placed at 0.5 rH would leave Jupiter in as little as forty years. The effect can be explained by so-called "evection resonance". The apocenter of the satellite, where the planet's grip on the moon is at its weakest, gets locked in resonance with the position of the Sun. The effects of the perturbation accumulate at each passage pushing the satellite even further outwards. The asymmetry between the prograde and retrograde satellites can be explained very intuitively by the Coriolis acceleration in the frame rotating with the planet. For the prograde satellites the acceleration points outward and for the retrograde it points inward, stabilising the satellite. Temporary captures. The capture of an asteroid from a heliocentric orbit is not always permanent. According to simulations, temporary satellites should be a common phenomenon. The only observed examples are and , which were temporary satellites of Earth discovered in 2006 and 2020, respectively. Physical characteristics. Comparative masses of the largest irregular moons and Jupiter's largest inner moon Amalthea (for comparison). Values are ×1018 kg. One at each outer planet is &gt; 1×1018 kg. Sycorax and Nereid are estimated, not measured; Nereid may not be a captured body. 
Mars's moons Phobos and Deimos would not be visible at this scale while Triton would dominate. Size. Because objects of a given size are more difficult to see the greater their distance from Earth, the known irregular satellites of Uranus and Neptune are larger than those of Jupiter and Saturn; smaller ones probably exist but have not yet been observed. Bearing this observational bias in mind, the size distribution of irregular satellites appears to be similar for all four giant planets. The size distribution of asteroids and many similar populations can be expressed as a power law: there are many more small objects than large ones, and the smaller the size, the more numerous the object. The mathematical relation expressing the number of objects, formula_1, with a diameter smaller than a particular size, formula_2, is approximated as: formula_3 with "q" defining the slope. The value of "q" is determined through observation. For irregular moons, a shallow power law ("q" ≃ 2) is observed for sizes of 10 to 100 km,† but a steeper law ("q" ≃ 3.5) is observed for objects smaller than 10 km. An analysis of images taken by the Canada-France-Hawaii Telescope in 2010 shows that the power law for Jupiter's population of small retrograde satellites, down to a detection limit of ≈ 400 m, is relatively shallow, at "q" ≃ 2.5. Thus it can be extrapolated that Jupiter should have moons 400 m in diameter or greater. For comparison, the distribution of large Kuiper belt objects is much steeper ("q" ≈ 4). That is, for every object of 1000 km there are a thousand objects with a diameter of 100 km, though it's unknown how far this distribution extends. The size distribution of a population may provide insights into its origin, whether through capture, collision and break-up, or accretion. Around each giant planet, there is one irregular satellite that dominates, by having over three-quarters the mass of the entire irregular satellite system: Jupiter's Himalia (about 75%), Saturn's Phoebe (about 98%), Uranus' Sycorax (about 90%), and Neptune's Nereid (about 98%). Nereid also dominates among irregular satellites taken altogether, having about two-thirds the mass of all irregular moons combined. Phoebe makes up about 17%, Sycorax about 7%, and Himalia about 5%: the remaining moons add up to about 4%. (In this discussion, Triton is not included.) Colours. The colours of irregular satellites can be studied via colour indices: simple measures of differences of the apparent magnitude of an object through blue (B), visible "i.e." green-yellow (V), and red (R) filters. The observed colours of the irregular satellites vary from neutral (greyish) to reddish (but not as red as the colours of some Kuiper belt objects). Each planet's system displays slightly different characteristics. Jupiter's irregulars are grey to slightly red, consistent with C, P and D-type asteroids. Some groups of satellites are observed to display similar colours (see later sections). Saturn's irregulars are slightly redder than those of Jupiter. The large Uranian irregular satellites (Sycorax and Caliban) are light red, whereas the smaller Prospero and Setebos are grey, as are the Neptunian satellites Nereid and Halimede. Spectra. With the current resolution, the visible and near-infrared spectra of most satellites appear featureless. So far, water ice has been inferred on Phoebe and Nereid and features attributed to aqueous alteration were found on Himalia. Rotation. 
Regular satellites are usually tidally locked (that is, their orbit is synchronous with their rotation so that they only show one face toward their parent planet). In contrast, tidal forces on the irregular satellites are negligible given their distance from the planet, and rotation periods in the range of only ten hours have been measured for the biggest moons Himalia, Phoebe, Sycorax, and Nereid (to compare with their orbital periods of hundreds of days). Such rotation rates are in the same range that is typical for asteroids. Triton, being much larger and closer to its parent planet, is tidally locked. Families with a common origin. Some irregular satellites appear to orbit in 'groups', in which several satellites share similar orbits. The leading theory is that these objects constitute collisional families, parts of a larger body that broke up. Dynamic groupings. Simple collision models can be used to estimate the possible dispersion of the orbital parameters given a velocity impulse Δ"v". Applying these models to the known orbital parameters makes it possible to estimate the Δ"v" necessary to create the observed dispersion. A Δ"v" of tens of meters per seconds (5–50 m/s) could result from a break-up. Dynamical groupings of irregular satellites can be identified using these criteria and the likelihood of the common origin from a break-up evaluated. When the dispersion of the orbits is too wide (i.e. it would require Δ"v" in the order of hundreds of m/s) Colour groupings. When the colours and spectra of the satellites are known, the homogeneity of these data for all the members of a given grouping is a substantial argument for a common origin. However, lack of precision in the available data often makes it difficult to draw statistically significant conclusions. In addition, the observed colours are not necessarily representative of the bulk composition of the satellite. Observed groupings. Irregular satellites of Jupiter. Typically, the following groupings are listed (dynamically tight groups displaying homogenous colours are listed in bold) Sinope, sometimes included into the Pasiphae group, is red and given the difference in inclination, it could be captured independently. Pasiphae and Sinope are also trapped in secular resonances with Jupiter. Irregular satellites of Saturn. The following groupings are commonly listed for Saturn's satellites: Irregular satellites of Uranus and Neptune. According to current knowledge, the number of irregular satellites orbiting Uranus and Neptune is smaller than that of Jupiter and Saturn. However, it is thought that this is simply a result of observational difficulties due to the greater distance of Uranus and Neptune. The table at right shows the minimum radius (rmin) of satellites that can be detected with current technology, assuming an albedo of 0.04; thus, there are almost certainly small Uranian and Neptunian moons that cannot yet be seen. Due to the smaller numbers, statistically significant conclusions about the groupings are difficult. A single origin for the retrograde irregulars of Uranus seems unlikely given a dispersion of the orbital parameters that would require high impulse (Δ"v" ≈ 300 km), implying a large diameter of the impactor (395 km), which is incompatible in turn with the size distribution of the fragments. Instead, the existence of two groupings has been speculated: These two groups are distinct (with 3σ confidence) in their distance from Uranus and in their eccentricity. 
However, these groupings are not directly supported by the observed colours: Caliban and Sycorax appear light red, whereas the smaller moons are grey. For Neptune, a possible common origin of Psamathe and Neso has been noted. Given the similar (grey) colours, it was also suggested that Halimede could be a fragment of Nereid. The two satellites have had a very high probability (41%) of collision over the age of the solar system. Exploration. To date, the only irregular satellites to have been visited close-up by a spacecraft are Triton and Phoebe, the largest of Neptune's and Saturn's irregulars respectively. Triton was imaged by "Voyager 2" in 1989 and Phoebe by the "Cassini" probe in 2004. "Voyager" 2 also captured a distant image of Neptune's Nereid in 1989, and "Cassini" captured a distant, low-resolution image of Jupiter's Himalia in 2000. "New Horizons" captured low-resolution images of Jupiter's Himalia, Elara, and Callirrhoe in 2007. Throughout the "Cassini" mission, many Saturnian irregulars were observed from a distance: Albiorix, Bebhionn, Bergelmir, Bestla, Erriapus, Fornjot, Greip, Hati, Hyrrokkin, Ijiraq, Kari, Kiviuq, Loge, Mundilfari, Narvi, Paaliaq, Siarnaq, Skathi, Skoll, Suttungr, Tarqeq, Tarvos, Thrymr, and Ymir. The Tianwen-4 mission (to launch 2029) is planned to focus on the regular moon Callisto around Jupiter, but it may fly-by several irregular Jovian satellites before settling into Callistonian orbit. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "r_H" }, { "math_id": 1, "text": "N\\,\\! " }, { "math_id": 2, "text": "D\\,\\! " }, { "math_id": 3, "text": " \\frac{d N}{d D} \\sim D^{-q}" } ]
https://en.wikipedia.org/wiki?curid=6109428
61097
Roche limit
Orbital radius at which a satellite might break up due to gravitational force In celestial mechanics, the Roche limit, also called Roche radius, is the distance from a celestial body within which a second celestial body, held together only by its own force of gravity, will disintegrate because the first body's tidal forces exceed the second body's self-gravitation. Inside the Roche limit, orbiting material disperses and forms rings, whereas outside the limit, material tends to coalesce. The Roche radius depends on the radius of the first body and on the ratio of the bodies' densities. The term is named after Édouard Roche (, ), the French astronomer who first calculated this theoretical limit in 1848. Explanation. The Roche limit typically applies to a satellite's disintegrating due to tidal forces induced by its "primary", the body around which it orbits. Parts of the satellite that are closer to the primary are attracted more strongly by gravity from the primary than parts that are farther away; this disparity effectively pulls the near and far parts of the satellite apart from each other, and if the disparity (combined with any centrifugal effects due to the object's spin) is larger than the force of gravity holding the satellite together, it can pull the satellite apart. Some real satellites, both natural and artificial, can orbit within their Roche limits because they are held together by forces other than gravitation. Objects resting on the surface of such a satellite would be lifted away by tidal forces. A weaker satellite, such as a comet, could be broken up when it passes within its Roche limit. Since, within the Roche limit, tidal forces overwhelm the gravitational forces that might otherwise hold the satellite together, no satellite can gravitationally coalesce out of smaller particles within that limit. Indeed, almost all known planetary rings are located within their Roche limit. (Notable exceptions are Saturn's E-Ring and Phoebe ring. These two rings could possibly be remnants from the planet's proto-planetary accretion disc that failed to coalesce into moonlets, or conversely have formed when a moon passed within its Roche limit and broke apart.) The gravitational effects occurring below the Roche limit is not the only factor that causes comets to break apart. Splitting by thermal stress, internal gas pressure and rotational splitting are other ways for a comet to split under stress. Determination. The limiting distance to which a satellite can approach without breaking up depends on the rigidity of the satellite. At one extreme, a completely rigid satellite will maintain its shape until tidal forces break it apart. At the other extreme, a highly fluid satellite gradually deforms leading to increased tidal forces, causing the satellite to elongate, further compounding the tidal forces and causing it to break apart more readily. Most real satellites would lie somewhere between these two extremes, with tensile strength rendering the satellite neither perfectly rigid nor perfectly fluid. For example, a rubble-pile asteroid will behave more like a fluid than a solid rocky one; an icy body will behave quite rigidly at first but become more fluid as tidal heating accumulates and its ices begin to melt. But note that, as defined above, the Roche limit refers to a body held together solely by the gravitational forces which cause otherwise unconnected particles to coalesce, thus forming the body in question. 
The Roche limit is also usually calculated for the case of a circular orbit, although it is straightforward to modify the calculation to apply to the case (for example) of a body passing the primary on a parabolic or hyperbolic trajectory. Rigid satellites. The "rigid-body" Roche limit is a simplified calculation for a spherical satellite. Irregular shapes such as those of tidal deformation on the body or the primary it orbits are neglected. It is assumed to be in hydrostatic equilibrium. These assumptions, although unrealistic, greatly simplify calculations. The Roche limit for a rigid spherical satellite is the distance, formula_0, from the primary at which the gravitational force on a test mass at the surface of the object is exactly equal to the tidal force pulling the mass away from the object: formula_1 where formula_2 is the radius of the primary, formula_3 is the density of the primary, and formula_4 is the density of the satellite. This can be equivalently written as formula_5 where formula_6 is the radius of the secondary, formula_7 is the mass of the primary, and formula_8 is the mass of the secondary. This does not depend on the size of the objects, but on the ratio of densities. This is the orbital distance inside of which loose material (e.g. regolith) on the surface of the satellite closest to the primary would be pulled away, and likewise material on the side opposite the primary will also go away from, rather than toward, the satellite. Fluid satellites. A more accurate approach for calculating the Roche limit takes the deformation of the satellite into account. An extreme example would be a tidally locked liquid satellite orbiting a planet, where any force acting upon the satellite would deform it into a prolate spheroid. The calculation is complex and its result cannot be represented in an exact algebraic formula. Roche himself derived the following approximate solution for the Roche limit: formula_9 However, a better approximation that takes into account the primary's oblateness and the satellite's mass is: formula_10 where formula_11 is the oblateness of the primary. The fluid solution is appropriate for bodies that are only loosely held together, such as a comet. For instance, comet Shoemaker–Levy 9's decaying orbit around Jupiter passed within its Roche limit in July 1992, causing it to fragment into a number of smaller pieces. On its next approach in 1994 the fragments crashed into the planet. Shoemaker–Levy 9 was first observed in 1993, but its orbit indicated that it had been captured by Jupiter a few decades prior. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
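As a numerical illustration of the two formulas above (a rough sketch: the mean densities and Earth radius below are rounded literature values, so the results are only approximate), the rigid-body limit for the Earth-Moon pair comes out near 9,500 km from Earth's centre and Roche's fluid approximation near 18,400 km:
# Rough Roche-limit estimates for the Earth-Moon pair (rounded input values).
R_earth = 6371e3          # radius of the primary, m
rho_earth = 5514.0        # mean density of the primary, kg/m^3
rho_moon = 3344.0         # mean density of the satellite, kg/m^3

d_rigid = R_earth * (2 * rho_earth / rho_moon) ** (1 / 3)
d_fluid = 2.44 * R_earth * (rho_earth / rho_moon) ** (1 / 3)
print(round(d_rigid / 1e3), round(d_fluid / 1e3))   # roughly 9500 km and 18400 km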
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": " d = R_M\\left(2 \\frac {\\rho_M} {\\rho_m} \\right)^{\\frac{1}{3}} " }, { "math_id": 2, "text": "R_M" }, { "math_id": 3, "text": "\\rho_M" }, { "math_id": 4, "text": "\\rho_m" }, { "math_id": 5, "text": " d = R_m\\left(2 \\frac {M_M} {M_m} \\right)^{\\frac{1}{3}} " }, { "math_id": 6, "text": "R_m" }, { "math_id": 7, "text": "M_M" }, { "math_id": 8, "text": "M_m" }, { "math_id": 9, "text": " d \\approx 2.44R\\left( \\frac {\\rho_M} {\\rho_m} \\right)^{1/3} " }, { "math_id": 10, "text": " d \\approx 2.423 R\\left( \\frac {\\rho_M} {\\rho_m} \\right)^{1/3} \\left( \\frac{(1+\\frac{m}{3M})+\\frac{c}{3R}(1+\\frac{m}{M})}{1-c/R} \\right)^{1/3} " }, { "math_id": 11, "text": "c/R" } ]
https://en.wikipedia.org/wiki?curid=61097
61098866
Circular measure
Type of unit of measurement of area A circular measure was used in comparing circular cross-sections, e.g., of wires. A circular unit of area is the area of the circle whose diameter is one linear unit. For example, 1 circular mil is equivalent to 0.7854 square mil in area, and 1 circular millimeter = 1550 circular mils = 0.7854 square millimeter. Here formula_0 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
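As a short check of the conversion factor (illustrative only; the diameter value is hypothetical): a circle of diameter d has area (π/4)d² in square units, while by definition its area is d² in circular units of the same linear unit, so one circular unit equals π/4 ≈ 0.7854 square units.
import math

d_mil = 2.0                                   # wire diameter in mils (hypothetical value)
area_circular_mil = d_mil ** 2                # area in circular mils, by definition
area_square_mil = math.pi / 4 * d_mil ** 2    # ordinary area of the same circle
print(area_circular_mil, round(area_square_mil, 4))   # 4.0 circular mils, about 3.1416 square mils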
[ { "math_id": 0, "text": "0.7854 = \\pi / 4." } ]
https://en.wikipedia.org/wiki?curid=61098866
61099017
Gestalt pattern matching
String-matching algorithm Gestalt pattern matching, also Ratcliff/Obershelp pattern recognition, is a string-matching algorithm for determining the similarity of two strings. It was developed in 1983 by John W. Ratcliff and John A. Obershelp and published in the Dr. Dobb's Journal in July 1988. Algorithm. The similarity of two strings formula_0 and formula_1 is determined by the formula below, calculating twice the number of matching characters formula_2 divided by the total number of characters of both strings. The matching characters are defined as some longest common substring plus, recursively, the number of matching characters in the non-matching regions on both sides of the longest common substring: formula_3 where the similarity metric can take a value between zero and one: formula_4 The value of 1 stands for the complete match of the two strings, whereas the value of 0 means there is no match and not even one common letter. Sample. The longest common substring is codice_0 (light grey) with 5 characters. There is no further substring on the left. The non-matching substrings on the right side are codice_1 and codice_2. They again have a longest common substring codice_3 (dark gray) with length 2. The similarity metric is determined by: formula_5 Properties. The Ratcliff/Obershelp matching characters can be substantially different from each longest common subsequence of the given strings. For example, formula_6 and formula_7 have formula_8 as their only longest common substring, and no common characters right of its occurrence, and likewise left, leading to formula_9. However, the longest common subsequence of formula_0 and formula_1 is formula_10, with a total length of formula_11. Complexity. The execution time of the algorithm is formula_12 in the worst case and formula_13 in the average case. By changing the computing method, the execution time can be improved significantly. Commutative property. The Python library implementation of the gestalt pattern matching algorithm is not commutative: formula_14 For the two strings formula_15 and formula_16 the metric result for formula_17 is formula_18 with the substrings codice_4, codice_5, codice_6, codice_7, whereas for formula_19 the metric is formula_20 with the substrings codice_4, codice_9, codice_5, codice_11, codice_12. Applications. The Python codice_13 library, which was introduced in version 2.1, implements a similar algorithm that predates the Ratcliff-Obershelp algorithm. Due to the unfavourable runtime behaviour of this similarity metric, three methods have been implemented. Two of them return an upper bound in a faster execution time. The fastest variant only compares the lengths of the two strings: formula_21 The second upper bound calculates twice the number of characters of formula_0 that also occur in formula_1 (counted with multiplicity), divided by the total length of both strings; the order of the characters is ignored: formula_22
import collections

def quick_ratio(s1: str, s2: str) -> float:
    """Return an upper bound on ratio() relatively quickly."""
    length = len(s1) + len(s2)
    if not length:
        return 1.0
    intersect = collections.Counter(s1) & collections.Counter(s2)
    matches = sum(intersect.values())
    return 2.0 * matches / length

Trivially the following applies: formula_23 and formula_24.
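The asymmetry described under "Commutative property" above can be reproduced directly with Python's difflib (a small illustration; the section above reports the two values as 24/40 and 26/40):
from difflib import SequenceMatcher

s1 = "GESTALT PATTERN MATCHING"
s2 = "GESTALT PRACTICE"
# Swapping the arguments changes the result, as reported in the section above.
print(SequenceMatcher(None, s1, s2).ratio())
print(SequenceMatcher(None, s2, s1).ratio())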
[ { "math_id": 0, "text": "S_1" }, { "math_id": 1, "text": "S_2" }, { "math_id": 2, "text": "K_m" }, { "math_id": 3, "text": "\nD_{ro} = \\frac{2K_m}{|S_1|+|S_2|}\n" }, { "math_id": 4, "text": "0 \\leq D_{ro} \\leq 1" }, { "math_id": 5, "text": "\n\\frac{2K_m}{|S_1|+|S_2|} = \\frac{2 \\cdot (|\\text{''WIKIM''}|+|\\text{''IA''}|)}{|S_1|+|S_2|} = \\frac{2 \\cdot (5 + 2)}{9 + 9} = \\frac{14}{18} = 0.\\overline{7}\n" }, { "math_id": 6, "text": "S_1 = q \\; ccccc \\; r \\; ddd \\; s \\; bbbb \\; t \\; eee \\; u" }, { "math_id": 7, "text": "S_2 = v \\; ddd \\; w \\; bbbb \\; x \\; eee \\; y \\; ccccc \\; z" }, { "math_id": 8, "text": "ccccc" }, { "math_id": 9, "text": "K_m = 5" }, { "math_id": 10, "text": "(ddd) \\; (bbbb) \\; (eee)" }, { "math_id": 11, "text": "10" }, { "math_id": 12, "text": "O(n^3)" }, { "math_id": 13, "text": "O(n^2)" }, { "math_id": 14, "text": "\nD_{ro}(S_1, S_2) \\neq D_{ro}(S_2, S_1).\n" }, { "math_id": 15, "text": "\nS_1 = \\text{GESTALT PATTERN MATCHING}\n" }, { "math_id": 16, "text": "\nS_2 = \\text{GESTALT PRACTICE}\n" }, { "math_id": 17, "text": "D_{ro}(S_1, S_2)" }, { "math_id": 18, "text": "\\frac{24}{40}" }, { "math_id": 19, "text": " D_{ro}(S_2, S_1)" }, { "math_id": 20, "text": "\\frac{26}{40}" }, { "math_id": 21, "text": "D_{rqr} = \\frac{2 \\cdot \\min(|S1|, |S2|)}{|S1| + |S2|}" }, { "math_id": 22, "text": "D_{qr} = \\frac{2 \\cdot \\big | \\{\\!\\vert S1 \\vert\\!\\} \\cap \\{\\!\\vert S2 \\vert\\!\\} \\big |}{|S1| + |S2|}" }, { "math_id": 23, "text": "0 \\leq D_{ro} \\leq D_{qr} \\leq D_{rqr} \\leq 1" }, { "math_id": 24, "text": "0 \\leq K_m \\leq | \\{\\!\\vert S1 \\vert\\!\\} \\cap \\{\\!\\vert S2 \\vert\\!\\} \\big | \\leq \\min(|S1|, |S2|) \\leq \\frac {|S1| + |S2|}{2}" } ]
https://en.wikipedia.org/wiki?curid=61099017
61109403
Sylvester's determinant identity
Identity in algebra useful for evaluating certain types of determinants In matrix theory, Sylvester's determinant identity is an identity useful for evaluating certain types of determinants. It is named after James Joseph Sylvester, who stated this identity without proof in 1851. Given an "n"-by-"n" matrix formula_0, let formula_1 denote its determinant. Choose a pair formula_2 of "m"-element ordered subsets of formula_3, where "m" ≤ "n". Let formula_4 denote the ("n"−"m")-by-("n"−"m") submatrix of formula_0 obtained by deleting the rows in formula_5 and the columns in formula_6. Define the auxiliary "m"-by-"m" matrix formula_7 whose elements are equal to the following determinants formula_8 where formula_9, formula_10 denote the "m"−1 element subsets of formula_5 and formula_6 obtained by deleting the elements formula_11 and formula_12, respectively. Then the following is Sylvester's determinantal identity (Sylvester, 1851): formula_13 When "m" = 2, this is the Desnanot-Jacobi identity (Jacobi, 1851). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
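As a quick numerical check (an illustration, not part of the statement above; the helper names are chosen here), the identity can be verified with NumPy for the case "m" = 2 with "u" = "v" = (1, "n"), which is exactly the Desnanot-Jacobi identity:
import numpy as np

def minor(A, rows, cols):
    # Determinant of A with the listed rows and columns deleted (0-based indices).
    keep_r = [i for i in range(A.shape[0]) if i not in rows]
    keep_c = [j for j in range(A.shape[1]) if j not in cols]
    return np.linalg.det(A[np.ix_(keep_r, keep_c)])

def sylvester_sides(A, u, v):
    # Left side: det(A) * det(A^u_v)^(m-1); right side: determinant of the auxiliary m-by-m matrix.
    m = len(u)
    lhs = np.linalg.det(A) * minor(A, u, v) ** (m - 1)
    aux = np.array([[minor(A, [x for x in u if x != u[i]], [y for y in v if y != v[j]])
                     for j in range(m)] for i in range(m)])
    return lhs, np.linalg.det(aux)

A = np.random.default_rng(0).standard_normal((5, 5))
print(sylvester_sides(A, [0, 4], [0, 4]))   # the two values agree up to rounding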
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\det(A)" }, { "math_id": 2, "text": "u =(u_1, \\dots, u_m), v =(v_1, \\dots, v_m) \\subset (1, \\dots, n)" }, { "math_id": 3, "text": "(1, \\dots, n)" }, { "math_id": 4, "text": "A^u_v" }, { "math_id": 5, "text": "u" }, { "math_id": 6, "text": "v" }, { "math_id": 7, "text": "\\tilde{A}^u_v" }, { "math_id": 8, "text": "\n(\\tilde{A}^u_v)_{ij} := \\det(A^{u[\\hat{u}_i]}_{v[\\hat{v}_j]}),\n" }, { "math_id": 9, "text": "u[\\hat{u_i}]" }, { "math_id": 10, "text": "v[\\hat{v_j}]" }, { "math_id": 11, "text": "u_i" }, { "math_id": 12, "text": "v_j" }, { "math_id": 13, "text": "\\det(A)(\\det(A^u_v))^{m-1}=\\det(\\tilde{A}^u_v)." } ]
https://en.wikipedia.org/wiki?curid=61109403
61125010
Plant growth analysis
Plant growth analysis refers to a set of concepts and equations by which changes in the size of plants over time can be summarised and dissected into component variables. It is often applied in the analysis of growth of individual plants, but can also be used in situations where crop growth is followed over time. Absolute size. In comparing different treatments, genotypes or species, the simplest type of growth analysis is to evaluate the size of plants after a certain period of growth, typically from the time of germination. In plant biology, size is often measured as the dry mass of whole plants (M), or of the above-ground part of them. In high-throughput phenotyping platforms, the number of green pixels derived from photographs of plants taken from various directions is often the variable used to estimate plant size. Absolute growth rate (AGR). When plant size has been determined on more than one occasion, the increase in size over a given time period can be determined. The Absolute Growth Rate (AGR) is the temporal rate of change of size (mass): formula_0 where ΔM is the change in mass of the plant during the time interval Δt. Absolute size at the end of an experiment then depends on seed mass, germination time, and the integration of AGR over all time steps measured. Relative growth rate (RGR). AGR is not constant, especially not in the first phases of plant growth. When there are enough resources available (light, nutrients, water), the increase of biomass after germination will be more or less proportional to the mass of the plant already present: small right after germination, larger when plants become bigger. Blackman (1919) was the first to recognize that this was similar to money accumulating in a bank account, with the increase determined by compounding interest. He applied the same mathematical formula to describe plant size over time. The equation for exponential mass growth rate in plant growth analysis is often expressed as: formula_1 where M(t) is the plant mass at time t, M0 the plant mass at the start of the period considered, and RGR the relative growth rate. RGR can then be written as: formula_2 In the case of two harvests, RGR can be simply calculated as formula_3 In the case of more harvests, a linear equation can be fitted through the ln-transformed size data. The slope of this line gives an estimate of the average RGR for the period under investigation, with units of g.g−1.day−1. A time-course of RGR can be estimated by fitting a non-linear equation through the ln-transformed size data, and calculating the derivative with respect to time. For plants, RGR values are typically (much) smaller than 1 g.g−1.day−1. Therefore, values are often reported in mg.g−1.day−1, with normal ranges for young, herbaceous species of 50–350 mg.g−1.day−1, and values for tree seedlings of 10–100 mg.g−1.day−1. RGR components (LAR and ULR). Soon after its inception, the RGR concept was expanded by a simple extension of the RGR equation: formula_4 where A is the total leaf area of a plant. The first component is called the 'Leaf Area Ratio' (LAR) and indicates how much leaf area there is per unit total plant mass. For young plants, values are often in the range of 1–20 m2 kg−1; for tree seedlings they are generally less. The second component is the 'Unit Leaf Rate' (ULR), which is also termed 'Net Assimilation Rate' (NAR). This variable indicates the rate of biomass increase per unit leaf area, with typical values ranging from 5-15 g.m−2.day−1 for herbaceous species and 1-5 g.m−2.day−1 for woody seedlings.
Although the ULR is not equal to the rate of photosynthesis per unit leaf area, both values are often well correlated.&lt;br&gt; The LAR can be further subdivided into two other variables that are relevant for plant biology: Specific leaf area (SLA) and Leaf Mass Fraction (LMF). SLA is the leaf area of a plant (or a given leaf) divided by leaf mass. LMF characterizes the fraction of total plant biomass that is allocated to leaves. In formula: formula_5 where ML is the mass of the leaves.&lt;br&gt; Thus, by sequentially harvesting leaf, stem, and root biomass as well as determining leaf area, deeper insight can be achieved in the various components of a plant and how they together determine whole plant growth. Alternative ways to decompose RGR. As much as RGR can be seen from the perspective of C-economy, by calculating leaf area and photosynthesis, it could equally well be approached from the perspective of organic N concentration, and the rate of biomass increase per unit organic N: formula_6 where N is total plant organic Nitrogen, PNC is the plant organic nitrogen concentration, and NP, the nitrogen productivity, the increase in biomass per unit organic N present. Another way to break down RGR is to consider biomass increase from the perspective of a nutrient (element) and its uptake rate by the roots. RGR can then be rewritten as a function of the Root Mass Fraction (RMF), the concentration of that element in the plant and the specific uptake rate of roots for the element of interest. Under the condition that the concentration of the element of interest remains constant (i.e. dE/dM = E/M), RGR can be also written as: formula_7, which can be expanded to: formula_8 where MR is the mass of the roots, SAR the specific uptake rate of the roots (moles of E taken up per unit root mass and per time), and [E] the concentration of element E in the plant. Size-dependence of RGR. Although the increase in plant size is more or less proportional to plant mass already present, plants do not grow strictly exponentially. In a period of several days, plant growth rate will vary because of diurnal changes in light intensity, and day-to-day differences in the daily light integral. At night, plants will respire and even lose biomass. Over a longer period (weeks to months), RGR will generally decrease because of several reasons. First, the newly formed leaves at the top of the plant will begin to shade lower leaves, and therefore, average photosynthesis per unit area will go down, and so will ULR. Second, non-photosynthetic biomass, especially stems, will increase with plant size. The RGR of trees in particular decreases with increasing size due in part to the large allocation to structural material in the trunk required to hold the leaves up in the canopy. Overall, respiration scales with total biomass, but photosynthesis only scales with photosynthetically active leaf area and as a result growth rate slows down as total biomass increases and LAR decreases. And thirdly, depending on the growth conditions applied, shoot and/or root space may become confined with plant age, or water and/or nutrient supply do not keep pace with plant size and become more and more limiting. One way to 'correct' for these differences is by plotting RGR and their growth components directly against plant size. If RGR specifically is of interest, another approach is to separate size effects from intrinsic growth differences mathematically. 
Decomposing the RGR ignores the dependency of plant growth rate on plant size (or allometry) and assumes, incorrectly, that plant growth is directly proportional to total plant size (isometry). As a result, RGR analyses assume that size effects are isometric (scaling exponents are 1.0) instead of allometric (exponents less than 1) or hypoallometric (exponents greater than 1). It has been demonstrated that the traditional RGR decomposition omits several of the critical traits influencing growth, as well as the allometric dependency of leaf mass, and it has been shown how to incorporate allometric dependencies into RGR growth equations. This has been used to derive a generalized trait-based model of plant growth (see also Metabolic Scaling Theory and Metabolic Theory of Ecology) to show how plant size and the allometric scaling of key functional traits interact to regulate variation in whole-plant relative growth rate. Growth analysis in agronomy. Plant growth analysis is often applied at the individual level to young, well-spaced plants grown individually in pots. However, plant growth is also highly relevant in agronomy, where plants are generally grown at high density and to seed maturity. After canopy closure, plant growth is no longer proportional to size, but becomes linear, eventually saturating at a maximum value when crops mature. Equations used to describe plant size over time are then often expolinear or sigmoidal. Agronomic studies often focus on the above-ground part of plant biomass, and consider crop growth rates rather than individual plant growth rates. Nonetheless there is a strong correspondence between the two approaches. More specifically, the ULR as discussed above shows up in crop growth analysis as well, as: formula_9 where CGR is the Crop Growth Rate, the increase in (shoot) biomass per unit ground area, Ag the ground area occupied by a crop, A the total amount of leaf area on that ground area, and LAI the Leaf Area Index, the amount of leaf area per unit ground area. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
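As a small worked example of the two-harvest RGR formula given above (the masses, times and function name are hypothetical choices for this sketch):
import math

def rgr_two_harvests(m1, m2, t1, t2):
    # RGR = (ln(M2) - ln(M1)) / (t2 - t1), in g.g-1.day-1 when mass is in g and time in days.
    return (math.log(m2) - math.log(m1)) / (t2 - t1)

# A seedling growing from 1.0 g to 2.0 g dry mass over 5 days:
print(round(rgr_two_harvests(1.0, 2.0, 0, 5), 3))   # about 0.139 g.g-1.day-1 (139 mg.g-1.day-1)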
[ { "math_id": 0, "text": "AGR = \\lim_{\\Delta t \\to 0} {\\Delta M\\over \\Delta t} = {dM\\over dt}" }, { "math_id": 1, "text": "M(t) = M_0\\exp(RGR\\cdot t) " }, { "math_id": 2, "text": "RGR \\ = \\ \\frac{1}{M} \\frac{dM}{dt} " }, { "math_id": 3, "text": "RGR \\ = \\ {\\operatorname{\\ln(M_2) \\ - \\ \\ln(M_1)}\\over\\operatorname{t_2 \\ - \\ t_1}\\!}" }, { "math_id": 4, "text": "RGR \\ = \\ \\frac{A}{M} \\ . \\ \\frac{1}{A}\\frac{dM}{dt} \\ = \\ LAR\\ . \\ ULR " }, { "math_id": 5, "text": "RGR \\ = \\ \\frac{A}{M_L} \\ . \\ \\frac{M_L}{M}\\ . \\ \\frac{1}{A}\\frac{dM}{dt} \\ = \\ SLA\\ . \\ LMF \\ . \\ ULR " }, { "math_id": 6, "text": "RGR \\ = \\ \\frac{N}{M} \\ . \\ \\frac{1}{N}\\frac{dM}{dt} \\ = \\ PNC\\ . \\ NP " }, { "math_id": 7, "text": "RGR \\ = \\ = \\ \\frac{1}{E} \\frac{dE}{dt}" }, { "math_id": 8, "text": "RGR \\ = \\ \\frac{M_R}{M} \\ . \\ \\frac{M}{E}\\ . \\ \\frac{1}{M_R}\\frac{dE}{dt} \\ = \\ RMF\\ . \\ \\frac{1}{[E]} \\ . \\ SAR " }, { "math_id": 9, "text": "CGR \\ = \\ \\frac{1}{A_g} \\ . \\ \\frac{dM}{dt} \\ = \\ \\frac{A}{A_g} \\ .\\ \\frac{1}{A}\\frac{dM}{dt} \\ = \\ LAI\\ . \\ ULR " } ]
https://en.wikipedia.org/wiki?curid=61125010
6112557
Photoelastic modulator
A photoelastic modulator (PEM) is an optical device used to modulate the polarization of a light source. The photoelastic effect is used to change the birefringence of the optical element in the photoelastic modulator. PEM was first invented by J. Badoz in the 1960s and originally called a "birefringence modulator." It was initially developed for physical measurements including optical rotary dispersion and Faraday rotation, polarimetry of astronomical objects, strain-induced birefringence, and ellipsometry. Later developers of the photoelastic modulator include J.C Kemp, S.N Jasperson and S.E Schnatterly. Description. The basic design of a photoelastic modulator consists of a piezoelectric transducer and a half wave resonant bar; the bar being a transparent material (now most commonly fused silica). The transducer is tuned to the natural frequency of the bar. This resonance modulation results in highly sensitive polarization measurements. The fundamental vibration of the optic is along its longest dimension. Basic principles. The principle of operation of photoelastic modulators is based on the photoelastic effect, in which a mechanically stressed sample exhibits birefringence proportional to the resulting strain. Photoelastic modulators are resonant devices where the precise oscillation frequency is determined by the properties of the optical element/transducer assembly. The transducer is tuned to the resonance frequency of the optical element along its long dimension, determined by its length and the speed of sound in the material. A current is then sent through the transducer to vibrate the optical element through stretching and compressing which changes the birefringence of the transparent material. Because of this resonant character, the birefringence of the optical element can be modulated to large amplitudes, but also by the same reason, the operation of a PEM is limited to a single frequency, and most commercial devices manufactured today operate at about 50 kHz. Applications. Polarization modulation of a light source. This is the most basic application and function of a PEM. In a typical setup, where original light source is linearly polarized at 45 degrees from the optical axis of the PEM, the resulting polarization of light is modulated at the PEM operating frequency "f", and for a sinusoidal modulating signal, it can be expressed in Jones matrix formalism as: formula_0 where "A" is the amplitude of the modulation. Linearly polarized, monochromatic light impinging at 45 degrees to the optical axis can be thought of as the sum of two components, one parallel and one perpendicular to the optical axis of the PEM. The birefringence introduced in the plate will retard one of these components more than the other, that is the PEM acts as a tunable wave plate. Typically it is adjusted to be either a quarter wave or half wave plate at the peak of the oscillation. For the quarter wave plate case, the amplitude of oscillation is adjusted so that at the given wavelength one component is alternately retarded and advanced 90 degrees relative to the other, so that the exiting light is alternately right-hand and left-hand circularly polarized at the peaks. A reference signal is taken from the modulator oscillator and is used to drive a phase-sensitive detector, the demodulator. The amplitude of oscillation is adjusted by an external applied voltage that is proportional to the wavelength of the light passing through the modulator. Polarimetry. 
A typical polarimetric setup consists of two linear polarizers forming a crossed analyzer setup, an optical sample introducing the change in the polarization of light, and a PEM further modulating the polarization state. The final detected intensities at the fundamental and second harmonic of PEM operating frequency depend on the ellipticity and rotation introduced by the sample. PEM polarimetry has the advantage that the signal is modulated at a high frequency (and often detected with a lock-in amplifier), excluding many sources of noise not at the PEM operating frequency and attenuating the white noise by the bandwidth of the lock-in amplifier. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
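To make the Jones-matrix expression above concrete, the following sketch (illustrative values: a 50 kHz device with quarter-wave peak retardation is assumed) evaluates the polarization state at a few instants over one modulation cycle; the degree of circular polarization swings between +1 and -1 at the retardation peaks, so the exiting light is alternately circularly polarized in the two senses:
import numpy as np

A, f = np.pi / 2, 50e3                    # peak retardation (quarter wave) and PEM frequency in Hz
t = np.linspace(0, 1 / f, 5)              # five instants across one modulation period
delta = A * np.sin(2 * np.pi * f * t)     # instantaneous retardation A*sin(2*pi*f*t)
Ex = np.full_like(t, 1 / np.sqrt(2))      # Jones components for 45-degree linear input behind the PEM
Ey = np.exp(1j * delta) / np.sqrt(2)
S3 = 2 * np.imag(np.conj(Ex) * Ey)        # normalised circular-polarization Stokes parameter
print(np.round(S3, 2))                    # 0 at the zero crossings, +1 and -1 at the peaks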
[ { "math_id": 0, "text": "\\psi = \\frac{1}{\\sqrt{2}}\\left(\\begin{matrix}1 \\\\ e^{i A \\sin 2 \\pi f t} \\end{matrix} \\right) " } ]
https://en.wikipedia.org/wiki?curid=6112557
6112758
Chebyshev rational functions
In mathematics, the Chebyshev rational functions are a sequence of functions which are both rational and orthogonal. They are named after Pafnuty Chebyshev. A rational Chebyshev function of degree "n" is defined as: formula_0 where "Tn"("x") is a Chebyshev polynomial of the first kind. Properties. Many properties can be derived from the properties of the Chebyshev polynomials of the first kind; other properties are unique to the functions themselves. They satisfy the recurrence relation formula_1 the relation between neighbouring derivatives formula_2 and the second-order differential equation formula_3 Orthogonality. Defining: formula_4 the orthogonality of the Chebyshev rational functions may be written: formula_5 where "cn" = 2 for "n" = 0 and "cn" = 1 for "n" ≥ 1; "δnm" is the Kronecker delta function. Expansion of an arbitrary function. For an arbitrary function "f"("x") in the corresponding weighted "L"2 space, the orthogonality relationship can be used to expand "f"("x"): formula_6 where formula_7 The first few Chebyshev rational functions, together with general expressions for them, are: formula_8 formula_9
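A small numerical check of the definition against the explicit low-order forms listed above (an illustration; NumPy's Chebyshev module is used to evaluate "Tn"):
import numpy as np

def chebyshev_rational(n, x):
    # R_n(x) = T_n((x - 1)/(x + 1)); T_n is evaluated from a unit coefficient vector.
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1
    return np.polynomial.chebyshev.chebval((x - 1) / (x + 1), coeffs)

x = 2.0
print(chebyshev_rational(1, x), (x - 1) / (x + 1))                # both 1/3
print(chebyshev_rational(2, x), (x**2 - 6 * x + 1) / (x + 1)**2)  # both -7/9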
[ { "math_id": 0, "text": "R_n(x)\\ \\stackrel{\\mathrm{def}}{=}\\ T_n\\left(\\frac{x-1}{x+1}\\right)" }, { "math_id": 1, "text": "R_{n+1}(x)=2\\left(\\frac{x-1}{x+1}\\right)R_{n}(x)-R_{n-1}(x)\\quad\\text{for}\\,n\\ge 1 " }, { "math_id": 2, "text": "(x+1)^2R_n(x)=\\frac{1}{n+1}\\frac{\\mathrm{d}}{\\mathrm{d}x}R_{n+1}(x)-\\frac{1}{n-1}\\frac{\\mathrm{d}}{\\mathrm{d}x}R_{n-1}(x) \\quad \\text{for } n\\ge 2" }, { "math_id": 3, "text": "(x+1)^2x\\frac{\\mathrm{d}^2}{\\mathrm{d}x^2}R_n(x)+\\frac{(3x+1)(x+1)}{2}\\frac{\\mathrm{d}}{\\mathrm{d}x}R_n(x)+n^2R_{n}(x) = 0" }, { "math_id": 4, "text": "\\omega(x) \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{1}{(x+1)\\sqrt{x}}" }, { "math_id": 5, "text": "\\int_{0}^\\infty R_m(x)\\,R_n(x)\\,\\omega(x)\\,\\mathrm{d}x=\\frac{\\pi c_n}{2}\\delta_{nm}" }, { "math_id": 6, "text": "f(x)=\\sum_{n=0}^\\infty F_n R_n(x)" }, { "math_id": 7, "text": "F_n=\\frac{2}{c_n\\pi}\\int_{0}^\\infty f(x)R_n(x)\\omega(x)\\,\\mathrm{d}x." }, { "math_id": 8, "text": "\\begin{align}\nR_0(x)&=1\\\\\nR_1(x)&=\\frac{x-1}{x+1}\\\\\nR_2(x)&=\\frac{x^2-6x+1}{(x+1)^2}\\\\\nR_3(x)&=\\frac{x^3-15x^2+15x-1}{(x+1)^3}\\\\\nR_4(x)&=\\frac{x^4-28x^3+70x^2-28x+1}{(x+1)^4}\\\\\nR_n(x)&=(x+1)^{-n}\\sum_{m=0}^{n} (-1)^m\\binom{2n}{2m}x^{n-m}\n\\end{align}" }, { "math_id": 9, "text": "R_n(x)=\\sum_{m=0}^{n} \\frac{(m!)^2}{(2m)!}\\binom{n+m-1}{m}\\binom{n}{m}\\frac{(-4)^m}{(x+1)^m} " } ]
https://en.wikipedia.org/wiki?curid=6112758
61128246
Transit node routing
In applied mathematics, transit node routing can be used to speed up shortest-path routing by pre-computing connections between common access nodes to a sub-network relevant to long-distance travel. Transit node routing as a framework was established in 2007 and many concrete implementations have surfaced in the years since, such as approaches using grids, highway hierarchies and contraction hierarchies. Transit node routing is a static approach that requires pre-processing of pair-wise distances between important nodes in the graph (see below how those nodes are chosen). A dynamic approach has not been published. Intuition. Long-distance travel usually involves driving along a subset of the road network such as freeways instead of e.g. urban roads. This sub-network can only be entered by using sparsely distributed access nodes. When compared to one another, multiple long-distance routes starting at the same location always use the same small set of access nodes close to the starting location to enter this network. In the same way, similar target locations are always reached by using the same access nodes close to them. This intuition only holds for long-distance travel. When travelling short distances, such access nodes might never be used because the fastest path to the target only uses local roads. Because the number of such access nodes is small compared to the overall number of nodes in a road network, all shortest routes connecting those nodes with each other can be pre-calculated and stored. When calculating a shortest path, therefore, only routes between the start and target locations and their nearby access nodes need to be calculated. General framework. Transit node routing selects a set formula_0 of transit nodes from the node set formula_1 of the road network. For every node formula_2, a set of forward access nodes formula_3 and a set of backward access nodes formula_4 is chosen among the transit nodes. During pre-processing, the distances formula_7 between every node and its access nodes as well as a table formula_5 of pairwise distances between transit nodes are computed, so that a long-distance query between a source "s" and a target "t" can be answered as formula_8 Locality filter. Short routes between close start and target locations may not require any transit nodes. In this case, the above framework leads to incorrect distances because it forces routes to visit at least one transit node. To prevent this kind of problem, a locality filter can be used. For given start and target locations, the locality filter decides whether transit node routing should be applied or whether a fallback routine should be used (a local query). Concrete instances. Transit node routing is not an algorithm but merely a framework for speeding up route planning. The general framework leaves open a few questions that need to be answered to implement it: how are transit nodes and access nodes selected, which locality filter should be used, and how should local queries be handled? The following example implementations of this framework answer these questions using different underlying methods such as grouping nodes in cells of an overlay grid and a more sophisticated implementation based on contraction hierarchies. Geometrical approach using grids. In a grid-based approach, the bounding square of all nodes is equally subdivided into square cells. How are access nodes selected? For each cell formula_9, a set of access nodes can be found by looking at an inner area formula_10 of 5x5 cells and an outer area formula_11 of 9x9 cells around formula_9. Focusing on crossing nodes (ends of edges that cross the boundary of formula_9, formula_10 or formula_11), the access nodes for formula_9 are those nodes of formula_10 that are part of a shortest path from some node in formula_9 to a node in formula_11. As access nodes for an arbitrary node formula_12, all access nodes of formula_9 are chosen. How are transit nodes selected? The set of transit nodes is exactly the union of all sets of access nodes. Which locality filter should be used? 
The way access nodes are selected implies that if source and target are more than four grid cells apart, a transit node must be passed on the shortest path and the distance can be calculated as described above. If they lie closer together, a fallback algorithm is used to obtain the distance. How should local queries be handled? Local queries are only needed if start and target already lie close together; therefore, every suitable shortest-path algorithm such as Dijkstra's algorithm or extensions thereof can be chosen. Space requirements. The pre-computed distances between each node and the corresponding access nodes as well as the pairwise distances between transit nodes need to be stored in distance tables. In the grid-based implementation outlined above, this results in 16 bytes of storage required for each node of the road graph. A full graph of the USA road network has 23,947,347 nodes. Therefore, approximately 383 MB of storage would be required to store the distance tables. Using contraction hierarchies. How are transit nodes selected? By definition, a contraction hierarchy moves important nodes (i.e. nodes that are part of many shortest paths) to the top of the hierarchy. A set of transit nodes can therefore be selected as the formula_13 highest nodes of the contraction hierarchy. How are access nodes selected? Forward access nodes of a node formula_6 can be found by running the upward search of the contraction hierarchy starting at formula_6. During the upward search, edges leaving previously found transit nodes are not relaxed. When the search has no more upward nodes left to settle, those transit nodes that have been settled are the access nodes of formula_6. Backward access nodes can be found analogously. Which locality filter should be used? If the highest node of a shortest up-down path in the hierarchy is not part of the set of transit nodes, then the query was local. This implies that neither the up-part of the path (beginning at the starting node) nor the down-part of the path (ending at the target node) can contain a transit node and there must be a common node in both paths. During the calculation of the access nodes, the search space (all visited nodes towards the top of the hierarchy) for each node can be stored without including transit nodes. When performing a query, the search spaces of the start and target nodes are checked for an intersection. If those spaces are disjoint, transit node routing can be used because the up- and down-paths must meet at a transit node. Otherwise, there could be a shortest path without a transit node. How should local queries be handled? Local queries use the regular query algorithm of the contraction hierarchy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
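The query step of the general framework translates directly into code. The following Python sketch is a toy illustration rather than a production implementation: the dictionaries of access nodes and pre-computed distances, the locality filter and the fallback routine are assumed to be produced by whichever preprocessing variant (grid-based or contraction-hierarchy-based) is used.

```python
import math

def tnr_query(s, t, access_fwd, access_bwd, d_A, D_T, is_local, local_query):
    """Answer a distance query with transit node routing.

    access_fwd[v], access_bwd[v] : forward / backward access nodes of node v
    d_A[(a, b)]                  : pre-computed distance from a node to one of
                                   its access nodes (in the direction a -> b)
    D_T[(u, v)]                  : pre-computed distance between transit nodes
    is_local(s, t)               : the locality filter
    local_query(s, t)            : fallback shortest-path routine (e.g. Dijkstra)
    """
    if is_local(s, t):
        return local_query(s, t)          # short route: transit nodes may be skipped
    best = math.inf
    for u in access_fwd[s]:               # access nodes near the source
        for v in access_bwd[t]:           # access nodes near the target
            best = min(best, d_A[(s, u)] + D_T[(u, v)] + d_A[(v, t)])
    return best
```

In the grid-based variant the locality filter tests whether source and target lie more than four grid cells apart, while in the contraction hierarchy variant it tests whether the stored search spaces of the two nodes intersect.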
[ { "math_id": 0, "text": "T \\subseteq V" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "v \\in V" }, { "math_id": 3, "text": "\\overrightarrow{A}(v) \\subseteq T" }, { "math_id": 4, "text": "\\overleftarrow{A}(v) \\subseteq T" }, { "math_id": 5, "text": "D_T" }, { "math_id": 6, "text": "v" }, { "math_id": 7, "text": "d_A" }, { "math_id": 8, "text": "d(s,t) = \\min_{u \\in \\overrightarrow{A}(s), v \\in \\overleftarrow{A}(t)}d_A(s,u) + D_T(u,v) + d_A(v,t)" }, { "math_id": 9, "text": "C" }, { "math_id": 10, "text": "I" }, { "math_id": 11, "text": "O" }, { "math_id": 12, "text": "v \\in C" }, { "math_id": 13, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=61128246
6112835
FELICS
Image compression algorithm FELICS, which stands for Fast Efficient &amp; Lossless Image Compression System, is a lossless image compression algorithm that runs about five times faster than the original lossless JPEG codec and achieves a similar compression ratio. History. It was invented by Paul G. Howard and Jeffrey S. Vitter of the Department of Computer Science at Brown University in Providence, Rhode Island, USA, and was first presented at the 1993 IEEE Data Compression Conference in Snowbird, Utah. It was successfully implemented in hardware and deployed as part of HiRISE on the Mars Reconnaissance Orbiter. Principle. Like other lossless codecs for continuous-tone images, FELICS operates by decorrelating the image and encoding it with an entropy coder. The decorrelation step uses the context formula_0, where formula_1 and formula_2, and formula_3 are the pixel's two nearest neighbors (causal, i.e. already coded and known at the decoder), which provide the context for coding the present pixel formula_4. Except at the top and left edges, these are the pixel above and the pixel to the left. P lies within the closed interval [L, H] roughly half the time. Otherwise, it is above H or below L. These three cases can be encoded as 1, 01, and 00 respectively (p. 4). In an idealized histogram of pixel values (intensity along the x-axis, frequency of occurrence along the y-axis), the distribution of P within the range [L, H] is nearly uniform with a minor peak near the center formula_5 of this range. When P falls in the range [L, H], P − L is encoded using an adjusted binary code such that values in the center of the range use floor(log2(Δ + 1)) bits and values at the ends use ceil(log2(Δ + 1)) bits (p. 2). For example, when Δ = 11, the codes for P − L in 0 to 11 may be 0000, 0001, 0010, 0011, 010, 011, 100, 101, 1100, 1101, 1110, 1111. Outside the range, P tends to follow a geometric distribution on each side (p. 3). It is encoded using a Rice code whose parameter is chosen adaptively. For each Δ and each possible Rice code parameter "k", the algorithm keeps track of the total number of bits that would have been used to encode pixels outside the range. Then, for each pixel, it chooses the Rice code parameter that has accumulated the fewest bits for the value of Δ at that pixel. Improvements. FELICS improvements include methods for estimating Δ and estimating "k". For instance, Howard and Vitter's article recognizes that relatively flat areas (with small Δ, especially where L = H) may have some noise, and compression performance in these areas improves by widening the interval, increasing the effective Δ. It is also possible to estimate the optimal "k" for a given Δ based on the mean of all prediction residues seen so far, which is faster and uses less memory than computing the number of bits used for each "k". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
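The prediction and in-range coding steps can be sketched in a few lines of Python. This is a simplified illustration rather than the reference codec: the remapping used here gives the central values the shorter code length as described above, but the resulting codewords are not guaranteed to match the example codes quoted in the text, and the out-of-range Rice coding is only indicated by a comment.

```python
def context(p1, p2):
    """Return (L, H, delta) from the two causal neighbours of the current pixel."""
    low, high = min(p1, p2), max(p1, p2)
    return low, high, high - low

def adjusted_binary(value, delta):
    """Encode value in [0, delta] so that values near the centre of the range
    get floor(log2(delta+1)) bits and values near the ends get
    ceil(log2(delta+1)) bits.  Prefix-free, but the codewords are not
    necessarily bit-identical to those used in the FELICS paper."""
    n = delta + 1
    k = n.bit_length() - 1             # floor(log2 n): the short code length
    u = (1 << (k + 1)) - n             # how many values receive a short code
    lo = (n - u) // 2                  # the u central values start here
    if lo <= value < lo + u:           # central value -> short code
        return format(value - lo, '0{}b'.format(k)) if k > 0 else ''
    idx = u + value if value < lo else value   # remap the remaining values
    return format(idx + u, '0{}b'.format(k + 1))

# Coding one pixel P given its causal neighbours:
A, B = 100, 111                        # neighbour pixel values (example numbers)
P = 105                                # current pixel value
L, H, delta = context(A, B)
if L <= P <= H:
    code = '1' + adjusted_binary(P - L, delta)   # "in range" indicator + adjusted binary
else:
    code = '01' if P > H else '00'               # indicator only; a Rice code of the
                                                 # overshoot would follow here
print(code)                            # '1001' for these example values
```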
[ { "math_id": 0, "text": "\\Delta = H - L " }, { "math_id": 1, "text": "H=max(P1,P2)" }, { "math_id": 2, "text": "L=min(P1,P2)" }, { "math_id": 3, "text": "P1,P2" }, { "math_id": 4, "text": "P" }, { "math_id": 5, "text": "(L+H)/2" } ]
https://en.wikipedia.org/wiki?curid=6112835
6113
Chain rule
For derivatives of composed functions In calculus, the chain rule is a formula that expresses the derivative of the composition of two differentiable functions f and g in terms of the derivatives of f and g. More precisely, if formula_0 is the function such that formula_1 for every x, then the chain rule is, in Lagrange's notation, formula_2 or, equivalently, formula_3 The chain rule may also be expressed in Leibniz's notation. If a variable z depends on the variable y, which itself depends on the variable x (that is, y and z are dependent variables), then z depends on x as well, via the intermediate variable y. In this case, the chain rule is expressed as formula_4 and formula_5 for indicating at which points the derivatives have to be evaluated. In integration, the counterpart to the chain rule is the substitution rule. Intuitive explanation. Intuitively, the chain rule states that knowing the instantaneous rate of change of "z" relative to "y" and that of "y" relative to "x" allows one to calculate the instantaneous rate of change of "z" relative to "x" as the product of the two rates of change. As put by George F. Simmons: "If a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man, then the car travels 2 × 4 = 8 times as fast as the man." The relationship between this example and the chain rule is as follows. Let z, y and x be the (variable) positions of the car, the bicycle, and the walking man, respectively. The rate of change of relative positions of the car and the bicycle is formula_6 Similarly, formula_7 So, the rate of change of the relative positions of the car and the walking man is formula_8 The rate of change of positions is the ratio of the speeds, and the speed is the derivative of the position with respect to the time; that is, formula_9 or, equivalently, formula_10 which is also an application of the chain rule. History. The chain rule seems to have first been used by Gottfried Wilhelm Leibniz. He used it to calculate the derivative of formula_11 as the composite of the square root function and the function formula_12. He first mentioned it in a 1676 memoir (with a sign error in the calculation). The common notation of the chain rule is due to Leibniz. Guillaume de l'Hôpital used the chain rule implicitly in his "Analyse des infiniment petits". The chain rule does not appear in any of Leonhard Euler's analysis books, even though they were written over a hundred years after Leibniz's discovery.. It is believed that the first "modern" version of the chain rule appears in Lagrange’s 1797 "Théorie des fonctions analytiques"; it also appears in Cauchy’s 1823 "Résumé des Leçons données a L’École Royale Polytechnique sur Le Calcul Infinitesimal". Statement. The simplest form of the chain rule is for real-valued functions of one real variable. It states that if "g" is a function that is differentiable at a point "c" (i.e. 
the derivative "g"′("c") exists) and "f" is a function that is differentiable at "g"("c"), then the composite function formula_13 is differentiable at "c", and the derivative is formula_14 The rule is sometimes abbreviated as formula_15 If "y" = "f"("u") and "u" = "g"("x"), then this abbreviated form is written in Leibniz notation as: formula_16 The points where the derivatives are evaluated may also be stated explicitly: formula_17 Carrying the same reasoning further, given "n" functions formula_18 with the composite function formula_19, if each function formula_20 is differentiable at its immediate input, then the composite function is also differentiable by the repeated application of Chain Rule, where the derivative is (in Leibniz's notation): formula_21 Applications. Composites of more than two functions. The chain rule can be applied to composites of more than two functions. To take the derivative of a composite of more than two functions, notice that the composite of f, g, and "h" (in that order) is the composite of f with "g" ∘ "h". The chain rule states that to compute the derivative of "f" ∘ "g" ∘ "h", it is sufficient to compute the derivative of "f" and the derivative of "g" ∘ "h". The derivative of f can be calculated directly, and the derivative of "g" ∘ "h" can be calculated by applying the chain rule again. For concreteness, consider the function formula_22 This can be decomposed as the composite of three functions: formula_23 So that formula_24. Their derivatives are: formula_25 The chain rule states that the derivative of their composite at the point "x" = "a" is: formula_26 In Leibniz's notation, this is: formula_27 or for short, formula_28 The derivative function is therefore: formula_29 Another way of computing this derivative is to view the composite function "f" ∘ "g" ∘ "h" as the composite of "f" ∘ "g" and "h". Applying the chain rule in this manner would yield: formula_30 This is the same as what was computed above. This should be expected because ("f" ∘ "g") ∘ "h" = "f" ∘ ("g" ∘ "h"). Sometimes, it is necessary to differentiate an arbitrarily long composition of the form formula_31. In this case, define formula_32 where formula_33 and formula_34 when formula_35. Then the chain rule takes the form formula_36 or, in the Lagrange notation, formula_37 Quotient rule. The chain rule can be used to derive some well-known differentiation rules. For example, the quotient rule is a consequence of the chain rule and the product rule. To see this, write the function "f"("x")/"g"("x") as the product "f"("x") · 1/"g"("x"). First apply the product rule: formula_38 To compute the derivative of 1/"g"("x"), notice that it is the composite of g with the reciprocal function, that is, the function that sends x to 1/"x". The derivative of the reciprocal function is formula_39. By applying the chain rule, the last expression becomes: formula_40 which is the usual formula for the quotient rule. Derivatives of inverse functions. Suppose that "y" = "g"("x") has an inverse function. Call its inverse function f so that we have "x" = "f"("y"). There is a formula for the derivative of f in terms of the derivative of g. To see this, note that f and g satisfy the formula formula_41 And because the functions formula_42 and x are equal, their derivatives must be equal. The derivative of x is the constant function with value 1, and the derivative of formula_42 is determined by the chain rule. 
Therefore, we have that: formula_43 To express f' as a function of an independent variable y, we substitute formula_44 for x wherever it appears. Then we can solve for f'. formula_45 For example, consider the function "g"("x") = "e""x". It has an inverse "f"("y") = ln "y". Because "g"′("x") = "e""x", the above formula says that formula_46 This formula is true whenever g is differentiable and its inverse f is also differentiable. This formula can fail when one of these conditions is not true. For example, consider "g"("x") = "x"3. Its inverse is "f"("y") = "y"1/3, which is not differentiable at zero. If we attempt to use the above formula to compute the derivative of f at zero, then we must evaluate 1/"g"′("f"(0)). Since "f"(0) = 0 and "g"′(0) = 0, we must evaluate 1/0, which is undefined. Therefore, the formula fails in this case. This is not surprising because f is not differentiable at zero. Back propagation. The chain rule forms the basis of the back propagation algorithm, which is used in gradient descent of neural networks in deep learning (artificial intelligence). Higher derivatives. Faà di Bruno's formula generalizes the chain rule to higher derivatives. Assuming that "y" "f"("u") and "u" "g"("x"), then the first few derivatives are: formula_47 Proofs. First proof. One proof of the chain rule begins by defining the derivative of the composite function "f" ∘ "g", where we take the limit of the difference quotient for "f" ∘ "g" as x approaches a: formula_48 Assume for the moment that formula_49 does not equal formula_50 for any formula_51 near formula_52. Then the previous expression is equal to the product of two factors: formula_53 If formula_54 oscillates near a, then it might happen that no matter how close one gets to a, there is always an even closer x such that "g"("x") = "g"("a"). For example, this happens near "a" = 0 for the continuous function g defined by "g"("x") = 0 for "x" = 0 and "g"("x") = "x"2 sin(1/"x") otherwise. Whenever this happens, the above expression is undefined because it involves division by zero. To work around this, introduce a function formula_55 as follows: formula_56 We will show that the difference quotient for "f" ∘ "g" is always equal to: formula_57 Whenever "g"("x") is not equal to "g"("a"), this is clear because the factors of "g"("x") − "g"("a") cancel. When "g"("x") equals "g"("a"), then the difference quotient for "f" ∘ "g" is zero because "f"("g"("x")) equals "f"("g"("a")), and the above product is zero because it equals "f"′("g"("a")) times zero. So the above product is always equal to the difference quotient, and to show that the derivative of "f" ∘ "g" at "a" exists and to determine its value, we need only show that the limit as "x" goes to "a" of the above product exists and determine its value. To do this, recall that the limit of a product exists if the limits of its factors exist. When this happens, the limit of the product of these two factors will equal the product of the limits of the factors. The two factors are "Q"("g"("x")) and ("g"("x") − "g"("a")) / ("x" − "a"). The latter is the difference quotient for g at a, and because g is differentiable at a by assumption, its limit as x tends to a exists and equals "g"′("a"). As for "Q"("g"("x")), notice that "Q" is defined wherever "f" is. Furthermore, "f" is differentiable at "g"("a") by assumption, so "Q" is continuous at "g"("a"), by definition of the derivative. The function g is continuous at a because it is differentiable at a, and therefore "Q" ∘ "g" is continuous at a. 
So its limit as "x" goes to "a" exists and equals "Q"("g"("a")), which is "f"′("g"("a")). This shows that the limits of both factors exist and that they equal "f"′("g"("a")) and "g"′("a"), respectively. Therefore, the derivative of "f" ∘ "g" at "a" exists and equals "f"′("g"("a"))"g"′("a"). Second proof. Another way of proving the chain rule is to measure the error in the linear approximation determined by the derivative. This proof has the advantage that it generalizes to several variables. It relies on the following equivalent definition of differentiability at a point: A function "g" is differentiable at "a" if there exists a real number "g"′("a") and a function "ε"("h") that tends to zero as "h" tends to zero, and furthermore formula_58 Here the left-hand side represents the true difference between the value of "g" at "a" and at "a" + "h", whereas the right-hand side represents the approximation determined by the derivative plus an error term. In the situation of the chain rule, such a function "ε" exists because "g" is assumed to be differentiable at "a". Again by assumption, a similar function also exists for "f" at "g"("a"). Calling this function "η", we have formula_59 The above definition imposes no constraints on "η"(0), even though it is assumed that "η"("k") tends to zero as "k" tends to zero. If we set "η"(0) = 0, then "η" is continuous at 0. Proving the theorem requires studying the difference "f"("g"("a" + "h")) − "f"("g"("a")) as "h" tends to zero. The first step is to substitute for "g"("a" + "h") using the definition of differentiability of "g" at "a": formula_60 The next step is to use the definition of differentiability of "f" at "g"("a"). This requires a term of the form "f"("g"("a") + "k") for some "k". In the above equation, the correct "k" varies with "h". Set "k""h" = "g"′("a") "h" + "ε"("h") "h" and the right hand side becomes "f"("g"("a") + "k""h") − "f"("g"("a")). Applying the definition of the derivative gives: formula_61 To study the behavior of this expression as "h" tends to zero, expand "k""h". After regrouping the terms, the right-hand side becomes: formula_62 Because "ε"("h") and "η"("k""h") tend to zero as "h" tends to zero, the first two bracketed terms tend to zero as "h" tends to zero. Applying the same theorem on products of limits as in the first proof, the third bracketed term also tends zero. Because the above expression is equal to the difference "f"("g"("a" + "h")) − "f"("g"("a")), by the definition of the derivative "f" ∘ "g" is differentiable at "a" and its derivative is "f"′("g"("a")) "g"′("a"). The role of "Q" in the first proof is played by "η" in this proof. They are related by the equation: formula_63 The need to define "Q" at "g"("a") is analogous to the need to define "η" at zero. Third proof. Constantin Carathéodory's alternative definition of the differentiability of a function can be used to give an elegant proof of the chain rule. Under this definition, a function f is differentiable at a point a if and only if there is a function q, continuous at a and such that "f"("x") − "f"("a") = "q"("x")("x" − "a"). There is at most one such function, and if f is differentiable at a then "f" ′("a") = "q"("a"). 
Given the assumptions of the chain rule and the fact that differentiable functions and compositions of continuous functions are continuous, we have that there exist functions q, continuous at "g"("a"), and r, continuous at a, and such that, formula_64 and formula_65 Therefore, formula_66 but the function given by "h"("x") = "q"("g"("x"))"r"("x") is continuous at a, and we get, for this a formula_67 A similar approach works for continuously differentiable (vector-)functions of many variables. This method of factoring also allows a unified approach to stronger forms of differentiability, when the derivative is required to be Lipschitz continuous, Hölder continuous, etc. Differentiation itself can be viewed as the polynomial remainder theorem (the little Bézout theorem, or factor theorem), generalized to an appropriate class of functions. Proof via infinitesimals. If formula_68 and formula_69 then choosing infinitesimal formula_70 we compute the corresponding formula_71 and then the corresponding formula_72, so that formula_73 and applying the standard part we obtain formula_74 which is the chain rule. Multivariable case. The full generalization of the chain rule to multi-variable functions (such as formula_75) is rather technical. However, it is simpler to write in the case of functions of the form formula_76 where formula_77, and formula_78 for each formula_79 As this case occurs often in the study of functions of a single variable, it is worth describing it separately. Case of scalar-valued functions with multiple inputs. Let formula_77, and formula_78 for each formula_79 To write the chain rule for the composition of functions formula_80 one needs the partial derivatives of f with respect to its k arguments. The usual notations for partial derivatives involve names for the arguments of the function. As these arguments are not named in the above formula, it is simpler and clearer to use "D"-Notation, and to denote by formula_81 the partial derivative of f with respect to its ith argument, and by formula_82 the value of this derivative at z. With this notation, the chain rule is formula_83 Example: arithmetic operations. If the function f is addition, that is, if formula_84 then formula_85 and formula_86. Thus, the chain rule gives formula_87 For multiplication formula_88 the partials are formula_89 and formula_90. Thus, formula_91 The case of exponentiation formula_92 is slightly more complicated, as formula_93 and, as formula_94 formula_95 It follows that formula_96 General rule: Vector-valued functions with multiple inputs. The simplest way for writing the chain rule in the general case is to use the total derivative, which is a linear transformation that captures all directional derivatives in a single formula. Consider differentiable functions "f" : R"m" → R"k" and "g" : R"n" → R"m", and a point a in R"n". Let "D"a "g" denote the total derivative of "g" at a and "D""g"(a) "f" denote the total derivative of "f" at "g"(a). These two derivatives are linear transformations R"n" → R"m" and R"m" → R"k", respectively, so they can be composed. The chain rule for total derivatives is that their composite is the total derivative of "f" ∘ "g" at a: formula_97 or for short, formula_98 The higher-dimensional chain rule can be proved using a technique similar to the second proof given above. Because the total derivative is a linear transformation, the functions appearing in the formula can be rewritten as matrices. 
The matrix corresponding to a total derivative is called a Jacobian matrix, and the composite of two derivatives corresponds to the product of their Jacobian matrices. From this perspective the chain rule therefore says: formula_99 or for short, formula_100 That is, the Jacobian of a composite function is the product of the Jacobians of the composed functions (evaluated at the appropriate points). The higher-dimensional chain rule is a generalization of the one-dimensional chain rule. If k, m, and n are 1, so that "f" : R → R and "g" : R → R, then the Jacobian matrices of "f" and "g" are 1 × 1. Specifically, they are: formula_101 The Jacobian of "f" ∘ "g" is the product of these 1 × 1 matrices, so it is "f"′("g"("a"))⋅"g"′("a"), as expected from the one-dimensional chain rule. In the language of linear transformations, "D""a"("g") is the function which scales a vector by a factor of "g"′("a") and "D""g"("a")("f") is the function which scales a vector by a factor of "f"′("g"("a")). The chain rule says that the composite of these two linear transformations is the linear transformation "D""a"("f" ∘ "g"), and therefore it is the function that scales a vector by "f"′("g"("a"))⋅"g"′("a"). Another way of writing the chain rule is used when "f" and "g" are expressed in terms of their components as y = "f"(u) = ("f"1(u), …, "f""k"(u)) and u = "g"(x) = ("g"1(x), …, "g""m"(x)). In this case, the above rule for Jacobian matrices is usually written as: formula_102 The chain rule for total derivatives implies a chain rule for partial derivatives. Recall that when the total derivative exists, the partial derivative in the i-th coordinate direction is found by multiplying the Jacobian matrix by the i-th basis vector. By doing this to the formula above, we find: formula_103 Since the entries of the Jacobian matrix are partial derivatives, we may simplify the above formula to get: formula_104 More conceptually, this rule expresses the fact that a change in the "x""i" direction may change all of "g"1 through "gm", and any of these changes may affect "f". In the special case where "k" = 1, so that "f" is a real-valued function, then this formula simplifies even further: formula_105 This can be rewritten as a dot product. Recalling that u ("g"1, …, "g""m"), the partial derivative ∂u / ∂"x""i" is also a vector, and the chain rule says that: formula_106 Example. Given "u"("x", "y") = "x"2 + 2"y" where "x"("r", "t") = "r" sin("t") and "y"("r","t") = sin2("t"), determine the value of ∂"u" / ∂"r" and ∂"u" / ∂"t" using the chain rule. formula_107 and formula_108 Higher derivatives of multivariable functions. Faà di Bruno's formula for higher-order derivatives of single-variable functions generalizes to the multivariable case. If "y" = "f"(u) is a function of u = "g"(x) as above, then the second derivative of "f" ∘ "g" is: formula_109 Further generalizations. All extensions of calculus have a chain rule. In most of these, the formula remains the same, though the meaning of that formula may be vastly different. One generalization is to manifolds. In this situation, the chain rule represents the fact that the derivative of "f" ∘ "g" is the composite of the derivative of "f" and the derivative of "g". This theorem is an immediate consequence of the higher dimensional chain rule given above, and it has exactly the same formula. The chain rule is also valid for Fréchet derivatives in Banach spaces. The same formula holds as before. This case and the previous one admit a simultaneous generalization to Banach manifolds. 
In differential algebra, the derivative is interpreted as a morphism of modules of Kähler differentials. A ring homomorphism of commutative rings "f" : "R" → "S" determines a morphism of Kähler differentials "Df" : Ω"R" → Ω"S" which sends an element "dr" to "d"("f"("r")), the exterior differential of "f"("r"). The formula "D"("f" ∘ "g") = "Df" ∘ "Dg" holds in this context as well. The common feature of these examples is that they are expressions of the idea that the derivative is part of a functor. A functor is an operation on spaces and functions between them. It associates to each space a new space and to each function between two spaces a new function between the corresponding new spaces. In each of the above cases, the functor sends each space to its tangent bundle and it sends each function to its derivative. For example, in the manifold case, the derivative sends a "C""r"-manifold to a "C""r"−1-manifold (its tangent bundle) and a "C""r"-function to its total derivative. There is one requirement for this to be a functor, namely that the derivative of a composite must be the composite of the derivatives. This is exactly the formula "D"("f" ∘ "g") = "Df" ∘ "Dg". There are also chain rules in stochastic calculus. One of these, Itō's lemma, expresses the composite of an Itō process (or more generally a semimartingale) "dX""t" with a twice-differentiable function "f". In Itō's lemma, the derivative of the composite function depends not only on "dX""t" and the derivative of "f" but also on the second derivative of "f". The dependence on the second derivative is a consequence of the non-zero quadratic variation of the stochastic process, which broadly speaking means that the process can move up and down in a very rough way. This variant of the chain rule is not an example of a functor because the two functions being composed are of different types. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
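The multivariable example above, "u" = "x"2 + 2"y" with "x" = "r" sin("t") and "y" = sin2("t"), can be checked mechanically. The following Python/SymPy sketch (purely illustrative) differentiates the composite expression directly and via the chain rule and confirms that both results agree with the closed forms stated in the text.

```python
import sympy as sp

r, t = sp.symbols('r t')
x = r * sp.sin(t)                    # x(r, t) = r sin(t)
y = sp.sin(t)**2                     # y(r, t) = sin^2(t)
u = x**2 + 2*y                       # u(x, y) = x^2 + 2y, with x and y substituted

# Direct differentiation of the composite expression
du_dr_direct = sp.diff(u, r)
du_dt_direct = sp.diff(u, t)

# Chain rule: du/dr = (du/dx)(dx/dr) + (du/dy)(dy/dr), and similarly for t
X, Y = sp.symbols('X Y')
U = X**2 + 2*Y
du_dX = sp.diff(U, X).subs({X: x, Y: y})
du_dY = sp.diff(U, Y).subs({X: x, Y: y})
du_dr_chain = du_dX * sp.diff(x, r) + du_dY * sp.diff(y, r)
du_dt_chain = du_dX * sp.diff(x, t) + du_dY * sp.diff(y, t)

print(sp.simplify(du_dr_direct - du_dr_chain))                 # 0
print(sp.simplify(du_dt_direct - du_dt_chain))                 # 0
print(sp.simplify(du_dr_chain - 2*r*sp.sin(t)**2))             # 0, matches the text
print(sp.simplify(du_dt_chain - (r**2 + 2)*sp.sin(2*t)))       # 0, matches the text
```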
[ { "math_id": 0, "text": "h=f\\circ g" }, { "math_id": 1, "text": "h(x)=f(g(x))" }, { "math_id": 2, "text": "h'(x) = f'(g(x)) g'(x)." }, { "math_id": 3, "text": "h'=(f\\circ g)'=(f'\\circ g)\\cdot g'." }, { "math_id": 4, "text": "\\frac{dz}{dx} = \\frac{dz}{dy} \\cdot \\frac{dy}{dx}," }, { "math_id": 5, "text": " \\left.\\frac{dz}{dx}\\right|_{x} = \\left.\\frac{dz}{dy}\\right|_{y(x)}\n\\cdot \\left. \\frac{dy}{dx}\\right|_{x} ," }, { "math_id": 6, "text": "\\frac {dz}{dy}=2." }, { "math_id": 7, "text": "\\frac {dy}{dx}=4." }, { "math_id": 8, "text": "\\frac{dz}{dx}=\\frac{dz}{dy}\\cdot\\frac{dy}{dx}=2\\cdot 4=8." }, { "math_id": 9, "text": "\\frac{dz}{dx}=\\frac \\frac{dz}{dt}\\frac{dx}{dt}," }, { "math_id": 10, "text": "\\frac{dz}{dt}=\\frac{dz}{dx}\\cdot \\frac{dx}{dt}," }, { "math_id": 11, "text": "\\sqrt{a + bz + cz^2}" }, { "math_id": 12, "text": "a + bz + cz^2\\!" }, { "math_id": 13, "text": "f \\circ g" }, { "math_id": 14, "text": " (f\\circ g)'(c) = f'(g(c))\\cdot g'(c). " }, { "math_id": 15, "text": "(f\\circ g)' = (f'\\circ g) \\cdot g'." }, { "math_id": 16, "text": "\\frac{dy}{dx} = \\frac{dy}{du} \\cdot \\frac{du}{dx}." }, { "math_id": 17, "text": "\\left.\\frac{dy}{dx}\\right|_{x=c} = \\left.\\frac{dy}{du}\\right|_{u = g(c)} \\cdot \\left.\\frac{du}{dx}\\right|_{x=c}." }, { "math_id": 18, "text": "f_1, \\ldots, f_n\\!" }, { "math_id": 19, "text": "f_1 \\circ ( f_2 \\circ \\cdots (f_{n-1} \\circ f_n) )\\!" }, { "math_id": 20, "text": "f_i\\!" }, { "math_id": 21, "text": "\\frac{df_1}{dx} = \\frac{df_1}{df_2}\\frac{df_2}{df_3}\\cdots\\frac{df_n}{dx}." }, { "math_id": 22, "text": "y = e^{\\sin (x^2)}." }, { "math_id": 23, "text": "\\begin{align}\ny &= f(u) = e^u, \\\\[6pt]\nu &= g(v) = \\sin v, \\\\[6pt]\nv &= h(x) = x^2.\n\\end{align}" }, { "math_id": 24, "text": " y = f(g(h(x))) " }, { "math_id": 25, "text": "\\begin{align}\n\\frac{dy}{du} &= f'(u) = e^u, \\\\[6pt]\n\\frac{du}{dv} &= g'(v) = \\cos v, \\\\[6pt]\n\\frac{dv}{dx} &= h'(x) = 2x.\n\\end{align}" }, { "math_id": 26, "text": "\\begin{align}\n(f \\circ g \\circ h)'(a) & = f'((g \\circ h)(a)) \\cdot (g \\circ h)'(a) \\\\[10pt]\n& = f'((g \\circ h)(a)) \\cdot g'(h(a)) \\cdot h'(a) \\\\[10pt]\n& = (f' \\circ g \\circ h)(a) \\cdot (g' \\circ h)(a) \\cdot h'(a).\n\\end{align}" }, { "math_id": 27, "text": "\\frac{dy}{dx} = \\left.\\frac{dy}{du}\\right|_{u=g(h(a))}\\cdot\\left.\\frac{du}{dv}\\right|_{v=h(a)}\\cdot\\left.\\frac{dv}{dx}\\right|_{x=a}," }, { "math_id": 28, "text": "\\frac{dy}{dx} = \\frac{dy}{du}\\cdot\\frac{du}{dv}\\cdot\\frac{dv}{dx}." }, { "math_id": 29, "text": "\\frac{dy}{dx} = e^{\\sin(x^2)}\\cdot\\cos(x^2)\\cdot 2x." }, { "math_id": 30, "text": "(f \\circ g \\circ h)'(a) = (f \\circ g)'(h(a))\\cdot h'(a) = f'(g(h(a)))\\cdot g'(h(a))\\cdot h'(a)." }, { "math_id": 31, "text": "f_1 \\circ f_2 \\circ \\cdots \\circ f_{n-1} \\circ f_n\\!" 
}, { "math_id": 32, "text": "f_{a\\,.\\,.\\,b} = f_{a} \\circ f_{a+1} \\circ \\cdots \\circ f_{b-1} \\circ f_{b}" }, { "math_id": 33, "text": "f_{a\\,.\\,.\\,a} = f_a" }, { "math_id": 34, "text": "f_{a\\,.\\,.\\,b}(x) = x" }, { "math_id": 35, "text": "b < a" }, { "math_id": 36, "text": "Df_{1\\,.\\,.\\,n} = (Df_1 \\circ f_{2\\,.\\,.\\,n}) (Df_2 \\circ f_{3\\,.\\,.\\,n}) \\cdots (Df_{n-1} \\circ f_{n\\,.\\,.\\,n}) Df_n = \\prod_{k=1}^n \\left[Df_k \\circ f_{(k+1)\\,.\\,.\\,n}\\right]" }, { "math_id": 37, "text": "f_{1\\,.\\,.\\,n}'(x) = f_1' \\left( f_{2\\,.\\,.\\,n}(x) \\right) \\; f_2' \\left( f_{3\\,.\\,.\\,n}(x) \\right) \\cdots f_{n-1}' \\left(f_{n\\,.\\,.\\,n}(x)\\right) \\; f_n'(x) = \\prod_{k=1}^{n} f_k' \\left(f_{(k+1\\,.\\,.\\,n)}(x) \\right)" }, { "math_id": 38, "text": "\\begin{align}\n\\frac{d}{dx}\\left(\\frac{f(x)}{g(x)}\\right)\n&= \\frac{d}{dx}\\left(f(x)\\cdot\\frac{1}{g(x)}\\right) \\\\\n&= f'(x)\\cdot\\frac{1}{g(x)} + f(x)\\cdot\\frac{d}{dx}\\left(\\frac{1}{g(x)}\\right).\n\\end{align}" }, { "math_id": 39, "text": "-1/x^2\\!" }, { "math_id": 40, "text": "f'(x)\\cdot\\frac{1}{g(x)} + f(x)\\cdot\\left(-\\frac{1}{g(x)^2}\\cdot g'(x)\\right)\n= \\frac{f'(x) g(x) - f(x) g'(x)}{g(x)^2}," }, { "math_id": 41, "text": "f(g(x)) = x." }, { "math_id": 42, "text": "f(g(x))" }, { "math_id": 43, "text": "f'(g(x)) g'(x) = 1." }, { "math_id": 44, "text": "f(y)" }, { "math_id": 45, "text": "\\begin{align}\nf'(g(f(y))) g'(f(y)) &= 1 \\\\[5pt]\nf'(y) g'(f(y)) &= 1 \\\\[5pt]\nf'(y) = \\frac{1}{g'(f(y))}.\n\\end{align}" }, { "math_id": 46, "text": "\\frac{d}{dy}\\ln y = \\frac{1}{e^{\\ln y}} = \\frac{1}{y}." }, { "math_id": 47, "text": "\n\\begin{align}\n\\frac{dy}{dx} & = \\frac{dy}{du} \\frac{du}{dx} \\\\[4pt]\n\\frac{d^2 y }{d x^2} & =\n \\frac{d^2 y}{d u^2} \\left(\\frac{du}{dx}\\right)^2\n + \\frac{dy}{du} \\frac{d^2 u}{dx^2} \\\\[4pt]\n\\frac{d^3 y }{d x^3} & =\n \\frac{d^3 y}{d u^3} \\left(\\frac{du}{dx}\\right)^3\n + 3 \\, \\frac{d^2 y}{d u^2} \\frac{du}{dx} \\frac{d^2 u}{d x^2}\n + \\frac{dy}{du} \\frac{d^3 u}{d x^3} \\\\[4pt]\n\\frac{d^4 y}{d x^4} & =\n \\frac{d^4 y}{du^4} \\left(\\frac{du}{dx}\\right)^4\n + 6 \\, \\frac{d^3 y}{d u^3} \\left(\\frac{du}{dx}\\right)^2 \\frac{d^2 u}{d x^2}\n + \\frac{d^2 y}{d u^2} \\left( 4 \\, \\frac{du}{dx} \\frac{d^3 u}{dx^3}\n + 3 \\, \\left(\\frac{d^2 u}{dx^2}\\right)^2\\right)\n + \\frac{dy}{du} \\frac{d^4 u}{dx^4}.\n\\end{align}" }, { "math_id": 48, "text": "(f \\circ g)'(a) = \\lim_{x \\to a} \\frac{f(g(x)) - f(g(a))}{x - a}." }, { "math_id": 49, "text": "g(x)\\!" }, { "math_id": 50, "text": "g(a)" }, { "math_id": 51, "text": "x" }, { "math_id": 52, "text": "a" }, { "math_id": 53, "text": "\\lim_{x \\to a} \\frac{f(g(x)) - f(g(a))}{g(x) - g(a)} \\cdot \\frac{g(x) - g(a)}{x - a}." }, { "math_id": 54, "text": "g" }, { "math_id": 55, "text": "Q" }, { "math_id": 56, "text": "Q(y) = \\begin{cases}\n\\displaystyle\\frac{f(y) - f(g(a))}{y - g(a)}, & y \\neq g(a), \\\\\nf'(g(a)), & y = g(a).\n\\end{cases}" }, { "math_id": 57, "text": "Q(g(x)) \\cdot \\frac{g(x) - g(a)}{x - a}." }, { "math_id": 58, "text": "g(a + h) - g(a) = g'(a) h + \\varepsilon(h) h." }, { "math_id": 59, "text": "f(g(a) + k) - f(g(a)) = f'(g(a)) k + \\eta(k) k." }, { "math_id": 60, "text": "f(g(a + h)) - f(g(a)) = f(g(a) + g'(a) h + \\varepsilon(h) h) - f(g(a))." }, { "math_id": 61, "text": "f(g(a) + k_h) - f(g(a)) = f'(g(a)) k_h + \\eta(k_h) k_h." }, { "math_id": 62, "text": "f'(g(a)) g'(a)h + [f'(g(a)) \\varepsilon(h) + \\eta(k_h) g'(a) + \\eta(k_h) \\varepsilon(h)] h." 
}, { "math_id": 63, "text": "Q(y) = f'(g(a)) + \\eta(y - g(a)). " }, { "math_id": 64, "text": "f(g(x))-f(g(a))=q(g(x))(g(x)-g(a))" }, { "math_id": 65, "text": "g(x)-g(a)=r(x)(x-a)." }, { "math_id": 66, "text": "f(g(x))-f(g(a))=q(g(x))r(x)(x-a)," }, { "math_id": 67, "text": "(f(g(a)))'=q(g(a))r(a)=f'(g(a))g'(a)." }, { "math_id": 68, "text": "y=f(x)" }, { "math_id": 69, "text": "x=g(t)" }, { "math_id": 70, "text": "\\Delta t\\not=0" }, { "math_id": 71, "text": "\\Delta x=g(t+\\Delta t)-g(t)" }, { "math_id": 72, "text": "\\Delta y=f(x+\\Delta x)-f(x)" }, { "math_id": 73, "text": "\\frac{\\Delta y}{\\Delta t} = \\frac{\\Delta y}{\\Delta x} \\frac{\\Delta x}{\\Delta t}" }, { "math_id": 74, "text": "\\frac{d y}{d t}=\\frac{d y}{d x} \\frac{dx}{dt}" }, { "math_id": 75, "text": "f : \\mathbb{R}^m \\to \\mathbb{R}^n" }, { "math_id": 76, "text": "f(g_1(x), \\dots, g_k(x))," }, { "math_id": 77, "text": "f : \\reals^k \\to \\reals" }, { "math_id": 78, "text": "g_i : \\mathbb{R} \\to \\mathbb{R}" }, { "math_id": 79, "text": "i = 1, 2, \\dots, k." }, { "math_id": 80, "text": "x \\mapsto f(g_1(x), \\dots , g_k(x))," }, { "math_id": 81, "text": "D_i f" }, { "math_id": 82, "text": "D_i f(z)" }, { "math_id": 83, "text": "\\frac{d}{dx}f(g_1(x), \\dots, g_k (x))=\\sum_{i=1}^k \\left(\\frac{d}{dx}{g_i}(x)\\right) D_i f(g_1(x), \\dots, g_k (x))." }, { "math_id": 84, "text": "f(u,v)=u+v," }, { "math_id": 85, "text": "D_1 f = \\frac{\\partial f}{\\partial u} = 1" }, { "math_id": 86, "text": "D_2 f = \\frac{\\partial f}{\\partial v} = 1" }, { "math_id": 87, "text": "\\frac{d}{dx}(g(x)+h(x)) = \\left( \\frac{d}{dx}g(x) \\right) D_1 f+\\left( \\frac{d}{dx}h(x)\\right) D_2 f=\\frac{d}{dx}g(x) +\\frac{d}{dx}h(x)." }, { "math_id": 88, "text": "f(u,v)=uv," }, { "math_id": 89, "text": "D_1 f = v" }, { "math_id": 90, "text": "D_2 f = u" }, { "math_id": 91, "text": "\\frac{d}{dx}(g(x)h(x)) = h(x) \\frac{d}{dx} g(x) + g(x) \\frac{d}{dx} h(x)." }, { "math_id": 92, "text": "f(u,v)=u^v" }, { "math_id": 93, "text": "D_1 f = vu^{v-1}," }, { "math_id": 94, "text": "u^v=e^{v\\ln u}," }, { "math_id": 95, "text": "D_2 f = u^v\\ln u." }, { "math_id": 96, "text": "\\frac{d}{dx}\\left(g(x)^{h(x)}\\right) = h(x)g(x)^{h(x)-1} \\frac{d}{dx}g(x) + g(x)^{h(x)} \\ln g(x) \\,\\frac{d}{dx}h(x)." }, { "math_id": 97, "text": "D_{\\mathbf{a}}(f \\circ g) = D_{g(\\mathbf{a})}f \\circ D_{\\mathbf{a}}g," }, { "math_id": 98, "text": "D(f \\circ g) = Df \\circ Dg." }, { "math_id": 99, "text": "J_{f \\circ g}(\\mathbf{a}) = J_{f}(g(\\mathbf{a})) J_{g}(\\mathbf{a})," }, { "math_id": 100, "text": "J_{f \\circ g} = (J_f \\circ g)J_g." }, { "math_id": 101, "text": "\\begin{align}\nJ_g(a) &= \\begin{pmatrix} g'(a) \\end{pmatrix}, \\\\\nJ_{f}(g(a)) &= \\begin{pmatrix} f'(g(a)) \\end{pmatrix}.\n\\end{align}" }, { "math_id": 102, "text": "\\frac{\\partial(y_1, \\ldots, y_k)}{\\partial(x_1, \\ldots, x_n)} = \\frac{\\partial(y_1, \\ldots, y_k)}{\\partial(u_1, \\ldots, u_m)} \\frac{\\partial(u_1, \\ldots, u_m)}{\\partial(x_1, \\ldots, x_n)}." }, { "math_id": 103, "text": "\\frac{\\partial(y_1, \\ldots, y_k)}{\\partial x_i} = \\frac{\\partial(y_1, \\ldots, y_k)}{\\partial(u_1, \\ldots, u_m)} \\frac{\\partial(u_1, \\ldots, u_m)}{\\partial x_i}." }, { "math_id": 104, "text": "\\frac{\\partial(y_1, \\ldots, y_k)}{\\partial x_i} = \\sum_{\\ell = 1}^m \\frac{\\partial(y_1, \\ldots, y_k)}{\\partial u_\\ell} \\frac{\\partial u_\\ell}{\\partial x_i}." 
}, { "math_id": 105, "text": "\\frac{\\partial y}{\\partial x_i} = \\sum_{\\ell = 1}^m \\frac{\\partial y}{\\partial u_\\ell} \\frac{\\partial u_\\ell}{\\partial x_i}." }, { "math_id": 106, "text": "\\frac{\\partial y}{\\partial x_i} = \\nabla y \\cdot \\frac{\\partial \\mathbf{u}}{\\partial x_i}." }, { "math_id": 107, "text": "\\frac{\\partial u}{\\partial r}=\\frac{\\partial u}{\\partial x} \\frac{\\partial x}{\\partial r}+\\frac{\\partial u}{\\partial y} \\frac{\\partial y}{\\partial r} = (2x)(\\sin(t)) + (2)(0) = 2r \\sin^2(t)," }, { "math_id": 108, "text": "\\begin{align}\n\\frac{\\partial u}{\\partial t}\n&= \\frac{\\partial u}{\\partial x} \\frac{\\partial x}{\\partial t}+\\frac{\\partial u}{\\partial y} \\frac{\\partial y}{\\partial t} \\\\\n&= (2x)(r\\cos(t)) + (2)(2\\sin(t)\\cos(t)) \\\\\n&= (2r\\sin(t))(r\\cos(t)) + 4\\sin(t)\\cos(t) \\\\\n&= 2(r^2 + 2) \\sin(t)\\cos(t) \\\\\n&= (r^2 + 2) \\sin(2t).\n\\end{align}" }, { "math_id": 109, "text": "\\frac{\\partial^2 y}{\\partial x_i \\partial x_j} = \\sum_k \\left(\\frac{\\partial y}{\\partial u_k}\\frac{\\partial^2 u_k}{\\partial x_i \\partial x_j}\\right) + \\sum_{k, \\ell} \\left(\\frac{\\partial^2 y}{\\partial u_k \\partial u_\\ell}\\frac{\\partial u_k}{\\partial x_i}\\frac{\\partial u_\\ell}{\\partial x_j}\\right)." } ]
https://en.wikipedia.org/wiki?curid=6113
6113420
Single vegetative obstruction model
The ITU single vegetative obstruction model is a radio propagation model that quantitatively estimates attenuation due to a single plant or tree standing in the middle of a telecommunication link. Coverage. Frequency: below 3 GHz and above 5 GHz. Depth: not specified. Mathematical formulations. The single vegetative obstruction model is formally expressed as formula_0 where, "A" = The attenuation due to vegetation. Unit: decibel (dB). "d" = Depth of foliage. Unit: meter (m). formula_1 = Specific attenuation for short vegetative paths. Unit: decibel per meter (dB/m). "R"i = The initial slope of the attenuation curve "R"f = The final slope of the attenuation curve "f" = The frequency of operations. Unit: gigahertz (GHz). "k" = Empirical constant Calculation of slopes. The initial slope is calculated as: formula_2 And the final slope as: formula_3 where "a", "b" and "c" are empirical constants (given in the table below). Calculation of "k". "k" is computed as: formula_4 where, "k"0 = Empirical constant (given in the table below) "R"f = Empirical constant for frequency-dependent attenuation "A"0 = Empirical attenuation constant (given in the table below) "A"i = Illuminated area Calculation of "A"i. "A"i is calculated using either of the equations below. A point to note is that the terms "h", "h"T, "h"R, "w", "w"T and "w"R are defined perpendicular to the (assumed horizontal) line joining the transmitter and receiver. The first three terms are measured vertically and the other three are measured horizontally. Equation 1: formula_5 Equation 2: formula_6 where, "w"T = Width of illuminated area as seen from the transmitter. Unit: meter (m). "w"R = Width of illuminated area as seen from the receiver. Unit: meter (m). "w" = Width of the vegetation. Unit: meter (m). "h"T = Height of illuminated area as seen from the transmitter. Unit: meter (m). "h"R = Height of illuminated area as seen from the receiver. Unit: meter (m). "h" = Height of the vegetation. Unit: meter (m). "a"T = Azimuth beamwidth of the transmitter. Unit: degree or radian. "a"R = Azimuth beamwidth of the receiver. Unit: degree or radian. "e"T = Elevation beamwidth of the transmitter. Unit: degree or radian. "e"R = Elevation beamwidth of the receiver. Unit: degree or radian. "d"T = Distance of the vegetation from the transmitter. Unit: meter (m). "d"R = Distance of the vegetation from the receiver. Unit: meter (m). The empirical constants. Empirical constants a, b, c, k0, Rf and A0 are used as tabulated below. Limitations. The model predicts only the excess path loss due to the vegetation along the link; the total path loss includes other contributions, such as free-space loss, which are not covered by this model. Above 5 GHz, the equations become considerably more complex than those for frequencies below 3 GHz. Also, the model does not cover frequencies between 3 GHz and 5 GHz. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
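The equations above translate into a few lines of code. The following Python sketch is illustrative only: the empirical constants "a", "b", "c", "k"0, "A"0 and the frequency-dependent "R"f must be taken from the ITU-R tables, which are not reproduced here, so they appear as function arguments; logarithms are taken as base 10, as usual for decibel quantities, and the sign of the frequency term in the "k" formula is assumed to be negative so that the expression stays well defined.

```python
import math

def illuminated_area(w_T, w_R, w, h_T, h_R, h):
    """Equation 1 for the illuminated area A_i (all lengths in metres)."""
    return min(w_T, w_R, w) * min(h_T, h_R, h)

def attenuation_below_3ghz(d, gamma):
    """A = d * gamma, with gamma the specific attenuation in dB/m."""
    return d * gamma

def attenuation_above_5ghz(d, f_ghz, A_i, a, b, c, k0, A0, Rf_const):
    """Vegetation attenuation (dB) for frequencies above 5 GHz.

    d      : depth of foliage (m)
    f_ghz  : frequency (GHz)
    A_i    : illuminated area (from illuminated_area above)
    a, b, c, k0, A0, Rf_const : empirical constants from the ITU-R tables
    """
    R_i = a * f_ghz                         # initial slope of the attenuation curve
    R_f = b * f_ghz ** c                    # final slope of the attenuation curve
    # Note: the exponent of the frequency term is written here with a minus sign,
    # an assumption made so that the bracket stays positive; compare formula_4 above.
    k = k0 - 10.0 * math.log10(
        A0 * (1.0 - math.exp(-A_i / A0)) * (1.0 - math.exp(-Rf_const * f_ghz)))
    return R_f * d + k * (1.0 - math.exp((R_f - R_i) * d / k))
```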
[ { "math_id": 0, "text": "A = \\begin{cases}d \\gamma \\mbox{ , frequency} < 3 GHz \\\\ R_fd \\;+\\;k[1-e^{(R_f - R_i)\\frac{d}{k}}] \\mbox{ , frequency} > 5 GHz \\end{cases}" }, { "math_id": 1, "text": "\\gamma" }, { "math_id": 2, "text": "R_i\\;=\\;af" }, { "math_id": 3, "text": "R_f\\;=\\;bf^c" }, { "math_id": 4, "text": "k = k_0\\;-\\;10\\;\\log {[A_0\\;(1\\;-\\;e^{\\frac{-A^i}{A_0}})(1-e^{R_ff})]}" }, { "math_id": 5, "text": "A_i\\;=\\;min(w_T, w_R, w)\\;x\\;min(h_T, h_R, h)" }, { "math_id": 6, "text": "A_i\\;=\\;min(2d_T\\;\\tan {\\frac{a_T}{2}}, 2d_R \\tan{\\frac{a_R}{2}}, w)\\;x\\;min(2d_T \\tan {\\frac{e_T}{2}}, 2d_R \\tan{\\frac{e_R}{2}}, h)" } ]
https://en.wikipedia.org/wiki?curid=6113420
6113509
Okumura model
The Okumura model is a radio propagation model that was built using data collected in the city of Tokyo, Japan. The model is well suited to cities with many urban structures but few tall blocking structures. The model served as a base for the Hata model. The Okumura model has three modes, covering urban, suburban and open areas. The model for urban areas was built first and used as the base for the others. Mathematical formulation. The Okumura model is formally expressed as: formula_0 where, L = The median path loss. Unit: decibel (dB) LFSL = The free space loss. Unit: decibel (dB). AMU = Median attenuation. Unit: decibel (dB). HMG = Mobile station antenna height gain factor HBG = Base station antenna height gain factor Kcorrection = Correction factor gain (such as type of environment, water surfaces, isolated obstacle etc.) Points to note. Okumura's model is one of the most widely used models for signal prediction in urban areas. This model is applicable for frequencies in the range 150–1920 MHz (although it is typically extrapolated up to 3000 MHz) and distances of 1–100 km. It can be used for base-station antenna heights ranging from 30 m to 1000 m. Okumura developed a set of curves giving the median attenuation relative to free space (Amu) in an urban area over quasi-smooth terrain with a base station effective antenna height (hte) of 200 m and a mobile antenna height (hre) of 3 m. These curves were developed from extensive measurements using vertical omni-directional antennas at both the base and mobile, and are plotted as a function of frequency in the range 100–1920 MHz and as a function of distance from the base station in the range 1–100 km. To determine path loss using Okumura's model, the free space path loss between the points of interest is first determined, and then the value of Amu(f, d) (as read from the curves) is added to it along with correction factors to account for the type of terrain. The model can be expressed as: formula_1 where L50 is the 50th percentile (i.e., median) value of propagation path loss, LF is the free space propagation loss, Amu is the median attenuation relative to free space, G(hte) is the base station antenna height gain factor, G(hre) is the mobile antenna height gain factor, and GAREA is the gain due to the type of environment. Note that the antenna height gains are strictly a function of height and have nothing to do with antenna patterns. Plots of Amu(f, d) and GAREA for a wide range of frequencies are available as published families of curves. Furthermore, Okumura found that G(hte) varies at a rate of 20 dB/decade and G(hre) varies at a rate of 10 dB/decade for heights less than 3 m. Explicitly, formula_2 for formula_3, formula_4 for formula_5, and formula_6 for formula_7. Other corrections may also be applied to Okumura's model. Some of the important terrain-related parameters are the terrain undulation height, isolated ridge height, average slope of the terrain and the mixed land-sea parameter. Once the terrain-related parameters are calculated, the necessary correction factors can be added or subtracted as required. All these correction factors are also available as Okumura curves [Oku68]. In irregular terrain, one frequently encounters non-line-of-sight paths caused by terrain obstacles. Okumura's model includes a correction factor called the "Isolated Ridge" factor to account for obstacles. However, this correction applies only to obstacles conforming to that description, i.e. an isolated ridge. 
More complex terrain cannot be modeled by the Isolated Ridge correction factor. A number of more general models exist for calculating diffraction loss. However, none of these can be applied directly to Okumura's basic mean attenuation. Proprietary methods of doing so have been developed; however, none are known to be in the public domain. Okumura's model is wholly based on measured data and does not provide any analytical explanation. For many situations, extrapolations of the derived curves can be made to obtain values outside the measurement range, although the validity of such extrapolations depends on the circumstances and the smoothness of the curve in question. Okumura's model is considered to be among the simplest and best in terms of accuracy in path loss prediction for mature cellular and land mobile radio systems in cluttered environments. It is very practical and has become a standard for system planning in modern land mobile radio systems in Japan. The major disadvantage of the model is its slow response to rapid changes in terrain; therefore, the model is fairly good in urban and suburban areas, but not as good in rural areas. Common standard deviations between predicted and measured path loss values are around 10 dB to 14 dB. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
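As a concrete illustration of how the pieces combine, the following Python sketch computes formula_1 from its ingredients. It is illustrative only: Amu(f, d) and GAREA must still be read from Okumura's published curves, so they are passed in as plain numbers, the standard 32.44 dB free-space constant is assumed, and the example call at the end uses made-up curve readings.

```python
import math

def free_space_loss_db(f_mhz, d_km):
    """Standard free-space path loss in dB."""
    return 32.44 + 20.0 * math.log10(f_mhz) + 20.0 * math.log10(d_km)

def okumura_median_loss_db(f_mhz, d_km, h_te, h_re, a_mu_db, g_area_db):
    """Median path loss L50 = LF + Amu(f, d) - G(hte) - G(hre) - GAREA.

    a_mu_db and g_area_db are read from Okumura's curves for the given
    frequency, distance and environment; they are plain inputs here.
    """
    g_hte = 20.0 * math.log10(h_te / 200.0)       # base-station height gain, 30 m < hte < 1000 m
    if h_re <= 3.0:
        g_hre = 10.0 * math.log10(h_re / 3.0)     # mobile height gain, hre <= 3 m
    else:
        g_hre = 20.0 * math.log10(h_re / 3.0)     # mobile height gain, 3 m < hre < 10 m
    return (free_space_loss_db(f_mhz, d_km) + a_mu_db
            - g_hte - g_hre - g_area_db)

# Example call with made-up curve readings:
print(okumura_median_loss_db(f_mhz=900.0, d_km=5.0, h_te=70.0, h_re=1.5,
                             a_mu_db=26.0, g_area_db=9.0))
```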
[ { "math_id": 0, "text": "L\\;=\\;L_\\text{FSL}\\;+\\;A_\\text{MU}\\;-\\;H_\\text{MG}\\;-\\;H_\\text{BG}\\;-\\;\\sum{K_\\text{correction}}\\;" }, { "math_id": 1, "text": " L_{50\\%}(dB) = LF + A_{mu}(f, d) - G(h_{te}) - G(h_{re}) - G_{area} " }, { "math_id": 2, "text": " G(h_{te}) = 20 \\log \\frac{h_{te}}{200} " }, { "math_id": 3, "text": "1000m > h_{te} > 30m" }, { "math_id": 4, "text": " G(h_{re}) = 10 \\log \\frac{h_{re}}{3} " }, { "math_id": 5, "text": "h_{re} \\le 3m" }, { "math_id": 6, "text": " G(h_{re}) = 20 \\log \\frac{h_{re}}{3} " }, { "math_id": 7, "text": "10m > h_{re} > 3m" } ]
https://en.wikipedia.org/wiki?curid=6113509
611460
Localization of a category
In mathematics, localization of a category consists of adding to a category inverse morphisms for some collection of morphisms, constraining them to become isomorphisms. This is formally similar to the process of localization of a ring; it in general makes objects isomorphic that were not so before. In homotopy theory, for example, there are many examples of mappings that are invertible up to homotopy; and so large classes of homotopy equivalent spaces. Calculus of fractions is another name for working in a localized category. Introduction and motivation. A category "C" consists of objects and morphisms between these objects. The morphisms reflect relations between the objects. In many situations, it is meaningful to replace "C" by another category "C"' in which certain morphisms are forced to be isomorphisms. This process is called localization. For example, in the category of "R"-modules (for some fixed commutative ring "R") the multiplication by a fixed element "r" of "R" is typically (i.e., unless "r" is a unit) not an isomorphism: formula_0 The category that is most closely related to "R"-modules, but where this map "is" an isomorphism turns out to be the category of formula_1-modules. Here formula_1 is the localization of "R" with respect to the (multiplicatively closed) subset "S" consisting of all powers of "r", formula_2 The expression "most closely related" is formalized by two conditions: first, there is a functor formula_3 sending any "R"-module to its localization with respect to "S". Moreover, given any category "C" and any functor formula_4 sending the multiplication map by "r" on any "R"-module (see above) to an isomorphism of "C", there is a unique functor formula_5 such that formula_6. Localization of categories. The above examples of localization of "R"-modules is abstracted in the following definition. In this shape, it applies in many more examples, some of which are sketched below. Given a category "C" and some class "W" of morphisms in "C", the localization "C"["W"−1] is another category which is obtained by inverting all the morphisms in "W". More formally, it is characterized by a universal property: there is a natural localization functor "C" → "C"["W"−1] and given another category "D", a functor "F": "C" → "D" factors uniquely over "C"["W"−1] if and only if "F" sends all arrows in "W" to isomorphisms. Thus, the localization of the category is unique up to unique isomorphism of categories, provided that it exists. One construction of the localization is done by declaring that its objects are the same as those in "C", but the morphisms are enhanced by adding a formal inverse for each morphism in "W". Under suitable hypotheses on "W", the morphisms from object "X" to object "Y" are given by "roofs" formula_7 (where "X"' is an arbitrary object of "C" and "f" is in the given class "W" of morphisms), modulo certain equivalence relations. These relations turn the map going in the "wrong" direction into an inverse of "f". This "calculus of fractions" can be seen as a generalization of the construction of rational numbers as equivalence classes of pairs of integers. This procedure, however, in general yields a proper class of morphisms between "X" and "Y". Typically, the morphisms in a category are only allowed to form a set. Some authors simply ignore such set-theoretic issues. Model categories. 
A rigorous construction of localization of categories, avoiding these set-theoretic issues, was one of the initial reasons for the development of the theory of model categories: a model category "M" is a category in which there are three classes of maps; one of these classes is the class of weak equivalences. The homotopy category Ho("M") is then the localization with respect to the weak equivalences. The axioms of a model category ensure that this localization can be defined without set-theoretical difficulties. Alternative definition. Some authors also define a "localization" of a category "C" to be an idempotent and coaugmented functor. A coaugmented functor is a pair "(L,l)" where "L:C → C" is an endofunctor and "l:Id → L" is a natural transformation from the identity functor to "L" (called the coaugmentation). A coaugmented functor is idempotent if, for every "X", both maps "L(lX),lL(X):L(X) → LL(X)" are isomorphisms. It can be proven that in this case, both maps are equal. This definition is related to the one given above as follows: applying the first definition, there is, in many situations, not only a canonical functor formula_8, but also a functor in the opposite direction, formula_9 For example, modules over the localization formula_1 of a ring are also modules over "R" itself, giving a functor formula_10 In this case, the composition formula_11 is a localization of "C" in the sense of an idempotent and coaugmented functor. Examples. Serre's "C"-theory. Serre introduced the idea of working in homotopy theory "modulo" some class "C" of abelian groups. This meant that groups "A" and "B" were treated as isomorphic, if for example "A/B" lay in "C". Module theory. In the theory of modules over a commutative ring "R", when "R" has Krull dimension ≥ 2, it can be useful to treat modules "M" and "N" as "pseudo-isomorphic" if "M/N" has support of codimension at least two. This idea is much used in Iwasawa theory. Derived categories. The derived category of an abelian category is much used in homological algebra. It is the localization of the category of chain complexes (up to homotopy) with respect to the quasi-isomorphisms. Quotients of abelian categories. Given an abelian category "A" and a Serre subcategory "B," one can define the quotient category "A/B," which is an abelian category equipped with an exact functor from "A" to "A/B" that is essentially surjective and has kernel "B." This quotient category can be constructed as a localization of "A" by the class of morphisms whose kernel and cokernel are both in "B." Abelian varieties up to isogeny. An isogeny from an abelian variety "A" to another one "B" is a surjective morphism with finite kernel. Some theorems on abelian varieties require the idea of "abelian variety up to isogeny" for their convenient statement. For example, given an abelian subvariety "A1" of "A", there is another subvariety "A2" of "A" such that "A1" × "A2" is "isogenous" to "A" (Poincaré's reducibility theorem: see for example "Abelian Varieties" by David Mumford). To call this a direct sum decomposition, we should work in the category of abelian varieties up to isogeny. Related concepts. The localization of a topological space, introduced by Dennis Sullivan, produces another topological space whose homology is a localization of the homology of the original space. A much more general concept from homotopical algebra, including as special cases both the localization of spaces and of categories, is the "Bousfield localization" of a model category. 
Bousfield localization forces certain maps to become weak equivalences, which is in general weaker than forcing them to become isomorphisms.
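As a concrete check of the motivating module-theoretic example above, take "R" = ℤ and "r" = 2, so that formula_1 is the ring ℤ[1/2] of dyadic rationals. The following minimal Python sketch (an illustration added here, not part of the article; the helper names are invented) verifies on sample elements that multiplication by 2 has no inverse on ℤ, since 1 has no integer preimage, but is inverted by halving on ℤ[1/2]:

from fractions import Fraction

def is_dyadic(q: Fraction) -> bool:
    # Element of Z[1/2]: the reduced denominator is a power of 2.
    d = q.denominator
    while d % 2 == 0:
        d //= 2
    return d == 1

def mul_by_2(q: Fraction) -> Fraction:
    # The map m -> r*m from the article, with r = 2.
    return 2 * q

# On Z, multiplication by 2 is not surjective: 1 has no preimage among these integers.
assert not any(mul_by_2(Fraction(n)) == 1 for n in range(-1000, 1001))

# On Z[1/2], the same map is inverted by q -> q/2, which stays inside Z[1/2].
for q in [Fraction(3), Fraction(5, 8), Fraction(-7, 2), Fraction(0)]:
    assert is_dyadic(q) and is_dyadic(q / 2)
    assert mul_by_2(q) / 2 == q and mul_by_2(q / 2) == q
print("multiplication by 2 is invertible on the sampled elements of Z[1/2]")

The same pattern, inverting a fixed class of maps by formally adjoining inverses, is what the categorical localization formalizes.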
[ { "math_id": 0, "text": "M \\to M \\quad m \\mapsto r \\cdot m." }, { "math_id": 1, "text": "R[S^{-1}]" }, { "math_id": 2, "text": "S = \\{ 1, r, r^2, r^3, \\dots\\}" }, { "math_id": 3, "text": "\\varphi: \\text{Mod}_R \\to \\text{Mod}_{R[S^{-1}]} \\quad M \\mapsto M[S^{-1}]" }, { "math_id": 4, "text": "F: \\text{Mod}_R \\to C" }, { "math_id": 5, "text": "G: \\text{Mod}_{R[S^{-1}]} \\to C" }, { "math_id": 6, "text": "F = G \\circ \\varphi" }, { "math_id": 7, "text": "X \\stackrel f \\leftarrow X' \\rightarrow Y" }, { "math_id": 8, "text": "C \\to C[W^{-1}]" }, { "math_id": 9, "text": "C[W^{-1}] \\to C." }, { "math_id": 10, "text": "\\text{Mod}_{R[S^{-1}]} \\to \\text{Mod}_R " }, { "math_id": 11, "text": "L : C \\to C[W^{-1}] \\to C" } ]
https://en.wikipedia.org/wiki?curid=611460
61146406
Log-rank conjecture
Unsolved problem in theoretical computer science In theoretical computer science, the log-rank conjecture states that the deterministic communication complexity of a two-party Boolean function is polynomially related to the logarithm of the rank of its input matrix. Let formula_0 denote the deterministic communication complexity of a function, and let formula_1 denote the rank of its input matrix formula_2 (over the reals). Since every protocol using up to formula_3 bits partitions formula_2 into at most formula_4 monochromatic rectangles, and each of these has rank at most 1, formula_5 The log-rank conjecture states that formula_0 is also "upper-bounded" by a polynomial in the log-rank: for some constant formula_6, formula_7 Lovett proved the upper bound formula_8 This was improved by Sudakov and Tomon, who removed the logarithmic factor, showing that formula_9 This is the best currently known upper bound. The best known lower bound, due to Göös, Pitassi and Watson, states that formula_10. In other words, there exists a sequence of functions formula_11, whose log-rank goes to infinity, such that formula_12 In 2019, an approximate version of the conjecture was disproved. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
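To make the rank bound above concrete, the following small sketch (added for illustration, not part of the article; it assumes NumPy is available) builds the communication matrix of the equality function on 3-bit inputs. Its rank over the reals is 8, so the inequality formula_5 gives a lower bound of 3 bits, while the trivial protocol in which one party sends its whole input and the other announces the answer uses 4 bits, consistent with both the bound and the conjecture:

import numpy as np
from itertools import product

n = 3  # input length per party
inputs = list(product([0, 1], repeat=n))

# Communication matrix of equality: M[x][y] = 1 exactly when x == y.
M = np.array([[1 if x == y else 0 for y in inputs] for x in inputs])

rank = np.linalg.matrix_rank(M)      # 8, since M is the 8 x 8 identity matrix
lower_bound = np.log2(rank)          # log-rank lower bound: 3 bits
trivial_cost = n + 1                 # one party sends its input, the other replies

print(f"rank = {rank}, log2(rank) = {lower_bound}, trivial protocol = {trivial_cost} bits")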
[ { "math_id": 0, "text": "D(f)" }, { "math_id": 1, "text": "\\operatorname{rank}(f)" }, { "math_id": 2, "text": "M_f" }, { "math_id": 3, "text": "c" }, { "math_id": 4, "text": "2^c" }, { "math_id": 5, "text": "D(f) \\geq \\log_2 \\operatorname{rank}(f). " }, { "math_id": 6, "text": "C" }, { "math_id": 7, "text": "D(f) = O((\\log \\operatorname{rank}(f))^C). " }, { "math_id": 8, "text": "D(f) = O\\left(\\sqrt{\\operatorname{rank}(f)} \\log \\operatorname{rank}(f)\\right). " }, { "math_id": 9, "text": "D(f) = O\\left(\\sqrt{\\operatorname{rank}(f)}\\right). " }, { "math_id": 10, "text": "C \\geq 2" }, { "math_id": 11, "text": "f_n" }, { "math_id": 12, "text": " D(f_n) = \\tilde\\Omega((\\log \\operatorname{rank}(f_n))^2). " } ]
https://en.wikipedia.org/wiki?curid=61146406
61146769
Oded Regev (computer scientist)
Israeli-American computer scientist Oded Regev (Hebrew: עודד רגב; born 1979 or 1980) is an Israeli-American theoretical computer scientist and mathematician. He is a professor of computer science at the Courant Institute at New York University. He is best known for his work in lattice-based cryptography, and in particular for introducing the learning with errors problem. Biography. Oded Regev earned his B.Sc. in 1995, M.Sc. in 1997, and Ph.D. in 2001, all from Tel Aviv University. He completed his Ph.D. at the age of 21, advised by Yossi Azar, with a thesis titled "Scheduling and Load Balancing." He held faculty positions at Tel Aviv University and the École Normale Supérieure before joining the Courant Institute. Work. Regev has done extensive work on lattices. He is best known for introducing the learning with errors problem (LWE), for which he won the 2018 Gödel Prize. As the citation reads: Regev’s work has ushered in a revolution in cryptography, in both theory and practice. On the theoretical side, LWE has served as a simple and yet amazingly versatile foundation for nearly every kind of cryptographic object imaginable—along with many that were unimaginable until recently, and which still have no known constructions without LWE. Toward the practical end, LWE and its direct descendants are at the heart of several efficient real-world cryptosystems. Regev's most influential other work on lattices includes cryptanalysis of the GGH and NTRU signature schemes in joint work with Phong Q. Nguyen, for which they won a best paper award at Eurocrypt 2006; introducing the ring learning with errors problem in joint work with Chris Peikert and Vadim Lyubashevsky; and proving a converse to Minkowski's theorem and exploring its applications in joint works with his student Noah Stephens-Davidowitz and his former postdoc Daniel Dadush. In addition to his work on lattices, Regev has also done work in a large number of other areas in theoretical computer science and mathematics. These include quantum computing, communication complexity, hardness of approximation, online algorithms, combinatorics, probability, and dimension reduction. He has also recently become interested in topics in biology, and particularly RNA splicing. Regev is an associate editor in chief of the journal "Theory of Computing," and is a co-founder and organizer of the TCS+ online seminar series. In August 2023 Regev published a preprint describing an algorithm to factor integers with formula_0 quantum gates, which would be more efficient than Shor's algorithm, which uses formula_1; however, it would require more quantum memory, formula_0 qubits compared with Shor's formula_2. A variant has since been proposed that could reduce the space requirement to around the same amount as Shor's algorithm. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
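As a computational aside on the learning with errors problem mentioned above, the following minimal sketch (added for illustration, not part of the article; it assumes NumPy, and the parameters are toy values far too small to be secure) generates a search-LWE instance. Given the public pair (A, b), recovering the secret s is the problem whose conjectured hardness underlies the cryptographic constructions discussed in the citation:

import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 16, 97                      # toy dimension, sample count and modulus

s = rng.integers(0, q, size=n)           # secret vector
A = rng.integers(0, q, size=(m, n))      # public, uniformly random matrix
e = rng.integers(-2, 3, size=m)          # small noise terms in {-2, ..., 2}
b = (A @ s + e) % q                      # noisy inner products

# Search LWE: given (A, b), recover s. Without the noise e the system could be
# solved by Gaussian elimination modulo q; the noise is what is conjectured to
# make the problem hard for suitably large parameters.
print(A.shape, b.shape)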
[ { "math_id": 0, "text": "\\sim O(n^{3/2})" }, { "math_id": 1, "text": "\\sim O(n^{2})" }, { "math_id": 2, "text": "\\sim O(n)\\ " } ]
https://en.wikipedia.org/wiki?curid=61146769
61148504
Limiting absorption principle
In mathematics, the limiting absorption principle (LAP) is a concept from operator theory and scattering theory that consists of choosing the "correct" resolvent of a linear operator at the essential spectrum based on the behavior of the resolvent near the essential spectrum. The term is often used to indicate that the resolvent, when considered not in the original space (which is usually the formula_0 space), but in certain weighted spaces (usually formula_1, see below), has a limit as the spectral parameter approaches the essential spectrum. This concept developed from the idea of introducing a complex parameter into the Helmholtz equation formula_2 for selecting a particular solution. This idea is credited to Vladimir Ignatowski, who was considering the propagation and absorption of electromagnetic waves in a wire. It is closely related to the Sommerfeld radiation condition and the limiting amplitude principle (1948). The terminology – both the limiting absorption principle and the limiting amplitude principle – was introduced by Aleksei Sveshnikov. Formulation. To find which solution to the Helmholtz equation with nonzero right-hand side, formula_3 with some fixed formula_4, corresponds to the outgoing waves, one considers the limit formula_5 The relation to absorption can be traced to the expression formula_6 for the electric field used by Ignatowsky: the absorption corresponds to nonzero imaginary part of formula_7, and the equation satisfied by formula_8 is given by the Helmholtz equation (or reduced wave equation) formula_9, with formula_10 having negative imaginary part (and thus with formula_11 no longer belonging to the spectrum of formula_12). Above, formula_13 is magnetic permeability, formula_14 is electric conductivity, formula_15 is dielectric constant, and formula_16 is the speed of light in vacuum. Example and relation to the limiting amplitude principle. One can consider the Laplace operator in one dimension, which is an unbounded operator formula_17 acting in formula_18 and defined on the domain formula_19, the Sobolev space. Let us describe its resolvent, formula_20. Given the equation formula_21, for the spectral parameter formula_22 from the resolvent set formula_23, the solution formula_24 is given by formula_25 where formula_26 is the convolution of F with the fundamental solution G: formula_27 where the fundamental solution is given by formula_28 To obtain an operator bounded in formula_18, one needs to use the branch of the square root which has positive real part (which decays for large absolute value of x), so that the convolution of G with formula_29 makes sense. One can also consider the limit of the fundamental solution formula_30 as formula_22 approaches the spectrum of formula_31, given by formula_32. Assume that formula_22 approaches formula_33, with some formula_4. Depending on whether formula_22 approaches formula_33 in the complex plane from above (formula_34) or from below (formula_35) the real axis, there will be two different limiting expressions: formula_36 when formula_37 approaches formula_38 from above and formula_39 when formula_22 approaches formula_38 from below. The resolvent formula_40 (convolution with formula_41) corresponds to outgoing waves of the inhomogeneous Helmholtz equation formula_42, while formula_43 corresponds to incoming waves.
This is directly related to the limiting amplitude principle: to find which solution corresponds to the outgoing waves, one considers the inhomogeneous wave equation formula_44 with zero initial data formula_45. A particular solution to the inhomogeneous Helmholtz equation corresponding to outgoing waves is obtained as the limit of formula_46 for large times. Estimates in the weighted spaces. Let formula_47 be a linear operator in a Banach space formula_48, defined on the domain formula_49. For the values of the spectral parameter from the resolvent set of the operator, formula_50, the resolvent formula_20 is bounded when considered as a linear operator acting from formula_48 to itself, formula_51, but its bound depends on the spectral parameter formula_22 and tends to infinity as formula_22 approaches the spectrum of the operator, formula_52. More precisely, there is the relation formula_53 Many scientists refer to the "limiting absorption principle" when they want to say that the resolvent formula_54 of a particular operator formula_55, when considered as acting in certain weighted spaces, has a limit (and/or remains uniformly bounded) as the spectral parameter formula_22 approaches the essential spectrum, formula_56. For instance, in the above example of the Laplace operator in one dimension, formula_57, defined on the domain formula_19, for formula_58, both operators formula_59 with the integral kernels formula_60 are not bounded in formula_0 (that is, as operators from formula_0 to itself), but will both be uniformly bounded when considered as operators formula_61 with fixed formula_62. The spaces formula_63 are defined as spaces of locally integrable functions such that their formula_1-norm, formula_64 is finite. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
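As a quick numerical sanity check of the one-dimensional example above (added for illustration, not part of the article; NumPy assumed), one can evaluate formula_28 at a point "z" = "k"² + iε just above formula_33, using the principal branch of the square root (which has positive real part), and compare it with the claimed limit formula_36; the difference shrinks as ε decreases:

import numpy as np

k = 2.0
x = np.linspace(-5, 5, 11)

def G(x, z):
    # Fundamental solution from the article, principal square root branch.
    r = np.sqrt(-z + 0j)
    return np.exp(-np.abs(x) * r) / (2 * r)

G_plus = -np.exp(1j * k * np.abs(x)) / (2j * k)   # claimed limit from the upper half-plane

for eps in (1e-2, 1e-4, 1e-6):
    err = np.max(np.abs(G(x, k**2 + 1j * eps) - G_plus))
    print(f"eps = {eps:.0e}, max difference = {err:.2e}")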
[ { "math_id": 0, "text": "L^2" }, { "math_id": 1, "text": "L^2_s" }, { "math_id": 2, "text": "(\\Delta+k^2)u(x)=-F(x)" }, { "math_id": 3, "text": "\\Delta v(x)+k^2 v(x)=-F(x),\\quad x\\in\\R^3," }, { "math_id": 4, "text": "k>0" }, { "math_id": 5, "text": "v(x)=-\\lim_{\\epsilon\\to +0} (\\Delta+k^2-i\\epsilon)^{-1}F(x)." }, { "math_id": 6, "text": "E(t,x)=A e^{i(\\omega t+\\varkappa x)}" }, { "math_id": 7, "text": "\\varkappa" }, { "math_id": 8, "text": "E(t,x)" }, { "math_id": 9, "text": "(\\Delta+\\varkappa^2/\\omega^2)E(t,x)=0" }, { "math_id": 10, "text": "\\varkappa^2=\\frac{\\mu\\varepsilon\\omega^2}{c^2}-i 4\\pi\\sigma\\mu\\omega" }, { "math_id": 11, "text": "\\varkappa^2/\\omega^2" }, { "math_id": 12, "text": "-\\Delta" }, { "math_id": 13, "text": "\\mu" }, { "math_id": 14, "text": "\\sigma" }, { "math_id": 15, "text": "\\varepsilon" }, { "math_id": 16, "text": "c" }, { "math_id": 17, "text": "A=-\\partial_x^2," }, { "math_id": 18, "text": "L^2(\\R)" }, { "math_id": 19, "text": "D(A)=H^2(\\R)" }, { "math_id": 20, "text": "R(z)=(A-z I)^{-1}" }, { "math_id": 21, "text": "(-\\partial_x^2-z)u(x)=F(x),\\quad x\\in\\R,\\quad F\\in L^2(\\R)" }, { "math_id": 22, "text": "z" }, { "math_id": 23, "text": "\\Complex\\setminus[0,+\\infty)" }, { "math_id": 24, "text": "u\\in L^2(\\R)" }, { "math_id": 25, "text": "u(x)=(R(z)F)(x)=(G(\\cdot,z)*F)(x)," }, { "math_id": 26, "text": "G(\\cdot,z)*F" }, { "math_id": 27, "text": "(G(\\cdot,z)*F)(x)=\\int_\\R G(x-y;z)F(y) \\, dy," }, { "math_id": 28, "text": "\nG(x;z) = \\frac{1}{2\\sqrt{-z}} e^{-|x|\\sqrt{-z}},\n\\quad\nz \\in \\Complex\\setminus[0,+\\infty).\n" }, { "math_id": 29, "text": "F\\in L^2(\\R)" }, { "math_id": 30, "text": "G(x;z)" }, { "math_id": 31, "text": "-\\partial_x^2" }, { "math_id": 32, "text": "\\sigma(-\\partial_x^2)=[0,+\\infty)" }, { "math_id": 33, "text": "k^2" }, { "math_id": 34, "text": "\\Im (z)>0" }, { "math_id": 35, "text": "\\Im (z)<0" }, { "math_id": 36, "text": "G_+(x;k^2)=\\lim_{\\varepsilon\\to 0+}G(x;k^2+i\\varepsilon)=-\\frac{1}{2ik}e^{i|x|k}" }, { "math_id": 37, "text": "z\\in\\Complex" }, { "math_id": 38, "text": "k^2\\in(0,+\\infty)" }, { "math_id": 39, "text": "G_-(x;k^2)=\\lim_{\\varepsilon\\to 0+}G(x;k^2-i\\varepsilon)=\\frac{1}{2ik}e^{-i|x|k}" }, { "math_id": 40, "text": "R_+(k^2)" }, { "math_id": 41, "text": "G_+(x;k^2)" }, { "math_id": 42, "text": "(-\\partial_x^2-k^2)u(x)=F(x)" }, { "math_id": 43, "text": "R_-(k^2)" }, { "math_id": 44, "text": "(\\partial_t^2-\\partial_x^2)\\psi(t,x)=F(x)e^{-i k t},\\quad t\\ge 0, \\quad x\\in\\R," }, { "math_id": 45, "text": "\\psi(0,x)=0,\\,\\partial_t\\psi(t,x)|_{t=0}=0" }, { "math_id": 46, "text": "\\psi(t,x)e^{i k t}" }, { "math_id": 47, "text": "A:\\,X\\to X" }, { "math_id": 48, "text": "X" }, { "math_id": 49, "text": "D(A)\\subset X" }, { "math_id": 50, "text": "z\\in\\rho(A)\\subset\\Complex" }, { "math_id": 51, "text": "R(z):\\,X\\to X" }, { "math_id": 52, "text": "\\sigma(A)=\\Complex\\setminus\\rho(A)" }, { "math_id": 53, "text": "\n\\Vert R(z)\\Vert\\ge\\frac{1}{\\operatorname{dist}(z,\\sigma(A))}, \\qquad z\\in\\rho(A).\n" }, { "math_id": 54, "text": "R(z)" }, { "math_id": 55, "text": "A" }, { "math_id": 56, "text": "\\sigma_{\\mathrm{ess}}(A)" }, { "math_id": 57, "text": "A=-\\partial_x^2:\\,L^2(\\R)\\to L^2(\\R)" }, { "math_id": 58, "text": "z>0" }, { "math_id": 59, "text": "R_\\pm(z)" }, { "math_id": 60, "text": "G_\\pm(x-y;z)" }, { "math_id": 61, "text": "R_\\pm(z):\\;L^2_s(\\R)\\to L^2_{-s}(\\R),\\quad s>1/2,\\quad 
z\\in\\Complex\\setminus[0,+\\infty),\\quad |z|\\ge\\delta," }, { "math_id": 62, "text": "\\delta>0" }, { "math_id": 63, "text": "L^2_s(\\R)" }, { "math_id": 64, "text": "\n\\Vert u\\Vert_{L^2_s(\\R)}^2=\\int_\\R (1+x^2)^s|u(x)|^2 \\, dx,\n" } ]
https://en.wikipedia.org/wiki?curid=61148504
61148736
Fatigue testing
Determination of a material or structure's resiliency against cyclic loading Fatigue testing is a specialised form of mechanical testing that is performed by applying cyclic loading to a "coupon" or structure. These tests are used either to generate fatigue life and crack growth data, identify critical locations or demonstrate the safety of a structure that may be susceptible to fatigue. Fatigue tests are used on a range of components from coupons through to full size test articles such as automobiles and aircraft. Fatigue tests on coupons are typically conducted using servo hydraulic test machines which are capable of applying large "variable amplitude" cyclic loads. "Constant amplitude" testing can also be applied by simpler oscillating machines. The "fatigue life" of a coupon is the number of cycles it takes to break the coupon. This data can be used for creating stress-life or strain-life curves. The rate of crack growth in a coupon can also be measured, either during the test or afterward using fractography. Testing of coupons can also be carried out inside environmental chambers where the temperature, humidity and environment that may affect the rate of crack growth can be controlled. Because of the size and unique shape of full size test articles, special "test rigs" are built to apply loads through a series of hydraulic or electric actuators. Actuators aim to reproduce the significant loads experienced by a structure, which in the case of aircraft, may consist of manoeuvre, gust, buffet and ground-air-ground (GAG) loading. A representative sample or block of loading is applied repeatedly until the safe life of the structure has been demonstrated or failures occur which need to be repaired. Instrumentation such as load cells, strain gauges and displacement gauges are installed on the structure to ensure the correct loading has been applied. Periodic inspections of the structure around critical stress concentrations such as holes and fittings are made to determine the time detectable cracks were found and to ensure any cracking that does occur, does not affect other areas of the test article. Because not all loads can be applied, any unbalanced structural loads are typically reacted out to the test floor through non-critical structure such as the undercarriage. Airworthiness standards generally require a fatigue test to be carried out for large aircraft prior to certification to determine their "safe life". Small aircraft may demonstrate safety through calculations, although typically larger "scatter" or safety factors are used because of the additional uncertainty involved. Coupon tests. Fatigue tests are used to obtain material data such as the rate of growth of a fatigue crack that can be used with crack growth equations to predict the fatigue life. These tests usually determine the rate of crack growth per cycle formula_0 versus the stress intensity factor range formula_1, where the minimum stress intensity factor formula_2 corresponds to the minimum load for formula_3 and is taken to be zero for formula_4, and formula_5 is the stress ratio formula_6. Standardised tests have been developed to ensure repeatability and to allow the stress intensity factor to be easily determined but other shapes can be used providing the coupon is large enough to be mostly elastic. Coupon shape. A variety of coupons can be used but some of the common ones are: Instrumentation. The following instrumentation is commonly used for monitoring coupon tests: Full scale fatigue tests. 
Full-scale tests may be used to: Fatigue tests can also be used to determine the extent to which widespread fatigue damage may be a problem. Test article. Certification requires knowing and accounting for the complete load history that has been experienced by a test article. Using test articles that have previously been used for static proof testing has caused problems where "overloads" have been applied, which can retard the rate of fatigue crack growth. The test loads are typically recorded using a data acquisition system acquiring data from possibly thousands of inputs from instrumentation installed on the test article, including: strain gages, pressure gauges, load cells, LVDTs, etc. Fatigue cracks typically initiate from high stress regions such as stress concentrations or material and manufacturing defects. It is important that the test article is representative of all of these features. Cracks may initiate from the following sources: Loading sequence. A representative block of loading is applied repeatedly until the "safe life" of the structure has been demonstrated or failures occur which need to be repaired. The size of the sequence is chosen so that the maximum loads which may cause retardation effects are applied sufficiently often, typically at least ten times throughout the test, so that there are no sequence effects. The loading sequence is generally filtered to eliminate applying small non-fatigue damaging cycles that would take too long to apply. Two types of filtering are typically used: The testing rate of large structures is typically limited to a few Hz and needs to avoid the resonance frequency of the structure. Test rig. All components that are not part of the "test article" or instrumentation are termed the "test rig". The following components are typically found in "full scale fatigue tests": Instrumentation. The following instrumentation is typically used on a fatigue test: It is important to install any strain gauges on the test article that are also used for monitoring fleet aircraft. This allows the same damage calculations to be performed on the test article that are used to track the fatigue life of fleet aircraft. This is the primary way of ensuring fleet aircraft do not exceed the safe-life determined from the fatigue test. Inspections. Inspections form a component of a fatigue test. It is important to know when a detectable crack occurs in order to determine the certified life of each component in addition to minimising the damage to surrounding structure and to develop repairs that have minimal impact on the certification of the adjacent structure. Non-destructive inspections may be carried out during testing and destructive tests may be used at the end of testing to ensure the structure retains its load carrying capacity. Certification. Test interpretation and certification involves using the results from the fatigue test to justify the safe life and operation of an item. The purpose of certification is to ensure the probability of failure in service is acceptably small. The following factors may need to be considered: Airworthiness standards typically require that an aircraft remains safe even with the structure in a degraded state due to the presence of fatigue cracking. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
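As an illustration of how coupon-derived crack growth data of the form formula_0 versus formula_1 can be used with a crack growth equation to estimate fatigue life, here is a minimal sketch (added for illustration, not part of the article). It assumes a Paris-type power law with purely notional constants that are not representative of any particular material, together with an assumed geometry factor and stress range; the crack is grown cycle by cycle from an initial to a critical length:

import math

# Assumed, illustrative values only (not from the article, not a real material):
C, m_exp = 1.0e-10, 3.0        # Paris-law constants in da/dN = C * (dK)**m_exp
Y = 1.12                       # assumed geometry factor
stress_range = 100.0           # applied stress range, MPa
a, a_crit = 1.0e-3, 10.0e-3    # initial and critical crack lengths, metres

cycles = 0
while a < a_crit:
    dK = Y * stress_range * math.sqrt(math.pi * a)   # stress intensity factor range
    a += C * dK ** m_exp                             # crack extension in this cycle
    cycles += 1

print(f"predicted life: roughly {cycles} cycles to grow from 1 mm to 10 mm")

In practice the growth-rate data are tabulated from the coupon tests themselves rather than taken from a fitted power law, and the applied loading is variable amplitude, but the bookkeeping is the same.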
[ { "math_id": 0, "text": "da/dN" }, { "math_id": 1, "text": "\\Delta K = K_\\max - K_\\min" }, { "math_id": 2, "text": "K_\\min" }, { "math_id": 3, "text": "R > 0" }, { "math_id": 4, "text": "R\\le 0" }, { "math_id": 5, "text": "R" }, { "math_id": 6, "text": "R= K_\\min/K_\\max" } ]
https://en.wikipedia.org/wiki?curid=61148736
6115
P versus NP problem
Unsolved problem in computer science &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in computer science: If the solution to a problem is easy to check for correctness, must the problem be easy to solve? The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved. Here, "quickly" means an algorithm that solves the task and runs in polynomial time exists, meaning the task completion time is bounded above by a polynomial function on the size of the input to the algorithm (as opposed to, say, exponential time). The general class of questions that some algorithm can answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can be "verified" in polynomial time is NP, standing for "nondeterministic polynomial time". An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time. The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution. Example. Consider the following yes/no problem: given an incomplete Sudoku grid of size formula_0, is there at least one legal solution where every row, column, and formula_1 square contains the integers 1 through formula_2? It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution. However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem. Therefore, generalized Sudoku is in NP (quickly verifiable), but may or may not be in P (quickly solvable). (Sudoku on the ordinary formula_3 grid is trivially in P as there are a constant number of Sudoku grids in this case; thus, it is necessary to consider a generalized version.) History. The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973). Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, speculating that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. 
Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated. Context. The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem). In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is "deterministic" (given the computer's present state and any inputs, there is only one possible action that the computer might take) and "sequential" (it performs actions one after the other). In this theory, the class P consists of all "decision problems" (defined below) solvable on a deterministic sequential machine in a duration polynomial in the size of the input; the class NP consists of all decision problems whose positive solutions are verifiable in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes: Is P equal to NP? Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 answers became 99% believed P ≠ NP. These polls do not imply whether P = NP, Gasarch himself stated: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era." NP-completeness. To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are problems that any other NP problem is reducible to in polynomial time and whose solution is still verifiable in polynomial time. That is, any NP problem can be transformed into any NP-complete problem. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP. NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time. For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so "any" instance of "any" problem in NP can be transformed mechanically into a Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems are NP-complete, and no fast algorithm for any of them is known. From the definition alone it is unintuitive that NP-complete problems exist; however, a trivial NP-complete problem can be formulated as follows: given a Turing machine "M" guaranteed to halt in polynomial time, does a polynomial-size input that "M" will accept exist? 
It is in NP because (given an input) it is simple to check whether "M" accepts the input by simulating "M"; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine "M" that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists. The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem". Harder problems. Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have "exponential" running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2"p"("n")) time, where "p"("n") is a polynomial function of "n". A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an "N" × "N" board and similar problems for other board games. The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length "n" has a runtime of at least formula_4 for some constant "c". Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all. It is also possible to consider questions other than decision problems. 
One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells if at least one solution exists, if the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems. Problems in NP not known to be in P or NP-complete. In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete. The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time. The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than "k". No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time formula_5 to factor an "n"-bit integer. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes. Does P mean "easy"? All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as "Cobham's thesis". It is a common assumption in complexity theory; but there are caveats. First, it can be false in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical. For example, the problem of deciding whether a graph "G" contains "H" as a minor, where "H" is fixed, can be solved in a running time of "O"("n"2), where "n" is the number of vertices in "G". 
However, the big O notation hides a constant that depends superexponentially on "H". The constant is greater than formula_6 (using Knuth's up-arrow notation), where "h" is the number of vertices in "H". On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms. Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms. Reasons to believe P ≠ NP or P = NP. Cook provides a restatement of the problem in "The P Versus NP Problem" as "Does P = NP?" According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH. It is also intuitively argued that the existence of problems that are hard to solve but for which the solutions are easy to verify matches real-world experience. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Scott Aaronson, UT Austin On the other hand, some researchers believe that there is overconfidence in believing P ≠ NP and that researchers should explore proofs of P = NP as well. For example, in 2002 these statements were made: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The main argument in favor of P ≠ NP is the total lack of fundamental progress in the area of exhaustive search. This is, in my opinion, a very weak argument. The space of algorithms is very large and we are only at the beginning of its exploration. [...] The resolution of Fermat's Last Theorem also shows that very simple questions may be settled only by very deep theories. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Being attached to a speculation is not a good guide to research planning. One should always try both directions of every problem. Prejudice has caused famous mathematicians to fail to solve famous problems whose solution was opposite to their expectations, even though they had developed all the methods required. DLIN vs NLIN. When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classes DLIN and NLIN. It is known that DLIN ≠ NLIN. Consequences of solution. One of the reasons the problem attracts so much attention is the consequences of the possible answers.
Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well. P = NP. A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields. It is also very possible that a proof would "not" lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them. A solution showing P = NP could upend the field of cryptography, which relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including: These would need modification or replacement with information-theoretically secure solutions that do not assume P ≠ NP. There are also enormous benefits that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; making these problems efficiently solvable could considerably advance life sciences and biotechnology. These changes could be insignificant compared to the revolution efficiently solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;If there really were a machine with φ("n") ∼ "k"⋅"n" (or even ∼ "k"⋅"n"2), this would have consequences of the greatest importance. Namely, it would obviously mean that in spite of the undecidability of the Entscheidungsproblem, the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine. After all, one would simply have to choose the natural number "n" so large that when the machine does not deliver a result, it makes no sense to think more about the problem. Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) says: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;... it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time. Example problems may well include all of the CMI prize problems. 
Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method guaranteed to find a proof if a "reasonable" size proof exists, would essentially end this struggle. Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;[...] if you imagine a number "M" that's finite but incredibly large—like say the number 10↑↑↑↑3 discussed in my paper on "coping with finiteness"—then there's a humongous number of possible algorithms that do "n""M" bitwise or addition or shift operations on "n" given bits, and it's really hard to believe that all of those algorithms fail. My main point, however, is that I don't believe that the equality P = NP will turn out to be helpful even if it is proved, because such a proof will almost surely be nonconstructive. P ≠ NP. A proof of P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would represent a great advance in computational complexity theory and guide future research. It would demonstrate that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place. P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds. Results about difficulty of proof. Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are insufficient for answering the question, suggesting novel technical approaches are required. As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, all insufficient to prove P ≠ NP: These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results. These barriers lead some computer scientists to suggest the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). 
An independence result could imply that either P ≠ NP and this is unprovable in (e.g.) ZFC, or that P = NP but it is unprovable in ZFC that any polynomial-time algorithms are correct. However, if the problem is undecidable even with much weaker assumptions extending the Peano axioms for integer arithmetic, then nearly polynomial-time algorithms exist for all NP problems. Therefore, assuming (as most complexity theorists do) that some NP problems don't have efficient algorithms, proofs of independence with those techniques are impossible. This also implies that proving independence from PA or ZFC with current techniques is no easier than proving all NP problems have efficient algorithms. Logical characterizations. The P = NP problem can be restated as certain classes of logical statements, as a result of work in descriptive complexity. Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P are expressible in first-order logic with the addition of a suitable least fixed-point combinator. Recursive functions can be defined with this and the order relation. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P. Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH). Polynomial-time algorithms. No known algorithm for an NP-complete problem runs in polynomial time. However, there are algorithms known for NP-complete problems that, if P = NP, run in polynomial time on accepting instances (although with enormous constants, making them impractical). However, these algorithms do not qualify as polynomial time because their running time on rejecting instances is not polynomial. The following algorithm, due to Levin (without any citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM. It runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:
// Algorithm that accepts the NP-complete language SUBSET-SUM.
// this is a polynomial-time algorithm if and only if P = NP.
// "Polynomial-time" means it returns "yes" in polynomial time when
// the answer should be "yes", and runs forever when it is "no".
// Input: S = a finite set of integers
// Output: "yes" if any subset of S adds up to 0.
// Runs forever with no output otherwise.
// Note: "Program number M" is the program obtained by
// writing the integer M in binary, then
// considering that string of bits to be a
// program. Every possible program can be
// generated this way, though most do nothing
// because of syntax errors.
FOR K = 1...∞
    FOR M = 1...K
        Run program number M for K steps with input S
        IF the program outputs a list of distinct integers
            AND the integers are all in S
            AND the integers sum to 0
        THEN
            OUTPUT "yes" and HALT
This is a polynomial-time algorithm accepting an NP-complete language only if P = NP. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a "semi-algorithm"). This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is "b" bits long, the above algorithm will try at least 2^"b" − 1 other programs first. Formal definitions. P and NP. A "decision problem" is a problem that takes as input some string "w" over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that produces the correct answer for any input string of length "n" in at most "cn"^"k" steps, where "k" and "c" are constants independent of the input string, then we say that the problem can be solved in "polynomial time" and we place it in the class P. Formally, P is the set of languages that can be decided by a deterministic polynomial-time Turing machine. That is, formula_7 where formula_8 and a deterministic polynomial-time Turing machine is a deterministic Turing machine "M" that satisfies two conditions: it halts on all inputs "w", and there exists formula_9 such that formula_10, where formula_11 formula_12 NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach uses the concept of "certificate" and "verifier". Formally, NP is the set of languages with a finite alphabet and a verifier that runs in polynomial time. The following defines a "verifier": Let "L" be a language over a finite alphabet, Σ. "L" ∈ NP if, and only if, there exists a binary relation formula_13 and a positive integer "k" such that the following two conditions are satisfied: first, for all formula_14, formula_15 such that ("x", "y") ∈ "R" and formula_16; and second, the language formula_17 over formula_18 is decidable by a deterministic Turing machine in polynomial time. A Turing machine that decides the language formula_17 is called a "verifier" for "L" and a "y" such that ("x", "y") ∈ "R" is called a "certificate of membership" of "x" in "L". Not all verifiers must be polynomial-time. However, for "L" to be in NP, there must be a verifier that runs in polynomial time. Example. Let formula_19 formula_20 Whether a value of "x" is composite is equivalent to whether "x" is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations). COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test. NP-completeness. There are many equivalent ways of describing NP-completeness. Let "L" be a language over a finite alphabet Σ. "L" is NP-complete if, and only if, the following two conditions are satisfied: first, "L" ∈ NP; and second, every "L"' in NP is polynomial-time reducible to "L" (written as formula_21), meaning that there is a polynomial-time computable function "f" such that for every "w" we have formula_22. Alternatively, if "L" ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to "L", then "L" is NP-complete. This is a common way of proving some new problem is NP-complete. Claimed solutions. While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of 116 purported proofs from 1986 to 2016, of which 61 were proofs of P = NP, 49 were proofs of P ≠ NP, and 6 proved other results, e.g. that the problem is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have been refuted. Popular culture.
The film "Travelling Salesman", by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem. In the sixth episode of "The Simpsons"' seventh season "Treehouse of Horror VI", the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension". In the second episode of season 2 of "Elementary", "Solve for X", Sherlock and Watson investigate the murders of mathematicians who were attempting to solve P versus NP. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
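As a small companion to the certificate-and-verifier definition of NP given above, the following sketch (added for illustration, not part of the article) is a polynomial-time verifier, in the sense used there, for the SUBSET-SUM language from Levin's algorithm. Checking a proposed certificate, a subset of S, takes time polynomial in the input size, even though no polynomial-time method is known for finding such a certificate:

def verify_subset_sum(S, certificate):
    # Accept exactly when the certificate is a non-empty list of distinct
    # elements of S whose sum is zero.
    if not certificate:
        return False
    if len(set(certificate)) != len(certificate):
        return False
    if any(x not in S for x in certificate):
        return False
    return sum(certificate) == 0

S = [3, -9, 14, 2, 7, -5]
print(verify_subset_sum(S, [3, 2, -5]))   # True:  3 + 2 - 5 = 0
print(verify_subset_sum(S, [14, -9]))     # False: 14 - 9 = 5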
[ { "math_id": 0, "text": "n^2 \\times n^2" }, { "math_id": 1, "text": "n \\times n" }, { "math_id": 2, "text": "n^2" }, { "math_id": 3, "text": "9 \\times 9" }, { "math_id": 4, "text": "2^{2^{cn}}" }, { "math_id": 5, "text": "O\\left (\\exp \\left ( \\left (\\tfrac{64n}{9} \\log(2) \\right )^{\\frac{1}{3}} \\left ( \\log(n\\log(2)) \\right )^{\\frac{2}{3}} \\right) \\right )" }, { "math_id": 6, "text": " 2 \\uparrow \\uparrow (2 \\uparrow \\uparrow (2 \\uparrow \\uparrow (h/2) ) ) " }, { "math_id": 7, "text": "\\mathbf{P} = \\{ L : L=L(M) \\text{ for some deterministic polynomial-time Turing machine } M \\}" }, { "math_id": 8, "text": "L(M) = \\{ w\\in\\Sigma^{*}: M \\text{ accepts } w \\}" }, { "math_id": 9, "text": "k \\in N" }, { "math_id": 10, "text": "T_M(n)\\in O(n^k)" }, { "math_id": 11, "text": "T_M(n) = \\max\\{ t_M(w) : w\\in\\Sigma^{*}, |w| = n \\}" }, { "math_id": 12, "text": "t_M(w) = \\text{ number of steps }M\\text{ takes to halt on input }w." }, { "math_id": 13, "text": "R\\subset\\Sigma^{*}\\times\\Sigma^{*}" }, { "math_id": 14, "text": "x\\in\\Sigma^{*}" }, { "math_id": 15, "text": "x\\in L \\Leftrightarrow\\exists y\\in\\Sigma^{*}" }, { "math_id": 16, "text": "|y|\\in O(|x|^k)" }, { "math_id": 17, "text": "L_{R} = \\{ x\\# y:(x,y)\\in R\\}" }, { "math_id": 18, "text": "\\Sigma\\cup\\{\\#\\}" }, { "math_id": 19, "text": "\\mathrm{COMPOSITE} = \\left \\{x\\in\\mathbb{N} \\mid x=pq \\text{ for integers } p, q > 1 \\right \\}" }, { "math_id": 20, "text": "R = \\left \\{(x,y)\\in\\mathbb{N} \\times\\mathbb{N} \\mid 1<y \\leq \\sqrt x \\text{ and } y \\text{ divides } x \\right \\}." }, { "math_id": 21, "text": "L' \\leq_{p} L" }, { "math_id": 22, "text": "(w\\in L' \\Leftrightarrow f(w)\\in L)" } ]
https://en.wikipedia.org/wiki?curid=6115
6115145
Orchestral suites (Bach)
Four suites by Johann Sebastian Bach The four orchestral suites BWV 1066–1069 (called ouvertures by their composer) are four suites by Johann Sebastian Bach from the years 1724–1731. The name "ouverture" refers only in part to the opening movement in the style of the French overture, in which a majestic opening section in relatively slow dotted-note rhythm in duple meter is followed by a fast fugal section, then rounded off with a short recapitulation of the opening music. More broadly, the term was used in Baroque Germany for a suite of dance-pieces in French Baroque style preceded by such an ouverture. This genre was extremely popular in Germany during Bach's day, and he showed far less interest in it than was usual: Robin Stowell writes that "Telemann's 135 surviving examples [represent] only a fraction of those he is known to have written"; Christoph Graupner left 85; and Johann Friedrich Fasch left almost 100. Bach did write several other ouverture (suites) for solo instruments, notably the Cello Suite no. 5, BWV 1011, which also exists in the autograph Lute Suite in G minor, BWV 995, the Keyboard Partita no. 4 in D, BWV 828, and the Overture in the French style, BWV 831 for keyboard. The two keyboard works are among the few Bach published, and he prepared the lute suite for a "Monsieur Schouster," presumably for a fee, so all three may attest to the form's popularity. Scholars believe that Bach did not conceive of the four orchestral suites as a set (in the way he conceived of the Brandenburg Concertos), since the sources are various, as detailed below. The Bach-Werke-Verzeichnis catalogue includes a fifth suite, BWV 1070 in G minor. However, this work is highly unlikely to have been composed by J. S. Bach. Gustav Mahler arranged portions of BWV 1067 and 1068 for orchestra, harpsichord, and organ. They were played several times during Mahler's first tour of the New York Philharmonic, with Mahler on harpsichord and Harry Jepson on organ. Suite No. 1 in C major, BWV 1066. The source is a set of parts from Leipzig in 1724–45 copied by C. G. Meissner. Instrumentation: Oboe I/II, bassoon, violin I/II, viola, basso continuo Suite No. 2 in B minor, BWV 1067. The source is a partially autograph set of parts (Bach wrote out those for flute and viola) from Leipzig in 1738–39. Instrumentation: Solo "[Flute] traversière" (transverse flute), violin I/II, viola, basso continuo. The Polonaise is a stylization of the Polish folk song "Wezmę ja kontusz" (I'll take my nobleman's robe). The "Badinerie" (literally "jesting" in French – in other works Bach used the Italian word with the same meaning, scherzo) has become a showpiece for solo flautists because of its quick pace and difficulty. For many years in the 1980s and early 1990s the movement was the incidental music for ITV Schools morning programmes in the UK. Earlier version in A minor. Joshua Rifkin has argued, based on in-depth analysis of the partially autograph primary sources, that this work is based on an earlier version in A minor in which the solo flute part was scored instead for solo violin. Rifkin demonstrates that notational errors in the surviving parts can best be explained by their having been copied from a model a whole tone lower, and that this solo part would venture below the lowest pitches on the flutes Bach wrote for (the transverse flute, which Bach called "flauto traverso" or "flute traversière"). 
Rifkin argues that the violin was the most likely option, noting that in writing the word "Traversiere" in the solo part, Bach seems to have fashioned the letter T out of an earlier "V", suggesting that he originally intended to write the word "violin" (the page in question can be viewed here, p. 6) Further, Rifkin notes passages that would have used the violinistic technique of bariolage. Rifkin also suggests that Bach was inspired to write the suite by a similar work by his second cousin Johann Bernhard Bach. Flautist Steven Zohn accepts the argument of an earlier version in A minor, but suggests that the original part may have been playable on flute as well as violin. Oboist Gonzalo X. Ruiz has argued in detail that the solo instrument in the lost original A minor version was the oboe, and he has recorded it in his own reconstruction of that putative original on a baroque oboe. His case against the violin is that: the range is "curiously limited" for that instrument, "avoiding the G string almost entirely," and that the supposed violin solo would at times be lower in pitch than the first violin part, something that is almost unheard of in dedicated violin concertos. By contrast, "the range is exactly the range of Bach's oboes"; scoring the solo oboe occasionally lower than the first violin was typical Baroque practice, as the oboe still comes through to the ear; and the "figurations are very similar to those found in many oboe works of the period." Suite No. 3 in D major, BWV 1068. The oldest source is a partially autograph set of parts from around 1730. Bach wrote out the first violin and continuo parts, C. P. E. Bach wrote out the trumpet, oboe, and timpani parts, and J. S. Bach's student Johann Ludwig Krebs wrote out the second violin and viola parts. Rifkin has argued that the original was a version for strings and continuo alone. Instrumentation: Trumpet I/II/III, timpani, oboe I/II, violin I/II, viola, basso continuo (second movement: only strings and continuo). An arrangement of the second movement of the suite by German violinist August Wilhelmj (1845–1908) became known as "Air on the G String". Suite No. 4 in D major, BWV 1069. The source is lost, but the existing parts date from circa 1730. Rifkin has argued that the lost original version was written during Bach's tenure at Köthen, did not have trumpets or timpani, and that Bach first added these parts when adapting the Ouverture movement for the choral first movement to his 1725 Christmas cantata "Unser Mund sei voll Lachens, BWV 110" ("Our mouths be full of laughter"). Instrumentation: Trumpet I/II/III, timpani, oboe I/II/III, bassoon, violin I/II, viola, basso continuo. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\boldsymbol{2}\\!\\!\\!|\\;" }, { "math_id": 1, "text": "\\boldsymbol{2}" } ]
https://en.wikipedia.org/wiki?curid=6115145
61154877
Nonhomogeneous Gaussian regression
Type of statistical regression analysis Non-homogeneous Gaussian regression (NGR) is a type of statistical regression analysis used in the atmospheric sciences as a way to convert ensemble forecasts into probabilistic forecasts. Relative to simple linear regression, NGR uses the ensemble spread as an additional predictor, which is used to improve the prediction of uncertainty and allows the predicted uncertainty to vary from case to case. The prediction of uncertainty in NGR is derived from both past forecast errors statistics and the ensemble spread. NGR was originally developed for site-specific medium range temperature forecasting, but has since also been applied to site-specific medium-range wind forecasting and to seasonal forecasts, and has been adapted for precipitation forecasting. The introduction of NGR was the first demonstration that probabilistic forecasts that take account of the varying ensemble spread could achieve better skill scores than forecasts based on standard model output statistics approaches applied to the ensemble mean. Intuition. Weather forecasts generated by computer simulations of the atmosphere and ocean typically consist of an ensemble of individual forecasts. Ensembles are used as a way to attempt to capture and quantify the uncertainties in the weather forecasting process, such as uncertainty in the initial conditions and uncertainty in the parameterisations in the model. For point forecasts of normally distributed variables, one can summarize an ensemble forecast with the mean and the standard deviation of the ensemble. The ensemble mean is often a better forecast than any of the individual forecasts, and the ensemble standard deviation may give an indication of the uncertainty in the forecast. However, direct output from computer simulations of the atmosphere needs calibration before it can be meaningfully compared with observations of weather variables. This calibration process is often known as model output statistics (MOS). The simplest form of such calibration is to correct biases, using a bias correction calculated from past forecast errors. Bias correction can be applied to both individual ensemble members and the ensemble mean. A more complex form of calibration is to use past forecasts and past observations to train a simple linear regression model that maps the ensemble mean onto the observations. In such a model the uncertainty in the prediction is derived purely from the statistical properties of the past forecast errors. However, ensemble forecasts are constructed with the hope that the ensemble spread may contain additional information about the uncertainty, above and beyond the information that can be derived from analysing past performance of the forecast. In particular since the ensemble spread is typically different for each successive forecast, it has been suggested that the ensemble spread may give a basis for predicting different levels of uncertainty in different forecasts, which is difficult to do from past performance-based estimates of uncertainty. Whether the ensemble spread actually contains information about forecast uncertainty, and how much information it contains, depends on many factors such as the forecast system, the forecast variable, the resolution and the lead time of the forecast. 
NGR is a way to include information from the ensemble spread in the calibration of a forecast, by predicting future uncertainty as a weighted combination of the uncertainty estimated using past forecast errors, as in MOS, and the uncertainty estimated using the ensemble spread. The weights on the two sources of uncertainty information are calibrated using past forecasts and past observations in an attempt to derive optimal weighting. Overview. Consider a series of past weather observations formula_0 over a period of formula_1 days (or other time interval): formula_2 and a corresponding series of past ensemble forecasts, characterized by the sample mean formula_3 and standard deviation formula_4 of the ensemble: formula_5. Also consider a new ensemble forecast from the same system with ensemble mean formula_6 and ensemble standard deviation formula_7, intended as a forecast for an unknown future weather observation formula_8. A straightforward way to calibrate the new ensemble forecast output parameters formula_9 and produce a calibrated forecast for formula_8 is to use a simple linear regression model based on the ensemble mean formula_6, trained using the past weather observations and past forecasts: formula_10 This model has the effect of bias correcting the ensemble mean and adjusting the level of variability of the forecast. It can be applied to the new ensemble forecast formula_9 to generate a point forecast for formula_8 using formula_11 or to obtain a probabilistic forecast for the distribution of possible values for formula_8 based on the normal distribution with mean formula_12 and variance formula_13: formula_14 The use of regression to calibrate weather forecasts in this way is an example of model output statistics. However, this simple linear regression model does not use the ensemble standard deviation formula_7, and hence misses any information that the ensemble standard deviation may contain about the forecast uncertainty. The NGR model was introduced as a way to potentially improve the prediction of uncertainty in the forecast of formula_8 by including information extracted from the ensemble standard deviation. It achieves this by generalising the simple linear regression model to either formula_15 or formula_16. This can then be used to calibrate the new ensemble forecast parameters formula_9 using either formula_17 or formula_18 respectively. The prediction uncertainty is now given by two terms: the formula_19 term is constant in time, while the formula_20 term varies as the ensemble spread varies. Parameter estimation. In the scientific literature the four parameters formula_21 of NGR have been estimated either by maximum likelihood or by minimizing the continuous ranked probability score (CRPS). The pros and cons of these two approaches have also been discussed. History. NGR was originally developed in the private sector by scientists at Risk Management Solutions Ltd for the purpose of using information in the ensemble spread for the valuation of weather derivatives. Terminology. NGR was originally referred to as ‘spread regression’ rather than NGR. Subsequent authors, however, introduced first the alternative names Ensemble Model Output Statistics (EMOS) and then NGR. The original name ‘spread regression’ has now fallen from use; EMOS is used to refer generally to any method used for the calibration of ensembles, and NGR is typically used to refer to the method described in this article. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
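As a concrete illustration of the model described above, the following Python sketch fits the four NGR parameters by maximum likelihood on synthetic data and then turns a new ensemble forecast (M, S) into a predictive normal distribution. It uses the variance form with the squared spread; the variable names, synthetic data and optimizer settings are choices made for this example rather than anything prescribed by the NGR literature, so it is a minimal sketch, not an implementation from any of the cited papers.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic training data: past ensemble means m, ensemble spreads s, observations y.
T = 500
m = rng.normal(15.0, 5.0, T)
s = rng.uniform(0.5, 3.0, T)
y = 1.0 + 0.9 * m + rng.normal(0.0, np.sqrt(0.5 + 0.8 * s**2))

def neg_log_likelihood(params, m, s, y):
    alpha, beta, gamma, delta = params
    mean = alpha + beta * m                 # calibrated forecast mean
    var = gamma + delta * s**2              # predicted variance, sigma^2 = gamma + delta * s^2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mean) ** 2 / var)

# Fit the four NGR parameters by maximum likelihood; bounds keep the variance positive.
x0 = np.array([0.0, 1.0, 1.0, 0.5])
bounds = [(None, None), (None, None), (1e-6, None), (0.0, None)]
fit = minimize(neg_log_likelihood, x0, args=(m, s, y), method="L-BFGS-B", bounds=bounds)
alpha, beta, gamma, delta = fit.x

# Calibrate a new ensemble forecast (M, S) into a predictive normal distribution.
M, S = 12.0, 2.0
pred_mean = alpha + beta * M
pred_std = np.sqrt(gamma + delta * S**2)
print(f"predictive distribution: N({pred_mean:.2f}, {pred_std:.2f}^2)")
```

The same setup could be fitted by minimum-CRPS estimation instead, simply by passing a CRPS-based objective to the optimizer in place of the negative log-likelihood.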
[ { "math_id": 0, "text": "y_t" }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": "y_t, \\quad t=1,\\ldots,T" }, { "math_id": 3, "text": "m_t" }, { "math_id": 4, "text": "s_t" }, { "math_id": 5, "text": "(m_t,s_t), \\quad t=1,\\ldots,T" }, { "math_id": 6, "text": "M" }, { "math_id": 7, "text": "S" }, { "math_id": 8, "text": "Y" }, { "math_id": 9, "text": "(M,S)" }, { "math_id": 10, "text": "y_t \\sim N(\\alpha+\\beta m_t, \\sigma^2)" }, { "math_id": 11, "text": "\\hat{Y}{{=}}\\hat{\\alpha}+\\hat{\\beta} M" }, { "math_id": 12, "text": "\\hat{\\alpha}+\\hat{\\beta} M" }, { "math_id": 13, "text": "\\hat{\\sigma}^2" }, { "math_id": 14, "text": "\\hat{Y} \\sim N(\\hat{\\alpha}+\\hat{\\beta} M, \\hat{\\sigma}^2)" }, { "math_id": 15, "text": "y_t \\sim N(\\alpha+\\beta m_t, \\sigma=\\gamma + \\delta s_t)" }, { "math_id": 16, "text": "y_t \\sim N(\\alpha+\\beta m_t, \\sigma^2=\\gamma + \\delta s_t^2)" }, { "math_id": 17, "text": "\\hat{Y} \\sim N(\\hat{\\alpha}+\\hat{\\beta} M, \\hat{\\sigma}=\\hat{\\gamma} + \\hat{\\delta} S)" }, { "math_id": 18, "text": "\\hat{Y} \\sim N(\\hat{\\alpha}+\\hat{\\beta} M, \\hat{\\sigma}^2=\\hat{\\gamma} + \\hat{\\delta} S^2)" }, { "math_id": 19, "text": "\\gamma" }, { "math_id": 20, "text": "\\delta" }, { "math_id": 21, "text": "\\alpha, \\beta, \\gamma, \\delta" } ]
https://en.wikipedia.org/wiki?curid=61154877
6115906
Super star cluster
Type of very massive young open cluster thought to be the precursor of a globular cluster A super star cluster (SSC) is a very massive young open cluster that is thought to be the precursor of a globular cluster. These clusters are called "super" because they are relatively more luminous and contain more mass than other young star clusters. The SSC, however, does not have to physically be larger than other clusters of lower mass and luminosity. They typically contain a very large number of young, massive stars that ionize a surrounding HII region or a so-called "Ultra dense HII region (UDHII)" in the Milky Way Galaxy or in other galaxies (however, SSCs do not always have to be inside an HII region). An SSC's HII region is in turn surrounded by a cocoon of dust. In many cases, the stars and the HII regions will be invisible to observations in certain wavelengths of light, such as the visible spectrum, due to high levels of extinction. As a result, the youngest SSCs are best observed and photographed in radio and infrared. SSCs, such as Westerlund 1 (Wd1), have been found in the Milky Way Galaxy. However, most have been observed in farther regions of the universe. In the galaxy M82 alone, 197 young SSCs have been observed and identified using the Hubble Space Telescope. Generally, SSCs have been seen to form in the interactions between galaxies and in regions of high amounts of star formation with high enough pressures to satisfy the properties needed for the formation of a star cluster. These regions can include newer galaxies with much new star formation, dwarf starburst galaxies, arms of a spiral galaxy that have a high star formation rate, and merging galaxies. In an article published in "The Astronomical Journal" in 1996, using pictures taken in the ultraviolet (UV) spectrum by the Hubble Space Telescope of star-forming rings in five different barred galaxies, numerous star clusters were found in clumps within the rings which had high rates of star formation. These clusters were found to have masses of about to , ages of about 100 Myr, and radii of about 5 pc, and are thought to evolve into globular clusters later in their lifetimes. These properties match those found in SSCs. Characteristics and properties. The typical characteristics and properties of SSCs: Hubble Space Telescope contributions. Given the relatively small size of SSCs compared to their host galaxies, astronomers have had trouble finding them in the past due to the limited resolution of the ground-based and space telescopes at the time. With the introduction of the Hubble Space Telescope (HST) in the 1990s, finding SSCs (as well as other astronomical objects) became much easier thanks to the higher resolution of the HST (angular resolution of ~1/10 arcsecond). This has not only allowed astronomers to see SSCs, but also allowed them to measure their properties as well as the properties of the individual stars within the SSC. Recently, a massive star, Westerlund 1-26, was discovered in the SSC Westerlund 1 in the Milky Way. The radius of this star is thought to be larger than the radius of Jupiter's orbit around the Sun. Essentially, the HST searches the night sky, specifically nearby galaxies, for star clusters and "dense stellar objects" to see if any have properties similar to those of an SSC or an object that would, in its lifetime, evolve into a globular cluster. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\gtrsim" }, { "math_id": 1, "text": "n_\\text{e}" }, { "math_id": 2, "text": "P/" }, { "math_id": 3, "text": "k_\\text{B}" } ]
https://en.wikipedia.org/wiki?curid=6115906
61173505
Cederbaum's maximum flow theorem
Cederbaum's theorem defines hypothetical analog electrical networks which will automatically produce a solution to the minimum s–t cut problem. Alternatively, simulation of such a network will also produce a solution to the minimum s–t cut problem. This article gives basic definitions, a statement of the theorem and a proof of the theorem. The presentation in this article closely follows the presentation of the theorem in the original publication. Definitions. Definitions in this article are consistent in all respects with those given in a discussion of the maximum-flow minimum-cut theorem. Flow graph. Cederbaum's theorem applies to a particular type of directed graph: "G" = ("V", "E"). "V" is the set of nodes. formula_0 is the set of directed edges: formula_1. A positive weight is associated with each edge: "w"ab: "E" → R+. Two of the nodes must be "s" and "t": formula_2 and formula_3. Flow. Flow, "f" : "E" → R+, is a positive quantity associated with each edge in the graph. Flow is constrained by the weight of the associated edge and by the conservation of flow at each vertex as described here. Current. Current is defined as a map for each edge pair to the real numbers, "i"ab : "Ep" → R. Current is a function of the voltage, with a range that is determined by the weights of the respective forward and reverse edges. Each edge pair is the tuple consisting of the forward and reverse edges for a given pair of vertices. The currents in the forward and reverse directions between a pair of nodes are the additive inverses of one another: "i"ab = −"i"ba. Current is conserved at each interior node in the network. The net current at the formula_4 and formula_5 nodes is non-zero. The net current at the formula_4 node is defined as the input current. For the set of neighbors of the node formula_4, formula_6: formula_7 Voltage. Voltage is defined as a mapping from the set of edge pairs to real numbers, "v"ab : "Ep" → R. Voltage is directly analogous to electrical voltage in an electrical network. The voltages in the forward and reverse directions between a pair of nodes are the additive inverses of one another: "v"ab = −"v"ba. The input voltage is the sum of the voltages over a set of edges, formula_8, that form a path between the formula_4 and formula_5 nodes. formula_9 "s"–"t" cut. An s–t cut is a partition of the graph into two parts each containing one of either formula_4 or formula_5. Where formula_10, formula_11, formula_12, the s–t cut is formula_13. The s–t cut set is the set of edges that start in formula_14 and end in formula_15. The minimum s–t cut is the s–t cut whose cut set has the minimum weight. Formally, the cut set is defined as: formula_16 Electrical network. An electrical network is a model that is derived from a flow graph. Each resistive element in the electrical network corresponds to an edge pair in the flow graph. The positive and negative terminals of the electrical network are the nodes corresponding to the formula_4 and formula_5 terminals of the graph, respectively. The voltage state of the model becomes binary in the limit as the input voltage difference approaches formula_17. The behavior of the electrical network is defined by Kirchhoff's voltage and current laws. Voltages add to zero around all closed loops and currents add to zero at all nodes. Resistive element. A resistive element in the context of this theorem is a component of the electrical network that corresponds to an edge pair in the flow graph. "iv" characteristic. 
The formula_18 characteristic is the relationship between current and voltage. The requirements are: (i) Current and voltage are continuous function with respect to one another. (ii) Current and voltage are non-decreasing functions with respect to one another. (iii) The range of the current is limited by the weights of the forward and reverse edges corresponding to the resistive element. The current range may be inclusive or exclusive of the endpoints. The domain of the voltage is exclusive of the maximum and minimum currents:  "i"ab : R → [−"w"ab,"w"ba]   or   (−"w"ab,"w"ba]   or  [−"w"ab,"w"ba)   or  (−"w"ab,"w"ba)  "v"ab : (−"w"ab,"w"ba) → R Statement of theorem. The limit of the current  "I"in between the input terminals of the electrical network as the input voltage, "V"in approaches formula_17, is equal to the weight of the minimum cut set XC. formula_19 Proof. Claim 1 Current at any resistive element in the electrical network in either direction is always less than or equal to the maximum flow at the corresponding edge in the graph. Therefore, the maximum current through the electrical network is less than the weight of the minimum cut of the flow graph: formula_20 Claim 2 As the input voltage formula_21 approaches infinity, there exists at least one cut set formula_22 such that the voltage across the cut set approaches infinity. formula_23 This implies that: formula_24 Given claims 1 and 2 above: formula_25 Related Topics. The existence and uniqueness of a solution to the equations of an electrical network composed of monotone resistive elements was established by Duffin. Application. Cederbaum's maximum flow theorem is the basis for the Simcut algorithm. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
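The electrical-network side of the theorem is not easy to reproduce in a few lines, but the combinatorial quantity it computes — the weight of the minimum s–t cut set — is straightforward to evaluate directly. The Python sketch below uses networkx on a small illustrative graph (the edge weights and node names are arbitrary choices for this example) to compute the value that, by the theorem, the input current I_in of the corresponding analog network would approach as V_in grows without bound.

```python
import networkx as nx

# Small flow graph G(V, E) with positive edge weights, interpreted as the
# capacities w_ab of the article.  Node and weight choices are illustrative.
G = nx.DiGraph()
edges = [("s", "a", 3.0), ("s", "b", 2.0), ("a", "b", 1.0),
         ("a", "t", 2.0), ("b", "t", 3.0)]
for u, v, w in edges:
    G.add_edge(u, v, capacity=w)

# Weight of the minimum s-t cut set X_C; by Cederbaum's theorem this equals the
# limiting input current of the analog network as the input voltage diverges.
cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
cut_set = [(u, v) for u, v in G.edges if u in S and v in T]
print("min cut weight:", cut_value)   # 5.0 for this example
print("cut set X_C:", cut_set)
```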
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": " E = (a, b) \\in V \\times V " }, { "math_id": 2, "text": " s \\in V " }, { "math_id": 3, "text": " t \\in V " }, { "math_id": 4, "text": "s" }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "N_s" }, { "math_id": 7, "text": "I_{in} = \\sum_{b \\in N_s}i_{sb} " }, { "math_id": 8, "text": "P_{ab}" }, { "math_id": 9, "text": "V_{in} = \\sum_{(a,b) \\in P_{ab}}v_{ab}" }, { "math_id": 10, "text": " S \\cup T = V " }, { "math_id": 11, "text": "\\;\\; s \\in S " }, { "math_id": 12, "text": "\\;\\; t \\in T " }, { "math_id": 13, "text": " (S, T) " }, { "math_id": 14, "text": " S " }, { "math_id": 15, "text": " T " }, { "math_id": 16, "text": "X_C = \\{ (a, b) \\in E \\mid a \\in S, b \\in T \\} " }, { "math_id": 17, "text": "\\infty" }, { "math_id": 18, "text": "iv" }, { "math_id": 19, "text": "\\lim_{V_{in} \\rightarrow \\infty} (I_{in})= \\min_{X_C}\\sum_{(a,b) \\in X_C}w_{ab} ." }, { "math_id": 20, "text": "\\lim_{V_{in} \\rightarrow \\infty} (I_{in}) \\leq \\min_{X_C}\\sum_{(a,b) \\in X_C}w_{ab} ." }, { "math_id": 21, "text": " V_{in} " }, { "math_id": 22, "text": "X'_C" }, { "math_id": 23, "text": "\\exists X'_C \\;\\; \\forall \\; (a,b) \\in X'_C \\;\\; \\lim_{V_{in} \\rightarrow \\infty}(v_{ab}) = \\infty " }, { "math_id": 24, "text": "\\lim_{V_{in} \\rightarrow \\infty} (I_{in}) = \\sum_{(a,b) \\in X'_C}w_{ab} \\; \\geq \\; \\min_{X_C}\\sum_{(a,b) \\in X_C}w_{ab}." }, { "math_id": 25, "text": "\\lim_{V_{in} \\rightarrow \\infty} (I_{in}) = \\min_{X_C}\\sum_{(a,b) \\in X_C}w_{ab} ." } ]
https://en.wikipedia.org/wiki?curid=61173505
61175228
Scott–Curry theorem
In mathematical logic, the Scott–Curry theorem is a result in lambda calculus stating that if two non-empty sets of lambda terms "A" and "B" are closed under beta-convertibility then they are recursively inseparable. Explanation. A set "A" of lambda terms is closed under beta-convertibility if for any lambda terms X and Y, if formula_0 and X is β-equivalent to Y then formula_1. Two sets "A" and "B" of natural numbers are recursively separable if there exists a computable function formula_2 such that formula_3 if formula_4 and formula_5 if formula_6. Two sets of lambda terms are recursively separable if their corresponding sets under a Gödel numbering are recursively separable, and recursively inseparable otherwise. The Scott–Curry theorem applies equally to sets of terms in combinatory logic with weak equality. It has parallels to Rice's theorem in computability theory, which states that all non-trivial semantic properties of programs are undecidable. The theorem has the immediate consequence that it is an undecidable problem to determine if two lambda terms are β-equivalent. Proof. The proof is adapted from Barendregt in "The Lambda Calculus". Let "A" and "B" be closed under beta-convertibility, and let "a" be a lambda term representing an element of "B" and "b" a lambda term representing an element of "A" (note that "a" is taken from "B" and "b" from "A"). Suppose for a contradiction that "f" is a lambda term representing a computable function such that formula_7 if formula_8 and formula_9 if formula_10 (where equality is β-equality, and a term is passed to "f" as the Church numeral of its Gödel number). Then define formula_11. Here, formula_12 is true if its argument is zero and false otherwise, and formula_13 is the identity so that formula_14 is equal to "x" if the Boolean "b" is true and "y" if it is false. Write "C" for the set of lambda terms "x" with formula_7; since "f" separates the two sets, "A" is contained in "C" and "B" is disjoint from "C". Then formula_15 and similarly, formula_16. By the Second Recursion Theorem, there is a term "X" which is equal to "G" applied to the Church numeral of its Gödel numbering, "X'". Then formula_17 implies that formula_18; since "a" belongs to "B", which is closed under β-equivalence and disjoint from "C", in fact formula_19. The reverse assumption formula_19 gives formula_20; since "b" belongs to "A", which is contained in "C", this yields formula_17. Either way we arrive at a contradiction, and so "f" cannot be a function which separates "A" and "B". Hence "A" and "B" are recursively inseparable. History. Dana Scott first proved the theorem in 1963. The theorem, in a slightly less general form, was independently proven by Haskell Curry. It was published in Curry's 1969 paper "The undecidability of λK-conversion". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X \\in A" }, { "math_id": 1, "text": "Y \\in A" }, { "math_id": 2, "text": "f : \\mathbb{N} \\rightarrow \\{0, 1\\}" }, { "math_id": 3, "text": "f(a) = 0" }, { "math_id": 4, "text": "a \\in A" }, { "math_id": 5, "text": "f(b) = 1" }, { "math_id": 6, "text": "b \\in B" }, { "math_id": 7, "text": "fx = 0" }, { "math_id": 8, "text": "x \\in A" }, { "math_id": 9, "text": "fx = 1" }, { "math_id": 10, "text": "x \\in B" }, { "math_id": 11, "text": "G \\equiv \\lambda x.\\text{if}\\ (\\text{zero?} \\ (fx)) a b" }, { "math_id": 12, "text": "\\text{zero?}" }, { "math_id": 13, "text": "\\text{if}" }, { "math_id": 14, "text": "\\text{if}\\ b x y" }, { "math_id": 15, "text": "x \\in C \\implies Gx = a" }, { "math_id": 16, "text": "x \\notin C \\implies Gx = b" }, { "math_id": 17, "text": "X \\in C" }, { "math_id": 18, "text": "X = G(X') = b" }, { "math_id": 19, "text": "X \\notin C" }, { "math_id": 20, "text": "X = G(X') = a" } ]
https://en.wikipedia.org/wiki?curid=61175228
6118
Carnot heat engine
Theoretical engine A Carnot heat engine is a theoretical heat engine that operates on the Carnot cycle. The basic model for this engine was developed by Nicolas Léonard Sadi Carnot in 1824. The Carnot engine model was graphically expanded by Benoît Paul Émile Clapeyron in 1834 and mathematically explored by Rudolf Clausius in 1857, work that led to the fundamental thermodynamic concept of entropy. The Carnot engine is the most efficient heat engine which is theoretically possible. The efficiency depends only upon the absolute temperatures of the hot and cold heat reservoirs between which it operates. A heat engine acts by transferring energy from a warm region to a cool region of space and, in the process, converting some of that energy to mechanical work. The cycle may also be reversed. The system may be worked upon by an external force, and in the process, it can transfer thermal energy from a cooler system to a warmer one, thereby acting as a refrigerator or heat pump rather than a heat engine. Every thermodynamic system exists in a particular state. A thermodynamic cycle occurs when a system is taken through a series of different states, and finally returned to its initial state. In the process of going through this cycle, the system may perform work on its surroundings, thereby acting as a heat engine. The Carnot engine is a theoretical construct, useful for exploring the efficiency limits of other heat engines. An actual Carnot engine, however, would be completely impractical to build. Carnot's diagram. In the adjacent diagram, from Carnot's 1824 work, "Reflections on the Motive Power of Fire", there are "two bodies "A" and "B", kept each at a constant temperature, that of "A" being higher than that of "B". These two bodies to which we can give, or from which we can remove the heat without causing their temperatures to vary, exercise the functions of two unlimited reservoirs of caloric. We will call the first the furnace and the second the refrigerator." Carnot then explains how we can obtain motive power, i.e., "work", by carrying a certain quantity of heat from body "A" to body "B". It also acts as a cooler and hence can also act as a refrigerator. Modern diagram. The previous image shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engine. The figure at right shows a block diagram of a generic heat engine, such as the Carnot engine. In the diagram, the "working body" (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat "Q" can be introduced or transmitted to produce work. Carnot had postulated that the fluid body could be any substance capable of expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, air, etc. Although in those early years, engines came in a number of configurations, typically "Q"H was supplied by a boiler, wherein water was boiled over a furnace; "Q"C was typically removed by a stream of cold flowing water in the form of a condenser located on a separate part of the engine. The output work, "W", is transmitted by the movement of the piston as it is used to turn a crank-arm, which in turn was typically used to power a pulley so as to lift water out of flooded salt mines. Carnot defined work as "weight lifted through a height". Carnot cycle. The Carnot cycle when acting as a heat engine consists of the following steps: Carnot's theorem. 
Carnot's theorem is a formal statement of this fact: "No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs." formula_0 Explanation. This maximum efficiency is defined as above, where W is the work done by the system (the energy exiting the system as work), formula_1 is the heat put into the system (the heat energy entering the system), formula_6 is the absolute temperature of the cold reservoir, and formula_3 is the absolute temperature of the hot reservoir. A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient. It is easily shown that the efficiency η is maximum when the entire cyclic process is a reversible process. This means the total entropy of system and surroundings (the entropies of the hot furnace, the "working fluid" of the heat engine, and the cold sink) remains constant when the "working fluid" completes one cycle and returns to its original state. (In the general and more realistic case of an irreversible process, the total entropy of this combined system would increase.) Since the "working fluid" comes back to the same state after one cycle, and entropy of the system is a state function, the change in entropy of the "working fluid" system is 0. Thus, it implies that the total entropy change of the furnace and sink is zero, for the process to be reversible and the efficiency of the engine to be maximum. This derivation is carried out in the next section. The coefficient of performance (COP) of the heat engine is the reciprocal of its efficiency. Efficiency of real heat engines. For a real heat engine, the total thermodynamic process is generally irreversible. The working fluid is brought back to its initial state after one cycle, and thus the change of entropy of the fluid system is 0, but the sum of the entropy changes in the hot and cold reservoir in this one cyclical process is greater than 0. The internal energy of the fluid is also a state variable, so its total change in one cycle is 0. So the total work done by the system W is equal to the net heat put into the system, the sum of formula_1 > 0 taken up and the waste heat formula_2 < 0 given off: W = Qh + Qc (2). For real engines, stages 1 and 3 of the Carnot cycle, in which heat is absorbed by the "working fluid" from the hot reservoir, and released by it to the cold reservoir, respectively, no longer remain ideally reversible, and there is a temperature differential between the temperature of the reservoir and the temperature of the fluid while heat exchange takes place. During heat transfer from the hot reservoir at formula_3 to the fluid, the fluid would have a slightly lower temperature than formula_3, and the process for the fluid may not necessarily remain isothermal. Let formula_4 be the total entropy change of the fluid in the process of intake of heat, ΔSh = ∫ dQh/T (3), where the temperature of the fluid T is always slightly lesser than formula_3 in this process. So, one would get ΔSh ≥ Qh/Th (4). Similarly, at the time of heat rejection from the fluid to the cold reservoir one would have, for the magnitude of the total entropy change formula_5 < 0 of the fluid in the process of expelling heat, −ΔSc = ∫ (−dQc)/T ≤ −Qc/Tc (5), where, during this process of transfer of heat to the cold reservoir, the temperature of the fluid T is always slightly greater than formula_6. We have only considered the magnitude of the entropy change here.
Since the total change of entropy of the fluid system for the cyclic process is 0, we must have ΔSh + ΔSc = 0 (6). Combining (4) and (5) with (6) gives Qh/Th ≤ ΔSh = −ΔSc ≤ −Qc/Tc, that is, Qh/Th ≤ −Qc/Tc (7). Equations (2) and (7) combine to give the efficiency of the real engine: since (7) is equivalent to Qc/Qh ≤ −Tc/Th (recall that Qc < 0), one obtains W/Qh = (Qh + Qc)/Qh = 1 + Qc/Qh ≤ 1 − Tc/Th. For the ideal cycle with a perfect gas as working fluid this bound can also be obtained directly: the two adiabatic legs involve no heat exchange and force the volume ratios of the two isothermal legs to be equal, so the heats exchanged on the isotherms satisfy Qh/Th = −Qc/Tc and the efficiency of the ideal cycle is exactly 1 − Tc/Th. Hence η ≤ ηI = 1 − Tc/Th, where formula_7 is the efficiency of the real engine, and formula_8 is the efficiency of the Carnot engine working between the same two reservoirs at the temperatures formula_3 and formula_6. For the Carnot engine, the entire process is 'reversible', and Equation (7) is an equality. Hence, the efficiency of the real engine is always less than the ideal Carnot engine. Equation (7) signifies that the total entropy of system and surroundings (the fluid and the two reservoirs) increases for the real engine, because (in a surroundings-based analysis) the entropy gain of the cold reservoir as formula_9 flows into it at the fixed temperature formula_6 is greater than the entropy loss of the hot reservoir as formula_10 leaves it at its fixed temperature formula_3. The inequality in Equation (7) is essentially the statement of the Clausius theorem. According to the second of Carnot's theorems (the corollary stated above), "The efficiency of the Carnot engine is independent of the nature of the working substance". The Carnot engine and Rudolf Diesel. In 1892 Rudolf Diesel patented an internal combustion engine inspired by the Carnot engine. Diesel knew a Carnot engine is an ideal that cannot be built, but he thought he had invented a working approximation. His principle was unsound, but in his struggle to implement it he developed a practical Diesel engine. The conceptual problem was how to achieve isothermal expansion in an internal combustion engine, since burning fuel at the highest temperature of the cycle would only raise the temperature further. Diesel's patented solution was: having achieved the highest temperature just by compressing the air, to add a small amount of fuel at a controlled rate, such that heating caused by burning the fuel would be counteracted by cooling caused by air expansion as the piston moved. Hence all the heat from the fuel would be transformed into work during the isothermal expansion, as required by Carnot's theorem. For the idea to work a small mass of fuel would have to be burnt in a huge mass of air. Diesel first proposed a working engine that would compress air to 250 atmospheres at , then cycle to one atmosphere at . 
However, this was well beyond the technological capabilities of the day, since it implied a compression ratio of 60:1. Such an engine, if it could have been built, would have had an efficiency of 73%. (In contrast, the best steam engines of his day achieved 7%.) Accordingly, Diesel sought to compromise. He calculated that, were he to reduce the peak pressure to a less ambitious 90 atmospheres, he would sacrifice only 5% of the thermal efficiency. Seeking financial support, he published the "Theory and Construction of a Rational Heat Engine to Take the Place of the Steam Engine and All Presently Known Combustion Engines" (1893). Endorsed by scientific opinion, including Lord Kelvin, he won the backing of Krupp and . He clung to the Carnot cycle as a symbol. But years of practical work failed to achieve an isothermal combustion engine, nor could have done, since it requires such an enormous quantity of air that it cannot develop enough power to compress it. Furthermore, controlled fuel injection turned out to be no easy matter. Even so, the Diesel engine slowly evolved over 25 years to become a practical high-compression air engine, its fuel injected near the end of the compression stroke and ignited by the heat of compression, capable by 1969 of 40% efficiency. As a macroscopic construct. The Carnot heat engine is, ultimately, a theoretical construct based on an "idealized" thermodynamic system. On a practical human-scale level the Carnot cycle has proven a valuable model, as in advancing the development of the diesel engine. However, on a macroscopic scale limitations placed by the model's assumptions prove it impractical, and, ultimately, incapable of doing any work. As such, per Carnot's theorem, the Carnot engine may be thought as the theoretical limit of macroscopic scale heat engines rather than any practical device that could ever be built. For example, for the isothermal expansion part of the Carnot cycle, the following "infinitesimal" conditions must be satisfied simultaneously at every step in the expansion: Such "infinitesimal" requirements as these (and others) cause the Carnot cycle to take an "infinite amount of time", rendering the production of work impossible. Other practical requirements that make the Carnot cycle impractical to realize include fine control of the gas, and perfect thermal contact with the surroundings (including high and low temperature reservoirs). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
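A small numerical illustration of the efficiency bound derived above: the Python sketch below evaluates the Carnot limit 1 − Tc/Th for a pair of illustrative reservoir temperatures (chosen for this example, not taken from the article) and compares it with a hypothetical real engine operating between the same reservoirs.

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum efficiency 1 - Tc/Th of any heat engine operating between
    reservoirs at absolute temperatures t_hot_k and t_cold_k (kelvin)."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require 0 < Tc < Th in kelvin")
    return 1.0 - t_cold_k / t_hot_k

# Illustrative numbers: a hot reservoir at 800 K and a cold reservoir at 300 K
# give an upper bound of 62.5 % on the efficiency of any engine between them.
eta_max = carnot_efficiency(800.0, 300.0)
print(f"Carnot limit: {eta_max:.1%}")

# A hypothetical real engine between the same reservoirs converts less of the
# absorbed heat Qh into work and rejects the remainder as waste heat Qc.
q_hot = 1000.0                      # J absorbed from the hot reservoir
work = 0.35 * q_hot                 # J of work from the hypothetical real engine
q_cold = q_hot - work               # J rejected to the cold reservoir
print(f"real efficiency: {work / q_hot:.1%}  (<= {eta_max:.1%})")
```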
[ { "math_id": 0, "text": "\\eta_{I}=\\frac{W}{Q_{\\mathrm{H}}}=1-\\frac{T_{\\mathrm{C}}}{T_{\\mathrm{H}}}" }, { "math_id": 1, "text": " Q_\\text{H} " }, { "math_id": 2, "text": " Q_\\text{C} " }, { "math_id": 3, "text": "T_\\text{H}" }, { "math_id": 4, "text": "\\Delta S_\\text{H}" }, { "math_id": 5, "text": " \\Delta S_\\text{C} " }, { "math_id": 6, "text": "T_\\text{C}" }, { "math_id": 7, "text": "\\eta = \\frac{W}{Q_\\text{H}}" }, { "math_id": 8, "text": "\\eta_\\text{I}" }, { "math_id": 9, "text": "Q_\\text{C}" }, { "math_id": 10, "text": "Q_\\text{H}" } ]
https://en.wikipedia.org/wiki?curid=6118
61184298
Bernstein–Vazirani algorithm
Quantum algorithm The Bernstein–Vazirani algorithm, which solves the Bernstein–Vazirani problem, is a quantum algorithm invented by Ethan Bernstein and Umesh Vazirani in 1997. It is a restricted version of the Deutsch–Jozsa algorithm where instead of distinguishing between two different classes of functions, it tries to learn a string encoded in a function. The Bernstein–Vazirani algorithm was designed to prove an oracle separation between complexity classes BQP and BPP. Problem statement. Given an oracle that implements a function formula_0 in which formula_1 is promised to be the dot product between formula_2 and a secret string formula_3 modulo 2, formula_4, find formula_5. Algorithm. Classically, the most efficient method to find the secret string is by evaluating the function formula_6 times with the input values formula_7 for all formula_8: formula_9 In contrast to the classical solution which needs at least formula_6 queries of the function to find formula_5, only one query is needed using quantum computing. The quantum algorithm is as follows: Apply a Hadamard transform to the formula_6 qubit state formula_10 to get formula_11 Next, apply the oracle formula_12 which transforms formula_13. This can be simulated through the standard oracle that transforms formula_14 by applying this oracle to formula_15. (formula_16 denotes addition mod two.) This transforms the superposition into formula_17 Another Hadamard transform is applied to each qubit which makes it so that for qubits where formula_18, its state is converted from formula_19 to formula_20 and for qubits where formula_21, its state is converted from formula_22 to formula_23. To obtain formula_5, a measurement in the standard basis (formula_24) is performed on the qubits. Graphically, the algorithm may be represented by the following diagram, where formula_25 denotes the Hadamard transform on formula_6 qubits: formula_26 The reason that the last state is formula_27 is because, for a particular formula_28, formula_29 Since formula_30 is only true when formula_31, this means that the only non-zero amplitude is on formula_27. So, measuring the output of the circuit in the computational basis yields the secret string formula_5. A generalization of Bernstein–Vazirani problem has been proposed that involves finding one or more secret keys using a probabilistic oracle. This is an interesting problem for which a quantum algorithm can provide efficient solutions with certainty or with a high degree of confidence, while classical algorithms completely fail to solve the problem in the general case. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
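The algorithm can be checked numerically with a plain statevector simulation, without any quantum-computing library: build the n-qubit Hadamard transform, apply the phase oracle (−1)^(s·x), apply the Hadamards again and read off the basis state that carries all the amplitude. The Python/NumPy sketch below does this for a 4-bit secret string; the encoding of bit strings as integers (most significant bit first) and all function names are implementation choices of this example.

```python
import numpy as np

def bernstein_vazirani(s_bits):
    """Statevector simulation of the Bernstein-Vazirani circuit on n = len(s_bits)
    qubits: a single phase-oracle query followed by Hadamards yields |s>."""
    n = len(s_bits)
    dim = 2 ** n
    s_int = int("".join(map(str, s_bits)), 2)       # secret string as an integer

    xs = np.arange(dim)
    # n-qubit Hadamard transform: entry (x, y) is (-1)^(x.y) / sqrt(2^n),
    # where x.y is the bitwise dot product mod 2.  Fine for small n only.
    dots = np.array([[bin(x & y).count("1") % 2 for y in xs] for x in xs])
    hadamard_n = ((-1.0) ** dots) / np.sqrt(dim)

    state = np.zeros(dim)
    state[0] = 1.0                                   # |0...0>
    state = hadamard_n @ state                       # uniform superposition over all x
    oracle = np.array([(-1.0) ** (bin(x & s_int).count("1") % 2) for x in xs])
    state = oracle * state                           # oracle: |x> -> (-1)^(s.x) |x>
    state = hadamard_n @ state                       # interference concentrates on |s>
    outcome = int(np.argmax(np.abs(state) ** 2))     # measurement is deterministic here
    return [int(b) for b in format(outcome, f"0{n}b")]

print(bernstein_vazirani([1, 0, 1, 1]))              # [1, 0, 1, 1] from one oracle query
```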
[ { "math_id": 0, "text": "f\\colon\\{0,1\\}^n\\rightarrow \\{0,1\\}" }, { "math_id": 1, "text": "f(x)" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": " s \\in \\{0,1\\}^n" }, { "math_id": 4, "text": " f(x) = x \\cdot s = x_1s_1 \\oplus x_2s_2 \\oplus \\cdots \\oplus x_ns_n" }, { "math_id": 5, "text": "s" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "x = 2^{i}" }, { "math_id": 8, "text": "i \\in \\{0, 1, ..., n-1\\}" }, { "math_id": 9, "text": "\\begin{align}\nf(1000\\cdots0_n) & = s_1 \\\\\nf(0100\\cdots0_n) & = s_2 \\\\\nf(0010\\cdots0_n) & = s_3 \\\\\n& \\,\\,\\,\\vdots \\\\\nf(0000\\cdots1_n) & = s_n \\\\\n\\end{align}" }, { "math_id": 10, "text": "|0\\rangle^{\\otimes n} " }, { "math_id": 11, "text": "\\frac{1}{\\sqrt{2^{n}}}\\sum_{x=0}^{2^n-1} |x\\rangle." }, { "math_id": 12, "text": "U_f" }, { "math_id": 13, "text": "|x\\rangle \\to (-1)^{f(x)}|x\\rangle" }, { "math_id": 14, "text": "|b\\rangle|x\\rangle \\to |b \\oplus f(x)\\rangle |x\\rangle" }, { "math_id": 15, "text": "\\frac{|0\\rangle - |1\\rangle}{\\sqrt{2}}|x\\rangle" }, { "math_id": 16, "text": "\\oplus" }, { "math_id": 17, "text": "\\frac{1}{\\sqrt{2^{n}}}\\sum_{x=0}^{2^n-1} (-1)^{f(x)} |x\\rangle." }, { "math_id": 18, "text": "s_i = 1" }, { "math_id": 19, "text": "|-\\rangle" }, { "math_id": 20, "text": "|1\\rangle " }, { "math_id": 21, "text": "s_i = 0" }, { "math_id": 22, "text": "|+\\rangle" }, { "math_id": 23, "text": "|0\\rangle " }, { "math_id": 24, "text": "\\{|0\\rangle, |1\\rangle\\}" }, { "math_id": 25, "text": "H^{\\otimes n}" }, { "math_id": 26, "text": "\n |0\\rangle^n \\xrightarrow{H^{\\otimes n }} \\frac{1}{\\sqrt{2^n}} \\sum_{x \\in \\{0,1\\}^n} |x\\rangle \\xrightarrow{U_f} \\frac{1}{\\sqrt{2^n}}\\sum_{x \\in \\{0,1\\}^n}(-1)^{f(x)}|x\\rangle \\xrightarrow{H^{\\otimes n}} \\frac{1}{2^n} \\sum_{x,y \\in \\{0,1\\}^n}(-1)^{f(x) + x\\cdot y}|y\\rangle = |s\\rangle\n" }, { "math_id": 27, "text": "|s\\rangle" }, { "math_id": 28, "text": "y" }, { "math_id": 29, "text": "\n \\frac{1}{2^n}\\sum_{x \\in \\{0,1\\}^n}(-1)^{f(x) + x\\cdot y}\n = \\frac{1}{2^n}\\sum_{x \\in \\{0,1\\}^n}(-1)^{x\\cdot s + x\\cdot y}\n = \\frac{1}{2^n}\\sum_{x \\in \\{0,1\\}^n}(-1)^{x\\cdot (s \\oplus y)}\n = 1 \\text{ if } s \\oplus y = \\vec{0},\\, 0 \\text{ otherwise}.\n" }, { "math_id": 30, "text": "s \\oplus y = \\vec{0}" }, { "math_id": 31, "text": "s = y" } ]
https://en.wikipedia.org/wiki?curid=61184298
61186329
Dasgupta's objective
In the study of hierarchical clustering, Dasgupta's objective is a measure of the quality of a clustering, defined from a similarity measure on the elements to be clustered. It is named after Sanjoy Dasgupta, who formulated it in 2016. Its key property is that, when the similarity comes from an ultrametric space, the optimal clustering for this quality measure follows the underlying structure of the ultrametric space. In this sense, clustering methods that produce good clusterings for this objective can be expected to approximate the ground truth underlying the given similarity measure. In Dasgupta's formulation, the input to a clustering problem consists of similarity scores between certain pairs of elements, represented as an undirected graph formula_0, with the elements as its vertices and with non-negative real weights on its edges. Large weights indicate elements that should be considered more similar to each other, while small weights or missing edges indicate pairs of elements that are not similar. A hierarchical clustering can be described as a tree (not necessarily a binary tree) whose leaves are the elements to be clustered; the clusters are then the subsets of elements descending from each tree node, and the size formula_1 of any cluster formula_2 is its number of elements. For each edge formula_3 of the input graph, let formula_4 denote the weight of edge formula_3 and let formula_5 denote the smallest cluster of a given clustering that contains both formula_6 and formula_7. Then Dasgupta defines the cost of a clustering to be formula_8 The optimal clustering for this objective is NP-hard to find. However, it is possible to find a clustering that approximates the minimum value of the objective in polynomial time by a divisive (top-down) clustering algorithm that repeatedly subdivides the elements using an approximation algorithm for the sparsest cut problem, the problem of finding a partition that minimizes the ratio of the total weight of cut edges to the total number of cut pairs. Equivalently, for purposes of approximation, one may minimize the ratio of the total weight of cut edges to the number of elements on the smaller side of the cut. Using the best known approximation for the sparsest cut problem, the approximation ratio of this approach is formula_9. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
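The cost function is straightforward to evaluate for a given hierarchy. In the Python sketch below a hierarchy is represented as nested tuples whose leaves are the elements (a representation chosen for this example, not part of Dasgupta's formulation), and the cost is the sum over edges of the edge weight times the size of the smallest cluster containing both endpoints. The toy graph illustrates that a hierarchy grouping the two strongly similar pairs first achieves a lower cost.

```python
def leaves(tree):
    """Leaf set of a hierarchy given as nested tuples, e.g. ((0, 1), (2, 3))."""
    if isinstance(tree, tuple):
        return set().union(*(leaves(child) for child in tree))
    return {tree}

def smallest_cluster_size(tree, u, v):
    """Size |C(uv)| of the smallest cluster of the hierarchy containing both u and v."""
    node = tree
    while True:
        child = next((c for c in node if {u, v} <= leaves(c)), None)
        if child is None:
            return len(leaves(node))
        node = child

def dasgupta_cost(tree, weighted_edges):
    """Dasgupta's objective: sum over edges uv of w(uv) * |C(uv)|."""
    return sum(w * smallest_cluster_size(tree, u, v) for u, v, w in weighted_edges)

# Toy similarity graph on four elements and two candidate hierarchies.
edges = [(0, 1, 5.0), (2, 3, 5.0), (1, 2, 1.0)]
good = ((0, 1), (2, 3))              # groups the two similar pairs first
bad = ((0, 2), (1, 3))               # splits the similar pairs
print(dasgupta_cost(good, edges))    # 5*2 + 5*2 + 1*4 = 24
print(dasgupta_cost(bad, edges))     # 5*4 + 5*4 + 1*4 = 44
```

The sketch recomputes leaf sets on every step, which is fine for illustration; an efficient implementation would precompute cluster sizes and lowest common ancestors.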
[ { "math_id": 0, "text": "G=(V,E)" }, { "math_id": 1, "text": "|C|" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "uv" }, { "math_id": 4, "text": "w(uv)" }, { "math_id": 5, "text": "C(uv)" }, { "math_id": 6, "text": "u" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "\\sum_{uv\\in E} w(uv)\\cdot |C(uv)|." }, { "math_id": 9, "text": "O(\\sqrt{\\log n})" } ]
https://en.wikipedia.org/wiki?curid=61186329
61186777
Low-energy plasma-enhanced chemical vapor deposition
Low-energy plasma-enhanced chemical vapor deposition (LEPECVD) is a plasma-enhanced chemical vapor deposition technique used for the epitaxial deposition of thin semiconductor (silicon, germanium and SiGe alloys) films. A remote low energy, high density DC argon plasma is employed to efficiently decompose the gas phase precursors while leaving the epitaxial layer undamaged, resulting in high quality epilayers and high deposition rates (up to 10 nm/s). Working principle. The substrate (typically a silicon wafer) is inserted in the reactor chamber, where it is heated by a graphite resistive heater from the backside. An argon plasma is introduced into the chamber to ionize the precursors' molecules, generating highly reactive radicals which result in the growth of an epilayer on the substrate. Moreover, the bombardment of Ar ions removes the hydrogen atoms adsorbed on the surface of the substrate while introducing no structural damage. The high reactivity of the radicals and the removal of hydrogen from the surface by ion bombardment prevent the typical problems of Si, Ge and SiGe alloy growth by thermal chemical vapor deposition (CVD). Thanks to these effects the growth rate in a LEPECVD reactor depends only on the plasma parameters and the gas fluxes, and it is possible to obtain epitaxial deposition at much lower temperatures compared to a standard CVD tool. LEPECVD reactor. The LEPECVD reactor is divided into three main parts: The substrate is placed at the top of the chamber, facing down toward the plasma source. Heating is provided from the back side by thermal radiation from a resistive graphite heater encapsulated between two boron nitride discs, which improve the temperature uniformity across the heater. Thermocouples are used to measure the temperature above the heater, which is then correlated to that of the substrate by a calibration done with an infrared pyrometer. Typical substrate temperatures for monocrystalline films are 400 °C to 760 °C, for germanium and silicon respectively. The potential of the wafer stage can be controlled by an external power supply, influencing the amount and the energy of radicals impinging on the surface, and is typically kept at 10-15 V with respect to the chamber walls. The process gases are introduced into the chamber through a gas dispersal ring placed below the wafer stage. The gases used in a LEPECVD reactor are silane (SiH4) and germane (GeH4) for silicon and germanium deposition respectively, together with diborane (B2H6) and phosphine (PH3) for p- and n-type doping. Plasma source. The plasma source is the most critical component of a LEPECVD reactor, as the low-energy, high-density plasma is the key difference from a typical PECVD deposition system. The plasma is generated in a source which is attached to the bottom of the chamber. Argon is fed directly into the source, where tantalum filaments are heated to create an electron-rich environment by thermionic emission. The plasma is then ignited by a DC discharge from the heated filaments to the grounded walls of the source. Thanks to the high electron density in the source the voltage required to obtain a discharge is around 20-30 V, resulting in an ion energy of about 10-20 eV, while the discharge current is of the order of several tens of amperes, giving a high ion density. The DC discharge current can be tuned to control the ion density, thus changing the growth rate: in particular at a larger discharge current the ion density is higher, therefore increasing the rate. Plasma confinement. 
The plasma enters the growth chamber through an anode electrically connected to the grounded chamber walls, which is used to focus and stabilize the discharge and the plasma. Further focusing is provided by a magnetic field directed along the chamber's axis, provided by external copper coils wrapped around the chamber. The current flowing through the coils (i.e. the intensity of the magnetic field) can be controlled to change the ion density at the substrate's surface, thus changing the growth rate. Additional coils ("wobblers") are placed around the chamber, with their axis perpendicular to the magnetic field, to continuously sweep the plasma over the substrate, improving the homogeneity of the deposited film. Applications. Thanks to the possibility of changing the growth rate (through the plasma density or gas fluxes) independently from the substrate temperature, both thin films with sharp interfaces and a precision down to the nanometer scale at rates as low as 0.4 nm/s, as well as thick layers (up to 10 um or more) at rates as high as 10 nm/s, can be grown using the same reactor and in the same deposition process. This has been exploited to grow low-loss composition-graded waveguides for NIR and MIR and integrated nanostructures (i.e. quantum well stacks) for NIR optical amplitude modulation. The capability of LEPECVD to grow both very sharp quantum wells on thick buffers in the same deposition step has also been employed to realize high mobility strained Ge channels. Another promising application of the LEPECVD technique is the possibility of growing high aspect ratio, self-assembled silicon and germanium microcrystals on deeply patterned Si substrates. This solves many problems related to heteroepitaxy (i.e. thermal expansion coefficient and crystal lattice mismatch), leading to very high crystal quality, and is possible thanks to the high rates and low temperatures found in a LEPECVD reactor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "^{-9}" } ]
https://en.wikipedia.org/wiki?curid=61186777
61186810
Pan–Tompkins algorithm
Heart rate measuring algorithm used in ECGs The Pan–Tompkins algorithm is commonly used to detect QRS complexes in electrocardiographic signals (ECG). The QRS complex represents the ventricular depolarization and the main spike visible in an ECG signal (see figure). This feature makes it particularly suitable for measuring heart rate, one of the primary ways to assess heart health. In Einthoven's first derivation (lead I) of a physiological heart, the QRS complex is composed of a downward deflection (Q wave), a high upward deflection (R wave) and a final downward deflection (S wave). The Pan–Tompkins algorithm applies a series of filters to highlight the frequency content of this rapid heart depolarization and to remove the background noise. Then, it squares the signal to amplify the QRS contribution, which makes identifying the QRS complex more straightforward. Finally, it applies adaptive thresholds to detect the peaks of the filtered signal. The algorithm was proposed by Jiapu Pan and Willis J. Tompkins in 1985, in the journal IEEE Transactions on Biomedical Engineering. The performance of the method was tested on an annotated arrhythmia database (MIT/BIH) and also evaluated in the presence of noise. Pan and Tompkins reported that 99.3 percent of QRS complexes were correctly detected. Pre-processing. Noise cancellation. As a first step, a band-pass filter is applied to increase the signal-to-noise ratio. A filter bandwidth of 5-15 Hz is suggested to maximize the QRS contribution and reduce muscle noise, baseline wander, powerline interference and the P wave/T wave frequency content. In the original algorithm proposed in 1985, the band-pass filter was obtained with a low-pass filter and a high-pass filter in cascade to reduce the computational cost and allow a real-time detection, while ensuring a 3 dB passband in the 5–12 Hz frequency range, reasonably close to the design goal. For a signal sampled at a frequency of 200 Hz, Pan and Tompkins suggested, in an updated version of their article, the following transfer functions formula_0: a low-pass filter formula_1 and a high-pass filter formula_2. Derivative step. As a third step, a derivative filter is applied to provide information about the slope of the QRS. For a signal sampled at 200 Hz, Pan and Tompkins suggested the following transfer function: formula_3 for a 5-point derivative filter with gain of 0.1 and a processing delay of 2 samples. Squaring and integration. The filtered signal is squared to enhance the dominant peaks (QRSs) and reduce the possibility of erroneously recognizing a T wave as an R peak. Then, a moving average filter is applied to provide information about the duration of the QRS complex. The number of samples to average is chosen in order to average over windows of 150 ms. The signal so obtained is called integrated signal. Decision rules. Fiducial mark. In order to detect a QRS complex, the local peaks of the integrated signal are found. A peak is defined as the point at which the signal changes direction (from an increasing direction to a decreasing direction). After each peak, no peak can be detected in the next 200 ms (i.e. the lockout time). This is a physiological constraint due to the refractory period during which ventricular depolarization cannot occur even in the presence of a stimulus. Thresholds. Each fiducial mark is considered as a potential QRS. 
To reduce the possibility of wrongly selecting a noise peak as a QRS, each peak amplitude is compared to a threshold ("ThresholdI") that takes into account the available information about already detected QRS and the noise level: formula_4 where "NoiseLevelI" is the running estimate of the noise level in the integrated signal and "SignalLevelI" is the running estimate of the signal level in the integrated signal. The threshold is automatically updated after detecting a new peak, based on its classification as signal or noise peak: formula_5(if "PEAKI" is a signal peak) formula_6(if "PEAKI" is a noise peak) where "PEAKI" is the new peak found in the integrated signal. At the beginning of the QRS detection, a 2 seconds learning phase is needed to initialize "SignalLevelI" and "NoiseLevelI" as a percentage of the maximum and average amplitude of the integrated signal, respectively. If a new "PEAKI" is under the "ThresholdI", the noise level is updated. If "PEAKI" is above the "ThresholdI", the algorithm implements a further check before confirming the peak as a true QRS, taking into consideration the information provided by the bandpass filtered signal. In the filtered signal the peak corresponding to the one evaluated on the integrated signal is searched and compared with a threshold, calculated in a similar way to the previous step: formula_7 formula_8(if "PEAKF" is a signal peak) formula_9(if "PEAKF" is a noise peak) where the final F stands for filtered signal. Search back for missed QRS complexes. The algorithm takes into account the possibility of setting too high values of "ThresholdII" and "ThresholdIF." A check is performed to continuously assess the RR intervals (namely the temporal interval between two consecutively QRS peaks) to overcome this issue. The average RR is computed in two ways to consider both regular and irregular heart rhythm. In the first method "RRaverage1" is computed as the mean of the last RR intervals. In the second method "RRaverage2" is computed as the mean of the last RR intervals that fell between the limits specified as: formula_10 formula_11 If no QRS is detected in a window of 166% of the average RR ("RRaverage1" or "RRaverage2", if the heart rhythm is regular or irregular, respectively)"," the algorithm adds the maximal peak in the window as a potential QRS and classify it considering half the values of the thresholds (both "ThresholdII and ThresholdIF"). This check is implemented because the temporal distance between two consecutive beats cannot physiologically change more quickly than this. T wave discrimination. The algorithm takes particularly into consideration the possibility of a false detection of T waves. If a potential QRS falls up to a 160 ms window after the refractory period from the last correctly detected QRS complex, the algorithm evaluates if it could be a T wave with particular high amplitude. In this case, its slope is compared to the one of the precedent QRS complex. If the slope is less than half the previous one, the current QRS is recognized as a T wave and discarded, and it also updates the "NoiseLevel" (both in the filtered signal and the integrated signal). Application. Once the QRS complex is successfully recognized, the heart rate is computed as a function of the distance in seconds between two consecutive QRS complexes (or R peaks): formula_12 where bpm stands for beats per minute. The HR is often used to compute the heart rate variability (HRV) a measure of the variability of the time interval between heartbeats. 
The HR is often used to compute the heart rate variability (HRV), a measure of the variability of the time interval between heartbeats. HRV is used in the clinical field to diagnose and monitor pathological conditions and their treatment, and also in affective computing research to develop new methods for assessing a person's emotional state.
[ { "math_id": 0, "text": "H(z)" }, { "math_id": 1, "text": "H(z)={(1-z^{-5})^{2} \\over (1-z^{-1})^{2}}" }, { "math_id": 2, "text": "H(z)={(-1/32+z^{-16}-z^{-17}+z^{-32}/32)\\over(1-z^{-1})}" }, { "math_id": 3, "text": "H(z)=0.1(-z^{-2}-2z^{-1}+2z^{1}+z^{2})" }, { "math_id": 4, "text": "Threshold_I= NoiseLevel_I + 0.25 (SignalLevel_I - NoiseLevel_I)" }, { "math_id": 5, "text": "SignalLevel_I = 0.125 PEAK_I + 0.875 SignalLevel_I \n" }, { "math_id": 6, "text": "NoiseLevel_I = 0.125 PEAK_I + 0.875 NoiseLevel_I \n" }, { "math_id": 7, "text": "Threshold_F= NoiseLevel_F + 0.25 (SignalLevel_F - NoiseLevel_F)" }, { "math_id": 8, "text": "SignalLevel_F = 0.125 PEAK_F + 0.875 SignalLevel_F \n" }, { "math_id": 9, "text": "NoiseLevel_F = 0.125 PEAK_F + 0.875 NoiseLevel_F \n" }, { "math_id": 10, "text": "RRlow = 92% RRaverage2" }, { "math_id": 11, "text": "RRhigh = 116% RRaverage2" }, { "math_id": 12, "text": "\\mathit{HR}\\ (\\text{bpm})={60 \\over \\mathit{RR}\\ (\\text{s})}" } ]
https://en.wikipedia.org/wiki?curid=61186810
61186816
Variational multiscale method
The variational multiscale method (VMS) is a technique used for deriving models and numerical methods for multiscale phenomena. The VMS framework has mainly been applied to the design of stabilized finite element methods, in which the stability of the standard Galerkin method is not ensured, either because of singular perturbations or because of compatibility conditions on the finite element spaces. Stabilized methods are receiving increasing attention in computational fluid dynamics because they are designed to overcome drawbacks typical of the standard Galerkin method: advection-dominated flow problems and problems in which an arbitrary combination of interpolation functions may yield unstable discretized formulations. The milestone of stabilized methods for this class of problems can be considered to be the Streamline Upwind Petrov-Galerkin (SUPG) method, designed during the 1980s by Brooks and Hughes for convection-dominated flows and for the incompressible Navier–Stokes equations. The variational multiscale method was introduced by Hughes in 1995. Broadly speaking, VMS is a technique used to obtain mathematical models and numerical methods capable of capturing multiscale phenomena; it is usually adopted for problems with a very wide range of scales, which are separated into a number of scale groups. The main idea of the method is to design a sum decomposition of the solution as formula_0, where formula_1 denotes the coarse-scale solution, which is solved for numerically, whereas formula_2 represents the fine-scale solution, which is determined analytically and eliminated from the coarse-scale equation. The abstract framework. Abstract Dirichlet problem with variational formulation. Consider an open bounded domain formula_3 with smooth boundary formula_4, where formula_5 is the number of space dimensions. Denoting by formula_6 a generic second-order, nonsymmetric differential operator, consider the following boundary value problem: formula_7 formula_8 where formula_9 and formula_10 are given functions. Let formula_11 be the Hilbert space of square-integrable functions with square-integrable derivatives: formula_12 Consider the trial solution space formula_13 and the weighting function space formula_14 defined as follows: formula_15 formula_16 The variational formulation of the boundary value problem defined above reads: formula_17, where formula_18 is the bilinear form satisfying formula_19, formula_20 is a bounded linear functional on formula_14, and formula_21 is the formula_22 inner product. Furthermore, the dual operator formula_23 of formula_6 is defined as the differential operator such that formula_24. Variational multiscale method. In the VMS approach, the function spaces are decomposed through a multiscale direct-sum decomposition of both formula_25 and formula_26 into coarse-scale and fine-scale subspaces as: formula_27 and formula_28 Hence, an "overlapping" sum decomposition is assumed for both formula_29 and formula_30 as: formula_31, where formula_1 represents the "coarse" (resolvable) scales and formula_2 the "fine" (subgrid) scales, with formula_32, formula_33, formula_34 and formula_35.
In particular, the following assumptions are made on these functions: formula_36 With this in mind, the variational form can be rewritten as formula_37 and, by using the bilinearity of formula_38 and the linearity of formula_39, formula_40 The last equation yields a coarse-scale and a fine-scale problem: formula_41 formula_42 or, equivalently, considering that formula_19 and formula_20: formula_41 formula_43 By rearranging the second problem as formula_44, the corresponding Euler–Lagrange equation reads: formula_45 which shows that the fine-scale solution formula_2 depends on the strong residual of the coarse-scale equation, formula_46. The fine-scale solution can be expressed in terms of formula_46 through the Green's function formula_47: formula_48 Let formula_49 be the Dirac delta function; by definition, the Green's function is found by solving, formula_50, formula_51 Moreover, it is possible to express formula_2 in terms of a new differential operator formula_52 that approximates the differential operator formula_53 as formula_54 with formula_55. In order to eliminate the explicit dependence of the coarse-scale equation on the sub-grid scale terms, and using the definition of the dual operator, the last expression can be substituted into the second term of the coarse-scale equation: formula_56 Since formula_52 is an approximation of formula_53, the variational multiscale formulation consists in finding an approximate solution formula_57 instead of formula_1. The coarse problem is therefore rewritten as: formula_58 where formula_59 Introducing the form formula_60 and the functional formula_61, the VMS formulation of the coarse-scale equation is rearranged as: formula_62 Since it is commonly not possible to determine both formula_52 and formula_63, one usually adopts an approximation. In this sense, the coarse-scale spaces formula_64 and formula_65 are chosen as finite-dimensional spaces of functions: formula_66 and formula_67 where formula_68 is the finite element space of Lagrangian polynomials of degree formula_69 over the mesh built in formula_70. Note that formula_71 and formula_72 are infinite-dimensional spaces, while formula_73 and formula_74 are finite-dimensional spaces. Let formula_75 and formula_76 be, respectively, approximations of formula_77 and formula_78, and let formula_79 and formula_80 be, respectively, approximations of formula_81 and formula_82. The VMS problem with finite element approximation reads: formula_83 or, equivalently: formula_84 VMS and stabilized methods. Consider an advection–diffusion problem: formula_85 where formula_86 is the diffusion coefficient, with formula_87, and formula_88 is a given advection field. Let formula_89 and formula_90, formula_91, formula_92. Let formula_93, where formula_94 and formula_95. The variational form of the problem above reads: formula_96 where formula_97 Consider a finite element approximation in space of the problem above by introducing the space formula_98 over a grid formula_99 made of formula_100 elements, with formula_101. The standard Galerkin formulation of this problem reads formula_102 Consider a strongly consistent stabilization method for the problem above in a finite element framework: formula_103 for a suitable form formula_104 that satisfies: formula_105 The form formula_104 can be expressed as formula_106, where formula_107 is a differential operator chosen as: formula_108 and formula_109 is the stabilization parameter. A stabilized method with formula_110 is typically referred to as a multiscale stabilized method.
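As a concrete illustration of the stabilized formulations introduced in this subsection, the following sketch assembles piecewise-linear finite elements for the one-dimensional advection–diffusion model problem and adds the stabilization term built from the advective part of the operator and the element residual; for linear elements the second-derivative part of the residual vanishes element-wise, so the SUPG, GLS and multiscale variants coincide in this setting. The data values and the classical one-dimensional choice of the stabilization parameter are assumptions of the example, not prescriptions of the method.

```python
import numpy as np

# -mu u'' + b u' = f on (0,1), u(0) = u(1) = 0, with constant data (assumed values)
mu, b, f = 0.001, 1.0, 1.0           # advection dominated: local Peclet number >> 1
n = 20                               # number of P1 elements
h = 1.0 / n
Pe = abs(b) * h / (2.0 * mu)
tau = (h / (2.0 * abs(b))) * (1.0 / np.tanh(Pe) - 1.0 / Pe)   # classical 1-D parameter

K = np.zeros((n + 1, n + 1))
F = np.zeros(n + 1)

# local element matrices for linear shape functions on an element of size h
A_diff = (mu / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
A_adv = (b / 2.0) * np.array([[-1.0, 1.0], [-1.0, 1.0]])
A_stab = (tau * b**2 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # (b v', tau b u')
F_gal = f * np.array([h / 2.0, h / 2.0])
F_stab = tau * b * f * np.array([-1.0, 1.0])                        # (b v', tau f)

for e in range(n):                    # assembly
    dofs = [e, e + 1]
    K[np.ix_(dofs, dofs)] += A_diff + A_adv + A_stab
    F[dofs] += F_gal + F_stab

# homogeneous Dirichlet boundary conditions
K[0, :], K[:, 0], K[0, 0], F[0] = 0.0, 0.0, 1.0, 0.0
K[n, :], K[:, n], K[n, n], F[n] = 0.0, 0.0, 1.0, 0.0

u = np.linalg.solve(K, F)
# stays bounded; with tau = 0 the plain Galerkin solution oscillates for Pe >> 1
print("max |u_h| =", np.abs(u).max())
```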
In 1995, Thomas J.R. Hughes showed that a stabilized method of multiscale type can be viewed as a sub-grid scale model in which the stabilization parameter is equal to formula_111 or, in terms of the Green's function, formula_112 which yields the following definition of formula_109: formula_113 Stabilization parameter properties. For the 1-d advection-diffusion problem, with an appropriate choice of basis functions and formula_114, VMS provides a projection in the approximation space. Further, an adjoint-based expression for formula_114 can be derived, formula_115 where formula_116 is the element-wise stabilization parameter, formula_117 is the element-wise residual, and the adjoint variable formula_118 solves the problem formula_119 In fact, one can show that the formula_114 thus calculated allows one to compute the linear functional formula_120 exactly. VMS turbulence modeling for large-eddy simulations of incompressible flows. The idea of VMS turbulence modeling for large eddy simulations (LES) of the incompressible Navier–Stokes equations was introduced by Hughes et al. in 2000; the main idea was to use variational projections instead of classical filtering techniques. Incompressible Navier–Stokes equations. Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density formula_121 in a domain formula_122 with boundary formula_123, where formula_124 and formula_125 are the portions of the boundary on which a Dirichlet and a Neumann boundary condition are applied, respectively (formula_126): formula_127 where formula_128 is the fluid velocity, formula_129 the fluid pressure, formula_130 a given forcing term, formula_131 the outward-directed unit normal vector to formula_125, and formula_132 the viscous stress tensor defined as: formula_133 Let formula_134 be the dynamic viscosity of the fluid, formula_135 the second-order identity tensor and formula_136 the strain-rate tensor defined as: formula_137 The functions formula_138 and formula_139 are given Dirichlet and Neumann boundary data, while formula_140 is the initial condition. Global space-time variational formulation. In order to find a variational formulation of the Navier–Stokes equations, consider the following infinite-dimensional spaces: formula_141 formula_142 formula_143 Furthermore, let formula_144 and formula_145. The weak form of the unsteady incompressible Navier–Stokes equations reads: given formula_146, formula_147 formula_148 where formula_21 represents the formula_22 inner product and formula_149 the formula_150 inner product. Moreover, the bilinear forms formula_151, formula_152 and the trilinear form formula_153 are defined as follows: formula_154 Finite element method for space discretization and VMS-LES modeling. In order to discretize the Navier–Stokes equations in space, consider the finite element function space formula_155 of piecewise Lagrangian polynomials of degree formula_69 over the domain formula_70, triangulated with a mesh formula_156 made of tetrahedra of diameter formula_157, formula_158. Following the approach shown above, let us introduce a multiscale direct-sum decomposition of the space formula_159, which represents either formula_160 or formula_161: formula_162 where formula_163 is the finite-dimensional function space associated with the coarse scale, and formula_164 is the infinite-dimensional fine-scale function space, with formula_165, formula_166 and formula_167.
An overlapping sum decomposition is then defined as: formula_168 By using the decomposition above in the variational form of the Navier–Stokes equations, one obtains a coarse-scale and a fine-scale equation; the fine-scale terms appearing in the coarse-scale equation are integrated by parts and the fine-scale variables are modeled as: formula_169 In the expressions above, formula_170 and formula_171 are the residuals of the momentum equation and of the continuity equation in strong form, defined as: formula_172 while the stabilization parameters are set equal to: formula_173 where formula_174 is a constant depending on the polynomial degree formula_175, formula_176 is a constant equal to the order of the backward differentiation formula (BDF) adopted as the temporal integration scheme and formula_177 is the time step. The semi-discrete variational multiscale formulation (VMS-LES) of the incompressible Navier–Stokes equations reads: given formula_146, formula_178 where formula_179 and formula_180 The forms formula_181 and formula_182 are defined as: formula_183 From the expressions above, one can see that formula_184 collects the standard Galerkin terms of the Navier–Stokes weak formulation, while formula_185 collects the additional terms arising from the modeling of the unresolved fine scales, namely the SUPG, VMS and LES contributions highlighted in its definition.
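The element-wise stabilization parameters defined above translate directly into code; the following helper is a minimal sketch, and the numerical values used in the call (mesh size, velocity magnitude, density, viscosity, time step) are placeholders chosen only for illustration.

```python
import numpy as np

def vms_les_parameters(h_k, u_norm, rho, mu, dt, r=1, sigma=2):
    """Element-wise stabilization parameters tau_M and tau_C as defined above.
    r is the polynomial degree and sigma the order of the BDF time scheme."""
    C_r = 60.0 * 2.0 ** (r - 2)                       # constant depending on the degree r
    tau_M = (sigma**2 * rho**2 / dt**2
             + rho**2 * u_norm**2 / h_k**2
             + mu**2 * C_r / h_k**4) ** (-0.5)
    tau_C = h_k**2 / tau_M
    return tau_M, tau_C

# placeholder data (assumptions): water-like fluid on a coarse mesh
tau_M, tau_C = vms_les_parameters(h_k=0.01, u_norm=1.0, rho=1000.0, mu=1.0e-3,
                                  dt=1.0e-3, r=1, sigma=2)
print("tau_M =", tau_M, " tau_C =", tau_C)
```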
[ { "math_id": 0, "text": " u = \\bar u + u' " }, { "math_id": 1, "text": " \\bar u " }, { "math_id": 2, "text": " u' " }, { "math_id": 3, "text": "\\Omega \\subset \\mathbb R^d " }, { "math_id": 4, "text": "\\Gamma \\subset \\mathbb R^{d-1} " }, { "math_id": 5, "text": " d \\geq 1 " }, { "math_id": 6, "text": " \\mathcal L " }, { "math_id": 7, "text": "\n\\text{find } u: \\Omega \\to \\mathbb R \\text{ such that}:\n" }, { "math_id": 8, "text": "\n\\begin{cases}\n\\mathcal L u = f & \\text{ in } \\Omega \\\\\nu = g & \\text{ on } \\Gamma\\\\\n\\end{cases}\n" }, { "math_id": 9, "text": "f: \\Omega \\to \\mathbb R " }, { "math_id": 10, "text": "g: \\Gamma \\to \\mathbb R " }, { "math_id": 11, "text": " H^1 (\\Omega)" }, { "math_id": 12, "text": "\nH^1(\\Omega)= \\{ f \\in L^2(\\Omega): \\nabla f \\in L^2 (\\Omega)\\}.\n" }, { "math_id": 13, "text": " \\mathcal V_g " }, { "math_id": 14, "text": " \\mathcal V " }, { "math_id": 15, "text": "\n\\mathcal V_g = \\{u \\in H^1 (\\Omega): \\, u=g \\text{ on } \\Gamma \\},\n" }, { "math_id": 16, "text": "\n\\mathcal V = H_0^1(\\Omega) = \\{v \\in H^1 (\\Omega): \\, v=0 \\text{ on } \\Gamma \\}.\n" }, { "math_id": 17, "text": " \\text{find } u \\in \\mathcal V_g \\text{ such that: } a(v, u) = f(v) \\, \\, \\, \\, \\forall v \\in \\mathcal V " }, { "math_id": 18, "text": " a(v, u) " }, { "math_id": 19, "text": " a(v, u) = (v, \\mathcal L u) " }, { "math_id": 20, "text": " f(v)=(v, f) " }, { "math_id": 21, "text": " (\\cdot, \\cdot) " }, { "math_id": 22, "text": " L^2(\\Omega) " }, { "math_id": 23, "text": " \\mathcal L^* " }, { "math_id": 24, "text": " \\mathcal (v, \\mathcal L u) = (\\mathcal L^*v, u)\\, \\, \\, \\forall u, \\, v \\in \\mathcal V " }, { "math_id": 25, "text": "\\mathcal V_g" }, { "math_id": 26, "text": "\\mathcal V " }, { "math_id": 27, "text": " \n\\mathcal V= \\bar {\\mathcal V} \\oplus \\mathcal V' \n" }, { "math_id": 28, "text": " \n\\mathcal V_g= \\bar {\\mathcal V_g} \\oplus \\mathcal V_g'. 
\n" }, { "math_id": 29, "text": " u " }, { "math_id": 30, "text": " v " }, { "math_id": 31, "text": " u = \\bar{u} + u' \\text{ and } v = \\bar{v} + v'" }, { "math_id": 32, "text": " \\bar{u} \\in \\bar{\\mathcal V_g} " }, { "math_id": 33, "text": " {u'} \\in {\\mathcal V_g}' " }, { "math_id": 34, "text": " \\bar{v} \\in \\bar{\\mathcal V} " }, { "math_id": 35, "text": " v' \\in {\\mathcal V}' " }, { "math_id": 36, "text": "\n\\begin{align}\n\\bar u = g & & \\text{ on } \\Gamma & & \\forall& \\bar u \\in \\bar{\\mathcal{V_g}}, \\\\\nu' = 0 & & \\text{ on } \\Gamma & & \\forall& u' \\in {\\mathcal{V_g}}', \\\\\n\\bar v = 0 & & \\text{ on } \\Gamma & & \\forall& \\bar v \\in \\bar{\\mathcal{V}}, \\\\\nv' = 0 & & \\text{ on } \\Gamma & & \\forall& v' \\in {\\mathcal{V}}'.\n\\end{align}\n" }, { "math_id": 37, "text": "\na(\\bar v + v', \\bar u + u') = f (\\bar v + v')\n" }, { "math_id": 38, "text": " a (\\cdot , \\cdot) " }, { "math_id": 39, "text": " f (\\cdot) " }, { "math_id": 40, "text": "\na (\\bar v, \\bar u) + a (\\bar v, u') + a (v', \\bar u) + a(v', u') = f(\\bar v) + f (v').\n" }, { "math_id": 41, "text": "\n\\text{find } \\bar u \\in \\bar{\\mathcal V}_g \\text{ and } u' \\in \\mathcal V' \\text{ such that: }\n" }, { "math_id": 42, "text": "\n\\begin{align}\n& & a (\\bar v, \\bar u) + a (\\bar v, u') &= f(\\bar v) & & \\forall \\bar v \\in \\bar{\\mathcal V} & \\, \\, \\, \\, \\text{coarse-scale problem}\\\\\n& & a (v', \\bar u) + a (v', u') &= f( v') & & \\forall v' \\in {\\mathcal V}' & \\, \\, \\, \\, \\text{fine-scale problem} \\\\\n\\end{align}\n" }, { "math_id": 43, "text": "\n\\begin{align}\n& & (\\bar v, \\mathcal L \\bar u) + (\\bar v, \\mathcal L u') &= (\\bar v, f) & & \\forall \\bar v \\in \\bar{\\mathcal V}, \\\\\n& & (v', \\mathcal L \\bar u) + (v', \\mathcal L u') &= ( v', f) & & \\forall v' \\in {\\mathcal V}'.\\\\\n\\end{align}\n" }, { "math_id": 44, "text": " (v', \\mathcal L u') = - (v', \\mathcal L \\bar u - f) " }, { "math_id": 45, "text": "\n\\begin{cases}\n\\mathcal L u' = - (\\mathcal L \\bar u - f) & \\text{ in } \\Omega \\\\\nu' = 0 & \\text{ on } \\Gamma\n\\end{cases}\n" }, { "math_id": 46, "text": " \\mathcal L \\bar u - f " }, { "math_id": 47, "text": "G: \\Omega \\times \\Omega \\to \\mathbb R \\text{ with } G=0 \\text{ on } \\Gamma \\times \\Gamma " }, { "math_id": 48, "text": "\nu'(y) = - \\int_\\Omega G(x, y) (\\mathcal L \\bar u - f )(x)\\,d \\Omega_x \\, \\, \\, \\forall y \\in \\Omega.\n" }, { "math_id": 49, "text": " \\delta" }, { "math_id": 50, "text": " \\forall y \\in \\Omega " }, { "math_id": 51, "text": " \n\\begin{cases}\n\\mathcal L^* G(x, y) = \\delta (x-y) & \\text{ in } \\Omega \\\\\nG(x,y)=0 & \\text{ on } \\Gamma\n\\end{cases}\n" }, { "math_id": 52, "text": " \\mathcal M " }, { "math_id": 53, "text": " - \\mathcal L^{-1} " }, { "math_id": 54, "text": " \nu' = \\mathcal M (\\mathcal L \\bar u - f),\n" }, { "math_id": 55, "text": " \\mathcal M \\approx - \\mathcal L^{-1}" }, { "math_id": 56, "text": "\n(\\bar v, \\mathcal L u') = (\\mathcal L^* \\bar v, u') = (\\mathcal L^* \\bar v, \\mathcal M (\\mathcal L \\bar u - f)).\n" }, { "math_id": 57, "text": " \\tilde{\\bar u} \\approx \\bar u " }, { "math_id": 58, "text": "\n\\text{find } \\tilde{\\bar u} \\in \\mathcal{\\bar V}_g: \\; \\; \\; a (\\bar v, \\tilde{\\bar u}) + (\\mathcal L^* \\bar v, \\mathcal M (\\mathcal L \\tilde{\\bar u} - f)) = (\\bar v, f) \\; \\; \\; \\forall \\bar{v} \\in \\mathcal {\\bar V},\n" }, { "math_id": 59, "text": "\n(\\mathcal L^* \\bar v, \\mathcal M 
(\\mathcal L \\tilde{\\bar u} - f)) = - \\int_{\\Omega} \\int_{\\Omega} (\\mathcal L^* \\bar v)(y) G(x, y) (\\mathcal L \\tilde{\\bar u} - f)(x) \\,d \\Omega_x \\,d\\Omega_y.\n" }, { "math_id": 60, "text": " \nB(\\bar v, \\tilde{\\bar u}, G) = a (\\bar v, \\tilde{\\bar u}) + (\\mathcal L^* \\bar v, \\mathcal M (\\mathcal L \\tilde{\\bar u})) \n" }, { "math_id": 61, "text": " \nL(\\bar v, G)= (\\bar v, f) + (\\mathcal L^* \\bar v, \\mathcal M f) \n" }, { "math_id": 62, "text": "\n\\text{find } \\tilde{\\bar u} \\in \\mathcal{\\bar V}_g: \\, B(\\bar v, \\tilde{\\bar u}, G) = L(\\bar v, G) \\, \\, \\, \\forall \\bar{v} \\in \\mathcal {\\bar V}.\n" }, { "math_id": 63, "text": " G " }, { "math_id": 64, "text": "\\bar{\\mathcal V}_g" }, { "math_id": 65, "text": "\\bar{\\mathcal V}" }, { "math_id": 66, "text": "\n\\bar{\\mathcal V}_g \\equiv \\mathcal V_{g_h} : = \\mathcal V_g \\cap X_r^h(\\Omega) \n" }, { "math_id": 67, "text": "\n \\bar{\\mathcal V} \\equiv \\mathcal V_{h} : = \\mathcal V \\cap X_h^r(\\Omega),\n" }, { "math_id": 68, "text": "X_r^h(\\Omega)" }, { "math_id": 69, "text": " r \\geq 1 " }, { "math_id": 70, "text": " \\Omega " }, { "math_id": 71, "text": " \\mathcal{V}_g' " }, { "math_id": 72, "text": " \\mathcal{V}' " }, { "math_id": 73, "text": " \\mathcal{V}_{g_h} " }, { "math_id": 74, "text": " \\mathcal{V}_h " }, { "math_id": 75, "text": " u_h \\in \\mathcal V_{g_h} " }, { "math_id": 76, "text": " v_h \\in \\mathcal V_{h} " }, { "math_id": 77, "text": " \\tilde{\\bar u} " }, { "math_id": 78, "text": " {\\bar v} " }, { "math_id": 79, "text": " \\tilde G " }, { "math_id": 80, "text": " \\tilde{\\mathcal M}" }, { "math_id": 81, "text": " G " }, { "math_id": 82, "text": " {\\mathcal M}" }, { "math_id": 83, "text": "\n\\text{find } u_h \\in \\mathcal V_{g_h}: B(v_h, u_h, \\tilde G) = L( v_h, \\tilde G) \\, \\, \\, \\forall {v}_h \\in \\mathcal { V}_h\n" }, { "math_id": 84, "text": "\n\\text{find } u_h \\in \\mathcal V_{g_h}: a (v_h, u_h) + (\\mathcal L^* v_h, \\mathcal {\\tilde{M}} (\\mathcal L { u_h} - f)) = ( v_h, f) \\, \\, \\, \\forall {v}_h \\in \\mathcal { V}_h\n" }, { "math_id": 85, "text": "\n\\begin{cases}\n-\\mu \\Delta u + \\boldsymbol b \\cdot \\nabla u = f & \\text{ in } \\Omega \\\\\nu=0 & \\text{ on } \\partial \\Omega\n\\end{cases}\n" }, { "math_id": 86, "text": " \\mu \\in \\mathbb R " }, { "math_id": 87, "text": " \\mu>0 " }, { "math_id": 88, "text": " \\boldsymbol b \\in \\mathbb R^d " }, { "math_id": 89, "text": " \\mathcal{V}= H^1_0(\\Omega) " }, { "math_id": 90, "text": " u \\in \\mathcal V " }, { "math_id": 91, "text": " \\boldsymbol b \\in [L^2(\\Omega)]^d " }, { "math_id": 92, "text": " f \\in L^2(\\Omega) " }, { "math_id": 93, "text": " \\mathcal L = \\mathcal L_{diff} + \\mathcal L_{adv} " }, { "math_id": 94, "text": " \\mathcal L_{diff} = - \\mu \\Delta " }, { "math_id": 95, "text": " \\mathcal L_{adv} = \\boldsymbol b \\cdot \\nabla " }, { "math_id": 96, "text": "\n\\text{find} \\, u \\in \\mathcal V: \\; \\; \\; a(v, u) = (f, v) \\; \\; \\; \\forall v \\in \\mathcal V,\n" }, { "math_id": 97, "text": "\na(v, u) = (\\nabla v, \\mu \\nabla u) + (v, \\boldsymbol b \\cdot \\nabla u).\n" }, { "math_id": 98, "text": " \\mathcal V_h = \\mathcal V \\cap X_h^r " }, { "math_id": 99, "text": " \\Omega_h = \\bigcup_{k=1}^{N} \\Omega_k " }, { "math_id": 100, "text": " N " }, { "math_id": 101, "text": " u_h \\in \\mathcal V_h " }, { "math_id": 102, "text": "\n\\text{find } u_h \\in \\mathcal V_h: \\; \\; \\; a(v_h, u_h) = (f, v_h) \\; \\; \\; \\forall v \\in \\mathcal 
V,\n" }, { "math_id": 103, "text": "\n\\text{ find } u_h \\in \\mathcal V_h: \\, \\, \\, a(v_h, u_h) + \\mathcal L_h (u_h, f; v_h)= (f, v_h) \\, \\, \\, \\forall v_h \\in \\mathcal V_h\n" }, { "math_id": 104, "text": " \\mathcal L_h " }, { "math_id": 105, "text": "\n\\mathcal L_h (u, f; v_h) = 0 \\, \\, \\, \\forall v_h \\in \\mathcal V_h.\n" }, { "math_id": 106, "text": " (\\mathbb L v_h, \\tau (\\mathcal L u_h - f))_{\\Omega_h} " }, { "math_id": 107, "text": " \\mathbb L " }, { "math_id": 108, "text": "\n\\mathbb L=\n\\begin{cases}\n+ \\mathcal L & \\, \\, \\, & \\text{ Galerkin/least squares (GLS)} \\\\\n+ \\mathcal L_{adv} & \\, \\, \\, & \\text{ Streamline Upwind Petrov-Galerkin (SUPG)} \\\\\n- \\mathcal L^* & \\, \\, \\, & \\text{ Multiscale} \\\\\n\\end{cases}\n" }, { "math_id": 109, "text": " \\tau " }, { "math_id": 110, "text": " \\mathbb L = -\\mathcal L^* " }, { "math_id": 111, "text": " \n\\tau= - \\tilde{\\mathcal M} \\approx - \\mathcal M \n" }, { "math_id": 112, "text": "\n\\tau \\delta (x-y) = \\tilde G(x, y) \\approx G(x,y),\n" }, { "math_id": 113, "text": "\n\\tau = \\frac{1}{|\\Omega_k|} \\int_{\\Omega_k} \\int_{\\Omega_k} G(x, y) \\,d\\Omega_x \\,d\\Omega_y.\n" }, { "math_id": 114, "text": "\\tau" }, { "math_id": 115, "text": "\n\\tau_e = -\\frac{\\mathcal L(\\tilde{z}, u_h)_e}{(\\phi_{e}L_h(u_h)),L^*(\\tilde{z}))_e}\n" }, { "math_id": 116, "text": "\\tau_e" }, { "math_id": 117, "text": "\\mathcal L(\\tilde{z}, u_h)_e" }, { "math_id": 118, "text": "\\tilde{z}" }, { "math_id": 119, "text": "\n\\mathcal a(\\tilde{z}, v) + L_h(\\tilde{z}, v) = \\int_{\\Omega_e} v \\, dx\n" }, { "math_id": 120, "text": " \\int_{\\Omega} u \\, dx " }, { "math_id": 121, "text": " \\rho " }, { "math_id": 122, "text": " \\Omega \\in \\mathbb R^d " }, { "math_id": 123, "text": " \\partial \\Omega = \\Gamma_D \\cup \\Gamma_N " }, { "math_id": 124, "text": " \\Gamma_D " }, { "math_id": 125, "text": " \\Gamma_N " }, { "math_id": 126, "text": " \\Gamma_D \\cap \\Gamma_N = \\emptyset " }, { "math_id": 127, "text": "\n\\begin{cases}\n\\rho \\dfrac{\\partial \\boldsymbol u}{\\partial t} + \\rho (\\boldsymbol u \\cdot \\nabla) \\boldsymbol u - \\nabla \\cdot \\boldsymbol \\sigma (\\boldsymbol u, p) = \\boldsymbol f & \\text{ in } \\Omega \\times (0, T) \\\\\n\\nabla \\cdot \\boldsymbol u = 0 & \\text{ in } \\Omega \\times (0, T) \\\\\n\\boldsymbol u = \\boldsymbol g & \\text{ on } \\Gamma_D \\times (0, T) \\\\\n \\sigma (\\boldsymbol u, p) \\boldsymbol{\\hat n} = \\boldsymbol h & \\text{ on } \\Gamma_N \\times (0, T) \\\\\n\\boldsymbol u(0)= \\boldsymbol u_0 & \\text{ in } \\Omega \\times \\{ 0\\}\n\\end{cases}\n" }, { "math_id": 128, "text": " \\boldsymbol u " }, { "math_id": 129, "text": " p " }, { "math_id": 130, "text": " \\boldsymbol f " }, { "math_id": 131, "text": " \\boldsymbol{\\hat n} " }, { "math_id": 132, "text": " \\boldsymbol \\sigma (\\boldsymbol u, p) " }, { "math_id": 133, "text": "\n\\boldsymbol \\sigma (\\boldsymbol u, p) = -p \\boldsymbol I + 2 \\mu \\boldsymbol \\epsilon (\\boldsymbol u).\n" }, { "math_id": 134, "text": " \\mu " }, { "math_id": 135, "text": " \\boldsymbol I " }, { "math_id": 136, "text": " \\boldsymbol \\epsilon (\\boldsymbol u) " }, { "math_id": 137, "text": "\n\\boldsymbol \\epsilon (\\boldsymbol u) = \\frac{1}{2} ((\\nabla \\boldsymbol u) + (\\nabla \\boldsymbol u)^T).\n" }, { "math_id": 138, "text": " \\boldsymbol g " }, { "math_id": 139, "text": " \\boldsymbol h " }, { "math_id": 140, "text": " \\boldsymbol u_0 " }, { "math_id": 141, "text": " \n\\mathcal V_g= 
\\{ \\boldsymbol u \\in [H^1(\\Omega)]^d : \\boldsymbol u = \\boldsymbol g \\text{ on } \\Gamma_D \\}, \n" }, { "math_id": 142, "text": " \n\\mathcal V_0 = [H^1_0(\\Omega)]^d=\\{ \\boldsymbol u \\in [H^1(\\Omega)]^d : \\boldsymbol u = \\boldsymbol 0 \\text{ on } \\Gamma_D \\}, \n" }, { "math_id": 143, "text": " \n\\mathcal Q =L^2(\\Omega).\n" }, { "math_id": 144, "text": " \\boldsymbol \\mathcal V_g = \\mathcal V_g \\times \\mathcal Q " }, { "math_id": 145, "text": " \\boldsymbol \\mathcal V_0 = \\mathcal V_0 \\times \\mathcal Q " }, { "math_id": 146, "text": " \\boldsymbol u_{0}" }, { "math_id": 147, "text": "\n\\forall t \\in (0, T), \\; \\text{find } (\\boldsymbol u, p) \\in \\boldsymbol \\mathcal V_g \\text{ such that } \n" }, { "math_id": 148, "text": "\n\\begin{align}\n\\bigg( \\boldsymbol v, \\rho \\dfrac{\\partial \\boldsymbol u}{\\partial t}\\bigg) + a (\\boldsymbol v, \\boldsymbol u) + c (\\boldsymbol v, \\boldsymbol u, \\boldsymbol u) - b (\\boldsymbol v, p) + b (\\boldsymbol u, q) = (\\boldsymbol v, \\boldsymbol f) + (\\boldsymbol v, \\boldsymbol h)_{\\Gamma_N} \\; \\; \\forall (\\boldsymbol v, q) \\in \\boldsymbol \\mathcal V_0\n\\end{align}\n" }, { "math_id": 149, "text": " (\\cdot, \\cdot)_{\\Gamma_N} " }, { "math_id": 150, "text": " L^2(\\Gamma_N) " }, { "math_id": 151, "text": " a(\\cdot, \\cdot) " }, { "math_id": 152, "text": " b(\\cdot, \\cdot) " }, { "math_id": 153, "text": " c(\\cdot, \\cdot, \\cdot) " }, { "math_id": 154, "text": "\n\\begin{align}\na (\\boldsymbol v, \\boldsymbol u) = & (\\nabla \\boldsymbol v, \\mu ((\\nabla \\boldsymbol u) + (\\nabla \\boldsymbol u)^T)), \\\\\nb (\\boldsymbol v, q) = &(\\nabla \\cdot \\boldsymbol v, q), \\\\\nc (\\boldsymbol v, \\boldsymbol u, \\boldsymbol u) = &(\\boldsymbol v, \\rho (\\boldsymbol u \\cdot \\nabla) \\boldsymbol u). 
\n\\end{align}\n" }, { "math_id": 155, "text": "\nX_r^h = \\{u^h \\in C^0 (\\overline\\Omega): u^h|_k \\in \\mathbb P_r, \\; \\forall k \\in \\Tau_h\\}\n" }, { "math_id": 156, "text": " \\Tau_h" }, { "math_id": 157, "text": " h_k " }, { "math_id": 158, "text": " \\forall k \\in \\Tau_h " }, { "math_id": 159, "text": "\\boldsymbol \\mathcal V " }, { "math_id": 160, "text": "\\boldsymbol \\mathcal V_g" }, { "math_id": 161, "text": "\\boldsymbol \\mathcal V_0" }, { "math_id": 162, "text": "\n\\boldsymbol \\mathcal V = \\boldsymbol \\mathcal V_h \\oplus \\boldsymbol \\mathcal V',\n" }, { "math_id": 163, "text": " \n\\boldsymbol \\mathcal V_h = \\mathcal V_{g_h} \\times \\mathcal Q \\text{ or } \\boldsymbol \\mathcal V_h = \\mathcal V_{0_h} \\times \\mathcal Q \n" }, { "math_id": 164, "text": " \n\\boldsymbol \\mathcal V' = \\mathcal V_g' \\times \\mathcal Q \\text{ or } \\boldsymbol \\mathcal V' = \\mathcal V_0' \\times \\mathcal Q \n" }, { "math_id": 165, "text": " \\mathcal V_{g_h} = \\mathcal V_g \\cap X_r^h " }, { "math_id": 166, "text": " \\mathcal V_{0_h} = \\mathcal V_0 \\cap X_r^h " }, { "math_id": 167, "text": " \\mathcal Q_h = \\mathcal Q \\cap X_r^h " }, { "math_id": 168, "text": "\n\\begin{align}\n& \\boldsymbol u = \\boldsymbol u^h + \\boldsymbol u' \\text{ and } p = p^h + p' \\\\\n& \\boldsymbol v = \\boldsymbol v^h + \\boldsymbol v' \\;\\text{ and } q = q^h + q' \n\\end{align}\n" }, { "math_id": 169, "text": "\n\\begin{align}\n\\boldsymbol u' \\approx & -\\tau_M (\\boldsymbol u^h) \\boldsymbol r_M (\\boldsymbol u^h, p^h), \\\\\np' \\approx & -\\tau_C (\\boldsymbol u^h) \\boldsymbol r_C (\\boldsymbol u^h).\n\\end{align}\n" }, { "math_id": 170, "text": "\\boldsymbol r_M (\\boldsymbol u^h, p^h) " }, { "math_id": 171, "text": " \\boldsymbol r_C (\\boldsymbol u^h) " }, { "math_id": 172, "text": "\n\\begin{align}\n\\boldsymbol r_M (\\boldsymbol u^h, p^h) = & \\rho \\dfrac{\\partial \\boldsymbol u^h}{\\partial t} + \\rho (\\boldsymbol u^h \\cdot \\nabla) \\boldsymbol u^h - \\nabla \\cdot \\boldsymbol \\sigma (\\boldsymbol u^h, p^h) - \\boldsymbol f,\\\\\n\\boldsymbol r_C (\\boldsymbol u^h) = & \\nabla \\cdot \\boldsymbol u^h,\n\\end{align}\n" }, { "math_id": 173, "text": "\n\\begin{align}\n\\tau_M (\\boldsymbol u^h) = & \\bigg ( \\frac{\\sigma^2 \\rho^2}{\\Delta t^2} + \\frac{\\rho^2 }{h_k^2} |\\boldsymbol u^h|^2 + \\frac{\\mu^2}{h_k^4}C_r\\bigg )^{-1/2}, \\\\\n\\tau_C (\\boldsymbol u^h) = & \\frac{h_k^2}{\\tau_M (\\boldsymbol u^h) },\n\\end{align}\n" }, { "math_id": 174, "text": " C_r = 60 \\cdot 2^{r-2}" }, { "math_id": 175, "text": " r " }, { "math_id": 176, "text": " \\sigma " }, { "math_id": 177, "text": " \\Delta t " }, { "math_id": 178, "text": "\n\\forall t \\in (0, T), \\; \\text{find } \\boldsymbol U^h = \\{\\boldsymbol u^h, p^h\\} \\in \\boldsymbol \\mathcal V_{g_h} \\text{ such that } A(\\boldsymbol V^h, \\boldsymbol U^h ) = F(\\boldsymbol V^h) \\; \\; \\forall \\boldsymbol V^h= \\{\\boldsymbol v^h, q^h\\} \\in \\boldsymbol \\mathcal V_{0_h},\n" }, { "math_id": 179, "text": "\nA(\\boldsymbol V^h, \\boldsymbol U^h ) = A^{NS}(\\boldsymbol V^h, \\boldsymbol U^h ) + A^{VMS}(\\boldsymbol V^h, \\boldsymbol U^h ),\n" }, { "math_id": 180, "text": "\nF(\\boldsymbol V^h) = (\\boldsymbol v, \\boldsymbol f) + (\\boldsymbol v, \\boldsymbol h)_{\\Gamma_N}.\n" }, { "math_id": 181, "text": " A^{NS}(\\cdot, \\cdot) " }, { "math_id": 182, "text": " A^{VMS}(\\cdot, \\cdot) " }, { "math_id": 183, "text": "\n\\begin{align}\nA^{NS}(\\boldsymbol V^h, \\boldsymbol U^h )= & \\bigg( \\boldsymbol 
v^h, \\rho \\dfrac{\\partial \\boldsymbol u^h}{\\partial t}\\bigg) + a (\\boldsymbol v^h, \\boldsymbol u^h) + c (\\boldsymbol v^h, \\boldsymbol u^h, \\boldsymbol u^h) - b (\\boldsymbol v^h, p^h) + b (\\boldsymbol u^h, q^h), \\\\\nA^{VMS}(\\boldsymbol V^h, \\boldsymbol U^h ) = & \\underbrace{\\big( \\rho \\boldsymbol u^h \\cdot \\nabla \\boldsymbol v^h + \\nabla q^h, \\tau_M(\\boldsymbol u^h) \\boldsymbol r_M(\\boldsymbol u^h, p^h) \\big)}_{\\text{SUPG}} - \\underbrace{(\\nabla \\cdot \\boldsymbol v^h, \\tau_c(\\boldsymbol u_h)\\boldsymbol r_C(\\boldsymbol u^h)) + \\big( \\rho \\boldsymbol u^h \\cdot (\\nabla \\boldsymbol u^h)^T, \\tau_M(\\boldsymbol u^h) \\boldsymbol r_M (\\boldsymbol u^h, p^h) \\big )}_{\\text{VMS}} - \\underbrace{(\\nabla \\boldsymbol v^h, \\tau_M(\\boldsymbol u^h) \\boldsymbol r_M(\\boldsymbol u^h, p^h) \\otimes \\tau_M(\\boldsymbol u^h) \\boldsymbol r_M(\\boldsymbol u^h, p^h) )}_{\\text{LES}}.\n\\end{align}\n" }, { "math_id": 184, "text": "A^{NS}(\\cdot, \\cdot)" }, { "math_id": 185, "text": "A^{VMS}(\\cdot, \\cdot)" } ]
https://en.wikipedia.org/wiki?curid=61186816
61186824
Streamline upwind Petrov–Galerkin pressure-stabilizing Petrov–Galerkin formulation for incompressible Navier–Stokes equations
Finite element method for Navier–Stokes equations The streamline upwind Petrov–Galerkin pressure-stabilizing Petrov–Galerkin formulation for incompressible Navier–Stokes equations can be used for finite element computations of high Reynolds number incompressible flow using equal-order finite element spaces (i.e. formula_0) by introducing additional stabilization terms in the Navier–Stokes Galerkin formulation. The finite element (FE) numerical computation of the incompressible Navier–Stokes (NS) equations suffers from two main sources of numerical instability arising from the associated Galerkin problem. Equal-order finite elements for pressure and velocity (for example, formula_1) do not satisfy the inf-sup condition and lead to instabilities in the discrete pressure (also called spurious pressure). Moreover, the advection term in the Navier–Stokes equations can produce oscillations in the velocity field (also called spurious velocity). Such spurious velocity oscillations become more evident for advection-dominated (i.e., high Reynolds number formula_2) flows. To control the instabilities arising from the inf-sup condition and from convection dominance, pressure-stabilizing Petrov–Galerkin (PSPG) stabilization along with streamline-upwind Petrov–Galerkin (SUPG) stabilization can be added to the NS Galerkin formulation. The incompressible Navier–Stokes equations for a Newtonian fluid. Let formula_3 be the spatial fluid domain with a smooth boundary formula_4, where formula_5, with formula_6 the subset of formula_7 on which the essential (Dirichlet) boundary conditions are set, while formula_8 is the portion of the boundary where natural (Neumann) boundary conditions are considered. Moreover, formula_9, and formula_10. Introducing an unknown velocity field formula_11 and an unknown pressure field formula_12, in the absence of body forces, the incompressible Navier–Stokes (NS) equations read formula_13 where formula_14 is the outward-directed unit normal vector to formula_8, formula_15 is the Cauchy stress tensor, formula_16 is the fluid density, and formula_17 and formula_18 are the usual gradient and divergence operators. The functions formula_19 and formula_20 indicate suitable Dirichlet and Neumann data, respectively, while formula_21 is the known initial field solution at time formula_22. For a Newtonian fluid, the Cauchy stress tensor formula_15 depends linearly on the components of the strain-rate tensor: formula_23 where formula_24 is the dynamic viscosity of the fluid (taken to be a known constant) and formula_25 is the second-order identity tensor, while formula_26 is the strain-rate tensor formula_27 The first of the NS equations represents the balance of momentum and the second one the conservation of mass, also called the continuity equation (or incompressibility constraint). The vector-valued functions formula_21, formula_19, and formula_20 are assigned. Hence, the strong formulation of the incompressible Navier–Stokes equations for a constant-density, Newtonian and homogeneous fluid can be written as: find, formula_28, the velocity formula_29 and the pressure formula_30 such that: formula_31 where formula_32 is the kinematic viscosity and formula_33 is the pressure rescaled by the density (for the sake of clarity, the hat on the pressure variable will be dropped in what follows).
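As a small worked example of the constitutive relation above, the following snippet evaluates the strain-rate tensor and the resulting Newtonian stress tensor for a given velocity gradient; the gradient, pressure and viscosity values are arbitrary placeholders used only for illustration.

```python
import numpy as np

def cauchy_stress(grad_u, p, mu):
    """Newtonian Cauchy stress sigma(u, p) = -p I + 2 mu S(u),
    with S(u) = (grad u + grad u^T) / 2, for a 3x3 velocity gradient."""
    S = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor
    return -p * np.eye(3) + 2.0 * mu * S

grad_u = np.array([[0.0, 1.0, 0.0],        # simple shear flow (placeholder values)
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
print(cauchy_stress(grad_u, p=101325.0, mu=1.0e-3))
```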
In the NS equations, the Reynolds number indicates how important the nonlinear term, formula_34, is compared to the dissipative term, formula_35: formula_36 The Reynolds number is a measure of the ratio between the advection-convection terms, generated by the inertial forces of the flow, and the diffusion term associated with the viscous forces of the fluid. Thus, formula_37 can be used to discriminate between an advection-convection dominated flow and a diffusion dominated one: for formula_37 much larger than one the flow is advection dominated, while for formula_37 much smaller than one it is diffusion dominated. The weak formulation of the Navier–Stokes equations. The weak formulation is obtained from the strong formulation of the NS equations by multiplying the first two NS equations by test functions formula_38 and formula_39, respectively, belonging to suitable function spaces, and integrating these equations over the fluid domain formula_40. As a consequence: formula_41 By summing the two equations and performing integration by parts on the pressure (formula_42) and viscous (formula_43) terms: formula_44 Regarding the choice of the function spaces, it is enough that formula_45 and formula_39, formula_46 and formula_38, and their derivatives formula_47 and formula_48, are square-integrable functions for the integrals appearing in the above formulation to make sense. Hence, formula_49 Having specified the function spaces formula_50, formula_51 and formula_52, and by applying the boundary conditions, the boundary terms can be rewritten as formula_53 where formula_54. The integral terms on formula_6 vanish because formula_55, while the term on formula_8 becomes formula_56 The weak formulation of the Navier–Stokes equations reads: Find, for all formula_57, formula_58, such that formula_59 with formula_60, where formula_61 Finite element Galerkin formulation of Navier–Stokes equations. In order to solve the NS problem numerically, the weak formulation is first discretized. Consider a triangulation formula_62 of the domain formula_40, composed of tetrahedra formula_63, with formula_64 (where formula_65 is the total number of tetrahedra), and let formula_66 be the characteristic length of the elements of the triangulation. Introducing two families of finite-dimensional sub-spaces formula_67 and formula_68, approximations of formula_50 and formula_52 respectively, depending on a discretization parameter formula_66, with formula_69 and formula_70, formula_71 the discretized-in-space Galerkin problem of the weak NS equations reads: Find, for all formula_57, formula_72, such that formula_73 with formula_74, where formula_75 is the approximation (for example, the interpolant) of formula_76, and formula_77 Time discretization of the discretized-in-space NS Galerkin problem can be performed, for example, by using the second-order backward differentiation formula (BDF2), an implicit second-order multistep method. Divide the finite time interval formula_78 uniformly into formula_79 time steps of size formula_80: formula_81 For a generic function formula_82, denote by formula_83 the approximation of formula_84. Thus, the BDF2 approximation of the time derivative reads as follows: formula_85 The fully discretized (in time and space) NS Galerkin problem is then: Find, for formula_86, formula_87, such that formula_88 with formula_89, and formula_90 is a quantity that will be detailed later in this section. The main issue of a fully implicit method for the NS Galerkin formulation is that the resulting problem is still nonlinear, due to the convective term, formula_91.
Indeed, setting formula_92 leads to a nonlinear system to be solved (for example, by means of the Newton or fixed-point algorithm) at a large computational cost. In order to reduce this cost, it is possible to use a semi-implicit approach with a second-order extrapolation of the velocity, formula_93, in the convective term: formula_94 Finite element formulation and the INF-SUP condition. Let us define the finite element (FE) spaces of continuous functions formula_95 (polynomials of degree formula_96 on each element formula_63 of the triangulation) as formula_97 where formula_98 is the space of polynomials of degree less than or equal to formula_96. Introduce the finite element formulation, as a specific Galerkin problem, by choosing formula_67 and formula_68 as formula_99 The FE spaces formula_67 and formula_68 need to satisfy the inf-sup (or LBB) condition: formula_100 with formula_101 independent of the mesh size formula_102 This property is necessary for the well-posedness of the discrete problem and for the optimal convergence of the method. An example of FE spaces satisfying the inf-sup condition is the so-called Taylor–Hood pair formula_103 (with formula_104), where it can be noticed that the velocity space formula_67 has to be, in some sense, "richer" than the pressure space formula_105 Indeed, the inf-sup condition couples the spaces formula_67 and formula_68, and acts as a compatibility condition between the velocity and pressure spaces. The equal-order finite elements formula_106 (formula_107) do not satisfy the inf-sup condition and lead to instabilities in the discrete pressure (also called spurious pressure). However, formula_106 can still be used with additional stabilization terms, such as streamline upwind Petrov–Galerkin with a pressure-stabilizing Petrov–Galerkin term (SUPG-PSPG). In order to derive the FE algebraic formulation of the fully discretized Galerkin NS problem, it is necessary to introduce two bases for the discrete spaces formula_67 and formula_68, formula_108 in order to expand the variables as formula_109 The coefficients formula_110 (formula_111) and formula_112 (formula_113) are called the degrees of freedom (d.o.f.) of the finite element approximation for the velocity and pressure fields, respectively. The dimensions of the FE spaces, formula_114 and formula_115, are the numbers of d.o.f. of the velocity and pressure fields, respectively. Hence, the total number of d.o.f. formula_116 is formula_117. Since the fully discretized Galerkin problem holds for all elements of the spaces formula_67 and formula_68, it is also valid for the basis functions. Hence, choosing these basis functions as test functions in the fully discretized NS Galerkin problem, and using the bilinearity of formula_118 and formula_119 and the trilinearity of formula_120, the following linear system is obtained: formula_121 where formula_122, formula_123, formula_124, formula_125, and formula_126 are given by formula_127 and formula_128 and formula_129 are the unknown vectors formula_130 The problem is completed by an initial condition on the velocity, formula_131. Moreover, using the semi-implicit treatment formula_132, the trilinear term formula_120 becomes bilinear, and the corresponding matrix is formula_133 Hence, the linear system can be written in terms of a single monolithic matrix (formula_134, also called the monolithic NS matrix) of the form formula_135 where formula_136.
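To make the BDF2 time discretization with the semi-implicit treatment of the convective term concrete, the following sketch applies the same scheme to a scalar model equation with a quadratic (convection-like) nonlinearity, so that every time step reduces to a single linear solve. The model equation and its data are assumptions of the illustration and are not part of the Navier–Stokes discretization itself.

```python
import numpy as np

# model problem:  u'(t) + u(t)^2 = f(t),  with f chosen so that u(t) = exp(-t) is exact
u_exact = lambda t: np.exp(-t)
f = lambda t: -np.exp(-t) + np.exp(-2.0 * t)

T, dt = 1.0, 0.01
nt = int(round(T / dt))
u = np.zeros(nt + 1)
u[0], u[1] = u_exact(0.0), u_exact(dt)     # start-up values (the second taken exact for simplicity)

for n in range(1, nt):
    t_new = (n + 1) * dt
    u_star = 2.0 * u[n] - u[n - 1]         # second-order extrapolation (semi-implicit treatment)
    # BDF2:  (3 u^{n+1} - 4 u^n + u^{n-1}) / (2 dt) + u_star * u^{n+1} = f(t^{n+1})
    u[n + 1] = (f(t_new) + (4.0 * u[n] - u[n - 1]) / (2.0 * dt)) / (3.0 / (2.0 * dt) + u_star)

print("error at t = 1:", abs(u[-1] - u_exact(T)))   # remains second-order accurate in dt
```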
Streamline upwind Petrov–Galerkin formulation for incompressible Navier–Stokes equations. The finite element formulation of the NS equations suffers from two sources of numerical instability: the equal-order pair formula_137 does not satisfy the inf-sup condition, which produces spurious pressure modes, and the convective term becomes dominant for high Reynolds numbers, which produces spurious velocity oscillations. To control the instabilities arising from the inf-sup condition and from convection dominance, pressure-stabilizing Petrov–Galerkin (PSPG) stabilization along with streamline-upwind Petrov–Galerkin (SUPG) stabilization can be added to the NS Galerkin formulation, through the term formula_138 where formula_139 is a positive constant, formula_140 is a stabilization parameter, formula_141 is a generic tetrahedron belonging to the finite element partitioned domain formula_62, formula_142 is the residual of the NS equations, formula_143 and formula_144 is the skew-symmetric part of the NS operator, formula_145 The skew-symmetric part of a generic operator formula_142 is the one for which formula_146 Since it is based on the residual of the NS equations, SUPG-PSPG is a strongly consistent stabilization method. The discretized finite element Galerkin formulation with SUPG-PSPG stabilization can be written as: Find, for all formula_147 formula_148, such that formula_149 with formula_150, where formula_151 and where formula_152 and formula_153 are two stabilization parameters for the momentum and the continuity NS equations, respectively. In addition, the notation formula_154 has been introduced, and formula_93 is defined in agreement with the semi-implicit treatment of the convective term. In the previous expression of formula_155, the term formula_156 is the Brezzi-Pitkaranta stabilization related to the inf-sup condition, while the term formula_157 corresponds to the streamline diffusion stabilization for large formula_37. The other terms are included to obtain a strongly consistent stabilization. Regarding the choice of the stabilization parameters formula_152 and formula_153: formula_158 where formula_159 is a constant obtained from an inverse inequality relation (and formula_160 is the order of the chosen pair formula_0); formula_161 is a constant equal to the order of the time discretization; formula_162 is the time step; and formula_163 is the "element length" (e.g. the element diameter) of a generic tetrahedron belonging to the partitioned domain formula_62. The parameters formula_152 and formula_153 can be obtained by a multidimensional generalization of the optimal value introduced in the literature for the one-dimensional case. Notice that the terms added by the SUPG-PSPG stabilization can be written explicitly as follows: formula_164 formula_165 where, for the sake of clarity, the sum over the tetrahedra has been omitted: all the terms are to be understood as formula_166; moreover, the indices formula_167 in formula_168 refer to the position of the corresponding term in the monolithic NS matrix, formula_134, and formula_169 distinguishes the different terms inside each block formula_170 Hence, the NS monolithic system with the SUPG-PSPG stabilization becomes formula_171 where formula_172, and formula_173. It is well known that SUPG-PSPG stabilization does not exhibit excessive numerical diffusion if at least second-order velocity elements and first-order pressure elements (formula_174) are used.
[ { "math_id": 0, "text": " \\mathbb{P}_k-\\mathbb{P}_k " }, { "math_id": 1, "text": " \\mathbb{P}_k-\\mathbb{P}_k, \\; \\forall k \\ge 0 " }, { "math_id": 2, "text": " Re " }, { "math_id": 3, "text": " \\Omega \\subset \\mathbb{R}^3 " }, { "math_id": 4, "text": " \\partial \\Omega \\equiv \\Gamma " }, { "math_id": 5, "text": " \\Gamma=\\Gamma_N \\cup \\Gamma_D " }, { "math_id": 6, "text": " \\Gamma_D " }, { "math_id": 7, "text": " \\Gamma " }, { "math_id": 8, "text": " \\Gamma_N " }, { "math_id": 9, "text": " \\Gamma_N = \\Gamma \\setminus \\Gamma_D " }, { "math_id": 10, "text": " \\Gamma_N \\cap \\Gamma_D=\\emptyset " }, { "math_id": 11, "text": " \\mathbf{u}(\\mathbf{x},t):\\Omega \\times [0,T] \\rightarrow \\mathbb{R}^3 " }, { "math_id": 12, "text": " p(\\mathbf{x},t):\\Omega \\times [0,T] \\rightarrow \\mathbb{R} " }, { "math_id": 13, "text": "\\begin{cases}\n\\frac{\\partial \\mathbf u}{\\partial t}+( \\mathbf u \\cdot \\nabla ) \\mathbf u - \\frac{1}{\\rho}\\nabla \\cdot \\boldsymbol{\\sigma} (\\mathbf u,p)=\\mathbf 0 & \\text{in } \\Omega \\times (0,T],\n\\\\\n\\nabla \\cdot {\\mathbf u}=0 & \\text{in } \\Omega \\times (0,T],\n\\\\\n\\mathbf u = \\mathbf g & \\text{on } \\Gamma_D \\times (0,T],\n\\\\\n\\boldsymbol{\\sigma} (\\mathbf u,p) \\mathbf{\\hat{n}} = \\mathbf h & \\text{on } \\Gamma_N \\times (0,T],\n\\\\\n\\mathbf{u} (\\mathbf{x},0) = \\mathbf u_0(\\mathbf{x})& \\text{in } \\Omega \\times \\{0\\},\n\\end{cases}" }, { "math_id": 14, "text": " \\mathbf{\\hat{n}} " }, { "math_id": 15, "text": " \\boldsymbol{\\sigma} " }, { "math_id": 16, "text": " \\rho " }, { "math_id": 17, "text": " \\nabla " }, { "math_id": 18, "text": " \\nabla \\cdot " }, { "math_id": 19, "text": " \\mathbf g " }, { "math_id": 20, "text": " \\mathbf h " }, { "math_id": 21, "text": " \\mathbf u_0 " }, { "math_id": 22, "text": " t=0 " }, { "math_id": 23, "text": "\\boldsymbol{\\sigma} (\\mathbf u,p)=-p \\mathbf{I} +2\\mu \\mathbf S(\\mathbf u)," }, { "math_id": 24, "text": " \\mu " }, { "math_id": 25, "text": " \\mathbf{I} " }, { "math_id": 26, "text": " \\mathbf S(\\mathbf u) " }, { "math_id": 27, "text": "\\mathbf S(\\mathbf u)=\\frac{1}{2} \\big[ \\nabla \\mathbf u + (\\nabla \\mathbf u)^T \\big]." 
}, { "math_id": 28, "text": " \\forall t \\in (0,T] " }, { "math_id": 29, "text": " \\mathbf u(\\mathbf{x},t) " }, { "math_id": 30, "text": " p(\\mathbf{x},t) " }, { "math_id": 31, "text": "\\begin{cases}\n\\frac{\\partial \\mathbf u}{\\partial t}+( \\mathbf u \\cdot \\nabla ) \\mathbf u + \\nabla \\hat p -2\\nu \\nabla \\cdot \\mathbf S(\\mathbf u)=\\mathbf 0 & \\text{in } \\Omega \\times (0,T],\n\\\\\n\\nabla \\cdot {\\mathbf u}=0 & \\text{in } \\Omega \\times (0,T],\n\\\\\n\\left( - \\hat p \\mathbf{I} +2\\nu \\mathbf S(\\mathbf u) \\right) \\mathbf{\\hat{n}} = \\mathbf h & \\text{on } \\Gamma_N \\times (0,T],\n\\\\\n\\mathbf u = \\mathbf g & \\text{on } \\Gamma_D \\times (0,T] \\;,\n\\\\\n\\mathbf{u} (\\mathbf{x},0) = \\mathbf u_0(\\mathbf{x}) & \\text{in } \\Omega \\times \\{0\\},\n\\end{cases}" }, { "math_id": 32, "text": " \\nu = \\frac{\\mu}{\\rho} " }, { "math_id": 33, "text": " \\hat p=\\frac{p}{\\rho} " }, { "math_id": 34, "text": " ( \\mathbf u \\cdot \\nabla ) \\mathbf u " }, { "math_id": 35, "text": " \\nu \\nabla \\cdot \\mathbf S(\\mathbf u):" }, { "math_id": 36, "text": "\n\\frac{( \\mathbf u \\cdot \\nabla ) \\mathbf u}{\\nu \\nabla \\cdot \\mathbf S(\\mathbf u)}\\approx \\frac{\\frac{U^2}{L}}{\\nu\\frac{U}{L^2}}=\\frac{UL}{\\nu} = \\mathrm{Re}.\n" }, { "math_id": 37, "text": " \\mathrm{Re} " }, { "math_id": 38, "text": " \\mathbf v " }, { "math_id": 39, "text": " q " }, { "math_id": 40, "text": " \\Omega " }, { "math_id": 41, "text": "\\begin{align}\n& \\int_{\\Omega}\\frac{\\partial \\mathbf u}{\\partial t}\\cdot \\mathbf v\\,d\\Omega+ \\int_{\\Omega}(\\mathbf u \\cdot \\nabla ) \\mathbf u \\cdot \\mathbf v \\,d\\Omega + \\int_{\\Omega}\\nabla p \\cdot \\mathbf v \\,d\\Omega\\,-\\int_{\\Omega}2\\nu \\nabla \\cdot \\mathbf S(\\mathbf u) \\cdot \\mathbf v \\,d\\Omega = 0,\n\\\\\n& \\int_{\\Omega} \\nabla \\cdot \\mathbf u \\, q \\,d\\Omega=0. \n\\end{align} " }, { "math_id": 42, "text": " \\nabla p " }, { "math_id": 43, "text": " \\nabla \\cdot \\mathbf S (\\mathbf u) " }, { "math_id": 44, "text": "\\int_\\Omega \\frac{\\partial \\mathbf u}{\\partial t}\\cdot \\mathbf v\\,d\\Omega+ \\int_{\\Omega}(\\mathbf u \\cdot \\nabla ) \\mathbf u \\cdot \\mathbf v \\,d\\Omega\\,\n+\\int_\\Omega \\nabla \\cdot \\mathbf u \\, q \\,d\\Omega- \\int_{\\Omega}p \\nabla \\cdot \\mathbf v \\,d\\Omega+\\int_{\\partial \\Omega}p \\mathbf v \\cdot \\mathbf{\\hat n} \\,d\\Gamma \\,\n+ \\int_\\Omega 2\\nu \\mathbf S(\\mathbf u) : \\nabla \\mathbf v \\,d\\Omega-\\int_{\\partial \\Omega}2\\nu \\mathbf S(\\mathbf u) \\cdot \\mathbf v \\cdot \\mathbf{\\hat n} \\,d\\Gamma \\, =0. " }, { "math_id": 45, "text": " p " }, { "math_id": 46, "text": " \\mathbf u " }, { "math_id": 47, "text": " \\nabla \\mathbf u " }, { "math_id": 48, "text": " \\nabla \\mathbf v " }, { "math_id": 49, "text": "\\begin{align}\n& \\mathcal{Q} = L^2(\\Omega) = \\left\\{ q \\in \\Omega \\text{ s.t. } \\Vert q\\Vert_{L^2}=\\sqrt{\\int_{\\Omega}{\\vert q \\vert^2 \\ d\\Omega}} < \\infty \\right\\},\n\\\\\n& \\mathcal{V}=\\{ \\mathbf v \\in [L^2(\\Omega)]^3 \\text{ and } \\nabla \\mathbf v \\in [L^2(\\Omega)]^{3 \\times 3}, \\, \\mathbf v|_{\\Gamma_{D}}=\\mathbf g \\},\n\\\\ \n& \\mathcal{V}_0=\\{ \\mathbf v \\in \\mathcal{V} \\text{ s.t. } \\mathbf v|_{\\Gamma_D}=\\mathbf 0\\}. 
\n\\end{align} " }, { "math_id": 50, "text": " \\mathcal{V} " }, { "math_id": 51, "text": " \\mathcal{V}_0 " }, { "math_id": 52, "text": " \\mathcal{Q} " }, { "math_id": 53, "text": "\n\\int_{\\Gamma_D \\cup \\Gamma_N}p \\mathbf v \\cdot \\mathbf{\\hat n} \\,d\\Gamma+ \\int_{\\Gamma_D \\cup \\Gamma_N} -2\\nu S(\\mathbf u) \\cdot \\mathbf v \\cdot \\mathbf{\\hat n} \\,d\\Gamma,\n" }, { "math_id": 54, "text": " \\partial \\Omega=\\Gamma_D \\cup \\Gamma_N " }, { "math_id": 55, "text": " \\mathbf v|_{\\Gamma_D}=\\mathbf 0 " }, { "math_id": 56, "text": "\n\\int_{\\Gamma_{N}} [p \\mathbf I -2\\nu S(\\mathbf u)] \\cdot \\mathbf v \\cdot \\mathbf{\\hat n} \\,d\\Gamma = -\\int_{\\Gamma_{N}} \\mathbf h \\cdot \\mathbf v \\,d\\Gamma,\n" }, { "math_id": 57, "text": " t \\in (0,T] " }, { "math_id": 58, "text": " (\\mathbf u,p)\\in \\{ \\mathcal{V} \\times \\mathcal{Q}\\} " }, { "math_id": 59, "text": " \\left( \\frac{\\partial \\mathbf u}{\\partial t},\\mathbf v \\right)+c(\\mathbf u,\\mathbf u,\\mathbf v)+b(\\mathbf u,q)-b(\\mathbf v,p) + a(\\mathbf u,\\mathbf v) = f(\\mathbf v)" }, { "math_id": 60, "text": " \\mathbf{u}|_{t=0}=\\mathbf{u}_0 " }, { "math_id": 61, "text": "\\begin{align}\n\\left( \\frac{\\partial \\mathbf u}{\\partial t},\\mathbf v \\right)&:=\\int_{\\Omega}\\frac{\\partial \\mathbf u}{\\partial t}\\cdot \\mathbf v\\,d\\Omega,\n\\\\ \nb(\\mathbf u,q)&:=\\int_{\\Omega} \\nabla \\cdot \\mathbf u \\, q \\,d\\Omega,\n\\\\\na(\\mathbf u,\\mathbf v)&:=\\int_{\\Omega}2\\nu \\mathbf S(\\mathbf u) : \\nabla \\mathbf v \\,d\\Omega,\n\\\\\nc(\\mathbf w,\\mathbf u,\\mathbf v)&:=\\int_{\\Omega}(\\mathbf w \\cdot \\nabla ) \\mathbf u \\cdot \\mathbf v \\,d\\Omega,\n\\\\\nf(\\mathbf v)&:=-\\int_{\\Gamma_{N}} \\mathbf h \\cdot \\mathbf v \\,d\\Gamma.\n\\end{align} " }, { "math_id": 62, "text": " \\Omega_h " }, { "math_id": 63, "text": " \\mathcal{T}_i " }, { "math_id": 64, "text": " i = 1, \\ldots, N_{\\mathcal{T}} " }, { "math_id": 65, "text": " N_{\\mathcal{T}} " }, { "math_id": 66, "text": " h " }, { "math_id": 67, "text": " \\mathcal{V}_h " }, { "math_id": 68, "text": " \\mathcal{Q}_h " }, { "math_id": 69, "text": " \\dim \\mathcal{V}_h = N_V " }, { "math_id": 70, "text": " \\dim \\mathcal{Q}_h = N_Q " }, { "math_id": 71, "text": "\n\\mathcal{V}_h \\subset \\mathcal{V} \\;\\;\\;\\;\\;\\;\\;\\;\\; \\mathcal{Q}_h \\subset \\mathcal{Q},\n" }, { "math_id": 72, "text": " (\\mathbf u_h,p_h)\\in \\{ \\mathcal{V}_{h} \\times \\mathcal{Q}_h\\} " }, { "math_id": 73, "text": "\\begin{align}\n& \\left( \\frac{\\partial \\mathbf u_h}{\\partial t},\\mathbf v_h \\right)+c(\\mathbf u_h,\\mathbf u_h,\\mathbf v_h)+b(\\mathbf u_h,q_h)-b(\\mathbf v_h,p_h)+a(\\mathbf u_h,\\mathbf v_h)=f(\\mathbf v_h)\n\\\\\n& \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\forall \\mathbf v_h \\in \\mathcal{V}_{0h} \\;\\;,\\;\\; \\forall q_h \\in \\mathcal{Q}_h, \n\\end{align}" }, { "math_id": 74, "text": " \\mathbf{u}_h|_{t=0}=\\mathbf{u}_{h,0} " }, { "math_id": 75, "text": " \\mathbf{g}_{h} " }, { "math_id": 76, "text": " \\mathbf{g} " }, { "math_id": 77, "text": "\\mathcal{V}_{0h}=\\{ \\mathbf v_h \\in \\mathcal{V}_h \\text{ s.t. } \\mathbf v_h|_{\\Gamma_D}=\\mathbf 0\\}." }, { "math_id": 78, "text": " [0,T] " }, { "math_id": 79, "text": " N_t " }, { "math_id": 80, "text": " \\delta t" }, { "math_id": 81, "text": "t_n=n\\delta t, \\;\\;\\; n=0,1,2,\\ldots,N_t \\;\\;\\;\\;\\; N_t=\\frac{T}{\\delta t}." 
}, { "math_id": 82, "text": " z " }, { "math_id": 83, "text": " z^n " }, { "math_id": 84, "text": " z(t_n) " }, { "math_id": 85, "text": "\n\\left( \\frac{\\partial \\mathbf u_h}{\\partial t} \\right)^{n+1} \\simeq \\frac{ 3 \\mathbf u_h^{n+1} -4 \\mathbf u_h^{n}+ \\mathbf u_h^{n-1}}{2\\delta t} \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; \\text{for } n \\geq 1.\n" }, { "math_id": 86, "text": " n = 0, 1, \\ldots, N_t-1 " }, { "math_id": 87, "text": " (\\mathbf u^{n+1}_h,p^{n+1}_h)\\in \\{ \\mathcal{V}_{h} \\times \\mathcal{Q}_h\\} " }, { "math_id": 88, "text": "\\begin{align}\n\\left( \\frac{ 3 \\mathbf u_h^{n+1} -4 \\mathbf u_h^{n}+ \\mathbf u_h^{n-1}}{2\\delta t},\\mathbf v_h \\right) & + c(\\mathbf u^*_h,\\mathbf u^{n+1}_h,\\mathbf v_h)+b(\\mathbf u^{n+1}_h,q_h)-b(\\mathbf v_h,p_h^{n+1})+ a(\\mathbf u^{n+1}_h,\\mathbf v_h)=f(\\mathbf v_h),\n\\\\\n& \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\forall \\mathbf v_h \\in \\mathcal{V}_{0h} \\;\\;,\\;\\; \\forall q_h \\in \\mathcal{Q}_h, \n\\end{align}" }, { "math_id": 89, "text": " \\mathbf{u}^{0}_h = \\mathbf{u}_{h,0} " }, { "math_id": 90, "text": " \\mathbf{u}^*_h " }, { "math_id": 91, "text": " c(\\mathbf u^*_h,\\mathbf u^{n+1}_h,\\mathbf v_h)" }, { "math_id": 92, "text": " \\mathbf{u}^*_h=\\mathbf{u}^{n+1}_h " }, { "math_id": 93, "text": " \\mathbf u^*_h " }, { "math_id": 94, "text": "\\mathbf u^*_h=2\\mathbf u^{n}_h-\\mathbf u^{n-1}_h." }, { "math_id": 95, "text": " X_h^r " }, { "math_id": 96, "text": " r " }, { "math_id": 97, "text": "\nX_h^r= \\left\\{ v_h \\in C^0(\\overline{\\Omega}) : v_h|_{\\mathcal{T}_i} \\in \\mathbb{P}_r \\ \\forall \\mathcal{T}_i \\in \\Omega_h \\right\\} \\;\\;\\;\\;\\;\\;\\;\\;\\; r=0,1,2,\\ldots,\n" }, { "math_id": 98, "text": " \\mathbb{P}_r " }, { "math_id": 99, "text": "\n\\mathcal{V}_h\\equiv [X_h^r]^3 \\;\\;\\;\\;\\;\\;\\;\\; \\mathcal{Q}_h\\equiv X_h^s \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; r,s \\in \\mathbb{N}.\n" }, { "math_id": 100, "text": "\n\\exists \\beta_h >0 \\;\\text{ s.t. } \\; \\inf_{q_h \\in \\mathcal{Q}_h}\\sup_{\\mathbf v_h \\in \\mathcal{V}_h} \\frac{b(q_h,\\mathbf v_h)}{\\Vert \\mathbf v_h \\Vert_{H^1} \\Vert q_h \\Vert_{L^2}} \\geq \\beta_h \\;\\;\\;\\;\\;\\;\\;\\; \\forall h>0,\n" }, { "math_id": 101, "text": " \\beta_h >0 " }, { "math_id": 102, "text": " h." }, { "math_id": 103, "text": " \\mathbb{P}_{k+1}-\\mathbb{P}_k " }, { "math_id": 104, "text": " k \\geq 1 " }, { "math_id": 105, "text": " \\mathcal{Q}_h. " }, { "math_id": 106, "text": " \\mathbb{P}_{k}-\\mathbb{P}_k " }, { "math_id": 107, "text": " \\forall k " }, { "math_id": 108, "text": "\\{ \\boldsymbol{\\phi}_i(\\mathbf x) \\}_{i=1}^{N_V} \\;\\;\\;\\;\\;\\; \\{ \\psi_k(\\mathbf x) \\}_{k=1}^{N_Q}," }, { "math_id": 109, "text": "\\mathbf u^n_h = \\sum_{j=1}^{N_V} U^n_j \\boldsymbol{\\phi}_j(\\mathbf x), \\;\\;\\;\\;\\;\\;\\;\\;\\;\\; q^n_h=\\sum_{l=1}^{N_Q} P^n_l \\psi_l(\\mathbf x)." 
}, { "math_id": 110, "text": " U^n_j " }, { "math_id": 111, "text": " j=1,\\ldots,N_V " }, { "math_id": 112, "text": " P^n_l " }, { "math_id": 113, "text": " l=1,\\ldots,N_Q " }, { "math_id": 114, "text": " N_V " }, { "math_id": 115, "text": " N_Q " }, { "math_id": 116, "text": " N_{d.o.f} " }, { "math_id": 117, "text": " N_{d.o.f}=N_V+N_Q " }, { "math_id": 118, "text": " a(\\cdot,\\cdot) " }, { "math_id": 119, "text": " b(\\cdot,\\cdot) " }, { "math_id": 120, "text": " c(\\cdot,\\cdot,\\cdot) " }, { "math_id": 121, "text": "\n\\begin{cases}\n\\displaystyle M \\frac{ 3 \\mathbf U^{n+1} -4 \\mathbf U^{n}+ \\mathbf U^{n-1}}{2\\delta t} + A\\mathbf U^{n+1} +C(\\mathbf U^*)\\mathbf U^{n+1}+\\displaystyle{B^T \\mathbf P^{n+1} = \\mathbf F^{n}}\n\\\\\n\\displaystyle{B \\mathbf U^{n+1} = \\mathbf 0}\n\\end{cases}\n" }, { "math_id": 122, "text": " M \\in \\mathbb{R}^{N_V \\times N_V} " }, { "math_id": 123, "text": " A \\in \\mathbb{R}^{N_V \\times N_V} " }, { "math_id": 124, "text": " C(\\mathbf U^*) \\in \\mathbb{R}^{N_V \\times N_V} " }, { "math_id": 125, "text": " B \\in \\mathbb{R}^{N_Q \\times N_V} " }, { "math_id": 126, "text": " F \\in \\mathbb{R}^{N_V} " }, { "math_id": 127, "text": "\n\\begin{align}\n& M_{ij}=\\int_{\\Omega} \\boldsymbol{\\phi}_j \\cdot \\boldsymbol{\\phi}_i d\\Omega\n\\\\\n& A_{ij}=a(\\boldsymbol{\\phi}_j,\\boldsymbol{\\phi}_i)\n\\\\\n& C_{ij}(\\mathbf u^*)=c(\\mathbf u^*,\\boldsymbol{\\phi}_j,\\boldsymbol{\\phi}_i),\n\\\\\n& B_{kj}=b(\\boldsymbol{\\phi}_j,\\psi_k),\n\\\\\n& F_{i}=f(\\boldsymbol{\\phi}_i)\n\\end{align}\n" }, { "math_id": 128, "text": " \\mathbf U " }, { "math_id": 129, "text": " \\mathbf P " }, { "math_id": 130, "text": "\n\\mathbf U^n=\\Big( U^n_1,\\ldots,U^n_{N_V} \\Big)^T, \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; \\mathbf P^n=\\Big( P^n_1,\\ldots,P^n_{N_Q} \\Big)^T.\n" }, { "math_id": 131, "text": " \\mathbf U(0)=\\mathbf U_0 " }, { "math_id": 132, "text": " \\mathbf U^{*}=2\\mathbf U^{n}-\\mathbf U^{n-1} " }, { "math_id": 133, "text": "\nC_{ij}=c(\\mathbf u^*,\\boldsymbol{\\phi}_j,\\boldsymbol{\\phi}_i)=\\int_{\\Omega}(\\mathbf u^* \\cdot \\nabla ) \\boldsymbol{\\phi}_j \\cdot \\boldsymbol{\\phi}_i \\,d\\Omega,\n" }, { "math_id": 134, "text": " \\Sigma " }, { "math_id": 135, "text": "\n\\begin{bmatrix} K & B^T \\\\ B & 0 \\end{bmatrix}\n\\begin{bmatrix} \\mathbf U^{n+1} \\\\ \\mathbf P^{n+1} \\end{bmatrix}\n= \\begin{bmatrix} \\mathbf F^n + \\frac{1}{2\\delta t}M(4 \\mathbf U^n -\\mathbf U^{n-1}) \\\\ \\mathbf 0 \\end{bmatrix} , \\;\\;\\;\\;\\;\n\\Sigma = \\begin{bmatrix} K & B^T \\\\ B & 0 \\end{bmatrix}.\n" }, { "math_id": 136, "text": " K=\\frac{3}{2\\delta t}M+A+C(U^*) " }, { "math_id": 137, "text": " \\mathbb{P}_k-\\mathbb{P}_k (\\forall k) " }, { "math_id": 138, "text": "\ns(\\mathbf u^{n+1}_h, p^{n+1}_h ;\\mathbf v_h, q_h)=\\gamma \\sum_{\\mathcal{T} \\in \\Omega_h} \\tau_{\\mathcal{T}} \\int_{\\mathcal{T}}\\left[ \\mathcal{L}(\\mathbf u^{n+1}_h, p^{n+1}) \\right]^T \\mathcal{L}_{ss}(\\mathbf v_h, q_h)d\\mathcal{T},\n" }, { "math_id": 139, "text": " \\gamma>0 " }, { "math_id": 140, "text": " \\tau_{\\mathcal{T}} " }, { "math_id": 141, "text": " \\mathcal{T} " }, { "math_id": 142, "text": " \\mathcal{L}(\\mathbf u, p) " }, { "math_id": 143, "text": "\n\\mathcal{L}(\\mathbf u, p) = \\begin{bmatrix}\n\\frac{\\partial \\mathbf u}{\\partial t}+ (\\mathbf u \\cdot \\nabla ) \\mathbf u + \\nabla p -2\\nu \\nabla \\cdot \\mathbf S(\\mathbf u) \\\\ \\nabla \\cdot \\mathbf u \\end{bmatrix}, \n" }, { "math_id": 144, "text": " \\mathcal{L}_{ss}(\\mathbf u, p) 
" }, { "math_id": 145, "text": "\n\\mathcal{L}_{ss}(\\mathbf u, p) = \\begin{bmatrix}\n(\\mathbf u \\cdot \\nabla ) \\mathbf u + \\nabla p \\\\ \\mathbf 0 \\end{bmatrix} . \n" }, { "math_id": 146, "text": " \\Bigl( \\mathcal{L}(\\mathbf u, p),(\\mathbf v, q) \\Bigr ) = -\\Bigl( (\\mathbf v, q), \\mathcal{L}(\\mathbf u, p) \\Bigr )." }, { "math_id": 147, "text": " t = 0, 1, \\ldots, N_t-1, " }, { "math_id": 148, "text": "(\\mathbf u^{n+1}_h,p^{n+1}_h)\\in \\{ \\mathcal{V}_{h} \\times \\mathcal{Q}_h\\} " }, { "math_id": 149, "text": "\n\\begin{align}\n&\\left( \\frac{ 3 \\mathbf u_h^{n+1} -4 \\mathbf u_h^{n}+ \\mathbf u_h^{n-1}}{2\\delta t},\\mathbf v_h \\right) + c(\\mathbf u^*_h,\\mathbf u^{n+1}_h,\\mathbf v_h)+b(\\mathbf u^{n+1}_h,q_h)-b(\\mathbf v_h,p^{n+1}_h) \n\\\\\n& \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; +a(\\mathbf u^{n+1}_h,\\mathbf v_h)+s(\\mathbf u^{n+1}_h, p^{n+1}_h ;\\mathbf v_h, q_h)=0\n\\\\\n\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\forall \\mathbf v_h \\in \\mathcal{V}_{0h} \\;\\;,\\;\\; \\forall q_h \\in \\mathcal{Q}_h, \n\\end{align}\n" }, { "math_id": 150, "text": " \\mathbf{u}^{0}_h=\\mathbf{u}_{h,0} " }, { "math_id": 151, "text": "\n\\begin{align}\ns(\\mathbf u^{n+1}_h, p^{n+1}_h ;\\mathbf v_h, q_h) &=\\gamma \\sum_{\\mathcal{T} \\in \\Omega_h} \\tau_{M,\\mathcal{T}} \\left( \\frac{3 \\mathbf u_h^{n+1} -4 \\mathbf u_h^{n}+ \\mathbf u_h^{n-1}}{2\\delta t} \n+ (\\mathbf u_h^* \\cdot \\nabla ) \\mathbf u_h^{n+1}+\\nabla p^{n+1}_{h}+ \\right.\n\\\\\n& \\left. -2\\nu \\nabla \\cdot \\mathbf S(\\mathbf u^{n+1}_h) \\; \\boldsymbol{,} \\; u_h^* \\cdot \\nabla \\mathbf v_h + \\frac{\\nabla q_h}{\\rho} \n\\right)_{\\mathcal{T}}+ \\gamma \\sum_{\\mathcal{T} \\in \\Omega_h} \\tau_{C,\\mathcal{T}}\\left( \\nabla \\cdot \\mathbf u^{n+1}_h \\boldsymbol{,} \\; \\nabla \\cdot \\mathbf v_h \\right)_{\\mathcal{T}}, \n\\end{align} \n" }, { "math_id": 152, "text": " \\tau_{M,\\mathcal{T}} " }, { "math_id": 153, "text": " \\tau_{C,\\mathcal{T}} " }, { "math_id": 154, "text": " \\left( a \\boldsymbol{,} \\; b \\right)_{\\mathcal{T}}=\\int_{\\mathcal{T}}ab \\; d\\mathcal{T} " }, { "math_id": 155, "text": " s \\left( \\cdot \\, ; \\cdot \\right) " }, { "math_id": 156, "text": " \\sum_{\\mathcal{T} \\in \\Omega_h} \\tau_{M,\\mathcal{T}} \\left( \n\\nabla p^{n+1}_{h} \\boldsymbol{,} \\; \\frac{\\nabla q_h}{\\rho} \\right)_{\\mathcal{T}}, " }, { "math_id": 157, "text": " \\sum_{\\mathcal{T} \\in \\Omega_h} \\tau_{M,\\mathcal{T}} \\left( \nu_h^* \\cdot \\nabla \\mathbf u^{n+1}_h \\boldsymbol{,} \\; u_h^* \\cdot \\nabla \\mathbf v_h \\right)_{\\mathcal{T}}, " }, { "math_id": 158, "text": "\n\\tau_{M,\\mathcal{T}}=\\left( \\frac{\\sigma_{BDF}^2}{\\delta t^2} + \\frac{\\Vert \\mathbf u \\Vert^2}{h_{\\mathcal{T}}^2} + C_k\\frac{\\nu^2}{h_{\\mathcal{T}}^4} \\right)^{-1/2}, \\;\\;\\;\\;\\; \\tau_{C,\\mathcal{T}}=\\frac{h_{\\mathcal{T}}^2}{\\tau_{M,\\mathcal{T}}},\n" }, { "math_id": 159, "text": " C_k=60 \\cdot 2^{k-2} " }, { "math_id": 160, "text": " k " }, { "math_id": 161, "text": " \\sigma_{BDF} " }, { "math_id": 162, "text": " \\delta t " }, { "math_id": 163, "text": " h_{\\mathcal{T}} " }, { "math_id": 164, "text": "\n\\begin{align}\ns^{(1)}_{11}=\\biggl( \\frac{3}{2} \\frac{\\mathbf u^{n+1}_h}{\\delta t} \n\\; \\boldsymbol{,} \\; \n\\mathbf u^*_h \\cdot \\nabla \\mathbf v_h \\biggr)_\\mathcal{T}, \n\\;\\;\\;\\;&\\;\\;\\;\\; \ns^{(1)}_{21}=\\biggl( \\frac{3}{2} \\frac{\\mathbf u^{n+1}_h}{\\delta t} \n\\; \\boldsymbol{,} \\; \n\\frac{\\nabla q_h}{\\rho} \\biggr)_\\mathcal{T},\n\\\\\ns^{(2)}_{11}=\\biggl( 
\\mathbf u^*_h \\cdot \\nabla \\mathbf u_h^{n+1} \n\\; \\boldsymbol{,} \\; \n\\mathbf u^*_h \\cdot \\nabla \\mathbf v_h \\biggr)_\\mathcal{T}, \n\\;\\;\\;\\;&\\;\\;\\;\\;\ns^{(2)}_{21}=\\biggl( \\mathbf u^*_h \\cdot \\nabla \\mathbf u_h^{n+1} \n\\; \\boldsymbol{,} \\; \n\\frac{\\nabla q_h}{\\rho} \\biggr)_\\mathcal{T},\n\\\\\ns^{(3)}_{11}=\\biggl( -2\\nu \\nabla \\cdot \\mathbf S & (\\mathbf u^{n+1}_h) \n\\; \\boldsymbol{,} \\; \n\\mathbf u^*_h \\cdot \\nabla \\mathbf v_h \\biggr)_\\mathcal{T}, \n\\\\\ns^{(3)}_{21}=\\biggl( -2\\nu \\nabla \\cdot \\mathbf S & (\\mathbf u^{n+1}_h) \n\\; \\boldsymbol{,} \\; \n\\frac{\\nabla q_h}{\\rho} \\biggr)_\\mathcal{T},\n\\\\\ns^{(4)}_{11}=\\biggl( \\nabla \\cdot \\mathbf u_h^{n+1} \n\\; \\boldsymbol{,} & \\; \n\\nabla \\cdot \\mathbf v_h \\biggr)_\\mathcal{T},\n\\end{align}\n" }, { "math_id": 165, "text": "\n\\begin{align}\ns_{12}=\\biggl( \\nabla p_h \n\\; \\boldsymbol{,} \\; \n\\mathbf u^*_h \\cdot \\nabla \\mathbf v_h \\biggr)_\\mathcal{T}, \n\\;\\;\\;\\;&\\;\\;\\;\\;\ns_{22}=\\biggl( \\nabla p_h \n\\; \\boldsymbol{,} \\; \n\\frac{\\nabla q_h}{\\rho} \\biggr)_\\mathcal{T},\n\\\\\nf_v=\\biggl( \\frac{4\\mathbf u^{n}_h-\\mathbf u^{n-1}_h}{2\\delta t} \n\\; \\boldsymbol{,} \\; \n\\mathbf u^*_h \\cdot \\nabla \\mathbf v_h \\biggr)_\\mathcal{T}, \n\\;\\;\\;\\;&\\;\\;\\;\\; \nf_q=\\biggl( \\frac{4\\mathbf u^{n}_h-\\mathbf u^{n-1}_h}{2\\delta t}\n\\; \\boldsymbol{,} \\; \n\\frac{\\nabla q_h}{\\rho} \\biggr)_\\mathcal{T},\n\\end{align}\n" }, { "math_id": 166, "text": " s^{(n)}_{(I,J)} = \\sum_{\\mathcal{T} \\in \\Omega_h} \\tau_{\\mathcal{T}}\\left(\\cdot \\, , \\cdot \\right)_\\mathcal{T} " }, { "math_id": 167, "text": " I,J " }, { "math_id": 168, "text": " s^{(n)}_{(I,J)} " }, { "math_id": 169, "text": " n " }, { "math_id": 170, "text": "\n\\begin{bmatrix}\n\\Sigma_{11} & \\Sigma_{12} \\\\ \\Sigma_{21} & \\Sigma_{22} \\end{bmatrix} \\Longrightarrow \n\\begin{bmatrix}\ns^{(1)}_{(11)} + s^{(2)}_{(11)} + s^{(3)}_{(11)} + s^{(4)}_{(11)} & s_{(12)} \\\\ s^{(1)}_{(21)}+s^{(2)}_{(21)}+s^{(3)}_{(21)} & s_{(22)} \\end{bmatrix} , \n" }, { "math_id": 171, "text": "\n\\begin{bmatrix}\n\\ \\tilde{K} & B^T+S_{12}^T \\\\ \\widetilde{B} & S_{22} \\end{bmatrix} \\begin{bmatrix}\n\\mathbf U^{n+1} \\\\ \\mathbf P^{n+1} \\end{bmatrix} = \\begin{bmatrix}\n\\ \\mathbf F^n + \\frac{1}{2\\delta t}M(4 \\mathbf U^n -\\mathbf U^{n-1})+\\mathbf F_v \\\\ \\mathbf F_q \\end{bmatrix},\n" }, { "math_id": 172, "text": " \\tilde{K}=K+\\sum\\limits_{i=1}^4 S^{(i)}_{11} " }, { "math_id": 173, "text": " \\tilde{B}=B+\\sum\\limits_{i=1}^3S^{(i)}_{21} " }, { "math_id": 174, "text": " \\mathbb{P}_2-\\mathbb{P}_1 " } ]
https://en.wikipedia.org/wiki?curid=61186824
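
The formulas above define the fully discrete scheme: a BDF2 time discretization with the convective term linearized through the second-order extrapolation U* = 2 U^n - U^{n-1}, which leads at every time step to the algebraic saddle-point system with velocity block K = 3/(2 dt) M + A + C(U*), divergence block B, and momentum right-hand side F^n + M (4 U^n - U^{n-1}) / (2 dt). The following minimal Python/SciPy sketch illustrates only that algebraic step; it is not taken from the article, the finite element assembly of M, A, B, F^n and C(U*) is assumed to be supplied by an external library, and the names bdf2_step, assemble_convection and stabilization_taus are hypothetical.

# Minimal sketch (not from the article): one step of the BDF2-discretized,
# semi-implicit Navier-Stokes system in algebraic form,
#
#   [ K    B^T ] [ U^{n+1} ]   [ F^n + M (4 U^n - U^{n-1}) / (2 dt) ]
#   [ B     0  ] [ P^{n+1} ] = [ 0                                  ],
#
# with K = 3/(2 dt) M + A + C(U*) and the extrapolation U* = 2 U^n - U^{n-1}.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla


def bdf2_extrapolation(U_n, U_nm1):
    # Second-order extrapolation U* = 2 U^n - U^{n-1} used to linearize
    # the convective term c(u*, u^{n+1}, v).
    return 2.0 * U_n - U_nm1


def bdf2_step(M, A, B, F_n, assemble_convection, U_n, U_nm1, dt):
    # Solve one BDF2 step.  M, A, B are scipy.sparse matrices; F_n, U_n and
    # U_nm1 are NumPy vectors; assemble_convection(U_star) is a user-supplied
    # callback returning the sparse matrix with entries c(u*, phi_j, phi_i).
    U_star = bdf2_extrapolation(U_n, U_nm1)
    C = assemble_convection(U_star)

    K = (3.0 / (2.0 * dt)) * M + A + C          # velocity block
    n_u = M.shape[0]                            # velocity dofs N_V
    n_p = B.shape[0]                            # pressure dofs N_Q

    # Monolithic saddle-point matrix Sigma = [[K, B^T], [B, 0]].
    Sigma = sp.bmat([[K, B.T], [B, None]], format="csr")

    # Right-hand side: momentum part plus zero continuity part.
    rhs = np.concatenate([F_n + M.dot(4.0 * U_n - U_nm1) / (2.0 * dt),
                          np.zeros(n_p)])

    sol = spla.spsolve(Sigma, rhs)
    return sol[:n_u], sol[n_u:]                 # (U^{n+1}, P^{n+1})


def stabilization_taus(dt, h, u_norm, nu, k, sigma_bdf=1.0):
    # Element-wise SUPG/PSPG parameters
    #   tau_M = (sigma^2/dt^2 + |u|^2/h^2 + C_k nu^2/h^4)^(-1/2),
    #   tau_C = h^2 / tau_M,  with C_k = 60 * 2^(k-2);
    # not used in bdf2_step, listed only for the stabilized variant.
    C_k = 60.0 * 2.0 ** (k - 2)
    tau_M = (sigma_bdf**2 / dt**2 + u_norm**2 / h**2 + C_k * nu**2 / h**4) ** -0.5
    tau_C = h**2 / tau_M
    return tau_M, tau_C

In the stabilized variant described by the last group of formulas, the velocity and divergence blocks would be augmented to K~ = K + sum_{i=1..4} S^(i)_11 and B~ = B + sum_{i=1..3} S^(i)_21, with the element-wise parameters tau_M and tau_C computed as in stabilization_taus; the unstabilized sketch above corresponds to the plain Galerkin system.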